\section{Introduction} \label{seintro}
Let $\mathcal{N}$ be a non-Archimedean ordered field extension of $\ensuremath{\mathbb{R}}$ that is real closed and
complete in the order topology and whose Hahn group $S_\mathcal{N}$ is Archimedean, i.e. (isomorphic to) a subgroup of $\ensuremath{\mathbb{R}}$. Recall that $S_{\mathcal{N}}$ is the set of equivalence classes under the relation $\sim$ defined on $\mathcal{N}^*:=\mathcal{N}\setminus\{0\}$ as follows: For $x,y\in \mathcal{N}^*$, we say that $x$ is of the same order as $y$ and write $x\sim y$ if there exist
$n,m\in\ensuremath{\mathbb{N}}$
such that $n|x|>|y|$ and $m|y|>|x|$, where $|\cdot|$ denotes the ordinary absolute value on $\mathcal{N}$:
$ |x|=\max\left\{x,
-x\right \}$.
$S_{\mathcal{N}}$ is naturally endowed with an addition via $[x]+[y]=[x\cdot y]$ and an order via
$[x]<[y]$ if $|y|\ll|x|$ (which means $n|y|< |x|$ for all $n\in\mathbb{N}$), both of which are readily checked to be well-defined.
It follows that $(S_{\mathcal{N}},+,<)$ is an ordered group,
often referred to as the Hahn group or skeleton group, whose neutral element is $[1]$, the class of $1$.
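For example, if $d>0$ is infinitely small in $\mathcal{N}$, then $|d|\ll|1|$, so that $[1]<[d]$; and $[d]+[d]=[d\cdot d]=[d^2]$.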
The theorem of Hahn \cite{hahn} provides a complete classification
of non-Archimedean ordered field extensions of $\ensuremath{\mathbb{R}}$ in terms
of their skeleton groups. In fact, invoking the axiom of choice, it
can be shown that the elements of our field $\mathcal{N}$ can be written as
(generalized) formal power series (also called Hahn series) over its skeleton group $S_{\mathcal{N}}$ with real
coefficients, and the set of appearing exponents forms a
well-ordered subset of $S_{\mathcal{N}}$. That is, for all $x\in \mathcal{N}$, we have that
$x=\sum_{q\in S_{\mathcal{N}}}a_qd^{q}$;
with $a_q\in\mathbb{R}$ for all $q$, $d$ a positive infinitely small element of $\mathcal{N}$, and the support of $x$, given by
$\mbox{supp}(x):=\{q\in S_\mathcal{N}: a_q\ne 0\}$,
forming a well-ordered subset of $S_{\mathcal{N}}$.
We define for $x\ne 0$ in $\mathcal{N}$,
$ \lambda(x)=\min\left(\mbox{supp}(x)\right)$,
which exists since $\mbox{supp}(x)$ is well-ordered. Moreover, we set $\lambda(0)=\infty$. Given a nonzero $x=\sum_{q\in \mbox{supp}(x)}a_qd^q$, then $x>0$ if and only if $a_{\lambda(x)}>0$.
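For example, if $x=2d^{-1}+3+d^{1/2}$ then $\mbox{supp}(x)=\{-1,0,1/2\}$ and $\lambda(x)=-1$; moreover, $x>0$ since $a_{\lambda(x)}=2>0$, and $x$ is infinitely large in absolute value since $\lambda(x)<0$.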
The smallest such
field $\mathcal{N}$ is the Levi-Civita field $\mathcal{R}$, first introduced in \cite{levicivita1,levicivita2}. In this case
$S_\mathcal{R}=\ensuremath{\mathbb{Q}}$, and for any element $x\in\mathcal{R}$, supp$(x)$ is a left-finite
subset of $\ensuremath{\mathbb{Q}}$, i.e. below any rational bound $r$ there are only finitely many exponents in the Hahn representation of $x$. The Levi-Civita field
$\mathcal{R}$ is of particular interest because of its practical usefulness. Since the supports of the elements of $\mathcal{R}$ are left-finite, it is possible to represent these
numbers on a computer. Having infinitely
small numbers allows for many computational
applications; one
such application is the computation of derivatives of real functions representable on a computer \cite{rsdiffsf,dabul00}, where both the accuracy of
formula manipulators and the speed of classical numerical methods are achieved. For a review of the Levi-Civita field $\mathcal{R}$, see
\cite{rsrevitaly13} and references therein.
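As a simple illustration of how such computations work (a minimal sketch of our own in Python, not the code of \cite{rsdiffsf,dabul00}; the truncation bound and the sample polynomial are arbitrary choices), one can represent a Levi-Civita number by a finite map from rational exponents to real coefficients and read off the derivative of a polynomial $f$ at a real point $x_0$ as the coefficient of $d^1$ in $f(x_0+d)$, which is exact for polynomials:
\begin{verbatim}
# Truncated Levi-Civita arithmetic: a number is a map
# {rational exponent q -> real coefficient a_q}; left-finiteness
# lets us keep only the exponents below a fixed bound.
from fractions import Fraction

BOUND = Fraction(3)

def trim(t):
    return {q: a for q, a in t.items() if a != 0 and q < BOUND}

def add(x, y):
    out = dict(x)
    for q, a in y.items():
        out[q] = out.get(q, 0) + a
    return trim(out)

def mul(x, y):
    out = {}
    for q1, a1 in x.items():
        for q2, a2 in y.items():
            out[q1 + q2] = out.get(q1 + q2, 0) + a1 * a2
    return trim(out)

def real(r):
    return {Fraction(0): r}

d = {Fraction(1): 1.0}     # a positive infinitesimal

def f(x):                  # f(x) = x^3 + 3x, so f'(x) = 3x^2 + 3
    return add(mul(mul(x, x), x), mul(real(3.0), x))

fx = f(add(real(2.0), d))  # f(2+d) = f(2) + f'(2) d + ...
print(fx[Fraction(1)])     # prints 15.0 = f'(2)
\end{verbatim}
The cited works extend this idea far beyond polynomials, to all real functions representable on a computer.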
In the wider context of valuation theory, it is
interesting to note that the topology induced by the order on $\mathcal{N}$ is the same as the valuation topology $\tau_v$ introduced via the ultrametric $\Lambda:\mathcal{N}\times\mathcal{N}\rightarrow\mathbb{R}$, given
by $\Lambda(x,y)=\exp{(-\lambda(x-y))}$. It follows that the field $\mathcal{N}$ is just a special
case of the class of fields discussed in \cite{schikhofbook}. For a general
overview of the algebraic properties of formal power series fields, we refer to the comprehensive overview by Ribenboim \cite{ribenboim92}, and
for an overview of the related valuation theory the book by Krull \cite{krull32}. A thorough and complete treatment of ordered structures can
also be found in \cite{priessbook}. A more comprehensive survey of all non-Archimedean fields can be found in \cite{barria-sham-18}.
\section{Weak Local Uniform Differentiability and Review of Recent Results}
Because of the total disconnectedness of the field $\mathcal{N}$ in the order topology, the standard theorems of real calculus like the
intermediate value theorem, the inverse function theorem, the mean value theorem, the implicit function theorem and Taylor's theorem require stronger smoothness criteria of the functions involved in order for the theorems to hold.
In this section we will present one such criterion: the so-called \lq weak local uniform differentiability\rq;
we will review recent work based on that smoothness criterion and then present new results.
In \cite{boo-sham-18}, we focus our attention on $\mathcal{N}$-valued functions of one variable. We study the properties of weakly locally uniformly differentiable (WLUD)
functions at a point $x_0\in\mathcal{N}$ or on an open subset $A$ of $\mathcal{N}$. In particular, we show that WLUD functions are $C^1$, they include all polynomial functions,
and they are closed under addition, multiplication and composition. Then we generalize the definition of weak local uniform differentiability to any order. In particular,
we study the properties of WLUD$^2$ functions at a point $x_0\in\mathcal{N}$ or on an open subset $A$ of $\mathcal{N}$; and we show that WLUD$^2$ functions are $C^2$,
they include all polynomial functions, and they are closed under addition, multiplication and composition. Finally, we
formulate and prove an inverse function theorem as well as a local intermediate value theorem and a local mean value theorem for these functions.
Here we only recall the main definitions and results (without proofs) in \cite{boo-sham-18} and refer the reader to that paper for the details.
\begin{defn}
Let $A\subseteq \mathcal{N}$ be open, let $f:A\rightarrow \mathcal{N}$, and let $x_0\in A$ be given. We say that $f$ is weakly locally uniformly differentiable (abbreviated as WLUD)
at $x_0$ if $f$ is differentiable in a neighborhood $\Omega$ of $x_0$ in $A$ and if for every $\epsilon > 0$ in $\mathcal{N}$ there exists $\delta > 0$ in $\mathcal{N}$ such that $(x_0 - \delta, x_0 + \delta) \subset \Omega$, and
for
every $x,y \in (x_0 - \delta, x_0 + \delta)$ we have that $\abs{f(y) - f(x) - f^\prime (x)(y-x)} \le \epsilon \abs{y-x}$. Moreover, we say that $f$ is WLUD on $A$ if $f$
is WLUD at every point in $A$.
\end{defn}
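For instance, $f(x)=x^2$ is WLUD at any $x_0\in\mathcal{N}$: since $f(y)-f(x)-2x(y-x)=(y-x)^2$, given $\epsilon>0$ in $\mathcal{N}$ it suffices to take $\delta=\epsilon/2$, for then
$\left|f(y)-f(x)-f^\prime(x)(y-x)\right|=|y-x|^2<2\delta|y-x|=\epsilon|y-x|$ for all $x,y\in(x_0-\delta,x_0+\delta)$.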
We extend the WLUD concept to higher orders of differentiability and we define WLUD$^k$ as follows.
\begin{defn}\label{def:wludn}
Let $A\subseteq \mathcal{N}$ be open, let $f:A\rightarrow \mathcal{N}$, let $x_0\in A$, and let $k\in\mathbb{N}$ be given. We say that $f$ is WLUD$^k$ at $x_0$ if $f$ is $k$ times
differentiable in a neighborhood $\Omega$ of $x_0$ in $A$ and if for every $\epsilon > 0$ in $\mathcal{N}$ there exists $\delta > 0$ in $\mathcal{N}$ such that $(x_0-\delta,x_0+\delta) \subset \Omega$, and for every
$x,y \in (x_0-\delta,x_0+\delta)$ we have that
\[
\left|f(y) - \sum\limits_{j=0}^k \frac{f^{(j)}(x)}{j!}(y-x)^j\right|\le \epsilon \left|y-x\right|^k.
\]
Moreover, we say that $f$ is WLUD$^k$ on $A$ if $f$ is WLUD$^k$ at every point in $A$. Finally, we say that $f$ is WLUD$^\infty$ at $x_0$ (respectively, on $A$) if $f$ is WLUD$^k$ at $x_0$ (respectively, on $A$) for every
$k\in\mathbb{N}$.
\end{defn}
\begin{theorem}[Inverse Function Theorem]
Let $A\subseteq\mathcal{N}$ be open, let $f:A\rightarrow \mathcal{N}$ be WLUD on $A$, and let $x_0 \in A$ be such that $f^\prime(x_0) \neq 0$. Then there exists a neighborhood $\Omega$
of $x_0$
in $A$ such that
\begin{enumerate}
\item $\left.f\right|_\Omega$ is one-to-one;
\item $f(\Omega)$ is open; and
\item $f^{-1}$ exists and is WLUD on $f(\Omega)$ with $(f^{-1})^\prime = 1/\left(f^\prime \circ f^{-1}\right)$.
\end{enumerate}
\end{theorem}
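For example, $f(x)=x+x^2$ is WLUD on $\mathcal{N}$ (being a polynomial) and satisfies $f^\prime(0)=1\neq0$; the theorem then yields a neighborhood $\Omega$ of $0$ on which $f$ is one-to-one, with $\left(f^{-1}\right)^\prime(0)=1/f^\prime(0)=1$.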
\begin{theorem}[Local Intermediate Value Theorem]\label{tlivt}
Let $A\subseteq\mathcal{N}$ be open, let $f:A\rightarrow \mathcal{N}$ be WLUD on $A$, and let $x_0 \in A$ be such that $f^\prime(x_0) \neq 0$. Then there exists a neighborhood $\Omega$
of $x_0$ in $A$ such that for
any $a<b$ in $f(\Omega)$ and for any $c\in (a,b)$, there is an $x\in\left(\min\left\{f^{-1}(a),f^{-1}(b)\right\},\max\left\{f^{-1}(a),f^{-1}(b)\right\}\right)$
such that $f(x)=c$.
\end{theorem}
\begin{theorem}[Local Mean Value Theorem]\label{thmvt}
Let $A\subseteq\mathcal{N}$ be open, let $f:A\rightarrow \mathcal{N}$ be WLUD$^2$ on $A$, and let $x_0 \in A$ be such that $f^{\prime\prime}(x_0) \neq 0$. Then there exists a neighborhood $\Omega$ of
$x_0$ in $A$ such that $f$ has the mean value property on $\Omega$. That is, for every $a,b\in \Omega$ with $a<b$, there exists $c\in (a,b)$ such that
\[
f^\prime(c) = \frac{f(b) - f(a)}{b-a}.
\]
\end{theorem}
As in the real case, the mean value property can be used to prove other important results. In particular, while L'H\^opital's rule does not hold for general differentiable functions on
$\mathcal{N}$, we prove the result under conditions similar to those of the local mean value theorem.
\begin{theorem}[L'H\^opital's Rule]
Let $A\subset \mathcal{N}$ be open, let $f,g:A\rightarrow \mathcal{N}$ be WLUD$^2$ on $A$, and let $a\in A$ be such that $f^{\prime\prime}(a) \neq 0$ and $g^{\prime\prime}(a) \neq 0$.
Furthermore, suppose that $f(a) = g(a) = 0$, that there exists a neighborhood
$\Omega$ of $a$ in $A$ such that $g^{\prime}(x) \neq 0$ for every $x\in \Omega\setminus\{a\}$, and that $\lim\limits_{x\rightarrow a} f^{\prime}(x)/g^{\prime}(x)$ exists. Then
\[
\lim_{x\rightarrow a} \frac{f(x)}{g(x)} = \lim_{x\rightarrow a} \frac{f^{\prime}(x)}{g^{\prime}(x)}.
\]
\end{theorem}
In \cite{MultivWLUD}, we formulate and prove a Taylor theorem with remainder for WLUD$^k$ functions from $\mathcal{N}$ to $\mathcal{N}$. Then we extend the concept of WLUD to functions from $\mathcal{N}^n$ to
$\mathcal{N}^m$ with $m,n\in\mathbb{N}$ and study the properties of those functions as we did for functions from $\mathcal{N}$ to $\mathcal{N}$. Then we formulate and prove
the inverse function theorem for WLUD functions from $\mathcal{N}^n$ to
$\mathcal{N}^n$ and the implicit function theorem for WLUD functions from $\mathcal{N}^n$ to $\mathcal{N}^m$ with $m<n$ in $\mathbb{N}$.
As in the real case, the proof of Taylor's theorem with remainder uses the mean value theorem.
However, in the non-Archimedean setting, stronger conditions on the function are needed than in the real case for the formulation of the theorem.
\begin{theorem}[Taylor's Theorem with Remainder]\label{thmtaylor}
Let $A \subseteq \mathcal{N}$ be open, let $k\in\mathbb{N}$ be given, and let $f : A \rightarrow \mathcal{N}$ be WLUD$^{k+2}$ on $A$.
Assume further that $f^{(j)}$ is WLUD$^{2}$ on $A$ for $0\leq j \leq k$. Then, for every $x\in A$, there exists a neighborhood $U$ of $x$ in $A$ such that, for any $y \in U$,
there exists
$c \in \left[ \min(y, x),\max(y, x) \right]$ such that
\begin{equation}\label{eqtaylor}
f(y)=\sum_{j=0}^{k} \frac{f^{(j)}\left(x\right)}{j !}\left(y-x\right)^{j}+\frac{f^{(k+1)}(c)}{(k+1) !}\left(y-x\right)^{k+1}.
\end{equation}
\end{theorem}
Before we define weak local uniform differentiability for functions from $\mathcal{N}^n$ to $\mathcal{N}^m$ and then state the inverse function theorem and the implicit function theorem, we introduce the following notations.
\begin{notation} Let $A\subset\mathcal{N}^n$ be open, let $\boldsymbol{x_0}\in A$ be given, and let $\boldsymbol{f}:A\rightarrow\mathcal{N}^m$ be such that all the first order partial derivatives of $\boldsymbol{f}$ at $\boldsymbol{x_0}$ exist. Then $\boldsymbol{D}\boldsymbol{f}(\boldsymbol{x_0})$ denotes the linear map from $\mathcal{N}^n$ to $\mathcal{N}^m$ defined by the $m\times n$
Jacobian matrix of $\boldsymbol{f}$ at $\boldsymbol{x_0}$:
\[
\begin{pmatrix} \boldsymbol{f}^1_1(\boldsymbol{x_0}) & \boldsymbol{f}^1_2(\boldsymbol{x_0})& \ldots & \boldsymbol{f}^1_n(\boldsymbol{x_0})
\\ \boldsymbol{f}^2_1(\boldsymbol{x_0}) &
\boldsymbol{f}^2_2(\boldsymbol{x_0}) & \ldots & \boldsymbol{f}^2_n(\boldsymbol{x_0}) \\ \vdots & \vdots &\ddots & \vdots \\
\boldsymbol{f}^m_1(\boldsymbol{x_0}) & \boldsymbol{f}^m_2(\boldsymbol{x_0}) & \ldots & \boldsymbol{f}^m_n(\boldsymbol{x_0}) \end{pmatrix}
\]
with $\boldsymbol{f}^i_j(\boldsymbol{x_0})=\frac{\partial f_i}{\partial x_j}(\boldsymbol{x_0})$ for $1\le i\le m$ and $1\le j\le n$. Moreover, if $m=n$ then the determinant of the $n\times n$ matrix $\boldsymbol{D}\boldsymbol{f}(\boldsymbol{x_0})$ is denoted by $J\boldsymbol{f}(\boldsymbol{x_0})$.
\end{notation}
\begin{defn}[WLUD]\label{defWLUDnm}
Let $A\subset\mathcal{N}^n$ be open, let $\boldsymbol{f}:A \to \mathcal{N}^m$, and let $\boldsymbol{x_0}\in A$ be given. Then we say that $\boldsymbol{f}$ is weakly
locally uniformly differentiable (WLUD) at $\boldsymbol{x_0}$ if $\boldsymbol{f}$ is differentiable in a neighborhood
$\Omega$ of $\boldsymbol{x_0}$ in $A$ and if for every $\epsilon>0$ in $\mathcal{N}$ there exists $\delta>0$ in $\mathcal{N}$ such that $B_{\delta}(\boldsymbol{x_0}):=\left\{\boldsymbol{t}\in\mathcal{N}^n:\left|\boldsymbol{t}-\boldsymbol{x_0}\right|<\delta\right\}\subset\Omega$, and
for all $\boldsymbol{x},\boldsymbol{y}\in B_{\delta}(\boldsymbol{x_0})$ we have that
\[
\left|\boldsymbol{f}(\boldsymbol{y}) - \boldsymbol{f}(\boldsymbol{x}) -
\boldsymbol{D}\boldsymbol{f}(\boldsymbol{x})(\boldsymbol{y} - \boldsymbol{x})\right| \le \epsilon \vert \boldsymbol{y} - \boldsymbol{x} \vert.
\]
Moreover, we say that $\boldsymbol{f}$ is WLUD on $A$ if $\boldsymbol{f}$ is WLUD at every point in $A$.
\end{defn}
We show in \cite{MultivWLUD} that if $\boldsymbol{f}$ is WLUD at $\boldsymbol{x_0}$ (respectively on $A$) then $\boldsymbol{f}$ is $C^1$ at $\boldsymbol{x_0}$ (respectively on $A$). Thus, the class of WLUD functions at a point $\boldsymbol{x_0}$ (respectively on an open set $A$) is a subset of
the class of
$C^1$ functions at $\boldsymbol{x_0}$ (respectively on $A$). However, this is still large enough to include all polynomial functions. We also show in \cite{MultivWLUD} that if $\boldsymbol{f},\boldsymbol{g}$ are WLUD at $\boldsymbol{x_0}$ (respectively on $A$) and if $\alpha\in\mathcal{N}$ then $\boldsymbol{f}+\alpha\boldsymbol{g}$ and $\boldsymbol{f}\cdot\boldsymbol{g}$ are WLUD at $\boldsymbol{x_0}$ (respectively on $A$). Moreover, we show that if $\boldsymbol{f}:A \to \mathcal{N}^m$ is WLUD at $\boldsymbol{x_0}\in A$ (respectively on $A$) and if $\boldsymbol{g}:C \to \mathcal{N}^p$ is WLUD at $\boldsymbol{f}(\boldsymbol{x_0})\in C$ (respectively on $C$),
where $A$ is an open subset of $\mathcal{N}^n$, $C$ an open subset of $\mathcal{N}^m$ and $\boldsymbol{f}(A) \subseteq C$, then
$\boldsymbol{g} \circ \boldsymbol{f}$ is WLUD at $\boldsymbol{x_0}$ (respectively on $A$).
\begin{theorem}[Inverse Function Theorem]\label{IFT}
Let $A\subset\mathcal{N}^n$ be open, let $\boldsymbol{g}:A\rightarrow\mathcal{N}^n$ be WLUD on $A$ and let $\boldsymbol{t_0}\in A$ be such that
$J\boldsymbol{g}(\boldsymbol{t_0})\neq0$. Then there is a neighborhood $\Omega$ of $\boldsymbol{t_0}$ such that:
\begin{enumerate}
\item $\boldsymbol{g}|_\Omega$ is one-to-one;
\item $\boldsymbol{g}(\Omega)$ is open;
\item the inverse $\boldsymbol{f}$ of $\boldsymbol{g}|_\Omega$ is WLUD on $\boldsymbol{g}(\Omega)$; and
$\boldsymbol{D}\boldsymbol{f}(\boldsymbol{x})=\left[\boldsymbol{D}\boldsymbol{g}(\boldsymbol{t})\right]^{-1}$ for $\boldsymbol{t}\in\Omega$ and
$\boldsymbol{x}=\boldsymbol{g}(\boldsymbol{t})$.
\end{enumerate}
\end{theorem}
As in the real case, the inverse function theorem is used to prove the implicit function theorem. But before we state the implicit function theorem, we introduce the following notations.
\begin{notation}Let $A\subseteq\mathcal{N}^n$ be open and let $\boldsymbol{\Phi}: A\rightarrow\mathcal{N}^m$ be WLUD on $A$. For $\boldsymbol{t}=(t_1,...,t_{n-m},t_{n-m+1},...,t_{n} )\in A$, let
\begin{equation*}
\hat{\boldsymbol{t}}=(t_1,...,t_{n-m})\text{ and }\tilde{J}\boldsymbol{\Phi}(\boldsymbol{t})=\det\left(\dfrac{\partial(\Phi_1,...,\Phi_m)}{\partial(t_{n-m+1},...,t_{n})}\right).
\end{equation*}
\end{notation}
\begin{theorem}[Implicit Function Theorem]
Let $\boldsymbol{\Phi}:A\rightarrow\mathcal{N}^m$ be WLUD on $A$, where $A\subseteq\mathcal{N}^n$ is open and $1\leq m<n.$
Let $\boldsymbol{t_0}\in A$ be such that $\boldsymbol{\Phi}(\boldsymbol{t_0})=\boldsymbol{0}$ and
$\tilde{J}\boldsymbol{\Phi}(\boldsymbol{t_0})\neq0$. Then there exist a neighborhood $U$ of $\boldsymbol{t_0}$, a neighborhood $R$ of
$\hat{\boldsymbol{t_0}}$ and $\boldsymbol{\phi}:R\rightarrow\mathcal{N}^m$ that is WLUD on $R$ such that
\[
\tilde{J}\boldsymbol{\Phi}(\boldsymbol{t})\neq0
\text{ for all } \boldsymbol{t}\in U,
\]
and
\begin{equation*}
\{\boldsymbol{t}\in
U:\boldsymbol{\Phi}(\boldsymbol{t})=\boldsymbol{0}\}=\{(\hat{\boldsymbol{t}},\boldsymbol{\phi}(\hat{\boldsymbol{t}})):\hat{\boldsymbol{t}}\in
R\}.
\end{equation*}
\end{theorem}
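As a simple illustration, let $\boldsymbol{\Phi}:\mathcal{N}^2\rightarrow\mathcal{N}$ be given by $\boldsymbol{\Phi}(t_1,t_2)=t_1^2+t_2^2-1$, and let $\boldsymbol{t_0}=(0,1)$. Then $\boldsymbol{\Phi}(\boldsymbol{t_0})=0$ and $\tilde{J}\boldsymbol{\Phi}(\boldsymbol{t_0})=\left.\partial\Phi_1/\partial t_2\right|_{\boldsymbol{t_0}}=2\neq0$, so the theorem yields a WLUD function $\phi$ defined near $t_1=0$ with $\boldsymbol{\Phi}(t_1,\phi(t_1))=0$; here $\phi(t_1)=\sqrt{1-t_1^2}$, which exists in $\mathcal{N}$ since $\mathcal{N}$ is real closed.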
\section{New Results}
This paper is a continuation of the work done in
\cite{boo-sham-18,MultivWLUD}. In this section, we will generalize in Definition \ref{defWLUDn1k} and Definition \ref{defWLUDn1infty} the concepts of WLUD$^k$ and WLUD$^\infty$ to functions from $\mathcal{N}^n$ to $\mathcal{N}$; and we will formulate (in Theorem \ref{thmtaylorseries1} and Theorem \ref{thmtaylorseriesn} and their proofs) conditions under which an $\mathcal{N}$-valued function that is WLUD$^\infty$ at a point $x_0\in \mathcal{N}$ or at a point $\boldsymbol{x_0} \in \mathcal{N}^n$ will be analytic at that point.
\begin{theorem}\label{thmtaylorseries1}
Let $A \subseteq \mathcal{N}$ be open, let $x_0\in A$, and let $f : A \rightarrow \mathcal{N}$ be WLUD$^{\infty}$ at $x_0$. For each $k\in\mathbb{N}$, let $\delta_k>0$ in $\mathcal{N}$ correspond to $\epsilon=1$ in Definition \ref{def:wludn}. Assume that
\[
\limsup_{j\rightarrow\infty}\left(\frac{-\lambda\left(f^{(j)}(x_0)\right)}{j}\right)<\infty \text{ and }
\limsup _{k\rightarrow\infty}\lambda\left(\delta_k\right)<\infty.
\]
Then there exists a neighborhood $U$ of $x_0$ in $A$ such that, for any $x,y \in U$, we have that
\[
f(y)=\sum_{j=0}^{\infty} \frac{f^{(j)}\left(x\right)}{j!}\left(y-x\right)^{j}.
\]
That is, the Taylor series $\sum\limits_{j=0}^{\infty} \frac{f^{(j)}\left(x\right)}{j!}\left(y-x\right)^{j}$ converges in $\mathcal{N}$ to $f(y)$; and hence $f$ is analytic in $U$.
\end{theorem}
\begin{proof}
Let
\[
\lambda_0=\limsup_{j\rightarrow\infty}\left(\frac{-\lambda\left(f^{(j)}(x_0)\right)}{j}\right).
\]
Then $\lambda_0\in\mathbb{R}$ and $\lambda_0<\infty$; and, by \cite[Page 59]{schikhofbook}, we have that $\sum\limits_{j=0}^{\infty} \frac{f^{(j)}\left(x_0\right)}{j!}\left(x-x_0\right)^{j}$ converges in $\mathcal{N}$ for all $x\in \mathcal{N}$ satisfying $\lambda(x-x_0)>\lambda_0$.
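(For instance, for the series $\sum_{j=0}^{\infty}d^{-j}(x-x_0)^j$ we have $\lambda_0=\limsup_{j\rightarrow\infty}\left(-\lambda(d^{-j})/j\right)=1$, and the series converges exactly when $\lambda(x-x_0)>1$: it converges for $x-x_0=d^2$ but diverges for $x-x_0=d$.)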
For all $k\in \mathbb{N}$, we have that $(x_0-\delta_k, x_0+\delta_k)\subset A$, $f$ is $k$ times differentiable on $(x_0-\delta_k, x_0+\delta_k)$, and
\[
\left|f(x)-\sum\limits_{j=0}^{k} \frac{f^{(j)}\left(x_0\right)}{j!}\left(x-x_0\right)^{j}\right|\le \left|x-x_0\right|^k \text{ for all }x\in (x_0-\delta_k, x_0+\delta_k).
\]
Since $\limsup\limits _{k\rightarrow\infty}\lambda\left(\delta_k\right)<\infty$, there exists $t>0$ in $\mathbb{Q}$ such that $\limsup\limits _{k\rightarrow\infty}\lambda\left(\delta_k\right)<t<\infty$. Thus, there exists $N\in\mathbb{N}$ such that
\begin{equation}\label{eqtaylor1:1}
\lambda(\delta_k)<t\text{ for all }k>N.
\end{equation}
Let
$\delta>0$ in $\mathcal{N}$ be such that $\lambda(\delta)>\max\{\lambda_0, t,0\}$; this is possible since $\max\{\lambda_0, t,0\}<\infty$. It follows from (\ref{eqtaylor1:1})
that $\lambda(\delta)>\lambda(\delta_k)$ and hence $0<\delta\ll\delta_k$ for all $k>N$. Thus,
$(x_0-\delta, x_0+\delta)\subset A$, $f$ is infinitely often differentiable on $(x_0-\delta, x_0+\delta)$, and
\begin{equation}\label{eqtaylor1:2}
\left|f(x)-\sum\limits_{j=0}^{k} \frac{f^{(j)}\left(x_0\right)}{j!}\left(x-x_0\right)^{j}\right|\le \left|x-x_0\right|^k \forall\ x\in (x_0-\delta, x_0+\delta)\text{ and }\forall\ k>N.
\end{equation}
Moreover, for all $x\in (x_0-\delta, x_0+\delta)$, we have that $\lambda(x-x_0)\ge \lambda(\delta)>\lambda_0$ and hence $\sum\limits_{j=0}^{\infty} \frac{f^{(j)}\left(x_0\right)}{j!}\left(x-x_0\right)^{j}$ converges in $\mathcal{N}$. Let $U=(x_0-\delta, x_0+\delta)$.
First we show that
\[
f(x)=\sum_{j=0}^{\infty} \frac{f^{(j)}\left(x_0\right)}{j!}\left(x-x_0\right)^{j}\text{ for all }x\in U.
\]
Let $x\in U$ be given. Taking the limit in (\ref{eqtaylor1:2}) as $k\rightarrow\infty$, we get:
\[
0\le\lim_{k\rightarrow\infty}\left|f(x)-\sum\limits_{j=0}^{k} \frac{f^{(j)}\left(x_0\right)}{j!}\left(x-x_0\right)^{j}\right|\le \lim_{k\rightarrow\infty} \left|x-x_0\right|^k,
\]
from which we obtain
\[
0\le\left|f(x)-\lim_{k\rightarrow\infty}\sum\limits_{j=0}^{k} \frac{f^{(j)}\left(x_0\right)}{j!}\left(x-x_0\right)^{j}\right|\le \lim_{k\rightarrow\infty} \left|x-x_0\right|^k.
\]
Since $\lambda(x-x_0)\ge \lambda(\delta)>0$, we obtain that
$\lim\limits_{k\rightarrow\infty} \left|x-x_0\right|^k=0$.
It follows that
\[
0\le\left|f(x)-\sum\limits_{j=0}^{\infty} \frac{f^{(j)}\left(x_0\right)}{j!}\left(x-x_0\right)^{j}\right|\le 0
\]
from which we infer that
$f(x)=\sum\limits_{j=0}^{\infty} \frac{f^{(j)}\left(x_0\right)}{j!}\left(x-x_0\right)^{j}$ or, renaming the summation index,
\begin{equation}\label{eqtaylor1:3}
f(x)=\sum\limits_{l=0}^{\infty} \frac{f^{(l)}\left(x_0\right)}{l!}\left(x-x_0\right)^{l}.
\end{equation}
Since the convergence of the Taylor series above is with respect to the order (valuation) topology, we next show that the derivatives of $f$ at $x$ to any order are obtained by differentiating the power series in Equation (\ref{eqtaylor1:3}) term by term. That is, for all $j\in\mathbb{N}$,
\begin{equation}\label{eqtaylor1:4}
f^{(j)}(x)=\sum_{l=j}^{\infty} l(l-1)\ldots(l-j+1)\frac{f^{(l)}\left(x_0\right)}{l!}\left(x-x_0\right)^{l-j}.
\end{equation}
First note that, since $\lambda\left(l(l-1)\ldots(l-j+1)\right)=0$, it follows that $\sum_{l=j}^{\infty} l(l-1)\ldots(l-j+1)\frac{f^{(l)}\left(x_0\right)}{l!}\left(x-x_0\right)^{l-j}$ converges in $\mathcal{N}$ for all $j\in\mathbb{N}$. Using induction on $j$, it suffices to show that
\[
f^\prime(x)= \sum_{l=1}^{\infty} l\frac{f^{(l)}\left(x_0\right)}{l!}\left(x-x_0\right)^{l-1}=\sum_{l=1}^{\infty} \frac{f^{(l)}\left(x_0\right)}{(l-1)!}\left(x-x_0\right)^{l-1}.
\]
Let $h\in \mathcal{N}$ be such that $x+h\in U$. We will show that
\[
\lim_{h\rightarrow0}\left\{\frac{f(x+h)-f(x)}{h}\right\}=\sum_{l=1}^{\infty} \frac{f^{(l)}\left(x_0\right)}{(l-1)!}\left(x-x_0\right)^{l-1}.
\]
Indeed,
\begin{eqnarray*}
&&\lim_{h\rightarrow0}\left\{\frac{f(x+h)-f(x)}{h}\right\}=\lim_{h\rightarrow0}\left\{\sum\limits_{l=0}^{\infty}\frac{f^{(l)}\left(x_0\right)}{l!}\frac{ \left(x+h-x_0\right)^{l}-\left(x-x_0\right)^{l}}{h}\right\}\\
&=&\lim_{h\rightarrow0}\left\{\sum\limits_{l=1}^{\infty}\frac{f^{(l)}\left(x_0\right)}{l!}\frac{ \left(x+h-x_0\right)^{l}-\left(x-x_0\right)^{l}}{h}\right\}\\
&=&\lim_{h\rightarrow0}\left\{\sum\limits_{l=1}^{\infty}\frac{f^{(l)}\left(x_0\right)}{l!} \left[(x+h-x_0)^{l-1}+(x+h-x_0)^{l-2}(x-x_0)+\cdots+(x-x_0)^{l-1}\right]\right\}\\
&=&\sum\limits_{l=1}^{\infty}\frac{f^{(l)}\left(x_0\right)}{l!} \left[l(x-x_0)^{l-1}\right]\\
&=&\sum_{l=1}^{\infty} \frac{f^{(l)}\left(x_0\right)}{(l-1)!}\left(x-x_0\right)^{l-1}.
\end{eqnarray*}
Now let $y\in U$ be given. Then
\begin{eqnarray*}
f(y)&=&\sum_{l=0}^{\infty} \frac{f^{(l)}\left(x_0\right)}{l!}\left(y-x_0\right)^{l}\\
&=&\sum_{l=0}^{\infty} \frac{f^{(l)}\left(x_0\right)}{l!}\left[(y-x)+(x-x_0)\right]^{l}\\
&=&\sum_{l=0}^{\infty} \sum_{j=0}^{l} \frac{f^{(l)}\left(x_0\right)}{l!}\left(\begin{array}{c}l\\j\end{array}\right) (y-x)^j(x-x_0)^{l-j}\\
&=&\sum_{l=0}^{\infty} \sum_{j=0}^{l} \frac{l(l-1)\ldots(l-j+1)}{j!}\frac{f^{(l)}\left(x_0\right)}{l!}(x-x_0)^{l-j}(y-x)^j.
\end{eqnarray*}
Since convergence in the order topology (valuation topology) entails absolute convergence, we can interchange the order of the summations in the last equality \cite{shamseddinephd,rspsio00}. We get:
\begin{eqnarray*}
f(y)&=&\sum_{j=0}^{\infty}\frac{1}{j!}\left[ \sum_{l=j}^{\infty}l(l-1)\ldots(l-j+1)\frac{f^{(l)}\left(x_0\right)}{l!}(x-x_0)^{l-j}\right](y-x)^j\\
&=&\sum_{j=0}^{\infty}\frac{f^{(j)}(x)}{j!}(y-x)^j
\end{eqnarray*}
where we made use of Equation (\ref{eqtaylor1:4}) in the last step.
\end{proof}
If we replace $m$ by $1$ in Definition \ref{defWLUDnm}, then the $1\times n$ matrix $\boldsymbol{D}\boldsymbol{f}(\boldsymbol{x})$ reduces to the gradient of $f$ at $\boldsymbol{x}$, $\boldsymbol{\nabla}f(\boldsymbol{x})$, and we readily obtain the definition of a WLUD $\mathcal{N}$-valued function at a point $\boldsymbol{x_0}$ or on an open subset $A$ of $\mathcal{N}^n$.
\begin{defn}\label{defWLUDn1}
Let $A\subset\mathcal{N}^n$ be open, let $f:A \to \mathcal{N}$, and let $\boldsymbol{x_0}\in A$ be given. Then we say that $f$ is WLUD at $\boldsymbol{x_0}$ if $f$ is differentiable in a neighborhood
$\Omega$ of $\boldsymbol{x_0}$ in $A$ and if for every $\epsilon>0$ in $\mathcal{N}$ there exists $\delta>0$ in $\mathcal{N}$ such that $B_{\delta}(\boldsymbol{x_0})\subset\Omega$, and
for all $\boldsymbol{x},\boldsymbol{y}\in B_{\delta}(\boldsymbol{x_0})$ we have that
\[
\left|f(\boldsymbol{y}) - f(\boldsymbol{x}) -
\boldsymbol{\nabla}f(\boldsymbol{x})\cdot(\boldsymbol{y} - \boldsymbol{x})\right| \le \epsilon \vert \boldsymbol{y} - \boldsymbol{x} \vert.
\]
Moreover, we say that $f$ is WLUD on $A$ if $f$ is WLUD at every point in $A$.
\end{defn}
Using Definition \ref{def:wludn} and Definition \ref{defWLUDn1}, the natural way to define $k$ times weak local uniform differentiability (WLUD$^k$) at a point $\boldsymbol{x_0}$ or on an open subset $A$ of $\mathcal{N}^n$ is as follows.
\begin{defn}\label{defWLUDn1k}
Let $A\subset\mathcal{N}^n$ be open, let $f:A \to \mathcal{N}$, and let $\boldsymbol{x_0}\in A$ be given. Then we say that $f$ is WLUD$^k$ at $\boldsymbol{x_0}$ if $f$ is $k$-times differentiable in a neighborhood
$\Omega$ of $\boldsymbol{x_0}$ in $A$ and if for every $\epsilon>0$ in $\mathcal{N}$ there exists $\delta>0$ in $\mathcal{N}$ such that $B_{\delta}(\boldsymbol{x_0})\subset\Omega$, and
for all $\boldsymbol{\xi},\boldsymbol{\eta}\in B_{\delta}(\boldsymbol{x_0})$ we have that
\[
\left|f(\boldsymbol{\eta})-f(\boldsymbol{\xi})-\sum_{j=1}^{k} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{\xi})\cdot\nabla\right]^{j}f(\boldsymbol{\xi})\right| \le \epsilon \vert \boldsymbol{\eta} - \boldsymbol{\xi} \vert^k,
\]
where
\begin{eqnarray*}
\left[(\boldsymbol{\eta}-\boldsymbol{\xi})\cdot\nabla\right]^{j}f(\boldsymbol{\xi})&=&
\left.\left[(\eta_1-\xi_1)\frac{\partial}{\partial x_1}+\cdots+(\eta_n-\xi_n)\frac{\partial}{\partial x_n}\right]^{j}f(\boldsymbol{x})
\right|_{\boldsymbol{x}=\boldsymbol{\xi}}\\
&=&\sum_{l_{1},\ldots,l_{j}=1}^{n} \left(
\left.\frac{\partial^j f(\boldsymbol{x})}{\partial_{x_{l_{1}}}\cdots\partial_{x_{l_{j}}}}\right|_{\boldsymbol{x}=\boldsymbol{\xi}}
\prod_{m=1}^{j}\left( \eta_{l_{m}}-\xi_{l_{m}}\right) \right).
\end{eqnarray*}
Moreover, we say that $f$ is WLUD$^k$ on $A$ if $f$ is WLUD$^k$ at every point in $A$.
\end{defn}
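For instance, for $n=2$ and $j=2$, the operator above expands to
\[
\left[(\boldsymbol{\eta}-\boldsymbol{\xi})\cdot\nabla\right]^{2}f(\boldsymbol{\xi})=(\eta_1-\xi_1)^2\frac{\partial^2 f}{\partial x_1^2}(\boldsymbol{\xi})
+(\eta_1-\xi_1)(\eta_2-\xi_2)\left(\frac{\partial^2 f}{\partial x_1\partial x_2}+\frac{\partial^2 f}{\partial x_2\partial x_1}\right)(\boldsymbol{\xi})
+(\eta_2-\xi_2)^2\frac{\partial^2 f}{\partial x_2^2}(\boldsymbol{\xi}).
\]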
\begin{defn}\label{defWLUDn1infty}
Let $A\subset\mathcal{N}^n$ be open, let $f:A \to \mathcal{N}$, and let $\boldsymbol{x_0}\in A$ be given. Then we say that $f$ is WLUD$^\infty$ at $\boldsymbol{x_0}$ if $f$ is WLUD$^k$ at $\boldsymbol{x_0}$ for every
$k\in\mathbb{N}$. Moreover, we say that $f$ is WLUD$^\infty$ on $A$ if $f$ is WLUD$^\infty$ at every point in $A$.
\end{defn}
Now we are ready to state and prove the analog of Theorem \ref{thmtaylorseries1} for functions of $n$ variables.
\begin{theorem}\label{thmtaylorseriesn}
Let $A \subseteq \mathcal{N}^n$ be open, let $\boldsymbol{x_0}\in A$, and let $f : A \rightarrow \mathcal{N}$ be WLUD$^{\infty}$ at $\boldsymbol{x_0}$. For each $k\in\mathbb{N}$, let $\delta_k>0$ in $\mathcal{N}$ correspond to $\epsilon=1$ in Definition \ref{defWLUDn1k}. Assume that
\begin{eqnarray*}
\limsup_{{\tiny\begin{array}{l}j\rightarrow\infty\\l_1=1,\ldots,n\\
\vdots\\
l_j=1,\ldots,n
\end{array}}}\left(\frac{-\lambda\left(\left.\frac{\partial^j f(\boldsymbol{x})}{\partial_{x_{l_{1}}}\cdots\partial_{x_{l_{j}}}}\right|_{\boldsymbol{x}=\boldsymbol{x_0}} \right)}{j}\right)&<&\infty\\
&\mbox{ }&\\
\text{ and }
\limsup _{k\rightarrow\infty}\lambda\left(\delta_k\right)&<&\infty.
\end{eqnarray*}
Then there exists a neighborhood $U$ of $\boldsymbol{x_0}$ in $A$ such that, for any $\boldsymbol{\eta}\in U$, we have that
\[
f(\boldsymbol{\eta})=f(\boldsymbol{x_0})+\sum_{j=1}^{\infty} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0}).
\]
\end{theorem}
\begin{proof}
Let
\[
\lambda_0=\limsup_{{\tiny\begin{array}{l}j\rightarrow\infty\\l_1=1,\ldots,n\\
\vdots\\
l_j=1,\ldots,n
\end{array}}}\left(\frac{-\lambda\left(\left.\frac{\partial^j f(\boldsymbol{x})}{\partial_{x_{l_{1}}}\cdots\partial_{x_{l_{j}}}}\right|_{\boldsymbol{x}=\boldsymbol{x_0}} \right)}{j}\right).
\]
Then $\lambda_0\in\mathbb{R}$ and $\lambda_0<\infty$.
For all $k\in \mathbb{N}$, we have that $B_{\delta_k}(\boldsymbol{x_0})\subset A$, $f$ is $k$ times differentiable on $B_{\delta_k}(\boldsymbol{x_0})$, and
\[
\left|f(\boldsymbol{\eta})-f(\boldsymbol{x_0})-\sum_{j=1}^{k} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\right| \le \vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert^k \text{ for all }\boldsymbol{\eta}\in B_{\delta_k}(\boldsymbol{x_0}).
\]
Since $\limsup\limits _{k\rightarrow\infty}\lambda\left(\delta_k\right)<\infty$, there exists $t>0$ in $\mathbb{Q}$ such that $\limsup\limits _{k\rightarrow\infty}\lambda\left(\delta_k\right)<t<\infty$. Thus, there exists $N\in\mathbb{N}$ such that
\begin{equation}\label{eqtaylorn:1}
\lambda(\delta_k)<t\text{ for all }k>N.
\end{equation}
Let
$\delta>0$ in $\mathcal{N}$ be such that $\lambda(\delta)>\max\{\lambda_0, t,0\}$. It follows from (\ref{eqtaylorn:1})
that $\lambda(\delta)>\lambda(\delta_k)$ and hence $0<\delta\ll\delta_k$ for all $k>N$. Thus,
$B_{\delta}(\boldsymbol{x_0})\subset A$, $f$ is infinitely often differentiable on $B_{\delta}(\boldsymbol{x_0})$, and
\begin{equation}\label{eqtaylorn:2}
\left|f(\boldsymbol{\eta})-f(\boldsymbol{x_0})-\sum_{j=1}^{k} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\right| \le \vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert^k \ \forall \boldsymbol{\eta}\in B_{\delta}(\boldsymbol{x_0})\text{ and }\forall k>N.
\end{equation}
Let $U=B_{\delta}(\boldsymbol{x_0})$; and let
$\boldsymbol{\eta}\in U$ be given.
Then we have that $\lambda(\vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert)\ge \lambda(\delta)>\lambda_0$. We will show first that $\sum_{j=1}^{\infty} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})$ converges in $\mathcal{N}$. Since
$\lambda(\vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert)>\lambda_0$, there exists $q>0$ in $\mathbb{Q}$ such that $\lambda(\vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert)-q>\lambda_0$. Hence there exists $M\in\mathbb{N}$ such that
\[
\lambda(\vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert)-q> \frac{-\lambda\left(\left.\frac{\partial^j f(\boldsymbol{x})}{\partial_{x_{l_{1}}}\cdots\partial_{x_{l_{j}}}}\right|_{\boldsymbol{x}=\boldsymbol{x_0}} \right)}{j}
\]
for all $j>M$ and for $l_1=1, \ldots, n$, $l_2=1, \ldots, n$, \ldots, $l_j=1, \ldots, n$. It follows that
\begin{eqnarray*}
\lambda\left(
\left.\frac{\partial^j f(\boldsymbol{x})}{\partial_{x_{l_{1}}}\cdots\partial_{x_{l_{j}}}}\right|_{\boldsymbol{x}=\boldsymbol{x_0}}
\prod_{m=1}^{j}\left( \eta_{l_{m}}-x_{0,l_{m}}\right) \right)
&\ge&\lambda \left(
\left.\frac{\partial^j f(\boldsymbol{x})}{\partial_{x_{l_{1}}}\cdots\partial_{x_{l_{j}}}}\right|_{\boldsymbol{x}=\boldsymbol{x_0}}
\vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert^j\right)\\
&=&\lambda \left(
\left.\frac{\partial^j f(\boldsymbol{x})}{\partial_{x_{l_{1}}}\cdots\partial_{x_{l_{j}}}}\right|_{\boldsymbol{x}=\boldsymbol{x_0}}\right)+j\lambda\left(
\vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert\right)\\
&>&jq
\end{eqnarray*}
for all $j>M$ and for $l_1=1, \ldots, n$, $l_2=1, \ldots, n$, \ldots, $l_j=1, \ldots, n$. Thus,
\begin{eqnarray*}
\lambda\left(\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\right)&=&
\lambda\left(\sum_{l_{1},\ldots,l_{j}=1}^{n} \left(
\left.\frac{\partial^j f(\boldsymbol{x})}{\partial_{x_{l_{1}}}\cdots\partial_{x_{l_{j}}}}\right|_{\boldsymbol{x}=\boldsymbol{x_0}}
\prod_{m=1}^{j}\left( \eta_{l_{m}}-x_{0,l_{m}}\right) \right)\right)\\
&>&jq
\end{eqnarray*}
for all $j>M$; and hence
\begin{eqnarray*}
\lim_{j\rightarrow\infty}\lambda\left(\frac1{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\right)&=&
\lim_{j\rightarrow\infty}\lambda\left(\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\right)\\
&\ge&q\lim_{j\rightarrow\infty}j=\infty.
\end{eqnarray*}
Thus,
\[
\lim_{j\rightarrow\infty}\left(\frac1{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\right)=0
\]
and hence $\sum_{j=1}^{\infty} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})$ converges in $\mathcal{N}$; that is,
\[
\lim\limits_{k\rightarrow\infty}\sum\limits_{j=1}^{k} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\text{ exists in }\mathcal{N}.
\]
Taking the limit in (\ref{eqtaylorn:2}) as $k\rightarrow\infty$, we get:
\[
0\le\lim_{k\rightarrow\infty} \left|f(\boldsymbol{\eta})-f(\boldsymbol{x_0})-\sum_{j=1}^{k} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\right|\le \lim_{k\rightarrow\infty} \vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert^k,
\]
from which we obtain
\[
0\le \left|f(\boldsymbol{\eta})-f(\boldsymbol{x_0})-\lim_{k\rightarrow\infty}\sum_{j=1}^{k} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\right|\le \lim_{k\rightarrow\infty} \vert \boldsymbol{\eta} - \boldsymbol{x_0} \vert^k.
\]
Since $\lambda(\vert\boldsymbol{\eta} - \boldsymbol{x_0}\vert)\ge \lambda(\delta)>0$, we obtain that
$\lim\limits_{k\rightarrow\infty} \left|\boldsymbol{\eta} - \boldsymbol{x_0}\right|^k=0$.
It follows that
\[
0\le\left|f(\boldsymbol{\eta})-f(\boldsymbol{x_0})-\sum_{j=1}^{\infty} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0})\right|\le 0
\]
from which we infer that
\[
f(\boldsymbol{\eta})=f(\boldsymbol{x_0})+\sum_{j=1}^{\infty} \frac{1}{j!}\left[(\boldsymbol{\eta}-\boldsymbol{x_0})\cdot\nabla\right]^{j}f(\boldsymbol{x_0}).
\]
\end{proof}
\section{INTRODUCTION}
Motivated by the Callan-Rubakov effect in the context of magnetic
monopoles \cite{callan}, recent studies have examined the
possibility that cosmic strings can also catalyze
baryon-number violation with strongly enhanced cross sections.
It has been shown that the wave function of a fermion scattering
off a cosmic string can acquire a large amplification factor near
the core of the string, leading to enhancement of the processes
that violate baryon number inside the string \cite{alford,perkin}.
The catalysis processes that have been studied include those
mediated by scalar fields and by the grand-unified X and Y gauge
bosons in the string core. Although strings, in contrast to
monopoles, have no magnetic fields outside, fermions can interact
quantum-mechanically with the long-range gauge fields via the
Aharonov-Bohm effect. Depending on the flux of the string and
the core model used, the enhanced catalysis cross sections (per
length) can be of the scale of strong interactions in comparison to
the much smaller geometrical cross section $\sim \Lambda_{GUT}^{-1}$,
where $\Lambda_{GUT} \sim 10^{16}$ GeV. In the early universe when
the density of cosmic strings is high, such processes can play
important roles, washing out any primordially-generated baryon
asymmetry \cite{RB1}, or conceivably even generating the baryon to
entropy ratio observed today.
Cosmic strings can be produced during certain phase transitions
when a gauge group G is broken down to a subgroup H by the vacuum
expectation value of some scalar field $\phi$. The topological
criterion for the existence of a string is a nontrivial fundamental
homotopy group of the vacuum manifold G/H, denoted by
$\pi_1(\hbox{G}/\hbox{H})$. For a connected and simply-connected
G, the general construction of the scalar field at large distances
from the string is given by
\begin{equation}
\phi(\theta) = g(\theta) \phi_0\,,
\quad g(\theta) = e^{i\tau\theta}\,.
\end{equation}
Here $\tau$ is some generator of G, $\theta$ is the azimuthal
angle measured around the string, and $g(0)$ and $g(2\pi)$
belong to two disconnected pieces of H. In the papers
referenced in the previous paragraph, the scalar field
responsible for the formation of the string is taken to
have the simple form $\phi(\theta) = e^{i\tau\theta} \phi_0
= e^{i\theta} \phi_0$. As a result, a non-Abelian string can be
modeled by a U(1) vortex, and the scattering of fermions in the
background fields of the string is governed by the Abelian Dirac
equation. In general however, for a given $\phi_0$, the generator
$\tau$ can be chosen such that $e^{i\tau\theta} \phi_0$ ``twists''
around the string in more complicated fashion than a phase
$e^{i\theta}$ times $\phi_0$. This gives rise to dynamically
different strings which are intrinsically non-Abelian
\cite{leandros}. One expects the complexity and rich
structure of such strings to lead to interesting effects
on fermions traveling around them. In particular, we will
demonstrate in this paper that for certain $\tau$'s, the
twisting of $\phi(\theta)$ can result in mixing of lepton
and quark fields, providing a mechanism for baryon number
violations distinct from the processes in Abelian strings
studied previously.
Since no strings are formed in the minimal SU(5) model, we choose
the gauge group SO(10) \cite{so10} in this paper as an example of
grand unified theories in investigating the B-violating process.
We will construct string configurations, solve numerically for the
undetermined functions, and study the baryon catalysis in the SO(10)
theory, although we expect such processes to occur in other
non-Abelian theories as well. In SO(10), stable strings can
be formed when Spin(10) --- the simply-connected covering group
of SO(10) --- is broken down to SU(5)$\times {\cal Z}_2$ by the
vacuum expectation value of a Higgs field $\phi$ in the {\bf 126}
representation \cite{kibble}. The generators of SO(10) transform
as the adjoint {\bf 45}, which transforms as {\bf 24} + {\bf 1}
+ {\bf 10} + $\bf{\bar{10}}$ under SU(5). The {\bf 24} and {\bf 1}
generate the subgroup SU(5)$\times$U(1), where the U(1) includes
simultaneous rotations in the 1-2, 3-4, 5-6, 7-8, and 9-10 planes.
We are interested in the generators outside SU(5) because to have
noncontractible loops at all, $g(\theta)$ in Eq.~(1) has to be
outside the unbroken H for some $\theta$. We will refer to the
U(1) generator as $\tau_{\rm all}$ and to any of the other 20
basis generators outside SU(5) as $\tau_1$; we name the
associated strings as string-$\tau_{\rm all}$ and string-$\tau_1$,
respectively. As we shall see, the scalar field of string-$\tau_1$
causes mixing of leptons and quarks while string-$\tau_{\rm all}$
is effectively Abelian and no such mixing occurs. Properties of
string-$\tau_{\rm all}$ such as the string mass per unit length
\cite{everett} and its superconducting capability in terms of
fermion zero modes \cite{witten} have been studied. We will
compare it with string-$\tau_1$, which will be the main subject
of study of this paper.
In Sec.~II, we give more detailed discussion of the Higgs {\bf 126}
and the breaking of Spin(10) to SU(5)$\times {\cal Z}_2$, and
elaborate on the B-violating mechanism due to the nontrivial
winding of the Higgs field. In Sec.~III, we write down an
{\it ansatz\ } for the field configuration of each string and
derive the corresponding equations of motion. The numerical
solutions and the energy of the strings are presented in Sec.~IV,
where we find that $\tau_1$-strings have lower energy than
$\tau_{\rm all}$-strings, probably for the entire range of the
parameters in the theory. Having shown that such strings are
energetically favorable, we turn to the scattering problem in
Sec.~V, where the Dirac equation in the background fields of
the strings is solved, and the differential cross section for
the B-violating processes in string-$\tau_1$ is calculated.
We also comment on the role of the self-adjoint parameters and
compute their values using our string solutions. To establish
a common notation and to facilitate reading of this paper,
we include in the Appendix a discussion about the relevant
aspects of the spinor representation {\bf 16} of SO(10), which
accommodates a single generation of left-handed fermions.
\section{SO(10) strings}
There is considerable freedom in the breakings of SO(10) down
to the low energy gauge group SU(3)$\times$U(1). Two commonly
studied examples include the breaking via an intermediate SU(5),
SO(10)$\rightarrow$SU(5), and the one via an intermediate
Pati-Salam SU(4)$\times$SU(2)$_L\times$SU(2)$_R$ \cite{pati}.
Details of the symmetry breaking patterns and the Higgs fields
inducing the breakings can be found in Ref.~6 and the papers
by Slansky and Rajpoot \cite{slansky}. Kibble, Lazarides and
Shafi argued that the strings formed during the phase transition
SO(10) $\rightarrow$SU(4)$\times$SU(2)$_L\times$SU(2)$_R$ become
boundaries of domain walls \cite{kibble}. Thus in this paper we
choose the SU(5) breaking pattern instead for its simplicity.
More precisely, we study strings formed when
Spin(10)$\rightarrow$SU(5)$\times{\cal Z}_2$ by the vacuum
expectation value of a Higgs {\bf 126} $\phi$. The nontrivial
element of ${\cal Z}_2$ corresponds to rotation by 2$\pi$ in SO(10).
The homotopy group $\pi_1(\hbox{Spin(10)}/\hbox{SU(5)}\times
{\cal Z}_2)$ is ${\cal Z}_2\,$; therefore a ${\cal Z}_2$ string
is formed during this phase transition. The subsequent symmetry
breakings can be implemented by the adjoint {\bf 45} of SO(10) and
the fundamental {\bf 10} in the usual fashion:
\begin{eqnarray}
\hbox{Spin(10)} &\stackrel{\bf 126}{\longrightarrow}&
\hbox{SU(5)}\times{\cal Z}_2 \nonumber\\
& \stackrel{\bf 45}{\longrightarrow} &
\hbox{SU(3)}\times\hbox{SU(2)}\times\hbox{U(1)}
\times{\cal Z}_2 \nonumber\\
& \stackrel{\bf 10}{\longrightarrow} &
\hbox{SU(3)}\times\hbox{U(1)}_{\hbox{em}}\times{\cal Z}_2\,.
\end{eqnarray}
This ${\cal Z}_2$ string survives all the symmetry breakings
since ${\cal Z}_2$ is preserved at low energies.
The {\bf 126} representation consists of fifth-rank
antisymmetric tensors satisfying the self-duality condition
\begin{equation}
\phi_{i_1...i_5} = \frac{i}{5!} \epsilon_{i_1...i_{10}}
\phi_{i_6...i_{10}}.
\end{equation}
The component which acquires an expectation value $\langle\phi
\rangle$ transforms as an SU(5) singlet, and to write it down
explicitly, we first specify how the SU(5) subgroup is embedded
in SO(10). The fundamental representation of SO(10) consists of
10$\times$10 matrices, which can be labeled by an index $i = 1,
\ldots ,10\,.$ The generators of SO(10) in this representation
can be written as antisymmetric, purely imaginary matrices. The
generators of SU(5) in the fundamental representation are hermitian,
traceless 5$\times$5 matrices which can be written as
\begin{equation}
\tau_{\alpha \beta} = S_{\alpha \beta} + iA_{\alpha \beta}\,,
\end{equation}
where $\alpha,\beta =1,..,5$ label the matrix elements, and $S, A$
are real 5$\times$5 matrices, representing the real and imaginary
parts of $\tau$. Hermiticity and tracelessness of $\tau$ require
$S_{\alpha \beta} = S_{\beta \alpha}, A_{\alpha \beta} =
-A_{\beta\alpha}$, and $TrS=0$. A natural way to embed SU(5)
in SO(10) is to treat five-dimensional complex vectors as
ten-dimensional real vectors, {\it i.e.} replace the paired
indices ($\alpha, a$), where $\alpha = 1, \ldots ,5$ label a
five-dimensional vector and $a=1,2$ label its real and imaginary
parts, by the index $i,\,i=1, \ldots ,10$. Then, the generators
of the subgroup SU(5) of SO(10) can be expressed as
\begin{equation}
\tau_{\alpha a,\,\beta b} = i( A_{\alpha \beta}I_{ab} +
S_{\alpha \beta}M_{ab})\,,
\end{equation}
where $I$ is the 2$\times$2 identity matrix and $M = i \sigma_2\,,
\sigma_2$ being the second 2$\times$2 Pauli matrix. One can
convince oneself that in this $(\alpha, a)$ notation, the
rank-five antisymmetric Levi-Civita tensor
$\epsilon_{\alpha_1 \alpha_2 \alpha_3 \alpha_4 \alpha_5 }$ which
transforms as an SU(5) singlet in the SU(5) notation becomes
\begin{equation}
i^{f(a_1...a_5)} \epsilon_{\alpha_1\alpha_2\alpha_3
\alpha_4\alpha_5}\,,
\end{equation}
where $f(a_1 \ldots a_5)$ is defined to equal the number of the $a_i$
that take the value 2. It is also straightforward to check that
this expression satisfies the self-duality condition (Eq.~(3)).
Thus $\langle\phi\rangle$ is written as
\begin{equation}
\langle \phi_{\alpha_1 a_1...\alpha_5 a_5} \rangle
= \mu\ i^{f(a_1...a_5)}
\epsilon_{\alpha_1 \alpha_2 \alpha_3 \alpha_4 \alpha_5 }\,,
\end{equation}
where $\mu$ is a parameter.
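For example, the component with $(\alpha_1,\ldots,\alpha_5)=(1,2,3,4,5)$ and $(a_1,\ldots,a_5)=(1,2,1,1,2)$ has $f(a_1 \ldots a_5)=2$, and hence takes the value $\mu\, i^{2}\epsilon_{12345}=-\mu$.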
Some words about our notation. The tensor indices $i_1,\ldots ,i_5$
of $\phi_{i_1 \ldots i_5}$ will be suppressed for convenience and
legibility whenever no ambiguity should arise. In the expressions
like $\tau\phi$ and $e^{i\tau\theta} \phi$ where $\tau$ operates
on $\phi$, $\tau$ is understood to be in the same representation
of $\phi$, {\it i.e.} $\tau$ is the short-hand for
\FL
\begin{equation}
\tau_{i_1 \ldots i_5j_1 \ldots j_5} =
\tau_{i_1j_1} \delta_{i_2j_2} \ldots \delta_{i_5j_5}
+ \delta_{i_1j_1} \tau_{i_2j_2} \ldots \delta_{i_5j_5}
+ \ldots
\end{equation}
With the symmetry breaking Spin(10)$\rightarrow$SU(5)$
\times{\cal Z}_2$, strings are formed. At spatial infinity,
the general form of $\phi$ is given by Eq.~(1). For the energy
to be finite, the co\-variant derivative of $\phi$,
$D_\mu \phi \equiv \partial_\mu \phi + eA_\mu \phi\ $, has to
vanish at spatial infinity; therefore the gauge field $A_\mu$
takes the form $A^\theta = i\frac{1}{er} \tau$, $A^r = 0\,,$ as
$r \rightarrow \infty$. In the core of the string, there is a
magnetic flux $\oint \vec{A} \cdot d\vec{l} = \frac{2\pi}{e}\tau$
pointing in the direction of $\tau$ in group space. Strings
carrying flux pointing in different directions in group space
are topologically equivalent since the only nontrivial winding
number here is one, but dynamically they can differ. Because
the scalar field $\phi(\theta)$ varies with $\theta$, the
embedding of the unbroken subgroup SU(5) in SO(10) outside
the string also varies with $\theta$. More precisely,
the generators $\tau^a_\theta, a=1, \ldots ,24$ of the unbroken
SU(5) at $\theta$ are related to the generators $\tau^a_0$ of
the unbroken SU(5) at $\theta=0$ by the similarity transformation
\begin{equation}
\tau^a_{\theta}= g(\theta)\tau_0^a g^{-1}(\theta)\,,\ \
g(\theta)=e^{i\tau\theta}\,.
\end{equation}
Consequently, the fermion fields which transform as {\bf 1},
$\bf{\bar 5}$ and {\bf 10} under SU(5) are also rotated as
one goes around the string. How the fields mix depends on
which direction in group space $\phi(\theta)$ winds.
The SO(10) generators can be written as 10$\times$10 matrices
of the form $(\tau^{ab})_{ij} = -i(\delta^a_i \delta^b_j
- \delta^b_i\delta^a_j)\,,$ where $a,b$ label the group
indices, $i,j$ label the matrix elements, and $a,b,i,j$ all
run from 1 to 10. In this notation $\tau_{\rm all}$ is given by
\begin{equation}
\tau_{\rm all}\equiv \frac{1}{5} (\tau^{12} +
\tau^{34} + \ldots +\tau^{9\,10})\,,
\end{equation}
where the factor of 1/5 is included for $\phi(\theta)$ to have a
$2\pi$ rotational period. It takes a little more effort to write
down the $\tau_1$'s. Let us first write the SU(5) generators
specified by Eq.~(5) in terms of $\tau^{ab}$ given above.
The four diagonal generators are trivial. For the other twenty
generators, one can group the 10$\times$10 space into 2$\times$2
blocks, and write the 45 $\tau^{ab}$'s as $\tau^{2\alpha-1,\,
2\beta-1}, \tau^{2\alpha-1,\,2\beta}, \tau^{2\alpha,\, 2\beta-1}$
and $\tau^{2\alpha, 2\beta}$, where $\alpha, \beta$ both run from
1 to 5. Then it is not hard to see that the twenty linear
combinations
\begin{eqnarray}
&& \frac{1}{2} (\tau^{2\alpha-1,\,2\beta}
-\tau^{2\alpha,\,2\beta-1})\,,\nonumber\\
&& \frac{1}{2} (\tau^{2\alpha-1,\,2\beta-1}
+\tau^{2\alpha,\,2\beta})\,,\quad \alpha < \beta
\end{eqnarray}
are all of the form of Eq.~(5), and therefore can be chosen
to be the twenty off-diagonal generators of SU(5). Note that
the superscripts $\alpha, \beta$ above label the group indices
while the subscripts $\alpha, \beta$ in Eq.~(5) label the
matrix elements. The twenty $\tau_1$'s outside SU(5) then can
be expressed by the other twenty linear combinations as
\begin{eqnarray}
\tau_1 &\equiv& \frac{1}{2}(\tau^{2\alpha-1,\,2\beta}
+\tau^{2\alpha,\,2\beta-1})\,,\nonumber\\
&& \frac{1}{2}(\tau^{2\alpha-1,\,2\beta-1} -
\tau^{2\alpha,\,2\beta})\,,\quad \alpha < \beta\,.
\end{eqnarray}
Besides their SU(5) group properties, the linear combinations
above can also be classified under the group SO(4), which is
locally isomorphic to SU(2)$\times$SU(2). For a given $\alpha$
and $\beta$ where $\alpha < \beta$, the two generators of
Eq.~(11) plus the diagonal
\begin{equation}
\frac{1}{2} (\tau^{2\alpha-1,\,2\alpha}-\tau^{2\beta-1,\,2\beta})
\end{equation}
can be easily shown to obey the SU(2) algebra. Similarly,
the two generators of Eq.~(12) plus
\begin{equation}
\frac{1}{2} (\tau^{2\alpha-1,\,2\alpha}+\tau^{2\beta-1,\,2\beta})
\end{equation}
generate another SU(2). Thus, for a given $\alpha$ and $\beta$
$(\alpha < \beta)$, the six generators of Eqs.~(11-14) generate
rotations in the 4-dimensional space spanned by vectors in the
$2\alpha-1, 2\alpha, 2\beta-1, 2\beta$ directions.
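As a quick check of this algebra (a small numerical sketch of our own, not part of the original analysis), one can construct the $\tau^{ab}$ matrices explicitly and verify that the two generators of Eq.~(11) together with the diagonal generator of Eq.~(13) close into an SU(2), here for $\alpha=1$, $\beta=2$; the same matrices also reproduce the value $Tr(\tau_{\rm all}^2)=2/5$ quoted after Eq.~(21) below:
\begin{verbatim}
# Check [J1, J2] = i J3 for the alpha=1, beta=2 block of SO(10),
# and the trace Tr(tau_all^2) = 2/5.
import numpy as np

def tau(a, b, n=10):
    # (tau^{ab})_{ij} = -i (delta^a_i delta^b_j - delta^b_i delta^a_j)
    m = np.zeros((n, n), dtype=complex)
    m[a - 1, b - 1] = -1j
    m[b - 1, a - 1] = 1j
    return m

J1 = 0.5 * (tau(1, 4) - tau(2, 3))   # Eq. (11), first combination
J2 = 0.5 * (tau(1, 3) + tau(2, 4))   # Eq. (11), second combination
J3 = 0.5 * (tau(1, 2) - tau(3, 4))   # Eq. (13), diagonal generator

print(np.allclose(J1 @ J2 - J2 @ J1, 1j * J3))   # True

tau_all = sum(tau(2*k - 1, 2*k) for k in range(1, 6)) / 5.0
print(np.trace(tau_all @ tau_all).real)          # 0.4 = 2/5
\end{verbatim}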
\section{Field Configurations}
The relevant part of the Lagrangian for the SO(10) theory is
given by
\begin{equation}
{\cal L} = \frac{1}{4} trF_{\mu \nu} F^{\mu \nu} +
(D_\mu \phi )^\ast (D^\mu \phi) - V(\phi)
\end{equation}
where $F_{\mu \nu} = -iF_{\mu \nu}^a \tau_a\,, A_{\mu} = -iA_{\mu}^a
\tau_a\,, F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu +
e[A_\mu ,A_\nu ]\,, D_\mu = \partial_\mu + e A_\mu\ $;
$A^a_\mu, a=1, \ldots ,45$, are the SO(10) gauge fields and
$\phi$ is the Higgs {\bf 126}. The most general gauge-invariant
and renormalizable potential $V(\phi)$ contains all the distinct
contractions of two and four $\phi$'s:
\FL
\begin{eqnarray}
V(\phi) & = & v_1 \phi_{i_1 \ldots i_5} \phi^{\ast}_{i_1 \ldots i_5}
+ v_2 (\phi_{i_1 \ldots i_5} \phi^{\ast}_{i_1 \ldots i_5})^2
\nonumber\\
& + & v_3 \phi_{i_1 n_2 n_3 n_4 n_5}
\phi^\ast_{j_1 n_2 n_3 n_4 n_5}
\phi_{i_1 \ell_2 \ell_3 \ell_4 \ell_5}
\phi^\ast_{j_1 \ell_2 \ell_3 \ell_4 \ell_5} \nonumber\\
& + & v_4 \phi_{i_1 i_2 n_3 n_4 n_5}
\phi^\ast_{j_1 j_2 n_3 n_4 n_5}
\phi_{i_1 i_2 \ell_3 \ell_4 \ell_5}
\phi^\ast_{j_1 j_2 \ell_3 \ell_4 \ell_5} \nonumber\\
& + & v_5 \phi_{i_1 j_2 n_3 n_4 n_5}
\phi^\ast_{j_1 i_2 n_3 n_4 n_5}
\phi_{i_1 i_2 \ell_3 \ell_4 \ell_5}
\phi^\ast_{j_1 j_2 \ell_3 \ell_4 \ell_5} \nonumber\\
& + & v_6 \phi_{i_1 i_2 j_3 n_4 n_5}
\phi^\ast_{j_1 j_2 i_3 n_4 n_5}
\phi_{i_1 i_2 i_3 \ell_4 \ell_5}
\phi^\ast_{j_1 j_2 j_3 \ell_4 \ell_5}\,.\ \
\end{eqnarray}
In writing down the $v_3$ through $v_6$ terms above, one has to
consider two things: (1) the possible ways to contract the
indices, and (2) which $\phi$'s are to be complex conjugated.
One can deal with (1) without the complication of (2) by adopting
an equivalent real 252 representation for $\phi$ because a complex,
self-dual 126-dimensional tensor can be thought of as a real,
252-dimensional tensor by dropping the self-duality condition
and taking the real parts of the resulting complex, 252-dimensional
tensor. One can see there are only four distinct terms and they
are terms $v_3$ through $v_6$ in Eq.~(16) above. Then when $\phi$
is taken to be complex, two out of the four $\phi$'s have to be
complex conjugated to make the potential real. There are three
possibilities: $\phi\phi^{\ast} \phi\phi^{\ast},
\ \phi^{\ast}\phi\phi\phi^{\ast},
\ \phi\phi \phi^{\ast}\phi^{\ast}\ $, for each of the four
contractions $\phi\phi\phi\phi$ when $\phi$ is real. But
after the self-duality condition is applied, one can show
that only one of the three terms is actually independent.
The Euler-Lagrange equations of motion for $\phi$ and $A_\mu$
are given by
\begin{eqnarray}
&& D_\mu D^\mu \phi = -\frac{\partial V}{\partial \phi^\ast}\,,
\label{eq:EOMI}\\
&& Tr(\tau^{a\,2})(\partial_\mu F^{a\,\mu \nu} +
ef^{abc} A_\mu ^b F^{c\,\mu\nu}) \nonumber\\
&& \qquad = ie\{(D^\nu \phi)^\ast (\tau^a \phi) -
(\tau^a \phi)^\ast (D^\nu \phi)\} \,,
\end{eqnarray}
where $a$ is not summed over, and where a basis has been chosen
so that $Tr(\tau^a \tau^b)=\delta^{ab} Tr(\tau^{a\,2})$.
We construct for string-$\tau_{\rm all}$ a solution of
the following form:
\newline {\em Ansatz I\ }:
\begin{eqnarray}
\phi & = & f(r) e^{i\tau_{\rm all}\theta} \phi_0
= f(r) e^{i\theta} \phi_0\,, \nonumber\\
A^\theta & = & i\frac{g(r)}{er} \tau_{\rm all}\,,
\label{eq:ansI}\\
A^r & = & 0\,, \nonumber \
\end{eqnarray}
where $\phi_0 \equiv \langle\phi\rangle$ as defined in
Eq.~(7). The boundary conditions on the functions are
\begin{eqnarray}
f(0) = 0\,,\qquad &
f(r) \stackrel{r\rightarrow \infty}{\longrightarrow} \mu\,,
\nonumber\\
g(0) = 0\,,\qquad &
g(r) \stackrel{r\rightarrow\infty}{\longrightarrow} 1\,;
\end{eqnarray}
$V(\phi)$ is minimized at $f=\mu$. Inserting this {\it ansatz\ }
into the equations of motion and using the relations $\tau_{\rm all}
\tau_{\rm all} \phi_0 = \phi_0\ $ and $(\tau_{\rm all}\phi_0)^\ast
(\tau_{\rm all}\phi_0) = \phi_0^\ast \phi_0 = 3840 \equiv N$,
we obtain two coupled differential equations for $f(r)$ and $g(r)$:
\begin{eqnarray}
f^{\prime\prime} + \frac{1}{r} f^\prime
- \frac{(1-g)^2}{r^2} f & = & f(v_1+2Nv_2 f^2)
\,,\nonumber\\
Tr(\tau_{\rm all}^2) \left( g^{\prime\prime} - \frac{1}{r}
g^\prime \right) & = & -2N e^2 (1-g) f^2 \,,
\end{eqnarray}
where the prime denotes differentiation with respect to $r$, and
$Tr(\tau_{\rm all}^2) = \frac{2}{5}$ from Eq.~(10). An expansion
of $f(r)$ and $g(r)$ in powers of $r$ around the origin reveals
that $f(r)$ is odd in $r$ with a linear leading term, whereas
$g(r)$ is even in $r$ with a quadratic leading term.
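Indeed, near $r=0$, where $g\rightarrow0$ and the right-hand sides of Eq.~(21) are subleading, the equations reduce to $f^{\prime\prime}+f^\prime/r-f/r^2=0$ and $g^{\prime\prime}-g^\prime/r=0$, whose solutions regular at the origin behave as $f\propto r$ and $g\propto r^2$.
For readers who wish to reproduce profiles of the type presented in Sec.~IV, the following is a rough numerical sketch (our own, not the code used for this paper) that solves Eq.~(21) as a two-point boundary-value problem; the parameter values are arbitrary illustrative choices, rescaled so that all coefficients are of order one:
\begin{verbatim}
# Solve the string-tau_all profile equations (21) with SciPy.
import numpy as np
from scipy.integrate import solve_bvp

N, tr = 3840.0, 0.4        # phi0*.phi0 and Tr(tau_all^2)
mu = 1.0                   # illustrative value of the VEV parameter
v2 = 1.0 / (2.0 * N)       # so that v1 + 2N v2 f^2 = f^2 - mu^2
v1 = -mu**2
e2 = tr / (2.0 * N)        # so that 2N e^2 / tr = 1

def rhs(r, y):             # y = (f, f', g, g')
    f, fp, g, gp = y
    fpp = -fp/r + (1 - g)**2 * f/r**2 + f*(v1 + 2*N*v2*f**2)
    gpp = gp/r - (2*N*e2/tr) * (1 - g) * f**2
    return np.vstack([fp, fpp, gp, gpp])

def bc(ya, yb):            # f(0)=g(0)=0; f -> mu, g -> 1 far away
    return np.array([ya[0], ya[2], yb[0] - mu, yb[2] - 1.0])

# start slightly off r=0 to avoid the coordinate singularity
r = np.linspace(1e-3, 20.0, 400)
y0 = np.vstack([np.tanh(r), 1.0/np.cosh(r)**2,
                1.0 - np.exp(-(r/3.0)**2),
                (2.0*r/9.0)*np.exp(-(r/3.0)**2)])
sol = solve_bvp(rhs, bc, r, y0)
print(sol.status)          # 0 on success
\end{verbatim}
One can then check that the computed $f$ is approximately linear and $g$ approximately quadratic near the origin, consistent with the expansion above.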
Inserting {\it Ansatz I\ } for string-$\tau_{\rm all}$ into the
Lagrangian gives
\begin{eqnarray}
-{\cal L}^{\rm all} &=& \frac{Tr(\tau_{\rm all}^2)}{2e^2 r^2}
g^{\prime\,2}
+ N f^{\prime\,2} + N \frac{(1-g)^2}{r^2} f^2 \nonumber\\
&& + N(v_1 f^2 + Nv_2 f^4)\,.
\end{eqnarray}
As a consistency check, note that the equations of motion
obtained by varying ${\cal L}^{\rm all}$ with respect to the
functions $g$ and $f$ are identical to those in Eq.~(21).
Note that the parameters $v_3$ through $v_6$ in the potential
$V$ are absent from Eq.~(21) and ${\cal L}^{\rm all}$ above.
This is because whenever one index of a given $\phi$ is
contracted with one index of another $\phi$, this index is
summed over from 1 through 10, or in the $(\alpha, a)$ notation
discussed earlier, from $\alpha = 1$ through 5 and $a=1,2$. For
a given $\alpha$, the term with $a=2$ by definition has an extra
factor of $i^2=-1$ compared to the term with $a=1$. These two terms
cancel each other when they are added. Because this is true for
every $\alpha$, the third through the sixth terms in $V$ vanish
identically for the string-$\tau_{\rm all}$ {\it ansatz}.
To construct an {\it ansatz\ } for string-$\tau_1$, we need to
consider separately the two sets of generators in Eq.~(12),
which will be referred to as
\begin{eqnarray}
\tau_{1+} &=& \frac{1}{2}(\tau^{2\alpha-1,\,2\beta}
+\tau^{2\alpha,\,2\beta-1})\,, \nonumber\\
\tau_{1-} &=& \frac{1}{2}(\tau^{2\alpha-1,\,2\beta-1}
-\tau^{2\alpha,\,2\beta})\,,
\ \ \alpha < \beta\,.
\end{eqnarray}
As we shall see, it is sufficient to derive the equations of motion
for an {\it ansatz\ } based on a generator of the form $\tau_{1+}$.
By a simple redefinition, it will then be possible to construct
an {\it ansatz\ } based on a generator of the form $\tau_{1-}$.
For now, we consider the case when $\tau_1$ has the form $\tau_{1+}$.
The simple extension of {\it Ansatz I} with $\tau_{\rm all}$
replaced by $\tau_1$ does not work for string-$\tau_1$.
The problem arises from the term $\tau_1\tau_1 \phi$ on the
left-hand side of Eq.~(17) in which a new tensor $\phi_0^A$,
\begin{equation}
\tau_1\tau_1 \phi_0 = \phi^A_0 \,,
\end{equation}
is generated, where
\FL
\begin{equation}
\phi^A_{0\,i_1 \ldots i_5} \equiv
\left\{ \begin{array}{ll}
\phi_{0\,i_1 \ldots i_5}\,,\ &
\mbox{if two indices take the values} \\
& \mbox{$(2\alpha-1, 2\beta-1)$
or $(2\alpha, 2\beta)$}\,,\\
0 \,, & \mbox{otherwise}.
\end{array} \right.
\end{equation}
As a result, the differential equations for $g(r)$ and $f(r)$
are satisfied only if $g(r)=1$ or $f(r)=0$ everywhere, which
is not consistent with the boundary conditions given by Eq.~(20).
(Note that the solution $g=1$ and $f=\mu$ is the vacuum field
configuration expressed in a singular gauge.)
We construct a nontrivial solution for string-$\tau_1$ by
replacing $f(r)\phi_0$ and $\tau_{\rm all}$ in {\it Ansatz I}
with $(f_1(r)\phi_0 + f_2(r)\phi^A_0)$ and $\tau_1$ respectively.
Note that $\phi_0$ is not orthogonal to $\phi^A_0$ because
$\phi^A_{0\,i_1 \ldots i_5} \phi^\ast_{0\,i_1 \ldots i_5} \neq 0$.
Therefore instead of expanding $\phi$ in $\phi_0$ and $\phi^A_0$,
we will use the more convenient basis $\phi^A_0$ and $\phi^B_0$
where
\begin{equation}
\phi^B_0 \equiv \phi_0 - \phi^A_0
\end{equation}
and $\phi^B_0$ is orthogonal to $\phi^A_0$:
\begin{equation}
\phi^A_{0\,i_1 \ldots i_5} \phi^{B\,\ast}_{0\,i_1 \ldots i_5}
= 0\,.
\end{equation}
{}From the definition of $\phi^A_0$ (Eq.~(25)) and the properties of
$\phi_0$, one can see that
\FL
\begin{equation}
\phi^B_{0\, i_1 \ldots i_5} =
\left\{ \begin{array}{ll}
\phi_{0\,i_1 \ldots i_5}\,,\ &
\mbox{if two indices take the values} \\
& \mbox{$(2\alpha-1, 2\beta)$ or $(2\alpha, 2\beta-1)$} \,,\\
0 \,, & \mbox{otherwise}
\end{array} \right.
\end{equation}
and $\phi^B_0$ is annihilated by $\tau_1$:
\begin{equation}
\tau_1 \phi^B_0 = 0\,.
\end{equation}
The solution constructed for string-$\tau_1$ is
\newline {\em Ansatz II\ }:
\begin{eqnarray}
\phi & = & e^{i\tau_1 \theta} \left\{ f_o(r) \phi^A_0 +
f_e(r) \phi^B_0 \right\} \,, \nonumber\\
A^\theta & = & i\frac{g(r)}{er} \tau_1 \,, \\
A^r & = & 0\,, \nonumber
\end{eqnarray}
where, as will become clear in the next two paragraphs, the
functions $f_o(r)$ and $f_e(r)$ are named after their odd
and even parities in $r$.
At the origin, we require the fields to be regular. Since
$\phi^B_0$ is left invariant by $e^{i\tau_1\theta}$ (Eq.~(29))
but $\phi^A_0$ is not, at the origin $f_e(0)$ can be any
constant but $f_o(0)$ has to vanish. At large $r$,
the scalar field $\phi$ has to take the form
\begin{equation}
\phi \stackrel{r\rightarrow \infty}{\longrightarrow} \mu
\ e^{i\tau_1 \theta} \phi_0 = \mu\ e^{i\tau_1 \theta}
(\phi^A_0 + \phi^B_0)
\end{equation}
for the unbroken gauge group to be SU(5), so both $f_o(r)$
and $f_e(r)$ approach $\mu$ at large $r$. The boundary
conditions on the functions are
\begin{eqnarray}
& f_o(0)=0\,,\qquad &
f_o(r) \stackrel{r\rightarrow \infty}{\longrightarrow} \mu\,,
\nonumber\\
& f_e(0)= a_0\,,\qquad &
f_e(r) \stackrel{r\rightarrow \infty}{\longrightarrow} \mu\,,
\nonumber\\
& g(0)=0\,,\qquad &
g(r)\stackrel{r\rightarrow \infty}{\longrightarrow} 1\,,
\end{eqnarray}
where $a_0$ is a constant.
The equations of motion for $\phi$ and $A_\mu$ are closed when
the fields take the form in {\it Ansatz II\ }. We obtain three
coupled differential equations for $f_o(r),f_e(r)$ and $g(r)$.
The algebra involved in extracting these three equations, however,
is considerably more tedious than in the $\tau_{\rm all}$ case
mainly because the forms of $\phi^A_0, \phi^B_0$ and $\tau_1$ are
less symmetric. We will not present the algebra involved and simply
quote the results:
\FL
\begin{eqnarray}
f_e^{\prime\prime} + \frac{1}{r} f_e^\prime
& = & f_e \left\{ v_1 + N v_2 (f_o^2 + f_e^2)
-\frac{N}{25} e^2 \lambda_3 (f_o^2 - f_e^2) \right\}
\nonumber\\
f_o^{\prime\prime} + \frac{1}{r} f_o^\prime
& - & \frac{(1-g)^2}{r^2} f_o \nonumber\\
& = & f_o \left\{ v_1 + N v_2 (f_o^2 + f_e^2)
+ \frac{N}{25} e^2 \lambda_3 (f_o^2 - f_e^2) \right\}
\nonumber
\end{eqnarray}
\FL
\begin{equation}
Tr(\tau_1^2) \left( g^{\prime\prime} -
\frac{1}{r} g^\prime \right) = -N e^2 (1-g) f_o^2 \,,
\end{equation}
where $e^2 \lambda_3 \equiv v_3 + \frac{v_4}{4} + \frac{v_5}{4}
+ \frac{v_6}{12}$, and $Tr(\tau_1^2)=1$ from Eq.~(12). An expansion
of $g, f_o$ and $f_e$ in powers of $r$ around the origin gives
\begin{eqnarray}
f_o(r) & = & a_1 r + a_3 r^3 + \ldots \,,\nonumber\\
f_e(r) & = & a_0 + a_2 r^2 + \ldots \,,\nonumber\\
g(r) & = & b_2 r^2 + b_4 r^4 + \ldots \,,
\end{eqnarray}
where the coefficients of all the higher terms are related to
$a_0, a_1$ and $b_2$ recursively. The function $f_o$ is indeed
odd and $f_e$ even in $r$ as claimed earlier.
Inserting {\it Ansatz II\ } for string-$\tau_1$ into the
Lagrangian gives
\FL
\begin{equation}
-{\cal L}^1 = \frac{Tr(\tau_1^2)}{2e^2 r^2} g^{\prime\,2}
+ \frac{N}{2} \left( f_e^{\prime\,2} + f_o^{\prime\,2} \right)
+ \frac{N}{2} \frac{(1-g)^2}{r^2} f_o^2 + V_{ans}
\end{equation}
where
\begin{eqnarray}
V_{ans} &=& \frac{N}{2} \left\{ v_1 (f_o^2 + f_e^2)
+ \frac{N}{2} v_2 (f_o^2 + f_e^2)^2 \right. \nonumber\\
&& \left. +\frac{N}{50} e^2 \lambda_3 (f_o^2 - f_e^2)^2 \right\}\,.
\end{eqnarray}
Here again, note that the equations of motion obtained
by varying ${\cal L}^1$ with respect to the functions
$g, f_o$ and $f_e$ are identical to those in Eq.~(33).
Now let us consider the other case when $\tau_1$ has the form of
$\tau_{1-}$. One can show that Eq.~(24) now is $\tau_1\tau_1\phi_0
=\phi^B_0$, and instead of $\tau_1 \phi^B_0=0$, one has
$\tau_1 \phi^A_0=0$. Therefore by switching the definitions of
$\phi^A_0$ and $\phi^B_0$ in Eqs.~(25) and (28), all the equations
between (24) and (32) are preserved, and one can show that the
equations of motion are unchanged. We conclude that {\it Ansatz II}
applies to all twenty $\tau_1$'s, where for $\tau_{1+}$, $\phi^A_0$
and $\phi^B_0$ are defined by Eqs.~(25) and (28) respectively, but
for $\tau_{1-}$, the definitions of the two are reversed.
The equations of motion are given by Eq.~(33) for all cases.
\section{Numerical Calculations}
In this section we present the numerical solutions to the two
sets of differential equations (21) and (33) with the appropriate
boundary conditions at the origin and some large value of $r$.
We implemented two methods: the ``shooting'' and the relaxation
methods to handle this two-point boundary value problem. In the
``shooting'' method \cite{numrec}, an initial guess for the free
parameters at $r=0$ was made and then the equations were integrated
out to large $r$ where the boundary conditions were specified. As
the name of the method suggests, the true solutions were found by
adjusting the parameters at $r=0$ in the beginning of each iteration
to reduce the discrepancies from the desired boundary conditions at
large $r$ computed in the previous iteration. For string-$\tau_1$,
the small-$r$ expansion of the functions in Eq.~(34) gives
$g(0) = g^\prime(0) = 0\,, f_o(0) = f_o^{\prime\prime}(0) =
f_e^\prime(0) = 0\,$, and $f_e^{\prime\prime}(0)=2a_2\,,$ where
$a_2$ is related to $a_0$, $a_1$ and $b_2$, but the values of
\begin{eqnarray}
f_e(0) &=& a_0\,, \nonumber\\
f_o^\prime(0) &=& a_1\,, \nonumber\\
g^{\prime\prime}(0) &=& 2b_2\,,
\end{eqnarray}
were adjusted to match the boundary conditions at large $r$.
For string-$\tau_{\rm all}$, we have shown that $f(r)$ is odd and
$g(r)$ is even in $r$, with $f(r)=ar+\ldots$ and $g(r)=br^2+\ldots$.
Thus only the two values $f^\prime(0), g^{\prime\prime}(0)$ were free
parameters. At large $r$, discrepancies from the boundary condition
were corrected by the multi-dimensional Newton-Raphson method which
computed the corrections to the initial parameters. With an initial
guess for the parameters at $r=0$, this ``shooting'' process was
iterated until the ``targets'' were met. The fourth-order
Runge-Kutta method was used to integrate the equations.
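As an illustration of this procedure, the following Python sketch (not
the code used for this paper) integrates the string-$\tau_{\rm all}$
system in the rescaled variables introduced below in Eq.~(38), for
which the equations of motion that follow from Eq.~(40) read
$f'' + f'/r - (1-g)^2 f/r^2 = f(f^2-1)$ and
$g'' - g'/r = -\frac{5}{2\lambda_2}(1-g)f^2$, with
$f, g \rightarrow 1$ at infinity. The matching radius, step number and
initial guess are illustrative assumptions, and in practice the Newton
step may need damping.
\begin{verbatim}
import numpy as np

lam2 = 0.132                     # value used in Figs. 1-3

def rhs(r, y):                   # y = (f, f', g, g')
    f, fp, g, gp = y
    return np.array([fp,
                     -fp/r + (1-g)**2/r**2*f + f*(f**2 - 1),
                     gp,
                     gp/r - 5.0/(2*lam2)*(1-g)*f**2])

def shoot(a, b, r0=1e-3, R=10.0, n=20000):
    # RK4 from the small-r behavior f ~ a r, g ~ b r^2 out to r = R
    h = (R - r0)/n
    r, y = r0, np.array([a*r0, a, b*r0**2, 2*b*r0])
    for _ in range(n):
        k1 = rhs(r, y);              k2 = rhs(r + h/2, y + h/2*k1)
        k3 = rhs(r + h/2, y + h/2*k2); k4 = rhs(r + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); r += h
    return np.array([y[0] - 1.0, y[2] - 1.0])   # targets f(R)=g(R)=1

p = np.array([0.6, 0.3])         # guess for f'(0), g''(0)/2
for _ in range(40):              # Newton-Raphson on the two parameters
    F = shoot(*p)
    if np.max(np.abs(F)) < 1e-8:
        break
    J, d = np.empty((2, 2)), 1e-7
    for j in range(2):
        q = p.copy(); q[j] += d
        J[:, j] = (shoot(*q) - F)/d
    p = p - np.linalg.solve(J, F)
print("f'(0) =", p[0], " g''(0)/2 =", p[1])
\end{verbatim}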
We have also implemented a relaxation scheme for comparison.
In this method the first step is to express the string
energy as a function of the values of the functions $f$ and
$g$ (or $f_e$, $f_o$, and $g$) defined on an evenly spaced
mesh of points. While a Simpson's rule approximation worked
well for the middle range of parameters, a more sophisticated
approximation was used to extend the range of parameters that
could be treated. For each interval of two lattice spacings,
smooth functions $\tilde f$ and $\tilde g$ were defined by
second-order polynomial interpolation from the three mesh points
(midpoint and two end points); with the help of a symbolic
integration program, the integral defining the energy was
carried out exactly for the interpolated functions. (By this
method the energy obtained is a rigorous upper limit on
the true ground state string energy.) To avoid divergences
caused by the explicit factors of $1/r^2$ in the energy density,
the first interval had to be treated more carefully: instead
of fitting the functions with a second-order polynomial, we fitted
the coefficients of the analytically determined power series,
such as Eq.~(34). Trial functions $f$ and $g$ were chosen,
and then the energy was minimized by varying each mesh point
one at a time, successively going through the lattice many
times. We found it efficient to begin with a coarse mesh
which was made successively finer by factors of 2,
interpolating the solution at each stage to obtain the first
trial solution for the next stage. For the final run in
each case we used 2048 points.
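A stripped-down version of the relaxation idea can be sketched as
follows (again an illustration, not the code used here): the rescaled
energy of Eq.~(40) is discretized on a uniform mesh with a simple
trapezoidal rule and minimized by sweeping through the mesh points one
at a time with a local Newton step. For clarity the full energy is
recomputed at each update and the successive mesh refinement is
omitted; the exact integration of interpolating polynomials described
above is replaced by a cruder quadrature.
\begin{verbatim}
import numpy as np

lam2, R, M = 0.132, 10.0, 256
r = np.linspace(0.0, R, M + 1); h = r[1] - r[0]

def energy(f, g):
    # rescaled energy per unit length, Eq. (40), midpoint quadrature
    fp, gp = np.diff(f)/h, np.diff(g)/h
    rm = 0.5*(r[1:] + r[:-1])
    fm, gm = 0.5*(f[1:] + f[:-1]), 0.5*(g[1:] + g[:-1])
    dens = (2*lam2/(5*rm**2))*gp**2 + fp**2 \
           + (1-gm)**2/rm**2*fm**2 + 0.5*(1-fm**2)**2
    return np.pi*h*np.sum(rm*dens)

f = np.clip(r/2.0, 0.0, 1.0)     # trial functions with the right limits
g = np.clip((r/2.0)**2, 0.0, 1.0)
for sweep in range(100):
    for u in (f, g):
        for i in range(1, M):    # vary one interior point at a time
            e0 = energy(f, g); d = 1e-5
            u[i] += d;   ep = energy(f, g)
            u[i] -= 2*d; em = energy(f, g)
            u[i] += d
            grad, curv = (ep - em)/(2*d), (ep + em - 2*e0)/d**2
            if curv > 0.0:
                u[i] -= grad/curv        # local Newton step
print("relaxed energy:", energy(f, g))
\end{verbatim}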
We found the results by the two methods to agree to
approximately one part in a million or better. In general we
were able to explore a wider parameter range with the relaxation
method than with the ``shooting'' method, but the qualitative
features given by the ``shooting'' method remained the same.
(The author wishes to thank Alan Guth for implementing the
relaxation part of the calculations.)
The dependence of the equations on the parameters in the theory can
be simplified if $r, f, f_o$ and $f_e$ are rescaled as ($v_1 < 0$)
\begin{eqnarray}
r & \rightarrow & \sqrt{-v_1} r\,, \nonumber\\
\{f\,, f_o\,, f_e\} & \rightarrow & \sqrt{\frac{2Nv_2}{-v_1}}
\{f\,, f_o\,, f_e\}\,.
\end{eqnarray}
Then only the following combinations of parameters appear
in the differential equations:
\begin{eqnarray}
\lambda_2 & \equiv & \frac{v_2}{e^2} \,,\nonumber\\
\lambda_3 & \equiv & \frac{1}{e^2} \left(
v_3 + \frac{v_4}{4} + \frac{v_5}{4} + \frac{v_6}{12}\right)\,.
\end{eqnarray}
The Hamiltonian densities ${\cal H}^{\rm all}$ and ${\cal H}^1$ for
the two strings are simply $-{\cal L}^{\rm all}$ and $-{\cal L}^1$
given by Eqs.~(22) and (35) because all fields are assumed to be
time-independent. With the same rescaling, one obtains
\begin{eqnarray}
\frac{v_2}{(-v_1)^2} {\cal H}^{\rm all} &=& \frac{1}{2} \left\{
\frac{2\lambda_2}{5r^2} g^{\prime\,2}
+ f^{\prime\,2} + \frac{(1-g)^2}{r^2} f^2 \right.\nonumber\\
&& \left. + \frac{1}{2}(1-f^2)^2 \right\}
\end{eqnarray}
and
\begin{eqnarray}
&&\frac{v_2}{(-v_1)^2} {\cal H}^1 = \frac{1}{2} \left\{
\frac{\lambda_2}{r^2} g^{\prime\,2}
+ \frac{ f_o^{\prime\,2} + f_e^{\prime\,2}}{2}
+ \frac{(1-g)^2}{2r^2} f_o^2 \right. \nonumber\\
&&\ + \left.
\frac{1}{2} \left( 1 - \frac{f_o^2 + f_e^2}{2} \right)^2
+ \frac{\lambda_3}{200\lambda_2} (f_o^2 - f_e^2)^2 \right\}
\end{eqnarray}
where the $\tau_{\rm all}$ equation depends on $\lambda_2$ only but
the $\tau_1$ equation depends on both $\lambda_2$ and $\lambda_3$.
Typical solutions for the two strings calculated from the
``shooting'' method are shown in Figs.~1 and 2, where
$\lambda_2 = 0.132$ and $\lambda_3 = 10.25$. For the same
$\lambda_2$ and $\lambda_3$, the solutions given by the
relaxation method appear indistinguishable visually from
those in Figs.~1 and 2. For string-$\tau_{\rm all}$, we
were able to find solutions in the approximate range $10^{-2}
< \lambda_2 < 10$ using the ``shooting'' method and $10^{-4}
< \lambda_2 <10^3$ using the relaxation method. For
string-$\tau_1$, we explored the range $5\times 10^{-2} <
\lambda_2 < 1$ and $0.5 < \lambda_3 < 10^2$. In general,
the functions converged more slowly near the two ends of each
range above, and we did not attempt to find solutions beyond
these limits. We numerically integrated ${\cal H}^{\rm all}$
and ${\cal H}^1$ for the solutions we computed, and found
string-$\tau_1$ to have the lower energy for all the parameters
we explored. In Fig.~3, the energy density $2\pi r {\cal H}$
of the two solutions shown in Figs.~1 and 2 is plotted, and
the energy of string-$\tau_1$ is clearly lower. For comparison,
we point out that the energy per unit length of
string-$\tau_{\rm all}$ in the range $0.9 < \lambda_2 < 4.0$ has
been calculated by Aryal and Everett \cite{everett}. Our values
in this range of parameters agree with theirs to within 1\%.
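Schematically, the energy comparison amounts to the quadratures below;
the profile arrays are assumed to come from solvers such as those
sketched above, on a mesh that starts at a small nonzero radius so
that the explicit $1/r^2$ factors cause no trouble.
\begin{verbatim}
import numpy as np
from scipy.integrate import simpson

def E_all(r, f, g, lam2):                # Eq. (40)
    fp, gp = np.gradient(f, r), np.gradient(g, r)
    H = 0.5*((2*lam2/(5*r**2))*gp**2 + fp**2
             + (1-g)**2/r**2*f**2 + 0.5*(1-f**2)**2)
    return simpson(2*np.pi*r*H, x=r)

def E_1(r, fo, fe, g, lam2, lam3):       # Eq. (41)
    fop = np.gradient(fo, r); fep = np.gradient(fe, r)
    gp = np.gradient(g, r)
    H = 0.5*((lam2/r**2)*gp**2 + 0.5*(fop**2 + fep**2)
             + (1-g)**2/(2*r**2)*fo**2
             + 0.5*(1 - (fo**2 + fe**2)/2)**2
             + lam3/(200*lam2)*(fo**2 - fe**2)**2)
    return simpson(2*np.pi*r*H, x=r)

# ratio plotted in Fig. 4, with fo = fe = f1 as trial functions:
# E_1(r, f1, f1, g1, lam2, lam3) / E_all(r, f, g, lam2)
\end{verbatim}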
One of the most important properties of the two strings we
investigate in this paper is whether string-$\tau_1$ has
lower energy than string-$\tau_{\rm all}$. We just showed
that this is true for some range of the parameters. To
systematically explore a wider parameter range, however, it
is very laborious and time-consuming to calculate the $\tau_1$
solutions for different $\lambda_2$ and $\lambda_3$ first and
then compute the corresponding energy. Instead, we employ an
upper-bound argument to reduce the two-dimensional parameter space
$(\lambda_2, \lambda_3)$ to one. We set $f_o = f_e \equiv f_1$ in
the Lagrangian and take $g(r), f_1(r)$ as trial functions for
string-$\tau_1$. The advantage in using $f_o = f_e$ is that the
last term in Eq.~(41) vanishes, and the equations no longer depend
on $\lambda_3$. Moreover, Eqs.~(40) and (41) then have the same
functional form, differing only in the coefficients of the first
and the third terms, and one can solve the equations for
string-$\tau_1$ the same way as for string-$\tau_{\rm all}$ using
different values of $\lambda_2$. The corresponding energy,
denoted by $E_1(f_o=f_e)$, gives an upper bound on the true
energy of string-$\tau_1$ by the variational principle.
If $E_1(f_o=f_e) < E_{\rm all}$ for a given $\lambda_2$, then one
can conclude that string-$\tau_1$ has the lower energy for that
value of $\lambda_2$ and all values of $\lambda_3$.
(Note that in the limit of $\lambda_3 \rightarrow \infty$, the
trial functions approach the true string solution because for the
energy to be finite, the last term in Eq.~(41) requires $f_o
\rightarrow f_e$.) Our result is presented in Fig.~4, where the
ratio $E_1(f_o=f_e)/E_{\rm all}$ is plotted as a function of
log$\,\lambda_2$ for $10^{-4} < \lambda_2 < 2.5\times 10^3$.
Note that $E_1(f_o=f_e)/E_{\rm all} < 1$ for all 7 decades of
$\lambda_2$, and is approaching an asymptote of 1 (or possibly
less than 1) as $\lambda_2 \rightarrow 0$. For large $\lambda_2$,
we find the individual curves of $E_{\rm all}$ vs. log$\,\lambda_2$
and $E_1$ vs. log$\,\lambda_2$ approach straight lines,
suggesting that the ratio $E_1(f_o=f_e)/E_{\rm all}$ levels off at a
constant for large $\lambda_2$. We conclude that string-$\tau_1$ has
lower energy than string-$\tau_{\rm all}$ for $10^{-4} < \lambda_2
< 2.5\times 10^3$ and all $\lambda_3$, and probably is the ground
state for the entire range of the parameters in the theory.
\section{Scattering Solutions}
To study the scattering of fermions by an SO(10) cosmic string,
one first needs to understand the 16-dimensional spinor
representation of SO(10) to which the left-handed fermions
are assigned. Spinor representations certainly have been discussed
in the literature \cite{spin}, but to establish a common notation,
we discuss in the Appendix the construction of the generators,
the sixteen states and the identification of states with fermions
that are relevant to this paper.
Now we proceed to study the Dirac equation
\begin{equation}
(i\not\!\partial - e\not\! A^a \tau^a - m)\psi = 0
\end{equation}
in the background fields of string-$\tau_{\rm all}$ and $\tau_1$:
$A^a_\mu \tau^a = A^{\rm all}_\mu \tau_{\rm all}$ and $A^1_\mu
\tau_1$. As shown in the Appendix, the fermion fields can be
written as a 16-dimensional column vector where each component
is identified with a fermion given by Eq.~(A.16). The generators
$\tau_{\rm all}$ and $\tau_1$ can be written as 16$\times$16
hermitian matrices, where $\tau_{\rm all}$ is diagonal with one
diagonal entry equal to $\frac{1}{2}$, ten entries equal to
$\frac{1}{10}$ and five entries equal to $-\frac{3}{10}$. For
$\tau_1$, we choose $\tau_1 = \frac{1}{2} (\tau^{58}+\tau^{67})$ for
illustration. We find that $\tau_1$ takes the block-diagonal form
\begin{equation}
-\tau_1 = \frac{1}{2} \left( \begin{array}{cc}
B\ & 0\ \\
0\ & B\
\end{array} \right) \,,
\end{equation}
where
\begin{equation}
B = \left( \begin{array}{cccc}
0 \ & 0 \ & 0 \ & I \\
0 \ & 0 \ & 0 \ & 0 \\
0 \ & 0 \ & 0 \ & 0 \\
I \ & 0 \ & 0 \ & 0
\end{array} \right) \,,
\end{equation}
and $I$ is the 2$\times$2 identity matrix.
For string-$\tau_{\rm all}$, since $\tau_{\rm all}$ is diagonal,
Eq.~(42) decouples into sixteen equations, one for each component
of the wave function, and there is no mixing of leptons and quarks
due to twisting of the Higgs. However, since the sixteen
eigenvalues of $\tau_{\rm all}$ are all fractional, all sixteen
fermions scatter nontrivially off the string via the Aharonov-Bohm
effect. As pointed out by previous studies, the wave functions of
these fermions can be strongly enhanced near the core of the string,
leading to strong B-violating processes inside the string.
In the case of string-$\tau_1$, upon diagonalizing $\tau_1$ by a
unitary matrix $U$ and simultaneously rotating the fermion basis
$\psi$ in Eq.~(A.16) to $\tilde{\psi} \equiv U\psi$, we can
write $\tilde{\psi}$ as
\FL
\begin{eqnarray}
\tilde{\psi}
& = &( e^- + u_1^c\,, e^- - u_1^c\,, \nu^c+d_1\,, \nu^c-d_1\,,
u^c_2\,, u^c_3\,, d_3\,, d_2\,, \nonumber\\
& & u_3 + d_2^c\,, u_3 - d_2^c\,, u_2+d_3^c\,, u_2-d_3^c\,,
u_1\,, \nu\,, e^+\,, d_1^c)_L \nonumber
\end{eqnarray}
\begin{equation}
\quad \quad
\end{equation}
and Eq.~(42) again decouples into sixteen equations of
the form
\begin{equation}
(i\not\!\partial + e\lambda_i\not\! A^1 - m)
\tilde{\psi}_i = 0\,,
\label{eq:dede}
\end{equation}
where each $\tilde{\psi}_i$ interacts with the gauge field with
coupling strength $e\lambda_i\,; \lambda_i$ are the eigenvalues
of $-\tau_1$. The eigenvalues are $\lambda_i = \frac{1}{2}$ for
$e^- + u^c_1\,,\ \nu^c + d_1\,,\ u_3 + d^c_2\,,\ u_2 + d^c_3$;
$\lambda_i = -\frac{1}{2}$ for $e^- - u^c_1\,,\ \nu^c - d_1\,,\
u_3 - d^c_2\,,\ u_2 - d^c_3$; and $\lambda_i = 0$ for all others.
Since the $e^- + u^c_1$ and $e^- - u^c_1$ components have opposite
eigenvalues, we expect a pure $e^-$ or $u^c_1$ to turn into a mixture
of $e^-$ and $u^c_1$ as it propagates around the string, producing
baryon-number violation.
Before calculating the scattering amplitude, we first comment on
the choice of gauge in this problem. The fields in {\it Ansatz II}
(See Eq.~(30)) for string-$\tau_1$ were constructed in a gauge where
the scalar field $\phi$ winds with $\theta$ and the gauge field
falls off as $r^{-1}$ at large $r$. The particle content, however,
is probably most transparent in a different gauge where $\phi$ does
not wind with $\theta$ and $A_\mu \rightarrow 0$ at large $r$
everywhere except on a sheet of singularities at $\theta=0$.
We will refer to the former as the $1/r$-gauge and the latter
as the ``sheet'' gauge, in analogy with the ``string'' gauge of
a magnetic monopole. Continuing to work in the diagonalized basis,
the fermion fields in the ``sheet'' gauge, $\tilde{\psi}_0$,
are related to those in the $1/r$-gauge, $\tilde{\psi}$,
by the gauge transformation
\begin{equation}
\tilde{\psi}_0 = e^{-i\tau_1 (\pi-\theta)} \tilde{\psi}\,.
\label{eq:gauge}
\end{equation}
We will solve the Dirac equation and calculate the scattering
amplitude in the $1/r$-gauge, and then write down the baryon-number
violating cross section in the ``sheet'' gauge.
In the presence of an infinitely-thin $\tau_1$-string along the
$z$-axis, the gauge field $A^1_\mu$ takes the form
$A^{1\,r} = A^{1\,z}=0, A^{1\,\theta}= \frac{1}{er}\,,$
where $(r, \theta)$ denote the usual polar coordinates with
$\theta$ running counter-clockwise from the positive $x$-axis.
Owing to the symmetry along the $z$-axis, the matrix $\gamma_3$ in
Eq.~(46) drops out, and with the choice for the $\gamma$-matrices
\begin{eqnarray}
\gamma_0 &= \left( \begin{array}{cc}
\sigma_3 & 0 \\
0 & -\sigma_3
\end{array} \right) \,,
&\quad \gamma_1 = \left( \begin{array}{cc}
i\sigma_2 & 0 \\
0 & -i\sigma_2
\end{array} \right) \,, \nonumber\\
\gamma_2 &= \left( \begin{array}{cc}
-i\sigma_1 & 0 \\
0 & i\sigma_1
\end{array} \right) \,,
&\quad \gamma_3 = \left( \begin{array}{cc}
0\ \ &\ 1 \\
-1\ \ &\ 0
\end{array} \right) \,,
\end{eqnarray}
Eq.~(46) decouples into two independent equations
for the upper and lower 2-component spinors of $\tilde{\psi}_i$,
where the two equations differ by the sign of the mass term.
Writing the upper spinor of $\tilde{\psi}_i$ as
\begin{equation}
\left( \begin{array}{c}
\chi_1 (r) \\
\chi_2 (r) e^{i\theta}
\end{array} \right)
e^{in\theta - iEt} \,,
\end{equation}
one can show
\begin{equation}
\left( \begin{array}{cc}
m-E & -i\left( \partial_r + \frac{n+\lambda_i+1}{r} \right) \\
-i\left( \partial_r - \frac{n+\lambda_i}{r} \right) & -m-E
\end{array} \right)
\left( \begin{array}{c}
\chi_1 \\
\chi_2
\end{array} \right) = 0 \,,
\end{equation}
and the solutions are Bessel functions of order $(n+\lambda_i)$
and $-(n+\lambda_i)$:
\FL
\begin{equation}
\left( \begin{array}{c}
\chi_1 \\
\chi_2
\end{array} \right) =
\left( \begin{array}{c}
J_{\pm(n+\lambda_i)} (kr) \\
\pm \frac{ik}{E+m} J_{\pm(n+\lambda_i+1)} (kr)
\end{array} \right) \,,\ k=\sqrt{E^2-m^2}\,.
\end{equation}
The appropriate boundary conditions to impose, as pointed out
in Ref.~14, are the square-integrability of the wave functions
near the origin and a self-adjoint Hamiltonian. The usual
requirement that wave functions be regular at the origin is
sometimes too strong and has to be relaxed. Since $J_\nu (r)
\sim r^\nu / (2^\nu \nu!)$ for small $r$, one can see that
the solutions above are square-integrable if the $+$ sign
is chosen for the modes $n+\lambda_i > 0$, and the $-$ sign for
$n+\lambda_i < -1$. For the mode $ -1 < n+\lambda_i < 0$,
however, both choices are square-integrable albeit neither is
regular at the origin, and the solution takes the form
\FL
\begin{equation}
\left( \begin{array}{c}
\chi_1 \\
\chi_2
\end{array} \right) =
\left( \begin{array}{c}
\sin\mu\,J_{n+\lambda_i} + \cos\mu\,J_{-(n+\lambda_i)} \\
\frac{ik}{E+m}
(\sin\mu\,J_{n+\lambda_i+1} - \cos\mu\,J_{-(n+\lambda_i+1)})\\
\end{array} \right) \,,
\end{equation}
where $\mu$ is the self-adjoint parameter.
The scattering amplitude $f^{\lambda_i}(\theta)$ for the $i$th
fermion in $\tilde{\psi}$ appears in the asymptotic wave function
written as the sum of the incident plane wave and the scattered
part:
\begin{eqnarray}
\tilde{\psi}_i &\sim &
u_E e^{-i\lambda_i(\pi-\theta)} e^{i(kx - Et)} \nonumber\\
&& + \sqrt{\frac{i}{r}} v_E e^{-i\lambda_i(\pi-\theta)}
f^{\lambda_i}(\theta) e^{i(kr - Et)} \,,
\end{eqnarray}
where $u_E$ and $v_E$ are given by
\begin{equation}
u_E = \left( \begin{array}{c}
1 \\
\frac{k}{E+m}
\end{array} \right) \,, \quad
v_E = \left( \begin{array}{c}
1 \\
\frac{k}{E+m} e^{i\theta}
\end{array} \right) \,.
\end{equation}
Expanding $e^{ikx}=e^{ikr\cos\theta}$ and $e^{ikr}$ in Bessel
functions using
\begin{equation}
e^{ikr\cos\theta}
= \sum_{n=-\infty}^{\infty} i^n J_n(kr) e^{in\theta}\,,
\end{equation}
and with
\begin{equation}
f^{\lambda_i}(\theta) = \sum_{n=-\infty}^{\infty}
f_n^{\lambda_i} e^{in\theta}\,,
\end{equation}
Eq.~(53) can be matched to the solutions in Eq.~(51) mode by mode
at large $r$. Then the scattering amplitude can be calculated:
\begin{equation}
f^{\lambda_i} (\theta)
= \frac{i}{\sqrt{2\pi k}} e^{-i([\lambda_i]+1)\theta}
\left( \frac{ \sin\left( \frac{\theta}{2} - \pi\lambda_i
\right)} { \sin \frac{\theta}{2} }
- e^{2i\delta} \right)\,,
\end{equation}
where $[\lambda_i]$ denotes the largest integer less than
or equal to $\lambda_i$, and $\delta$ is related to $\lambda_i$
and the self-adjoint parameter $\tan\mu$ by \cite{gerbert}
\begin{equation}
\tan \delta
= \frac{1-\tan\mu}{1+\tan\mu}\,\tan\frac{\lambda_i \pi}{2}\,.
\end{equation}
With the gauge transformation Eq.~(47), one can easily see that
$(\tilde{\psi}_0)_i$ in the ``sheet'' gauge is given by Eq.~(53)
without the phase $e^{-i\lambda_i(\pi-\theta)}$.
To illustrate the processes that violate the baryon number, we
consider an incident beam of electrons propagating in the fields
of the string. We will study the $(e, u^c)$-subspace and ignore
other fermions since $e$ in $\psi$ is mixed with $u^c$ only. In
the ``sheet'' gauge, the eigenstates of $\tau_1$ can be written as
\begin{equation}
e + u^c = \left( \begin{array}{c}
1 \\
0
\end{array} \right) \,,\quad
e - u^c = \left( \begin{array}{c}
0 \\
1
\end{array} \right) \,,
\end{equation}
and the electron is simply given by
\begin{equation}
e = \left( \begin{array}{c}
\frac{1}{2} \\
\frac{1}{2}
\end{array} \right) \,.
\end{equation}
An incident wave of electrons can be written as
\begin{equation}
\tilde{\psi}^e_{0\,inc} = u_E \left( \begin{array}{c}
\frac{1}{2} \\
\frac{1}{2}
\end{array} \right) e^{i(kx-Et)} \,,
\end{equation}
which scatters into
\FL
\begin{equation}
\tilde{\psi}_{0\,sca} = \sqrt{\frac{i}{r}} v_E
\left\{ f^{\frac{1}{2}}(\theta)
\left( \begin{array}{c}
\frac{1}{2} \\
0
\end{array} \right)
+ f^{-\frac{1}{2}}(\theta)
\left( \begin{array}{c}
0 \\
\frac{1}{2}
\end{array} \right) \right\} e^{i(kr-Et)}\,.
\end{equation}
Note that the suppressed index on the 2-component spinors
$u_E$ and $v_E$ should not be confused with the index associated
with the 2-component eigenvectors used here to label the
$e + u^c$ and $e-u^c$ components of the Dirac field.
Rewriting $\tilde{\psi}_{0\,sca}$ above as
\FL
\begin{eqnarray}
\tilde{\psi}_{0\,sca} &=& \sqrt{\frac{i}{r}} v_E
\left\{ \left(
\frac{f^{\frac{1}{2}}(\theta) + f^{-\frac{1}{2}}(\theta)}{2}
\right)
\left( \begin{array}{c}
\frac{1}{2} \\
\frac{1}{2}
\end{array} \right)
\right. \nonumber\\
&& \left. + \left(
\frac{f^{\frac{1}{2}}(\theta) - f^{-\frac{1}{2}}(\theta)}
{2} \right)
\left( \begin{array}{c}
\frac{1}{2} \\
-\frac{1}{2}
\end{array} \right) \right\} e^{i(kr-Et)}\,,
\end{eqnarray}
one finds that the scattered wave consists of a mixture of electrons
and $u^c$-quarks.
The differential cross section per unit length for the production of
$u$-quark is defined by
\begin{equation}
\frac{d\sigma}{d\theta} = \lim_{r\rightarrow \infty}
\frac{\vec{J}_{sca}^u\cdot \vec{r}}{J_{inc}}
\end{equation}
where $J^i = \bar{\psi}\gamma^i \psi\ $. Substituting
$\tilde{\psi}_{0\,inc}$ and $\tilde{\psi}_{0\,sca}$
into the currents, one obtains
\begin{equation}
\frac{d\sigma}{d\theta} =
\frac{1}{4} \left| f^{\frac{1}{2}}(\theta)
-f^{-\frac{1}{2}}(\theta) \right|^2\,,
\end{equation}
which can be written out using Eq.~(57) as
\begin{equation}
\frac{d\sigma}{d\theta} = \frac{1}{2\pi k}
\left\{ \frac{\cos^4 \frac{\theta}{2}}{\sin^2 \frac{\theta}{2}}
+ \sin^2 \left( \frac{\theta}{2} - 2\delta \right)\right\}\,.
\end{equation}
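For orientation, the cross section of Eq.~(66) is trivial to evaluate;
the following lines (with illustrative values of $k$ and $\delta$, not
values taken from the text) exhibit the forward
$1/\sin^2\frac{\theta}{2}$ enhancement.
\begin{verbatim}
import numpy as np

def dsigma_dtheta(theta, k, delta):      # Eq. (66)
    return (np.cos(theta/2)**4/np.sin(theta/2)**2
            + np.sin(theta/2 - 2*delta)**2)/(2*np.pi*k)

theta = np.linspace(0.1, 2*np.pi - 0.1, 7)
print(dsigma_dtheta(theta, k=1.0, delta=0.3))
\end{verbatim}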
The calculation above was done in the limit of zero string width.
Now let us examine the string core. The structure of the string
core is ``encoded'' in the self-adjoint parameter $\delta$ (or
$\mu$, related to $\delta$ by Eq.~(58)), which appears in the
differential cross section above. In general the self-adjoint
parameter is determined either from physical properties at the
origin or sometimes by symmetry arguments. Since the string
solutions have already been obtained in the previous section,
we can find $\mu$ by solving Eq.~(50) numerically for the mode
$-1 < n+\lambda_i < 0$, using the realistic form $g(r)/r$ for the
gauge field computed earlier in place of the $1/r$ in Eq.~(50).
As we have shown, $\lambda_i=\pm\frac{1}{2}$ for the fermions that
scatter nontrivially off the $\tau_1$-string. Thus the special
mode satisfying $-1 < n+\lambda_i < 0$ takes the value $n+\lambda_i
=-\frac{1}{2}$, where $n=-1$ for $\lambda_i=\frac{1}{2}$ and
$n=0$ for $\lambda_i=-\frac{1}{2}$. Recall that in the calculation
of $g(r)$, the radial distance $r$ was rescaled to the dimensionless
$\sqrt{-v_1} r\ (v_1 < 0)$, where $v_1$ is the quadratic coupling
in the Higgs potential in Eq.~(16). Rescaling $\chi_2$ and $r$ by
\begin{eqnarray}
\chi_2 & \rightarrow & i\frac{E+m}{k}\chi_2\,,\nonumber\\
r & \rightarrow & \sqrt{-v_1} r\,,
\end{eqnarray}
and replacing $\lambda_i$ in Eq.~(50) by $\lambda_i g(r)$,
Eq.~(50) can be rewritten as
\begin{eqnarray}
\partial_r \chi_1 & = & \frac{g(r)-2}{2r}\chi_1+\beta\chi_2
\nonumber\\
\partial_r \chi_2 & = & - \frac{g(r)}{2r}\chi_2
-\beta\chi_1
\end{eqnarray}
for $\lambda_i=\frac{1}{2}, n=-1$, and
\begin{eqnarray}
\partial_r \bar{\chi}_1 & = & -\frac{g(r)}{2r}\bar{\chi}_1
+ \beta\bar{\chi}_2 \nonumber\\
\partial_r \bar{\chi}_2 & = & \frac{g(r)-2}{2r}\bar{\chi}_2
-\beta\bar{\chi}_1
\end{eqnarray}
for $\lambda_i=-\frac{1}{2}, n=0$. The parameter $\beta$
is defined by
\begin{equation}
\beta \equiv k/\sqrt{-v_1}\,,
\end{equation}
and the bars over $\chi_1, \chi_2$ are used to distinguish the
solutions of $\lambda_i=-\frac{1}{2}$ from those of $\lambda_i
=\frac{1}{2}$. Upon closer inspection of the two sets of equations
above, one finds that Eq.~(69) is in fact identical to Eq.~(68) if
$\bar{\chi}_1$ is identified with $\chi_2$ and $\bar{\chi}_2$
with $-\chi_1$. What about the boundary conditions at the origin?
In Eq.~(49), for $n=-1$, the upper component depends on $\theta$
but the lower component does not, and vice versa for $n=0$.
Therefore $\chi_1$ and $\bar{\chi}_2$ must vanish at the origin for
the solution to be continuous, but $\chi_2$ and $\bar{\chi}_1$ can
be nonzero at $r=0$. One thus has $\bar{\chi}_1 = \chi_2$
and $\bar{\chi}_2 = -\chi_1$. Since Eq.~(68) is linear, the
value of $\chi_2(0)$ can be chosen arbitrarily when integrating
the differential equations.
The self-adjoint parameters $\mu$ for $\lambda_i=\frac{1}{2}$ and
$\bar\mu$ for $\lambda_i=-\frac{1}{2}$ are determined by matching
the solutions of Eq.~(68) to the asymptotic expression in Eq.~(52)
at some radius $r$. For $n+\lambda_i = -\frac{1}{2}$, the Bessel
functions in Eq.~(52) are simply $J_{\pm\frac{1}{2}}$, which have
the analytic forms
\begin{equation}
J_{\frac{1}{2}}(x)=\sqrt{\frac{2}{\pi x}} \sin x\,,\quad
J_{-\frac{1}{2}}(x)=\sqrt{\frac{2}{\pi x}} \cos x\,.
\end{equation}
Then Eq.~(52) leads to the simple expression for $\mu$ and
$\bar\mu$:
\begin{eqnarray}
\frac{\chi_1}{\chi_2} &=& \tan(\mu + \beta r) \,,\nonumber\\
\frac{\bar{\chi}_1}{\bar{\chi}_2} &=&
\tan(\bar\mu + \beta r) \,,
\end{eqnarray}
which can be inverted to give $\mu$ and $\bar\mu$ at a given $r$,
using $\chi_1$ and $\chi_2$ computed from Eq.~(68).
Using Eq.~(72) and trigonometric identities, one finds
\begin{equation}
\bar\mu = \mu + \frac{\pi}{2} \,.
\end{equation}
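The extraction of $\mu$ can be sketched in a few lines of Python (an
illustration, not the code behind Figs.~5 and 6): Eq.~(68) is
integrated outward with a Runge-Kutta step and Eq.~(72) is inverted at
increasing radii. The profile $g(r)$ below is a hypothetical stand-in
with the correct limits $g(0)=0$ and $g(\infty)=1$; the actual
calculation uses the numerical string solution obtained earlier.
\begin{verbatim}
import numpy as np

beta = 1.0
g = lambda r: r**2/(1.0 + r**2)   # stand-in string profile (assumption)

def rhs(r, y):                    # Eq. (68), y = (chi1, chi2)
    c1, c2 = y
    return np.array([(g(r) - 2)/(2*r)*c1 + beta*c2,
                     -g(r)/(2*r)*c2 - beta*c1])

r, h = 1e-3, 1e-3
y = np.array([beta*r/2, 1.0])     # small-r behavior; chi2(0) arbitrary
for step in range(1, 40001):
    k1 = rhs(r, y);               k2 = rhs(r + h/2, y + h/2*k1)
    k3 = rhs(r + h/2, y + h/2*k2); k4 = rhs(r + h, y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); r += h
    if step % 10000 == 0:         # invert Eq. (72), modulo pi
        mu = (np.arctan2(y[0], y[1]) - beta*r) % np.pi
        print("r = %5.1f   mu = %.6f" % (r, mu))
\end{verbatim}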
Note that the solutions depend on $\beta$ which appears
in Eq.~(68), and the quartic couplings $\lambda_2, \lambda_3$ in
the Higgs potential. The parameter $\beta$ defined in Eq.~(70)
measures the ratio of the incident fermion momentum $k$ to
the Higgs mass parameter $\sqrt{-v_1}$, which is of the order of
GUT energy scale. To put it another way, $\beta$ measures
the string width relative to the wavelength of the incident fermion.
In Fig.~5, we set $\beta = 1$ and plot $\mu$ computed from Eq.~(72)
at a given $r$ for three sets of $\lambda_2$ and $\lambda_3$.
The true value of $\mu$ is given by the limit $r \rightarrow
\infty$. In Fig.~6, we choose the same set of parameters as in
Figs.~1-3: $\lambda_2 = 0.132$ and $\lambda_3 = 10.25$; $\mu$
is shown for five values of $\beta$ ranging from 0.1 to 2.0.
One can see that as $\beta$ decreases, {\it i.e.} when the
wavelength of the fermion becomes large compared to the string
width, $\mu$ decreases.
\section{Conclusions}
We constructed two types of strings, string-$\tau_{\rm all}$
and string-$\tau_1$, in the SO(10) grand unified theory.
They are topologically equivalent but dynamically different
strings, produced during the phase transition $\hbox{Spin(10)}
\rightarrow\hbox{SU(5)}\times{\cal Z}_2$ in the early universe.
String-$\tau_{\rm all}$ is effectively Abelian, and can catalyze
baryon number violation with a strong cross section via grand-unified
processes inside the string. It has been the subject of study in
several recent papers. The richer Higgs structure of
string-$\tau_1$, on the other hand, has been shown in this paper
to induce baryon catalysis by mixing components in the
fermion multiplet, turning leptons into quarks as they travel
around the string. The underlying B-violating mechanism is
the ``twisting'' of the scalar field, which leads to different
unbroken SU(5) subgroups around the string. This mechanism is
distinct from the grand-unified processes which can only occur
inside the string core where the GUT symmetry is restored.
The corresponding string solutions have been calculated numerically
with both the ``shooting'' and the relaxation methods. The energy
of both strings was computed. With an additional upper bound
argument, we found string-$\tau_1$ to have lower energy than
string-$\tau_{\rm all}$ in a wide range of parameters:
$10^{-4} < \lambda_2 < 2.5\times 10^3$ and all $\lambda_3$. The
ratio of the upper bound on $\tau_1$ energy to the $\tau_{\rm all}$
energy increases as $\lambda_2$ decreases, and possibly approaches
one from below as $\lambda_2 \rightarrow 0$. Scattering of fermions
in the fields of string-$\tau_1$ has also been analyzed, and the
B-violating cross section is given by Eq.~(66). We conclude that
string-$\tau_1$ is more stable than string-$\tau_{\rm all}$, and
can catalyze baryon decay with strong cross sections via the
interesting mechanism of Higgs field twisting.
\nonum
\section{Acknowledgments}
I wish to thank Alan Guth for many valuable suggestions on
this work and a critical reading of the manuscript. I am also
grateful for advice from Ed Bertschinger, Robert Brandenberger,
Jeffrey Goldstone, Roman Jackiw and Leandros Perivolaropoulos,
and assistance from Roger Gilson.
\nonum
\section{Appendix}
The generators of SO(2n) in the spinor representation can be
constructed from a set of $2^n \times 2^n$ hermitian matrices
$\Gamma_a^{(n)}, a=1, \ldots ,2n\,$, which satisfy the Clifford
algebra
\begin{equation}
\{\Gamma_a^{(n)},\Gamma_b^{(n)}\} = 2\delta_{ab}\,.
\eqnum{A.1}
\end{equation}
Starting with the two Pauli matrices for $n=1$
\begin{equation}
\Gamma_1^{(1)} = \left( \begin{array}{cc}
0\ & 1\ \\
1\ & 0\
\end{array} \right) \,, \quad
\Gamma_2^{(1)} = \left( \begin{array}{cc}
0 & -i \\
i & 0
\end{array} \right) \,, \eqnum{A.2}
\end{equation}
one can iteratively build the higher-dimensional
$\Gamma^{(n+1)}_a\ $ from the $\Gamma^{(n)}_a\ $ by
\begin{eqnarray}
\Gamma_a^{(n+1)} &=& \left( \begin{array}{cc}
\Gamma_a^{(n)} & 0 \\
0 & -\Gamma_a^{(n)}
\end{array} \right)\,,\ a=1,\ldots ,2n
\nonumber\\
\Gamma_{2n+1}^{(n+1)} &=& \left( \begin{array}{cc}
0\ \ & 1 \\
1\ \ & 0
\end{array} \right)\,, \nonumber\\
\Gamma_{2n+2}^{(n+1)} &=& \left( \begin{array}{cc}
0\ & -i \\
i\ & 0
\end{array} \right)\,. \eqnum{A.3}
\end{eqnarray}
One can check that these $\Gamma$ matrices satisfy the Clifford
algebra. The $\frac{2n(2n-1)}{2}$ generators of SO(2n) are
constructed by
\begin{equation}
M_{ab} = \frac{1}{4i} [\Gamma_a,\Gamma_b]\,,\ \ a,b=1,
\ldots ,2n \eqnum{A.4}
\label{eq:clifford}
\end{equation}
where $M_{ab}$ satisfy the SO(2n) commutation relations
\FL
\begin{equation}
[M_{ab},M_{cd}]=-i(\delta_{bc} M_{ad}+\delta_{ad} M_{bc}
-\delta_{ac} M_{bd}-\delta_{bd} M_{ac})\,. \eqnum{A.5}
\end{equation}
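These constructions are straightforward to verify by machine. The
following Python sketch (an illustration, not from the paper) builds
the $\Gamma$'s of SO(10) by the doubling of Eq.~(A.3), checks the
Clifford algebra (A.1) and a representative commutator of Eq.~(A.5),
and confirms that the chirality operator of Eq.~(A.9) below is
$\sigma_3\sigma_3\sigma_3\sigma_3\sigma_3$.
\begin{verbatim}
import numpy as np
from functools import reduce

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def gammas(n):
    # the 2n Gamma matrices of SO(2n), by the doubling of Eq. (A.3)
    G = [s1, s2]                       # n = 1: the two Pauli matrices
    for _ in range(n - 1):
        dim = G[0].shape[0]
        G = [np.kron(s3, g) for g in G]            # diag(G, -G)
        G += [np.kron(s1, np.eye(dim)), np.kron(s2, np.eye(dim))]
    return G

n = 5
G = gammas(n)                          # ten 32 x 32 matrices for SO(10)
I = np.eye(2**n)

for a in range(2*n):                   # Clifford algebra, Eq. (A.1)
    for b in range(2*n):
        assert np.allclose(G[a] @ G[b] + G[b] @ G[a], 2*(a == b)*I)

M = lambda a, b: (G[a] @ G[b] - G[b] @ G[a])/4j    # Eq. (A.4)
comm = M(0, 1) @ M(1, 2) - M(1, 2) @ M(0, 1)
assert np.allclose(comm, -1j*M(0, 2))  # [M_12, M_23] = -i M_13, (A.5)

chi = (-1j)**n*reduce(np.matmul, G)    # chirality operator, Eq. (A.9)
assert np.allclose(chi, reduce(np.kron, [s3]*n))
print("Eqs. (A.1), (A.5) and (A.9) verified")
\end{verbatim}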
Thus far, we have used the explicit matrix notation to
construct $\Gamma$ and $M$. For convenience, however, we
will use an alternative notation in which each of the
$2^n \times 2^n$ matrices is written as a tensor product
of $n$ independent Pauli matrices, each acting on a different
two-dimensional space. We choose the convention that the first
matrix from the right in the tensor product acts on the largest
2$\times$2 block in the matrix notation, while the second from
the right acts on the next, and so on, with the matrix on the left
acting on the smallest 2$\times$2 block. In this notation,
the 10 $\Gamma$'s of SO(10) given by Eq.~(A.3) become
\begin{eqnarray}
\Gamma_1 &= \sigma_1 \sigma_3 \sigma_3 \sigma_3 \sigma_3\,,\ &
\Gamma_2 = \sigma_2 \sigma_3 \sigma_3 \sigma_3 \sigma_3\,,
\nonumber\\
\Gamma_3 &= I\ \sigma_1 \sigma_3 \sigma_3 \sigma_3\,,&
\Gamma_4 = I\ \sigma_2 \sigma_3 \sigma_3 \sigma_3\,,\nonumber\\
\Gamma_5 &= I\ I\ \sigma_1 \sigma_3 \sigma_3\,,&
\Gamma_6 = I\ I\ \sigma_2 \sigma_3 \sigma_3\,,\nonumber\\
\Gamma_7 &= I\ I\ I\ \sigma_1 \sigma_3\,,&
\Gamma_8 = I\ I\ I\ \sigma_2 \sigma_3\,,\nonumber\\
\Gamma_9 &= I\ I\ I\ I\ \sigma_1\,,&
\Gamma_{10} = I\ I\ I\ I\ \sigma_2 \,,
\eqnum{A.6}
\end{eqnarray}
and the 45 generators $M$ can be found accordingly.
Furthermore, one can write down the five diagonal $M$'s
that generate the Cartan sub-algebra:
\begin{eqnarray}
M_{12} &=& \frac{1}{2}\ \sigma_3 I I I I\,, \nonumber\\
M_{34} &=& \frac{1}{2}\ I \sigma_3 I I I\,, \nonumber\\
M_{56} &=& \frac{1}{2}\ I I\sigma_3 I I\,, \nonumber\\
M_{78} &=& \frac{1}{2}\ I I I\sigma_3 I\,, \nonumber\\
M_{9\,10} &=& \frac{1}{2}\ I I I I\sigma_3 \,.
\eqnum{A.7}
\end{eqnarray}
The eigenvalues of the five generators above can be used
to label the states in the spinor representation. Let
$\frac{1}{2}\epsilon_1, \ldots , \frac{1}{2}\epsilon_5$
be the eigenvalues of $M_{12}, \ldots ,M_{9\,10}$ respectively
with $\epsilon_i = +1$ or $-1$, and denote the states by
\begin{equation}
|\,\epsilon_1 \epsilon_2 \epsilon_3 \epsilon_4 \epsilon_5
\,\rangle\,. \eqnum{A.8}
\end{equation}
This 32-dimensional representation is reducible to two 16-dimensional
irreducible representations because there exists a chirality operator
\begin{eqnarray}
\chi &\equiv & (-i)^5 \Gamma_1 \Gamma_2 \ldots \Gamma_{10}
\nonumber\\
& = & \sigma_3\sigma_3\sigma_3\sigma_3\sigma_3\,,
\eqnum{A.9}
\end{eqnarray}
which anticommutes with the $\Gamma_i$'s and commutes with the
generators $M_{ab}$:
\begin{equation}
\{\,\chi, \Gamma_i\,\} = 0\,,\quad [\,\chi, M_{ab}\,]=0\,.
\eqnum{A.10}
\end{equation}
Moreover,
\begin{equation}
\chi |\,\epsilon_1 \epsilon_2 \epsilon_3 \epsilon_4 \epsilon_5
\,\rangle \ = \prod_{i} \epsilon_i
|\,\epsilon_1 \epsilon_2 \epsilon_3 \epsilon_4
\epsilon_5\,\rangle\,, \eqnum{A.11}
\end{equation}
where the eigenvalue $\prod_{i} \epsilon_i$ is $+1$ or $-1$
depending on whether the number of spins that are down
$(\epsilon_i=-1)$ is even or odd.
We assign the sixteen left-handed fermions to the states
of positive chirality, {\it i.e.} states with an even number of
$\epsilon_i = -1$. The explicit identification of states
to fermions can be achieved by first breaking the SO(10)
10$\times$10 representation into an upper 6$\times$6 and
a lower 4$\times$4 block for the subgroups SO(6) and SO(4),
and then embedding SU(3) in SO(6) and SU(2) in SO(4).
The generators for SO(4) are $M_{ab}, a, b = 7,8,9,10$,
and with the choice \cite{spin}
\begin{equation}
\tau_i=\frac{1}{2} \epsilon_{ijk} M_{jk} - M_{i\,10}\,,
\ \ i,j,k = 7,8,9 \eqnum{A.12}
\end{equation}
for the generators of SU(2), one can easily verify that
the last two spins in $|\,\epsilon_1 \epsilon_2 \epsilon_3
\epsilon_4 \epsilon_5\,\rangle$ label the SU(2) states with
$|+ -\,\rangle, |- +\,\rangle$ labeling the doublets and
$|+ +\,\rangle, |- -\,\rangle$ the singlets.
Similarly, the first three spins in
$|\epsilon_1 \epsilon_2 \epsilon_3 \epsilon_4 \epsilon_5 \rangle$
label the SU(3) states with $|+ + +\,\rangle, |- - - \,\rangle$
labeling the singlets, and $|+ + -\,\rangle, |- + + \,\rangle$
with their permutations labeling the SU(3) triplets. One also
needs the charge operator $Q$ to make the assignment unique.
In SU(5), $Q = {\rm diag}(1/3,1/3,1/3,0,-1)$, which takes the form
\begin{equation}
Q = \frac{1}{3} ( M_{12} + M_{34} + M_{56} ) - M_{9\,10}\,.
\eqnum{A.13}
\end{equation}
In the SO(10) spinor representation,
\begin{equation}
Q |\,\epsilon_1 \ldots \epsilon_5\,\rangle
= \left\{ \frac{1}{6} (\epsilon_1 + \epsilon_2 + \epsilon_3)
- \frac{\epsilon_5}{2} \right\} |\,\epsilon_1 \ldots \epsilon_5
\,\rangle\,. \eqnum{A.14}
\end{equation}
Putting all the above together one obtains
\begin{eqnarray}
|+ + + + +\,\rangle &= \nu^c\,,\ & |+ + + - -\,\rangle = e^+
\nonumber\\
|- - + + +\,\rangle &= u^c_1\,,\ & |- - + - -\,\rangle = d^c_1
\nonumber\\
|- + - + +\,\rangle &= u^c_2\,,\ & |- + - - -\,\rangle = d^c_2
\nonumber\\
|+ - - + +\,\rangle &= u^c_3\,,\ & |+ - - - -\,\rangle = d^c_3
\nonumber\\
|- - - + -\,\rangle &= \nu\,,\ & |- - - - +\,\rangle = e^-
\nonumber\\
|+ + - + -\,\rangle &= u_1\,,\ & |+ + - - +\,\rangle = d_1
\nonumber\\
|+ - + + -\,\rangle &= u_2\,,\ & |+ - + - +\,\rangle = d_2
\nonumber\\
|- + + + -\,\rangle &= u_3\,,\ & |- + + - +\,\rangle = d_3\,.
\eqnum{A.15}
\end{eqnarray}
Since we already know how to express the generators $M_{ab}$
as matrices, we can write the states as a single 32-dimensional
column vector which is projected into two 16-dimensional vectors
of positive and negative chirality by the operator
$P_\pm \equiv \frac{1}{2} (1\pm\chi)$. We find
\FL
\begin{equation}
\psi = (\nu^c\ u^c_1\ u^c_2\ u^c_3\ d_3\ d_2\ d_1\ e^-
\ u_3\ u_2\ u_1\ \nu\ e^+\ d^c_1\ d^c_2\
d^c_3)_L\,. \eqnum{A.16}
\end{equation}
In this paper, we studied two types of strings:
string-$\tau_{\rm all}$, where $\tau_{\rm all}$ is given by
Eq.~(10), and string-$\tau_1$, where $\tau_1$ can be any of
the generators in Eq.~(12). It is easy to see that in terms
of $M_{ab}, \tau_{\rm all}$ is written as
\begin{equation}
\tau_{\rm all} = \frac{1}{5}
(M_{12}+M_{34}+M_{56}+M_{78}+M_{9\,10})\,, \eqnum{A.17}
\end{equation}
and $|\,\epsilon_1 \ldots \epsilon_5\,\rangle$ is an eigenstate of
$\tau_{\rm all}$ with eigenvalue $\frac{1}{10} \sum_i \epsilon_i\,.$
For the left-handed fermions above, $\frac{1}{10} \sum_i \epsilon_i
= \frac{1}{2}$ for $\nu^c$, $\frac{1}{10}$ for $e^+, u, d, u^c$,
and $-\frac{3}{10}$ for $\nu, e^-, d^c$.
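The eigenvalues just quoted follow directly from the labels: the short
Python sketch below (an illustration) loops over the sixteen
positive-chirality states and evaluates the charge of Eq.~(A.14) and
the $\tau_{\rm all}$ eigenvalue $\frac{1}{10}\sum_i \epsilon_i$ of
Eq.~(A.17).
\begin{verbatim}
from itertools import product
from fractions import Fraction

for eps in product((+1, -1), repeat=5):
    if eps.count(-1) % 2:              # keep even number of down spins
        continue
    Q = Fraction(sum(eps[:3]), 6) - Fraction(eps[4], 2)   # Eq. (A.14)
    t = Fraction(sum(eps), 10)         # tau_all eigenvalue, Eq. (A.17)
    label = ''.join('+' if e > 0 else '-' for e in eps)
    print(label, ' Q =', Q, '  tau_all =', t)
\end{verbatim}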
To study how $\tau_1$ acts on the fermions, we write $\tau_{1+}$
and $\tau_{1-}$ defined in Eq.~(23) as a product of five Pauli
matrices using Eqs.~(A.4) and (A.6), and then replace the
matrices $\sigma_1$ and $\sigma_2$ by the usual raising and
lowering operators $\sigma_\pm=\frac{1}{2} (\sigma_1 \pm i\sigma_2)$.
One obtains
\begin{eqnarray}
\tau_{1+} & = & \frac{1}{2} (\tau^{2\alpha-1, 2\beta}
+ \tau^{2\alpha, 2\beta-1}) \nonumber\\
& = & I \ldots I \sigma_+ \sigma_3 \ldots \sigma_3
\sigma_+ I \ldots I \nonumber\\
&& + I \ldots I \sigma_- \sigma_3 \ldots \sigma_3
\sigma_- I \ldots I \eqnum{A.18}
\end{eqnarray}
and
\begin{eqnarray}
\tau_{1-} & = & \frac{1}{2} (\tau^{2\alpha-1, 2\beta-1}
- \tau^{2\alpha, 2\beta}) \nonumber\\
& = & I \ldots I \sigma_+ \sigma_3 \ldots \sigma_3
\sigma_- I \ldots I \nonumber\\
&& - I \ldots I \sigma_- \sigma_3 \ldots \sigma_3
\sigma_+ I \ldots I \eqnum{A.19}
\end{eqnarray}
where $\alpha,\beta=1,\ldots,5$, $\alpha < \beta$, and
the two $\sigma_\pm$ matrices in each term occupy the
$\alpha$th and $\beta$th positions from the left. Now one
can read off from the list of fermions above which particles
are mixed by a given $\tau_1$. For generators of the form
$\tau_{1+}$, one immediately finds that except for the case
$\alpha=4, \beta=5$, all mix leptons with quarks; when
$\alpha=4, \beta=5$, the generator mixes $(e^+, \nu^c),
(u_1^c, d_1^c), (u_2^c, d_2^c),$ and $(u_3^c, d_3^c)$.
For generators of the form $\tau_{1-}$, leptons are mixed
with quarks when $\alpha$ = 1, 2, or 3 and $\beta$ = 4 or 5.
\newpage
\section{\protect\large \bf Introduction}
\hspace{2em}Since the discovery of high-T$_c$ superconductivity,$^1$
intensive theoretical work has been carried out to understand its properties.
Much of this effort was devoted to the analysis of two dimensional
electronic models,$^2$ in particular, the Hubbard$^3$ and
$t - J$ models.$^4$
In spite of their apparent simplicity, these models are very
difficult to study with analytical techniques. Actually,
there are no exact solutions of these models except in one dimension
(and even in this case, for the $t - J$ model only $J = 0$ and
$J = 2t$, i.e. the supersymmetric point, can be solved exactly).
In the parameter regime of interest for high-T$_c$
superconductivity,
these models can be regarded as strongly correlated electronic systems.
It is well known that most analytical methods, like Hartree-Fock $^5$
or RPA approximations, which are reliable for
weak coupling systems, have difficulties in dealing with
strongly correlated electrons. The same problem arises in
approximations like slave boson mean-field techniques.$^6$
In particular, for the
$t - J$ model it is not easy to decouple the charge and spin degrees
of freedom.
One should also note that in mean field calculations it is
necessary to make assumptions about ground state properties.
Numerical methods, on the other hand, are not biased by
any ``a priori" assumptions, and they have provided much of the reliable
information available for these models, as well as a useful check
of predictions formulated by analytical approximations.
Among the most widely used numerical techniques are the Monte Carlo
algorithms.$^7$ In particular, the version that uses the
Hubbard-Stratonovich transformation has been applied to the Hubbard
model$^8$ and several important results have been obtained.
An alternative to Monte Carlo techniques is the Lanczos method $^9$
which essentially gives the ground state of a given model for a
finite lattice.
{}From the ground state, we can compute all static and dynamical properties,
and in this sense, we obtain a complete characterization of a model at
zero temperature except for finite size effects.$^{10}$
This technique has provided important information about models of
correlated electrons.
For example, let us consider
a very recent work$^{11}$ where the $t - J$ model at quarter filling
has been studied.
In this work, strong signals of $d_{x^2-y^2}$
superconductivity close to the phase separation
border were found. These indications come from the study of
pairing correlations, Meissner effect and flux quantization
in the $4 \times 4$ lattice.
At quarter filling there are an equal number of holes and electrons
and we expect that at this point the finite size effects are
small.
However, if we consider the region physically relevant for
high-T$_c$ superconductivity which is close to half filling (doping
fraction $x \cong 0.10$), the number of
holes is very small (2 for the $4 \times 4$ lattice), and then we
would expect
a weak signal for hole superconductivity.
Actually, most of the exact diagonalization studies of the $t - J$ model
on this lattice, using realistic couplings, have not found any
indications of superconductivity.
Then, in order to study the phase diagram of the $t-J$ model,
its properties, and the relation between superconductivity and
phase separation in the physically
relevant region, it appears necessary to analyze larger clusters.
However, the 32-site lattice with 4 holes requires the
diagonalization of a matrix of $\sim 2.25
\times 10^{10}$ states, which is unreachable with present-day computers.
Similar Hamiltonian matrix dimensions appear in many other situations.
In this paper we want to stress the need for developing new methods in
the context of diagonalization in a reduced basis set
in order to answer quantitatively the important
questions posed by models of high-T$_c$ superconductivity.
There are strong reasons why we should attempt to improve
diagonalization
schemes, rather than other approaches like
Monte Carlo methods. It is well known that
Monte Carlo simulations of fermionic models
present ``the minus sign problem'',$^{12}$
which makes the study of these systems very difficult at the
physically interesting densities.
It is also well known that the analytical continuation that must be
performed to obtain dynamical properties from Monte Carlo
calculations is a difficult procedure,
and thus these techniques are not well developed.
The diagonalization procedures are free from the minus
sign problem, and as we mentioned at the beginning, all quantities,
static and dynamical, can be computed from the ground state. Thus, it
is very important to extend these techniques to large clusters, and the
attempt discussed in this paper corresponds to a systematic expansion
of the Hilbert space.
\section{\protect\large \bf Systematic expansion of the basis set}
\hspace{2em}As described in the Introduction, the sizes of
the Hilbert
space necessary to study quantitatively problems relevant to high-$T_c$
superconductivity
are considerably larger than the dimensions that can
be reached with present computers (although the currently available
results for small clusters seem to be qualitatively reliable).
In this context, here we want to show that significant results can be
obtained by diagonalization of the Hamiltonian in a truncated
or reduced Hilbert space.$^{13}$ Some variations of this procedure have
been used for
many years in other fields such as chemical physics (see, for example,
Ref.$\:$14) where similar work has been recently discussed
by Wenzel and Wilson.$^{15}$
The method of diagonalization in a truncated basis is of course
justified only if a few coefficients $x_i$ of the ground state:
\begin{eqnarray}
\Psi_0 = \sum_{\scriptstyle i } x_i \phi_i,
\end{eqnarray}
\noindent
have significant weight. In some cases, fairly accurate properties of
the ground state
can be obtained even with a small fraction of the total Hilbert space.
There are two questions that must be addressed to implement the proposed
technique:
\begin{enumerate}
\item It is necessary to choose an appropriate basis $\{\phi_i\}$ according to
the
physics of the problem. For example,
\begin{itemize}
\item real space $S^z$ representation for the $t-J$ model,
\item momentum space representation for the one-band Hubbard model in
weak coupling.
\end{itemize}
\item The algorithm must be able to find the most significant states
that contribute to the ground state wave function.
\end{enumerate}
The outline of the method we have developed and present in this paper,
which we call ``systematic
expansion of the Hilbert space'' (SEHS), is the following
(a schematic code sketch is given after the list):
\begin{enumerate}
\item start from as few as possible states chosen according to the
expected behavior of the system (knowing quantum numbers of the ground
state greatly simplifies the work);
\item at each step $i$ expand the Hilbert space by applying the
Hamiltonian, or at least part of it, to the current set of states;
\item diagonalize in the new enlarged Hilbert space using the Lanczos
method;
\item retain the states with the largest weight, such that the
dimension of the Hilbert space is $N_i = \lambda N_{i-1}$, $1 < \lambda
\leq 2$ (``slow growth'' approach);
\item go back to step 2 until convergence in the physical quantities
is achieved,
or until the largest available dimension in the computer is reached.
\end{enumerate}
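To make the steps concrete, the following Python sketch (an
illustration, not the code used for the results below) implements the
SEHS loop for a generic model. The routine {\tt apply\_H} is a
hypothetical model-specific function that returns, for a given
(hashable) configuration, the connected configurations together with
the matrix elements, including the diagonal ones; for the $t-J$ model
it would implement the hopping and spin-exchange terms. The matrix is
assumed real symmetric and the expanded space is assumed to have
dimension greater than one.
\begin{verbatim}
import numpy as np
from math import ceil
from scipy.sparse import dok_matrix
from scipy.sparse.linalg import eigsh

def sehs(start, apply_H, lam=1.5, n_iter=20, max_dim=10**6):
    basis, dim = list(start), len(start)
    for it in range(n_iter):
        # step 2: expand the space by one application of H
        seen = set(basis)
        for c in list(basis):
            for c2, amp in apply_H(c):
                if c2 not in seen:
                    seen.add(c2); basis.append(c2)
        # build H in the current (truncated) space
        idx = {c: i for i, c in enumerate(basis)}
        H = dok_matrix((len(basis), len(basis)))
        for c in basis:
            for c2, amp in apply_H(c):
                if c2 in idx:
                    H[idx[c], idx[c2]] += amp
        # step 3: Lanczos diagonalization in the enlarged space
        E0, psi = eigsh(H.tocsr(), k=1, which='SA')
        w = np.abs(psi[:, 0])**2
        # step 4: keep the most weighted states, N_i = lam*N_{i-1}
        dim = min(len(basis), max_dim, ceil(lam*dim))
        order = np.argsort(w)[::-1][:dim]
        basis = [basis[i] for i in order]
        print("iteration %d: dim = %d, E0 = %.8f" % (it, dim, E0[0]))
    return E0[0], basis
\end{verbatim}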
In an ideal situation, the states chosen at the starting point should
correspond to those
that carry most of the weight in the exact ground state. For some sets of
parameters (couplings, densities)
it is possible to guess these states.
However, different sets of parameters may have different behaviors,
and usually it is not possible to predict at which point the crossover
between them will occur. For example, in the $t-J$ model, for $J \gg t$ the
holes are bound together in pairs, so we can take as starting point
states where the holes are in nearest neighbor sites. On the
other hand, for $J \ll t$ the holes are not bound and move
around independently of each other and then it is not correct to
take the same states as before as the initial state for the iterations.
In this situation, the ``pruning'' of the Hilbert space, retaining
the most weighted states as indicated in point 4, is essential to
improve or correct the initial starting set of states. As we
discuss below, this procedure effectively works as a systematic method to
obtain and improve variational states. Moreover, it allows the
dimension of the Hilbert space to grow at a slow rate, and the
behavior of the energy is smoother than in the case of the
straight application of the Hamiltonian. Below we will apply the
proposed
method to several cases relevant to theories of high-T$_c$ superconductors.
\section{\protect\large \bf Study of the $t-J$ model}
\hspace{2em}Let us apply the SEHS method
to the $t-J$ model$^4$ which is
defined by the Hamiltonian:
\begin{eqnarray}
H = - t \sum_{\scriptstyle <i j>, \sigma}
(\tilde{c}^{\dagger}_{i,\sigma}\tilde{c}_{j,\sigma} +
\tilde{c}^{\dagger}_{j,\sigma}\tilde{c}_{i,\sigma})
+ J \sum_{ <i j>}
({\bf S}_{i}\cdot {\bf S}_{j} - \frac{1}{4} n_{i} n_{j}),
\end{eqnarray}
\noindent
where the notation is standard. The first term describes the
hopping of holes or kinetic energy, while the second one corresponds to the
antiferromagnetic Heisenberg interaction. In this model the size of
the Hilbert space grows roughly as $3^{N_s}$, where $N_s$ is the number of
sites of the lattice, after taking into account the constraint of no
double occupancy.
In two dimensions (2D), this model has been studied at all fillings on
the $4 \times 4$ cluster.$^{10}$
For up to 2 holes, clusters of up to 26 sites have also been
considered.$^{16}$
First, let us briefly discuss the application of this
method
to the two-dimensional $t-J_z$ model,$^{17}$ which is obtained from the $t-J$
model by eliminating the spin exchange term in the Heisenberg
interaction.
Consider the case of one hole. In the limit of $J_z/t \gg
1$, the ground state of this model consists of a state in
which the hole is located at an arbitrary site surrounded by
an otherwise perfect N\'{e}el state.
In this limit the dimension of the Hilbert space
needed to get the physics of the problem is just equal to one
(plus all translationally equivalent states).
Now, as $J_z/t$ is reduced to the most interesting region, i.e. $J_z/t
\leq 1$, the hole gains kinetic energy at the expense of magnetic
energy and starts to move away from its initial position. As the hole
hops, it leaves behind a trail of overturned spins called a
``string''.$^{18}$
As $J_z/t$ is
lowered, one must take into account longer and longer strings.
However the important string excitations are still of finite length,
and then in this case it is enough to keep a fraction of the total Hilbert
space
to describe it.
For this model, the application of the Hamiltonian, i.e. the hopping
term, to expand the Hilbert
space at each step has a direct physical meaning.
As we have shown in a previous paper,$^{17}$ it is possible to converge
to the ground state energy with several digits of accuracy
by retaining a small fraction of the full Hilbert space. As an example,
in Table I, the energy of the system for two holes is shown
for a cluster of 50 sites and $J_z/t = 0.3$ as a function of the
dimension of the Hilbert space. It is clear that the new technique works
very
well in this case. For more details see Ref. 17.
Let us now consider the $t - J$ model with the full Heisenberg
interaction (Eq. (2)). In this case, even in the absence of
holes, the ground state is characterized by the presence of
spin wave excitations that reduce the antiferromagnetic order from
its N\'{e}el (classical) value. Thus, in principle, we need to
describe physically not only the modification of the spin background
in the vicinity of the holes, but also the spin exchanges that take
place at arbitrary distances from the holes, which contribute
significantly to the spin background. This qualitative difference
between the $t-J$ and $t-J_z$
models
can be detected by measuring
the distribution of weights $S(x)$ defined as the sum of the
weights $\mid x_i \mid ^2$ belonging to the interval $\left[ x, x +
\Delta \right]$. In Fig.1, we show $S(x)$ in the exact ground
state of the $4 \times 4$ lattice with two holes at $J_z = 0.6$ and $J=0.6$
(in general we take t=1), for the $t - J_z$ (Fig.1a) and $t - J$
(Fig.1b) models, respectively. It can be seen that in the latter, there is more
weight for very small absolute values of the coefficients $x_i$
of the ground state $\Psi_0$ (Eq. (1)).
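Operationally, $S(x)$ is simply a weighted histogram of the
ground-state coefficients; a minimal sketch in Python:
\begin{verbatim}
import numpy as np

def weight_distribution(x, delta=0.01):
    # S(x): total weight |x_i|^2 in each interval [x, x + Delta]
    x = np.asarray(x)
    bins = np.arange(x.min(), x.max() + delta, delta)
    S, edges = np.histogram(x, bins=bins, weights=np.abs(x)**2)
    return edges[:-1], S

# edges, S = weight_distribution(psi0); S.sum() = 1 if psi0 normalized
\end{verbatim}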
Let us start the expansion of the Hilbert space from the same sets of
states considered for the $t-J_z$ model. At each step, the Hilbert
space is expanded by the application of both the hopping term and
the spin exchange term of the Heisenberg interaction.$^{19}$
In the language of perturbation theory, this is like a double expansion
around the Ising limit ($t-J_z$) with static holes, namely one or two
holes in an otherwise perfect N\'{e}el state. The expansion
with the spin exchange term of the Heisenberg interaction could be
regarded as a perturbation in the spin anisotropy parameter.
In Figs. 2-5, we show results for the $4 \times 4$ lattice.
These can be compared with results for the
exact ground state which can be easily computed.
In Fig. 2, the energy is shown as a function of the
dimension of the basis set, for two holes at $J = 0.2$. The
energies obtained with the ``truncation'' procedure (dot-dashed line)
are much better than the energies obtained without it (dashed line)
namely diagonalizing at step 3 of the method, but without truncating in
step 4. As explained before, this improvement helps in discarding states
with very small weight.
Finally, both
are much better than the energies obtained at each iteration
of the conventional Lanczos algorithm (full line).
In Fig. 3, the overlap of the variational wave functions
in the truncated Hilbert space with the exact ground state is
shown for both procedures, with (dot-dashed line) and without
(dashed line) the elimination of the less weighted states or
``truncation''.
In Fig. 4, the evolution of the
hole-hole correlations at the {\em maximum} distance in this lattice
is shown as a function of the dimension of the Hilbert space. It can be
seen that the convergence with the ``truncation'' procedure is much faster
than without it, even for correlation functions. The notation in these
figures is the
same as for Fig. 2. A similar behavior was also obtained for the
spin-spin correlation at the maximum distance.
Finally, to complete the preliminary study on the
$4 \times 4$ lattice, we show in Fig. 5 the energies obtained
with the full basis set expansion procedure starting from the
N\'{e}el state (curve labeled 0); from the N\'{e}el state and all the
states obtained from it by one spin exchange (curve labeled 1);
from the N\'{e}el state and all the states obtained from it by two
spin exchanges (curve labeled 2); and so on. The energies at the
beginning of each set correspond to the variational states
discussed in Ref. 20. We see that the energies
obtained with the new method starting from the N\'{e}el state are
considerably better,
even for a very small number of iterations, than those corresponding
to Dagotto-Schrieffer's variational states. As a conclusion, even though we
cannot
reach the ground state as accurately as we did for the $t-J_z$ model, we still
can obtain a very good variational state compared with other
states discussed in the literature for finite lattices.
Now let us discuss clusters that cannot be studied with the conventional
Lanczos approach for lack of enough memory in present-day computers.
We will show results obtained for the
$t - J$ model on the $6 \times 6$ lattice with two holes, and $J = 0.4$.
The dimension of the Hilbert space is, in this case, $2.55 \times 10^9$ states
using translational and spin reversal symmetries.
In Fig. 6, the energy is plotted as a function of the dimension of
the Hilbert space (in a logarithmic scale).
With a full line we show the energies obtained at each step of the conventional
Lanczos algorithm, while with a dashed line we plot the energies obtained
expanding the Hilbert space by applying the Hamiltonian, and at each step
diagonalizing in the enlarged space using the Lanczos method, i.e.
steps 2 and 3 of
the method described
above. Finally, with circles and diamonds, we show the points
obtained by retaining the most weighted states, i.e. step 4 of our
method. The long-dashed line in zig-zag shows the order in which
every point is obtained starting with the circle at the top.
It is clear that a better convergence is achieved with the full
procedure of the SEHS method. After reaching the maximum
dimension that can be handled with the available computer, it is also
possible to use an extrapolation procedure to extract results at the dimension
of the total Hilbert space, but we have not attempted such an analysis
in the present paper.
(The energy for this particular system has been estimated
with a Green's Function Monte Carlo technique$^{21}$ to be
$\sim -20.0$.) In principle, one should also compute other
physical quantities of interest at each coupling,
and then also extrapolate them to the full dimension.
Presumably, the slow convergence of the ground state
energy with the size of the Hilbert space can be attributed to the highly
nontrivial (and fluctuating) spin-1/2
background. If so, the convergence should not deteriorate as more holes
are added to
the lattice. On the other hand, Monte Carlo algorithms typically encounter
increasingly severe problems as the number of holes is increased, at
least if one remains close to half-filling.
The number of off-diagonal transitions for
both the hopping (dashed line) and the exchange (full line)
parts of the Hamiltonian as a function of the number of states
included in the basis set can be computed
at each step.
The result is that successive sets generated
during the process of enlargement of the Hilbert space are
increasingly interacting, i.e. the Hamiltonian matrix becomes
denser (see, for example, Fig. 11 in Ref. 13).
\section{\protect\large \bf Application to the one-band Hubbard model}
\hspace{2em}The one-band Hubbard model is defined
by the Hamiltonian:
\begin{eqnarray}
H = - t \sum_{\scriptstyle <i j>, \sigma}
(c^{\dagger}_{i,\sigma}c_{j,\sigma} +
c^{\dagger}_{j,\sigma}c_{i,\sigma})
+ U \sum_{i} n_{i,\uparrow}n_{i,\downarrow},
\end{eqnarray}
\noindent where the notation is standard.
The size of the Hilbert space grows
as $4^{N_s}$, and thus it is even more difficult to study than the $t-J$
model from a numerical point of view.
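As an illustration of this growth, the following sketch (ours, not the code used here) builds the dense Hamiltonian of Eq. (3) for a small periodic chain in the occupation-number basis, with the usual Jordan-Wigner sign convention for the fermionic hopping; already for $N_s = 4$ the full space has $4^4 = 256$ states.
\begin{verbatim}
# Sketch: dense one-band Hubbard Hamiltonian on a tiny periodic chain.
# Modes 0..Ns-1 are spin up, Ns..2Ns-1 are spin down (Jordan-Wigner).
import numpy as np
from itertools import product

Ns, t, U = 4, 1.0, 4.0
bonds = [(i, (i + 1) % Ns) for i in range(Ns)]
states = list(product((0, 1), repeat=2 * Ns))
index = {s: n for n, s in enumerate(states)}
H = np.zeros((4 ** Ns, 4 ** Ns))

def hop(s, a, b):                     # c^+_a c_b with Jordan-Wigner sign
    if s[b] == 0 or s[a] == 1:
        return None, 0.0
    sign = (-1) ** sum(s[min(a, b) + 1:max(a, b)])
    sp = list(s); sp[b] = 0; sp[a] = 1
    return tuple(sp), sign

for s in states:
    nu, nd = s[:Ns], s[Ns:]           # up and down occupations
    H[index[s], index[s]] = U * sum(u * d for u, d in zip(nu, nd))
    for (i, j) in bonds:
        for (a, b) in ((i, j), (j, i)):
            for off in (0, Ns):       # hopping for each spin species
                sp, sign = hop(s, a + off, b + off)
                if sp is not None:
                    H[index[sp], index[s]] += -t * sign
print(H.shape, np.linalg.eigvalsh(H)[0])   # full Fock space, 256 x 256
\end{verbatim}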
In this case, the largest lattice considered in the literature
is the $4 \times 4$ lattice
for all dopings.$^{10,22}$
In momentum space, the Hamiltonian of the Hubbard model takes the
form:
\begin{eqnarray}
H = \sum_{\scriptstyle {\bf k }, \sigma} \epsilon ({\bf k })
c^{\dagger}_{ {\bf k } ,\sigma} c_{ {\bf k } ,\sigma}
+ U \sum_{ {\bf k_1,k_2,k_3}} c^{\dagger}_{ {\bf k_1 },\uparrow}
c_{ {\bf k_2},\uparrow} c^{\dagger}_{ {\bf k_3},\downarrow}
c_{ {\bf k_1-k_2+k_3},\downarrow},
\end{eqnarray}
\noindent
where each ${\bf k}$ runs over the Brillouin zone. The single
particle energies are given by $\epsilon ( {\bf k }) = - 2 t
(\cos k_x + \cos k_y )$.
In the absence
of Coulomb repulsion, the model reduces to a tight binding
model which is easily solved. The total energy is the sum of
the single particle energies for all momenta ${\bf k }$
up to the Fermi surface. Here, we have to distinguish between
two cases: the closed shell, in which the last shell is completely
occupied; and the open shell in which the last shell is partially
occupied. In the former case the ground state is not degenerate
while in the latter the degeneracy can be very large.
In the following,
we concentrate on the $6 \times 6$ cluster with 18 (9$\uparrow$ and
9$\downarrow$) and 26 (13$\uparrow$ and 13$\downarrow$) electrons
which correspond to $closed$ shell situations.
The dimensions of the Hilbert space for some closed shell cases
in this cluster are: for 10 electrons, $3.95 \times 10^{9}$; for 18 electrons,
$2.46 \times 10^{14}$; and for
26 electrons, $1.48 \times 10^{17}$, well beyond the reach of
techniques that diagonalize the full Hilbert space of the problem.
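Both the closed-shell fillings and the dimensions quoted above are easy to check; the sketch below (ours) lists the shells of $\epsilon({\bf k})$ on the $6 \times 6$ lattice and reproduces the quoted dimensions as $\binom{36}{N/2}^2/36$, consistent with restricting to one of the 36 translation sectors.
\begin{verbatim}
# Sketch: shell structure of eps(k) = -2t(cos kx + cos ky) on the 6x6
# lattice, and the quoted Hilbert-space dimensions.
import numpy as np
from math import comb
from itertools import product

t, L = 1.0, 6
ks = [2 * np.pi * n / L for n in range(L)]
eps = [-2 * t * (np.cos(kx) + np.cos(ky)) for kx, ky in product(ks, ks)]
levels, degs = np.unique(np.round(eps, 10), return_counts=True)
N = 0
for e, g in zip(levels, degs):
    N += 2 * g                        # each k level holds 2 electrons (spin)
    print(f"eps = {e:+.2f}, degeneracy {g}, shell closes at N = {N}")
# closed shells at N = 2, 10, 18, 26, ...
for n in (10, 18, 26):
    print(n, f"{comb(36, n // 2) ** 2 / 36:.3g}")
# -> 3.95e+09, 2.46e+14, 1.48e+17, matching the dimensions above
\end{verbatim}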
For the closed shell situations, our initial Hilbert space consists
of only one state, which is the ground state of the $U = 0$ case
(remember that we are working in momentum space).
The Hilbert space is expanded by applications of the second term
of Eq. (4), which contains the off diagonal transitions.
These terms create and annihilate pairs of
electrons in such a way that the total momentum is conserved.
In some other approaches, the Hilbert space is expanded through the
creation of single electron-hole pair excitations,$^{23}$ but then
the total momentum is not conserved.
In the spirit of the general
procedure outlined in Section 2, we expand the Hilbert space by applying
the whole second term of Eq. (4). (Another possibility, which we have not
yet fully explored, is to expand the Hilbert space by taking only transitions
between the shells on both sides of the Fermi level, and then successively
increase the number of shells involved.)
The expansion of the Hilbert space by application of the Coulomb
term could also be considered as a weak-coupling perturbation
expansion in
a parameter which is proportional to $U$, but unlike other
perturbation schemes,$^{24}$ our procedure remains variational
in the sense that the energy is always an upper bound to the
exact ground state energy.$^{25}$
In Figs. 7 and 8, we show the convergence of the energy as a
function of the dimension of the Hilbert space for 18 and 26
electrons respectively, and for several values of $U$.
The energies are measured in units of $t$ as usual, and they
have been shifted in order to fit them into the same
plot and in order to compare their convergence.
It can be observed that the convergence is faster the fewer the electrons
and, as expected, for smaller values of $U$.
For example, for the case of 26 electrons at $U = 2$, we obtain
a value of -47.907, in good agreement with the Monte Carlo
estimate$^{26}$ of -47.87$\pm$0.05, i.e. the new technique reaches the same
accuracy as Monte Carlo methods.
The most important features in these plots are the presence of
discontinuities in the derivative of the energy,
and a ``wrong'' concavity of the curves
(compare, for example, with the curvature in Figs. 5 and 6 for the
$t-J$ model). We do not have an explanation for this behavior,
although perhaps the long-range nature of the Coulomb interaction in
momentum space plays a role.
The wrong curvature of the plots makes it difficult to assess
the convergence of the energy and to perform an extrapolation
procedure.
The points at which there are discontinuities in the derivative
are the points obtained by successive application of the
Hamiltonian starting from the initial state. All the other points
are obtained by pruning these
Hilbert spaces, and by applying the Hamiltonian to the reduced
spaces.
The somewhat strange behavior of the energy vs. the dimension of
the Hilbert space is an artifact of the momentum representation
chosen, and perhaps a manifestation of the shell structure
of the tight binding limit.
In the interval considered, i.e. $U \leq 4$, we found that the convergence
of the energies
obtained by working in the momentum representation is much faster
than the one obtained by working in real space. Presumably, the
opposite is true for larger values of $U$.
Finally, in Table II we provide comparisons of our estimates
with the results obtained using Quantum
Monte Carlo techniques,$^{26,27}$ as well as the results
obtained with a stochastic implementation of the modified Jacobi
method$^{28}$ also referred to as ``stochastic diagonalization'' (SD).
To obtain the results quoted in this Table, $N_R \sim 2 \times 10^4 $
important states were included in the SD calculation and a CPU
time of $\sim 10^4 $ seconds (for the $4 \times 4$ lattice) was required.
This CPU time is also what is required by our method for
$N_h \sim 10^6$. However, as reported in Ref. 28, and as can be
seen in Table II, the energy is not yet converged and presumably $N_R$
has to be increased by a factor of $\sim 10$ in order to obtain the
same accuracy as our results. This translates to a factor of $\sim 100$
in the total CPU time, since in the SD algorithm the CPU time grows
quadratically with $N_R$. Besides, from the results reported in Ref.
28, it is also evident that for the SD method the convergence is
more difficult for larger values of the Coulomb repulsion.
In summary, it seems that at least in its current implementation, the
SD method is more expensive than the SEHS method reported in this
paper for a given accuracy.
\section{\protect\large \bf Application to the three-band Hubbard model}
\hspace{2em}Finally, and for completeness,
we briefly consider the three-band Hubbard model, which contains
the on-site Coulomb repulsion for both the copper and
oxygen sites ($U_d$ and $U_p$, respectively), the energies of each ion
($e_d$ and $e_p$ for copper and oxygen ions), and a Coulomb
repulsion between copper and oxygen ions, $V$.$^{29}$
We study the $\sqrt{8} \times \sqrt{8}$ lattice (24 sites, counting both
oxygen and
copper sites) with
two doped holes (10 fermions), and the following set of parameters:
$U_d = 7$, $U_p = 0$, $e_p - e_d = 1.5$ and $V = 3$. As the initial
basis set, we took all the states with all the Cu sites singly
occupied and the remaining two holes located on O sites (also singly
occupied). This is a good starting Hilbert space for the case
$V = 0$ but, as the algorithm itself has shown, it is not appropriate
for all values of the parameters.
In Fig. 9, we show the results obtained using the
Hilbert space expansion procedure. The dashed lines show the order
in which these points were obtained starting from the circle at the
top right in the same way as was explained in Fig. 6. In Fig. 10,
the best points
in the set of results shown in Fig. 9 are plotted with circles. In a
second stage, once we have reached $\sim 10^6$ states, we go
all the way back (points indicated with full
diamonds), finding that the initial guess was not appropriate (i.e. the
states with the highest weights were not those used in the starting Ansatz),
and then we increase the dimension of the basis set again
(empty squares). It can be seen that this last set of points
behaves very smoothly and the final part of the curve is fairly
flat indicating a reasonable convergence. From this set of
states, in principle, we could compute all quantities of interest and
eventually
extrapolate them to the full Hilbert space.
However, one should also notice that in this case the largest dimension that
we have considered ($\sim 2.5 \times 10^6$) is ``only'' two orders of magnitude
smaller than the dimension of the full Hilbert space, which is probably
the reason for the good convergence of the results.
In Fig. 11 we compare the energies for $V = 0$ and $V = 3$ as a
function of the dimension of the Hilbert space. The energies have been
shifted for the sake of comparison. It can be seen that the
convergence is better for the $V = 3$ case. For $V = 0$, following the
Zhang-Rice construction,$^4$ one can map this model to the one-band
$ t - J $ model. It is then reasonable to assume that, as in this
model, the spin background is responsible for the slow convergence.
The same pattern of convergence was also found for the other set of
parameters we have studied: $U_p = 3$, $e_p - e_d = 4$,
and $V = 0$, $V = 3$, and the same value of $U_d = 7$. In this
case, for $V = 3$ the convergence
is faster than for $V = 0$, reflecting the fact that it is easier for
the algorithm to find the most relevant states, which contain doubly
occupied Cu sites.
Finally, we show in Fig. 12 the spin-spin correlation
at the maximum distance on the lattice, and the density of holes in
Cu sites as a function of the dimension of the Hilbert space for
the set of parameters $U_d = 7$, $U_p = 0$, $e_p - e_d = 1.5$ and
$V = 3$. These curves
also indicate a reasonable convergence. For this set of parameters
we obtain $n_{Cu} = 0.555$, while for $U_p = 3$, $e_p - e_d = 4$,
$n_{Cu} = 1.088$, indicating the presence of two different regimes
for large $V$. This result might be relevant to some speculation
regarding the nature of pairing and phase separation in $Cu-O$
planes.$^{30}$
In any case, it is quite encouraging to observe that the new technique
may work well in the realistic (and complicated) case of the three-band
Hubbard model.
\section{\protect\large \bf Discussion and conclusions}
\hspace{2em}
The procedure described in this paper can be regarded as a method to generate
and/or
improve variational wave functions. In the first place, it should be noted
that since no approximations are made on the Hamiltonian, and since
we work in a reduced Hilbert space, the energies obtained with
this procedure are rigorous upper bounds to the exact ground
state energies. The application to the $t-J$ model is one example
in which the initial set of states is ``corrected'' by this algorithm.
In this case, a direct comparison with a variational
state was also given (see also Ref. 19).
Another application in which the elimination at each step of the
least weighted states leads to an improvement or to a correction
of the initial guess is the case of the three-band Hubbard model.
In this case, the initial state depends on the parameters
that determine the $Cu$ or $O$ occupancy when the nearest
neighbor Coulomb repulsion is large enough. In general, we believe that
the technique is promising and may compete with more standard Lanczos
and
Quantum Monte Carlo methods, at least for some particular Hamiltonians
and parameters. A clear example is the $t-J_z$ model, for which the new
method has provided the most accurate results reported in the literature
thus far.$^{17}$
For the systems where we cannot arrive at a good approximation for the
ground state due to the slow rate of convergence of the results (for example
the $t-J$ model seems to converge only logarithmically), one should
resort to some extrapolation procedure to the full Hilbert dimension.
In this sense, we are in the same situation as the zero temperature
(Green's function or random walk) Monte Carlo algorithms that
cannot reach convergence before the noise becomes very high.$^{21}$
Besides the possible applications of this reduced Hilbert space
approach as indicated above, there are other situations that can also
be studied with the SEHS method. One of them is the quarter-filled
$t-J$ model on the 26 sites lattice, which is interesting to
study in order to analyze the finite size dependence of the results obtained in
Ref. 11 in the context of superconductivity in the $t-J$ model.
The method can also be applied to the
coupled-plane $t-t_\perp-J$
model.$^{31}$ For this system, one could start from the best states
of the ground state of each plane separately and then expand the
basis set by application of the interplane hopping term of the
Hamiltonian. This is equivalent to an expansion around
$t_\perp / t = 0$.
Finally, we want to comment that there are other algorithms that also
deal with truncated Hilbert spaces besides the stochastic
diagonalization approach and the presently described technique.
In an already mentioned paper,$^{23}$ the Hubbard model was studied
in momentum space with a truncation technique using concepts
of renormalization group theory.
Another stochastic truncation method has recently
been developed for the $Z_2$ gauge model.$^{32}$
The computational effort of the SEHS method
grows roughly linearly with $N_h$, and
currently $N_h \sim 10^6$ is attainable on present-day computers.
These other methods use a smaller
basis set, but the CPU time grows
as ${N_h}^3$ for the methods of Refs. 23 and 32, and
quadratically in $N_h$ for the stochastic diagonalization algorithm.
Summarizing, a new algorithm has been discussed that has several
of the advantages of the Lanczos approach (especially the possibility of
studying dynamical responses), but that can be applied to large
clusters. The method works remarkably well in some special cases,
while in general it is competitive with other more standard algorithms.
\section{\protect\large \bf Acknowledgements}
\hspace{2em}
We thank Adriana Moreo for providing the Monte Carlo results
used in this paper, and for useful conversations. E. D. thanks
the Office of Naval Research for its partial support under
grant ONR-N00014-93-1-0495. J. R. wishes to acknowledge the support
from High Performance Computations grant from Vanderbilt
University.
Most of the calculations were done using the Cray YMP at the
Supercomputer Computations Research Institute in Tallahassee,
Florida. The research was sponsored in part by the U. S. Department
of Energy under contract No. DE-AC05-84OR21400 managed by Martin
Marietta Energy Systems, Inc.
\newpage
\section{\protect\large \bf References}
\begin{enumerate}
\item J. G. Bednorz and K. M\"uller, {\em Z. Phys.} {\bf B 64}, 189 (1986).
\item P. W. Anderson, {\em Science} {\bf 235}, 1196 (1987).
\item J. Hubbard, {\em Proc. R. Soc. London, Ser.} {\bf A 276}, 238 (1963).
\item F. Zhang and T. M. Rice, {\em Phys. Rev.} {\bf B 37}, 3759 (1988).
\item See for example, J. A. Verg\'{e}s, E. Louis, P. S. Lomdahl, F.
Guinea and A. R. Bishop, {\em Phys. Rev. } {\bf B 43}, 4462 (1989), and
references therein.
\item G. Kotliar and A. E. Ruckenstein, {\em Phys. Rev. Lett.} {\bf 57}, 1362 (1986).
\item W. von der Linden, {\em Phys. Rep. } {\bf 220 }, 53 (1992).
\item A. Moreo, D. J. Scalapino, R. L. Sugar, S. R. White and N. E.
Bickers, {\em Phys. Rev.} {\bf B 41}, 2313 (1990); and references therein.
\item B. N. Parlett, {\em ``The symmetric eigenvalue problem''},
(Prentice Hall, 1980).
\item E. Dagotto, A. Moreo, F. Ortolani, D. Poilblanc
and J. Riera, {\em Phys. Rev. } {\bf B 45}, 10741 (1992).
\item E. Dagotto and J. Riera, {\em Phys. Rev. Lett.} {\bf 70},
682 (1993).
\item E. Y. Loh et al., {\em Phys. Rev.} {\bf B 41}, 9301 (1990).
\item An earlier discussion of this method was given in J. Riera, in
``Proceedings of the Mardi Gras '93 Conference on Concurrent Computing
in the Physical Sciences", World Scientific, 1993.
\item P. J. Knowles, {\em Chem. Phys. Letters} {\bf 155}, 513
(1989); P. J. Knowles and N. C. Handy, {\em J. Chem. Phys.} {\bf 91},
2396 (1989).
\item W. Wenzel and K. G. Wilson, {\em Phys. Rev. Lett.} {\bf 69}, 800 (1992).
\item D. Poilblanc, J. Riera, and E. Dagotto, preprint, (1993).
\item J. Riera and E. Dagotto, {\em Phys. Rev. } {\bf B 47}, xxxxx (1993).
\item W. F. Brinkman and T. M. Rice, {\em Phys. Rev.} {\bf B 2}, 1324 (1970);
B. I. Shraiman and E. D. Siggia, {\em Phys. Rev. Lett.} {\bf 60},
740 (1988).
\item In a semi-analytical approach (S. Trugman, {\em Phys. Rev.} {\bf B 37},
1597 (1988); {\em Phys. Rev.} {\bf B 41}, 892 (1990)), the basis set was
expanded by
the application of the hopping term only (and the second neighbor double
hopping term present in the model considered by Trugman). See also J.
Inoue and
S. Maekawa, {\em Prog. Theor. Phys. Suppl.} {\bf 108}, 313 (1992).
Typically,
the Hilbert space was expanded to include a few hundred states. This is a very
small quantity compared with the $\sim 10^6$ one can reach with our
method, but Trugman's results are valid for the bulk limit. So, we obtain a
much better variational state but at the cost of limiting ourselves to
finite lattices.
\item E. Dagotto and J. R. Schrieffer, {\em Phys. Rev. }{\bf B 43},
8705 (1991).
\item M. Boninsegni and E. Manousakis, preprint (1992).
\item G. Fano, F. Ortolani and A. Parola, {\em Phys. Rev.} {\bf B 42}, 6878
(1990).
\item S. R. White, {\em Phys. Rev.} {\bf B 45}, 5752 (1992).
\item J. Gal\'{a}n and J. A. Verg\'{e}s, {\em Phys. Rev. } {\bf B 44},
10093 (1991).
\item A numerical, but more conventional, weak-coupling perturbative study
on the $6 \times 6$ lattice was reported by B. Friedman,
{ \em Europhysics Letters} { \bf 14}, 495 (1991).
\item A. Moreo, private communication.
\item N. Furukawa and M. Imada,
{\em J. Phys. Soc. Jpn.} {\bf 61}, 3331 (1992).
\item H. De Raedt and W. von der Linden, {\em Phys. Rev.} {\bf B 45},
8787 (1992); H. de Raedt and M. Frick, {\em Phys. Rep.}, to appear. See
also
P. Prelovsek, and X. Zotos, preprint.
\item V. Emery, {\em Phys. Rev. Lett.} {\bf 58}, 2794 (1987).
\item C. Varma,
S. Schmitt-Rink and E. Abrahams, {\em Solid State Commun.} {\bf 62}, 681
(1987).
\item J. M. Wheatley, T. C. Hsu and P. W. Anderson, {\em Nature} {\bf 333},
121 (1988).
\item C. J. Hamer and J. Court, preprint (1992).
\end{enumerate}
\newpage
\noindent
{\bf Table I}
\vskip 0.8cm
\begin{tabular}{rr} \hline\hline
${\rm H_D}$ & $E_{2h}$ \\ \hline
234 & -18.707940 \\
696 & -18.882805 \\
6204 & -19.026339 \\
18416 & -19.052528 \\
52672 & -19.066660 \\
106435 & -19.074957 \\
212486 & -19.079975 \\
673640 & -19.083531 \\
980681 & -19.084816 \\
1502829 & -19.085503 \\
2249454 & -19.085857 \\ \hline\hline
\end{tabular}
\noindent
\vskip 2cm
{\bf Table II}
\vskip 0.8cm
\begin{tabular}{llr} \hline\hline
method & 18 electrons & 26 electrons \\ \hline
QMC & -41.87$\pm$0.10 & -41.98$\pm$0.15 \\
SEHS & -41.69 & -41.49 \\
SD & -41.45 & -40.77 \\ \hline\hline
\end{tabular}
\newpage
\centerline {\bf TABLE CAPTIONS}
\vskip 2truecm
\noindent
{\bf Table I}
\noindent
Energy $E_{2h}$ of two holes in the $t-J_z$ model, as a function
of the size of the Hilbert space, ${\rm H_D}$, for a cluster of
50 sites, and coupling $J_z/t = 0.3$.
\vskip 2truecm
\noindent
{\bf Table II}
\noindent
Comparison between ground state energies (in units of $t$) obtained
with the present method (SEHS), Quantum Monte Carlo (QMC), and
Stochastic Diagonalization (SD), for the $6 \times 6$ lattice and
$U = 4$.
\newpage
\centerline {\bf FIGURE CAPTIONS}
\vskip 2truecm
\noindent
{\bf Figure 1}
\noindent
Distribution of weights $S(x)$ for (a) the $t - J_z$ model and (b)
the $t - J$ model, on the $4 \times 4$ lattice with 2 holes and $J/t = 0.6$.
\vskip 1truecm
\noindent
{\bf Figure 2}
\noindent
Energy vs dimension of the Hilbert space for the $4 \times 4$
lattice with two holes, $J = 0.4$. The full curve corresponds to
the energies obtained at each step of the conventional Lanczos
iteration. The dot-dashed (dashed) corresponds to the procedure
indicated in Sec. 3 with (without) including step 4.
\vskip 1truecm
\noindent
{\bf Figure 3}
\noindent
Overlap between the exact ground state and the states
generated during the procedure of expansion of the Hilbert space.
The meaning of the curves is as in Fig. 2.
\vskip 1truecm
\noindent
{\bf Figure 4}
\noindent
Hole-hole correlations at the maximum distance on the $4 \times 4$
lattice. The meaning of the curves is the same as for Fig. 2.
\vskip 1truecm
\noindent
{\bf Figure 5}
\noindent
Expansion of the Hilbert space starting from different
initial basis sets for the $4 \times 4$ lattice with 2 holes and $J=0.2$.
\vskip 1truecm
\noindent
{\bf Figure 6}
\noindent
Energy vs dimension of the Hilbert space for the $6 \times 6$
lattice, 2 holes, $J=0.4$.
\vskip 1truecm
\noindent
{\bf Figure 7}
\noindent
Energy of the Hubbard model on the $6 \times 6$ lattice with
18 electrons vs dimension of the Hilbert space. The asterisks
indicate the Monte Carlo estimates.
\vskip 1truecm
\noindent
{\bf Figure 8}
\noindent
Energy of the Hubbard model on the $6 \times 6$ lattice with
26 electrons vs dimension of the Hilbert space. The asterisks
indicate the Monte Carlo estimates.
\vskip 1truecm
\noindent
{\bf Figure 9}
\noindent
Energy of the three-band Hubbard model on the
8-cell square lattice as obtained by application of the SEHS
procedure.
\vskip 1truecm
\noindent
{\bf Figure 10}
\noindent
Energy of the three-band Hubbard model on the
8-cell square lattice vs the dimension of the Hilbert space.
The open circles correspond to the best points
of Fig. 9. After reaching $\sim 10^6$ states, we truncate
the Hilbert space in successive steps (diamonds), and then
we start a new expansion of the basis set (squares).
\vskip 1truecm
\noindent
{\bf Figure 11}
\noindent
Energy of the three-band Hubbard model on the
8-cell square lattice vs the dimension of the Hilbert space
for different values of the intersite Coulomb repulsion $V$.
\vskip 1truecm
\noindent
{\bf Figure 12}
\noindent
Spin-spin correlation at the maximum distance and density of
holes at Cu sites for the three-band Hubbard model on the
8-cell square lattice vs the dimension of the Hilbert space.
\vskip 1truecm
\end{document}
\section{Introduction}
For any linear map $\Phi:M_{d_1}(\mathbb C) \to M_{d_2}(\mathbb C)$, we define
its \emph{Choi-Jamio{\l}kowski matrix} as
\begin{equation}\label{eq:def-J}
J(\Phi) := \sum_{i,j=1}^{d_1} |i\rangle \langle j| \otimes \Phi(|i\rangle \langle j|) \in M_{d_1}(\mathbb C) \otimes M_{d_2}(\mathbb C).
\end{equation}
This isomorphism was first studied by Choi~\cite{cho75a} and
Jamio{\l}kowski~\cite{jamiolkowski1972linear}. Note that some authors prefer to
add a normalization factor of $d_1^{-1}$ in front of the expression for
$J(\Phi)$. Other authors use the opposite order for the tensor product factors, a
choice resulting in an awkward order for the space in which $J(\Phi)$ lives.
The rank of the matrix $J(\Phi)$ is called the \emph{Choi rank} of $\Phi$; it is the minimum number $r$ such that the map $\Phi$ can be written as
$$\Phi(\cdot) = \sum_{i=1}^r A_i \cdot B_i^*,$$
for some operators $A_i,B_i \in M_{d_2 \times d_1}(\mathbb C)$.
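As a concrete illustration (our sketch, not taken from the references), one can build $J(\Phi)$ directly from the definition \eqref{eq:def-J} for a map given in the above form and verify that its rank is indeed at most $r$:
\begin{verbatim}
# Sketch: Choi-Jamiolkowski matrix of Phi(X) = sum_i A_i X B_i^*,
# built directly from its definition; its rank is at most r.
import numpy as np

def choi(As, Bs, d1):
    d2 = As[0].shape[0]
    J = np.zeros((d1 * d2, d1 * d2), dtype=complex)
    for i in range(d1):
        for j in range(d1):
            E = np.zeros((d1, d1)); E[i, j] = 1.0      # |i><j|
            PhiE = sum(A @ E @ B.conj().T for A, B in zip(As, Bs))
            J += np.kron(E, PhiE)
    return J

rng = np.random.default_rng(1)
d1, d2, r = 3, 4, 2
As = [rng.normal(size=(d2, d1)) for _ in range(r)]
Bs = [rng.normal(size=(d2, d1)) for _ in range(r)]
print(np.linalg.matrix_rank(choi(As, Bs, d1)))         # prints 2 = r
\end{verbatim}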
The \emph{diamond norm} was introduced in Quantum Information Theory by Kitaev
\cite[Section 3.3]{kit} as a counterpart to the $1$-norm in the task of
distinguishing quantum channels. First, define the $1 \to 1$ norm of a linear
map $\Phi:M_{d_!}(\mathbb C) \to M_{d_2}(\mathbb C)$ as
$$\|\Phi\|_{1 \to 1} := \sum_{X \neq 0} \frac{\|\Phi(X)\|_1}{\|X\|_1}.$$
Kitaev noticed that the $1 \to 1$ norm is not stable under tensor products (as it can easily be seen by looking at the transposition map), and considered the following ``regularization'':
$$\|\Phi\|_\diamond:= \sup_{n \geq 1}\|\Phi \otimes \operatorname{id}_n\|_{1 \to 1}.$$
In operator theory, the diamond norm was known before as the \emph{completely bounded trace norm}; indeed, the $1 \to 1$ norm of an operator is the $\infty \to \infty$ norm of its dual, hence the diamond norm of $\Phi$ is equal to the completely bounded (operator) norm of $\Phi^*$ (see \cite[Chapter 3]{pau}).
We shall need two simple properties of the diamond norm. First, note that the
supremum in the definition can be replaced by taking the value $n=d_1$ (recall
that $d_1$ is the dimension of the input Hilbert space of the linear map
$\Phi$); actually, one could also take $n$ equal to the Choi rank of the map
$\Phi$, see \cite[Theorem 3.3]{tim} or \cite[Theorem 3.66]{wat}. Second, using
the fact that the extremal points of the unit ball of the $1$-norm are unit
rank matrices, we always have
$$\|\Phi\|_\diamond = \sup\{\|(\Phi \otimes \operatorname{id}_{d_1})(|x\rangle \langle y |)\|_1 \, : \, x,y \in \mathbb C^{d_1} \otimes \mathbb C^{d_1}, \, \|x\| = \|y\| = 1\}.$$
Moreover, if the map $\Phi$ is Hermiticity-preserving (e.g.~$\Phi$ is the
difference of two quantum channels), one can optimize over $x=y$ in the formula
above, see \cite[Theorem 3.53]{wat}.
Given a map $\Phi$, it is in general difficult to compute its diamond norm.
Computationally, there is a semidefinite program for the diamond norm,
\cite{wat13}, which has a simple form and which has been implemented in various
places (see, e.g.~\cite{qet}). We will bound the diamond norm in terms of the
partial trace of the absolute value of the Choi-Jamio{\l}kowski matrix.
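For concreteness, a minimal sketch (ours, not taken from \cite{wat13} or \cite{qet}; we assume cvxpy version 1.2 or later for \texttt{partial\_trace}) of the dual form of this semidefinite program, which is stated explicitly in Section~\ref{sec:sdp} below, could read as follows; since $\operatorname{Tr}_2 Y_i$ is positive semidefinite, its operator norm is its largest singular value.
\begin{verbatim}
# Sketch of the dual SDP of [wat13] for the diamond norm; J is the
# (numpy) Choi matrix of the map.  Assumes cvxpy >= 1.2.
import cvxpy as cp

def diamond_norm(J, d1, d2):
    n = d1 * d2
    Y0 = cp.Variable((n, n), hermitian=True)
    Y1 = cp.Variable((n, n), hermitian=True)
    M = cp.bmat([[Y0, -J], [-J.conj().T, Y1]])
    constraints = [M >> 0, Y0 >> 0, Y1 >> 0]
    obj = 0.5 * cp.sigma_max(cp.partial_trace(Y0, [d1, d2], axis=1)) \
        + 0.5 * cp.sigma_max(cp.partial_trace(Y1, [d1, d2], axis=1))
    return cp.Problem(cp.Minimize(obj), constraints).solve(solver=cp.SCS)
\end{verbatim}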
The diamond norm finds applications in the problem of quantum channel
discrimination. Suppose we have an experiment in which our goal is to
distinguish between two quantum channels $\Phi$ and $\Psi$. Each of the
channels may appear with probability $\frac12$. Then, celebrated results by
Helstrom~\cite{helstrom1976quantum}, Holevo~\cite{hol72}, and Kitaev~\cite{kit} give an upper bound on the probability
of correct discrimination
\begin{equation}
p \leq \frac12 + \frac14 \| \Phi - \Psi \|_\diamond.
\end{equation}
The main goal of this work is to study the asymptotic behavior of the diamond
norm of the difference of two independent quantum channels. To achieve this, in Section~\ref{sec:sdp} we find a new upper bound on
the diamond norm of a general map. In our case, it takes the simple form
\begin{equation}\label{eq:intro-UB}
\| \Phi -\Psi\|_\diamond \leq \| \Tr_2 |J(\Phi - \Psi)| \|_\infty.
\end{equation}
Next, in Section~\ref{sec:lower-bound} we prove that the well known lower bound
on the diamond norm, $d_1^{-1}\|J(\Phi-\Psi)\|_1 \leq \| \Phi-\Psi \|_\diamond$,
converges to a finite value for random independent quantum channels $\Phi$ and
$\Psi$ in the limit $d_{1,2} \to \infty$. We obtain that for channels sampled
from the flat Hilbert-Schmidt distribution, the value of the lower bound is
\begin{equation}
\lim_{d_{1,2}\to\infty} \frac{1}{d_1}\| J(\Phi - \Psi) \|_1 = \frac12 + \frac{2}{\pi} \;\;
\mathrm{a.s.}
\end{equation}
Finally,
in Section~\ref{sec:upper-bound} we show that the upper bound \eqref{eq:intro-UB} also converges to
the same value as the lower bound. From these results, we infer that for independent random quantum channels
sampled from the Hilbert-Schmidt distribution, we have
\begin{equation}
\lim_{d_{1,2}\to\infty}\|\Phi - \Psi \|_\diamond = \frac12 + \frac{2}{\pi} \;\;
\mathrm{a.s.}
\end{equation}
In particular, the optimal success probability of distinguishing the two channels satisfies, in the asymptotic regime,
\begin{equation}
p \leq \frac{1}{2} + \frac{1}{4}\left( \frac12 + \frac{2}{\pi} \right) = \frac{5}{8} + \frac{1}{2\pi} \approx 0.7842.
\end{equation}
Several generalizations of this type of results are gathered in Theorem \ref{thm:main}, the main result of this paper.
In Sections \ref{sec:depol} and \ref{sec:unitary} we address two related problems: distinguishing a random quantum channel from the maximally depolarizing channel, and distinguishing two random unitary channels.
\section{Some useful bounds for the diamond norm}\label{sec:sdp}
We discuss in this section some bounds for the diamond norm. For a matrix $X$, we denote by $\sqrt{X^*X}$ and $\sqrt{XX^*}$ its right and
left absolute values, i.e.
$$ \sqrt{X^*X} = V \Sigma V^* \qquad \text{ and } \qquad \sqrt{XX^*} = U \Sigma
U^*,$$ when $X = U \Sigma V^*$ is the SVD of $X$. In the case where $X$ is
self-adjoint, we obviously have $\sqrt{X^*X} = \sqrt{XX^*}$.
In the result below, the lower bound is well-known, while the upper bound
appeared in a weaker and less general form in \cite[Theorem 2]{jpl}.
\begin{proposition}\label{prop:bound-diamond}
For any linear map $\Phi:M_{d_1}(\mathbb C) \to M_{d_2}(\mathbb C)$, we have
\begin{equation}\label{eq:bound-diamond}
\frac{1}{d_1} \|J(\Phi)\|_1 \leq \|\Phi\|_\diamond \leq \frac{\|
\operatorname{Tr}_2 \sqrt{J(\Phi)^*J(\Phi)} \|_\infty + \| \operatorname{Tr}_2
\sqrt{J(\Phi)J(\Phi)^*} \|_\infty}{2}.
\end{equation}
\end{proposition}
\begin{theproof}
Consider the semidefinite programs for the diamond norm given in \cite[Section 3.2]{wat13}:
\begin{center}
\begin{minipage}[t]{2in}
\centerline{\underline{Primal problem}}\vspace{-4mm}
\begin{align*}
\text{maximize:}\quad &
\frac{1}{2} \langle X, J(\Phi) \rangle + \frac{1}{2} \langle X^*,J(\Phi)^* \rangle
\\[2mm]
\text{subject to:}\quad &
\begin{bmatrix}
\rho_0 \otimes I_{d_2} & X\\
X^* & \rho_1 \otimes I_{d_2}
\end{bmatrix}
\geq 0\\
& \rho_0,\rho_1\in M_{d_1}^{1,+}(\mathbb C)\\
& X \in M_{d_1d_2}(\mathbb C)
\end{align*}
\end{minipage}
\qquad
\begin{minipage}[t]{2in}
\centerline{\underline{Dual problem}}\vspace{-4mm}
\begin{align*}
\text{minimize:}\quad &
\frac{1}{2} \| \operatorname{Tr}_2 Y_0 \|_\infty
+ \frac{1}{2} \| \operatorname{Tr}_2 Y_1 \|_\infty\\[2mm]
\text{subject to:}\quad &
\begin{bmatrix}
Y_0 & -J(\Phi)\\
-J(\Phi)^* & Y_1
\end{bmatrix}
\geq 0\\
& Y_0, Y_1 \in M_{d_1d_2}^+(\mathbb C)
\end{align*}
\end{minipage}
\end{center}
The lower and upper bounds will follow from very simple feasible points for the primal, resp.~the dual problems. Let $J(\Phi) = U \Sigma V^*$ be a SVD of the Choi-Jamio{\l}kowski state of the linear map. For the primal problem, consider the feasible point $\rho_{0,1} = d_1^{-1} I_{d_1}$ and $X = d_1^{-1} UV^*$. The value of the primal problem at this point is
$$\frac{1}{2d_1} \langle UV^*, UV^* |J(\Phi)| \rangle + \frac{1}{2d_1} \langle VU^*, |J(\Phi) | VU^* \rangle = \frac{1}{d_1} \|J(\Phi) \|_1,$$
showing the lower bound.
For the upper bound, set $Y_0 = \sqrt{J(\Phi)J(\Phi)^*} = U \Sigma U^*$ and $Y_1
= \sqrt{J(\Phi)^*J(\Phi)} = V \Sigma V^*$, both PSD matrices. The condition in
the dual problem is satisfied:
$$\begin{bmatrix}
Y_0 & -J(\Phi)\\
-J(\Phi)^* & Y_1
\end{bmatrix}
=
\begin{bmatrix}
U \Sigma U^* & -U \Sigma V^* \\
-V \Sigma U^* & V \Sigma V^*
\end{bmatrix}
=
\begin{bmatrix}
U & 0 \\
0 & V
\end{bmatrix} \cdot
\left( \begin{bmatrix}
1 & -1 \\
-1 & 1
\end{bmatrix} \otimes \Sigma \right) \cdot
\begin{bmatrix}
U & 0 \\
0 & V
\end{bmatrix} ^*
\geq 0, $$
and the proof is complete.
\end{theproof}
\begin{remark}
If the map $\Phi$ is Hermiticity-preserving (i.e.~the matrix $J(\Phi)$ is self-adjoint), the inequality in the statement reads simply
$$\frac{1}{d_1} \|J(\Phi)\|_1 \leq \|\Phi\|_\diamond \leq \| \operatorname{Tr}_2 |J(\Phi)| \|_\infty.$$
\end{remark}
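Numerically, both sides of this inequality are cheap to evaluate; below is a minimal sketch (ours), with a generic self-adjoint matrix standing in for $J(\Phi)$.
\begin{verbatim}
# Sketch: lower bound ||J||_1 / d1 and upper bound ||Tr_2 |J| ||_inf
# of the Proposition above, for a self-adjoint Choi matrix J.
import numpy as np

def ptrace2(M, d1, d2):               # trace out the second tensor factor
    return np.trace(M.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

def diamond_bounds(J, d1, d2):
    w, V = np.linalg.eigh(J)
    absJ = (V * np.abs(w)) @ V.conj().T   # |J| by functional calculus
    lower = np.abs(w).sum() / d1
    upper = np.linalg.eigvalsh(ptrace2(absJ, d1, d2)).max()
    return lower, upper

d1 = d2 = 4
rng = np.random.default_rng(2)
G = rng.normal(size=(d1 * d2, d1 * d2))
lo, up = diamond_bounds((G + G.T) / 2, d1, d2)
print(lo <= up, lo, up)           # the lower bound never exceeds the upper
\end{verbatim}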
\begin{remark}
The two bounds in \eqref{eq:bound-diamond} are equal iff the PSD matrices
$\varphi := \operatorname{Tr}_2 \sqrt{J(\Phi)^*J(\Phi)}$ and $\psi :=
\operatorname{Tr}_2 \sqrt{J(\Phi)J(\Phi)^*}$ are both scalar. Indeed, the
lower bound in \eqref{eq:bound-diamond} can be rewritten as
$$\frac{1}{d_1} \|J(\Phi)\|_1 = \frac{1}{d_1} \operatorname{Tr} \varphi = \frac{1}{d_1} \operatorname{Tr} \psi,$$
and the two bounds are equal exactly when the spectra of $\varphi$ and $\psi$ are flat. This is also the necessary and sufficient condition for the saturation of the lower bound, see \cite{kkeg,mkkg}.
\end{remark}
Let us now characterize the maps $\Phi$ for which the upper bound in \eqref{eq:bound-diamond} is saturated. Since our proof is SDP-based, we use the same technique as in \cite[Theorem 18]{kkeg}.
\begin{proposition}
A map $\Phi$ saturates the upper bound in \eqref{eq:bound-diamond} iff there exist unit vectors $a,b \in \mathbb C^{d_1}$ and a unitary operator $W \in \mathcal U_{d_1d_2}$ with the following properties (we write $J = J(\Phi)$):
\begin{itemize}
\item The vector $a$ achieves the operator norm for $\operatorname{Tr}_2 \sqrt{JJ^*}$
\item The vector $b$ achieves the operator norm for $\operatorname{Tr}_2 \sqrt{J^*J}$
\item $(aa^* \otimes I_{d_2})W = W(bb^* \otimes I_{d_2})$
\item $J = WP$ for some positive semidefinite operator $P$; in other words, $W$ is the angular part in some polar decomposition of $J$.
\end{itemize}
\end{proposition}
\begin{theproof}
The reasoning follows closely the proof of \cite[Theorem 18]{kkeg}; we only sketch the main lines. We write the SDP in the standard form (see \cite[Section 3.2]{wat13} for the notation); optimal matrices for the primal and the dual programs are, respectively,
$$A_{opt} = \begin{bmatrix}
\rho_0 & . & . & . \\
. & \rho_1 & . & . \\
. & . & \rho_0 \otimes I_{d_2} & W \\
. & . & W^* & \rho_1 \otimes I_{d_2}
\end{bmatrix},
B_{opt} = \frac 1 2\begin{bmatrix}
\|\operatorname{Tr}_2 \sqrt{JJ^*} \|_\infty & . & . & . \\
. & \|\operatorname{Tr}_2 \sqrt{J^*J} \|_\infty & . & . \\
. & . & \sqrt{JJ^*} & . \\
. & . & . & \sqrt{J^*J}
\end{bmatrix},$$
where $.$ denotes an unimportant element. Since strong duality holds for our primal-dual pair \cite[Section 3.2]{wat13}, \emph{complementary slackness} holds and we have
\begin{align*}
\left(\|\operatorname{Tr}_2 \sqrt{JJ^*} \|_\infty I - \operatorname{Tr}_2 \sqrt{JJ^*} \right)\rho_0 &= 0\\
\left(\|\operatorname{Tr}_2 \sqrt{J^*J} \|_\infty I - \operatorname{Tr}_2 \sqrt{J^*J}\right)\rho_1 &= 0\\
U \Sigma U^* R &= U \Sigma V^* (\rho_1 \otimes I_{d_2})\\
V \Sigma V^* R^* &= V \Sigma U^* (\rho_0 \otimes I_{d_2}),
\end{align*}
where $J = U \Sigma V^*$ is the singular value decomposition of $J$. Using an approximation argument, we can assume $J$ (and thus $\Sigma$) is invertible, and thus $W = UV^*$ is unique. We then set $\rho_0 = aa^*$ and $\rho_1 = bb^*$, and the result follows.
\end{theproof}
\begin{remark}
The upper bound in \eqref{eq:bound-diamond} can be seen as a strengthening of the following inequality $\|\Phi\|_\diamond \leq \|J(\Phi)\|_1$, which already appeared in the literature (e.g.~\cite[Section 3.4]{wat}). Indeed, again in terms of $\varphi$ and $\psi$, we have $\|\varphi\|_\infty \leq \|\varphi\|_1$ and $\|\psi\|_\infty \leq \|\psi\|_1$. The inequality in \eqref{eq:bound-diamond} is much stronger: for example, it is always saturated for tensor product matrices $J = J_1 \otimes J_2$ ($W$ from the result above is also product), whereas the weaker inequality $\|\Phi\|_\diamond \leq \|J(\Phi)\|_1$ is saturated in this case only when $J_1$ has rank one, see \cite{kkeg,mkkg}.
\end{remark}
\section{Discriminating random quantum channels}
\subsection{Probability distributions on the set of quantum channels}
There are several ways to endow the convex body of quantum channels with probability distributions. In this section, we discuss several possibilities and the relations between them.
Recall that the Choi-Jamio{\l}kowski isomorphism puts into correspondence a quantum channel $\Phi : M_{d_1}(\mathbb C) \to M_{d_2}(\mathbb C)$ with a bipartite matrix $J(\Phi) \in M_{d_1}(\mathbb C) \otimes M_{d_2}(\mathbb C)$ having the following two properties
\begin{itemize}
\item $J(\Phi)$ is positive semidefinite
\item $\operatorname{Tr}_2 J(\Phi) = I_{d_1}$.
\end{itemize}
The above two properties correspond, respectively, to the fact that $\Phi$ is
completely positive and trace preserving. Hence, it is natural to consider
probability measures on quantum channels obtained as the image measures of
probabilities on the set of bipartite matrices with the above properties.
Henceforth we will denote the set of all quantum channels as $\Theta(d_1,
d_2)$.
Given some fixed dimensions $d_1,d_2$ and a parameter $s \geq d_1 d_2$, let $G
\in M_{d_1d_2 \times s}(\mathbb C)$ be a random matrix having i.i.d.~standard
complex Gaussian entries; such a matrix is called a \emph{Ginibre random
matrix}. Define then
\begin{align}
\label{eq:Wishart} W &:= GG^* \in M_{d_1}(\mathbb C) \otimes M_{d_2}(\mathbb C) \\
\label{eq:def-partial-normalization}
D &:= \left((\Tr_2 W)^{-1/2} \otimes I_{d_2} \right) W \left((\Tr_2 W)^{-1/2} \otimes I_{d_2} \right)\in M_{d_1}(\mathbb C) \otimes M_{d_2}(\mathbb C).
\end{align}
The random matrices $W$ and $D$ are called, respectively, \emph{Wishart} and \emph{partially normalized Wishart}. The inverse square root in the definition of $D$ uses the Moore-Penrose convention if $W$ is not invertible; note however that this is almost never the case, since a Wishart matrix with parameter $s$ larger than or equal to its size is invertible with probability one. It is for this reason that we do not consider smaller integer parameters $s$ here. Note that the matrix $D$ satisfies the two conditions discussed above: it is positive semidefinite and its partial trace over the second tensor factor is the identity:
\begin{align*}
\operatorname{Tr}_2 D &= \operatorname{Tr}_2 \left[ \left((\Tr_2 W)^{-1/2} \otimes I_{d_2} \right) W \left((\Tr_2 W)^{-1/2} \otimes I_{d_2} \right) \right ] \\
&= (\Tr_2 W)^{-1/2} \left( \operatorname{Tr}_2 W \right) (\Tr_2 W)^{-1/2} = I_{d_1}.
\end{align*}
Hence, there exists a quantum channel $\Phi_G$ such that $J(\Phi_G) = D$ (note that $D$, and thus $\Phi_G$, are functions of the original Ginibre random matrix $G$).
\begin{definition}\label{def:measure-partially-normalized-Wishart}
The image measure of the Gaussian standard measure through the map $G \mapsto \Phi_G$ defined in \eqref{eq:Wishart}, \eqref{eq:def-partial-normalization} and the equation $J(\Phi_G) = D$ is called the \emph{partially normalized Wishart measure} and is denoted by $\gamma^W_{d_1,d_2,s}$.
\end{definition}
Of particular interest is the case $s = d_1d_2$; the measure obtained in this case will be called the \emph{Hilbert-Schmidt measure} and will be denoted by $\gamma^{HS}$ (see \cite{sommers2004statistical} for the case of random quantum states).
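In code, sampling from $\gamma^W_{d_1,d_2,s}$ amounts to implementing \eqref{eq:Wishart} and \eqref{eq:def-partial-normalization} directly; a minimal sketch (ours) follows, with $s = d_1d_2$ giving the Hilbert-Schmidt measure.
\begin{verbatim}
# Sketch: sample the Choi matrix of a random quantum channel from the
# partially normalized Wishart measure.
import numpy as np
from scipy.linalg import fractional_matrix_power

def random_channel_choi(d1, d2, s, rng):
    G = (rng.normal(size=(d1 * d2, s))
         + 1j * rng.normal(size=(d1 * d2, s))) / np.sqrt(2)
    W = G @ G.conj().T
    T = np.trace(W.reshape(d1, d2, d1, d2), axis1=1, axis2=3)  # Tr_2 W
    R = np.kron(fractional_matrix_power(T, -0.5), np.eye(d2))
    return R @ W @ R                  # R is self-adjoint, so R W R = D

rng = np.random.default_rng(3)
D = random_channel_choi(3, 3, 9, rng)             # s = d1*d2: HS measure
print(np.allclose(np.trace(D.reshape(3, 3, 3, 3), axis1=1, axis2=3),
                  np.eye(3)))                     # Tr_2 D = I
\end{verbatim}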
Another way of introducing a probability distribution on the set of quantum channels is via the Stinespring dilation theorem \cite{sti}: for any channel $\Phi: M_{d_1}(\mathbb C) \to M_{d_2}(\mathbb C)$, there exists, for some given $s \leq d_1d_2$, an isometry $V: \mathbb C^{d_1} \to \mathbb C^{d_2} \otimes \mathbb C^{s}$ such that
\begin{equation}\label{eq:Stinespring}
\Phi(\cdot) = \operatorname{Tr}_2 (V \cdot V^*).
\end{equation}
\begin{definition}\label{def:measure-isometries}
For any integer parameter $s$, let $\gamma^{Haar}_{d_1,d_2,s}$ be the image measure of the Haar distribution on isometries $V$ through the map in \eqref{eq:Stinespring}.
\end{definition}
Finally, one can consider the Lebesgue measure on the convex body of quantum
channels, $\gamma^L_{d_1,d_2}$. In this work, we shall however be concerned
only with the measure $\gamma^W$ coming from normalized Wishart matrices. The relations between all these probability measures on the set of quantum channels shall be investigated in some future work.
\subsection[The (two-parameter) subtracted Marcenko-Pastur distribution]{The (two--parameter) subtracted Mar\u{c}enko--Pastur distribution}
In this section we introduce and study the basic properties of a two-parameter family of probability measures which will appear later in the paper. This family generalizes the symmetrized Mar\u{c}enko--Pastur distributions from \cite{ppz}; see also \cite{nsp98,dno} for other occurrences of some special cases. Before we start, recall that the Mar\u{c}enko--Pastur (or free Poisson) distribution of parameter $x>0$ is given by \cite[Proposition 12.11]{nsp}
$$d\mathcal{MP}_x = \max (1-x,0)\delta_0+\frac{\sqrt{4x-(u-1-x)^2}}{2\pi u}1_{[a,b]}(u)\, du,$$
where $a=(\sqrt x - 1)^2$ and $b=(\sqrt x +1)^2$.
\begin{definition}\label{def:SMP-xy}
Let $a,b$ be two free random variables having Mar\u{c}enko--Pastur distributions
with respective parameters $x$ and $y$. The distribution of the random variable
$a/x - b/y$ is called the \emph{subtracted Mar\u{c}enko--Pastur distribution}
with parameters $x,y$ and is denoted by $\mathcal{SMP}_{x,y}$. In other words,
\begin{equation}\label{eq:def-SMP-xy}
\mathcal{SMP}_{x,y} = D_{1/x} \mathcal{MP}_x \boxplus D_{-1/y}\mathcal{MP}_y.
\end{equation}
\end{definition}
We have the following result.
\begin{proposition}\label{prop:SMP-Wishart}
Let $W_x$ (resp.~$W_y$) be independent Wishart matrices of parameters $(d,s_x)$ (resp.~$(d,s_y)$). Assume that $s_x/d \to x$ and $s_y/d \to y$ for some constants $x,y >0$. Then, almost surely as $d \to \infty$, we have
$$\lim_{d \to \infty} \|(xd^2)^{-1}W_x - (y d^2)^{-1}W_y\|_1 = \int |u| \, d\mathcal{SMP}_{x,y}(u) =: \Delta(x,y).$$
\end{proposition}
\begin{theproof}
The proof follows from standard arguments in random matrix theory: independent Wishart matrices are asymptotically free, so the limiting eigenvalue distribution of the difference is the free additive convolution in \eqref{eq:def-SMP-xy}. One then uses the fact that the Schatten $1$-norm is the sum of the singular values, which are the absolute values of the eigenvalues in the case of self-adjoint matrices.
\end{theproof}
We gather next some properties of the probability measure $\mathcal{SMP}_{x,y}$. Examples of this distribution are shown in~Fig.~\ref{fig:SMP-xy}.
\begin{proposition}
Let $x,y>0$. Then,
\begin{enumerate}
\item If $x+y<1$, then the probability measure $\mathcal{SMP}_{x,y}$ has exactly one atom, located at 0, of mass $1-(x+y)$. If $x+y \geq 1$, then $\mathcal{SMP}_{x,y}$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb R$.
\item Define
\begin{equation}
\begin{split}
U_{x,y}(u) &= 9 u^2 (x+y+2) -9 u (x-y)(x+y-1) + 2(x+y-1)^3\\
T_{x,y}(u) &= (x+y-1)^2+3u(y-x+u)\\
Y_{x,y}(u) &= U_{x,y}(u)+\sqrt{\left[ U_{x,y}(u) \right]^2 - 4 \left[
T_{x,y}(u) \right]^3}.
\end{split}
\end{equation}
The support of the absolutely continuous part of $\mathcal{SMP}_{x,y}$ is the set
\begin{equation}\label{eq:smp-support-set}
\{u \, : \, \left[ U_{x,y}(u)\right]^2 - 4 \left[ T_{x,y}(u) \right]^3 \geq
0\}.
\end{equation}
This set is the union of two intervals if $y \in (0,y_c)$ and it is connected when $y \geq y_c$, with
\begin{equation}
y_c = 4-x + 3 (2x)^{2/3}-6(2x)^{1/3}.
\end{equation}
\item On its support, the density of $\mathcal{SMP}_{x,y}$ is given by
\begin{equation}\label{eq:smp-density}
\frac{d\mathcal{SMP}_{x,y}}{du} = \left| \frac{\left[ Y_{x,y}(u)
\right]^{\frac23}-2^{\frac23} T_{x,y}(u)}{2^{\frac43} \sqrt{3} \pi u \left[ Y_{x,y}(u) \right]^{\frac13}} \right|.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{theproof}
The statement regarding the atoms follows from \cite[Theorem 7.4]{bvo}. The formula for the density and equation \eqref{eq:smp-support-set} comes from Stieltjes inversion, see e.g.~\cite[Lecture 12]{nsp}. Indeed, since the $R$-transform of the Mar\u{c}enko--Pastur distribution $\mathcal{MP}_x$ reads $R_x(z) = x/(1-z)$, the $R$-transform of the subtracted measure reads
$$R(z) = \frac{1}{1-z/x} - \frac{1}{1+z/y}.$$
The Cauchy transform $G$ of $\mathcal{SMP}_{x,y}$ is the functional inverse of
$K(z) = R(z) + 1/z$. To write down the explicit formula for $G$, one has to
solve a degree 3 polynomial equation, and we omit here the details.
The statement regarding the number of intervals of the support follows from \eqref{eq:smp-support-set}. The inequality is given by a polynomial of degree 6 which factorizes by $u^2$, hence an effective degree 4 polynomial. The nature of roots of this polynomial is given by the sign of its discriminant, which, after some algebra, is the same as the sign of $y-y_c$, see \cite{num}.
\end{theproof}
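For the record, the degree of the equation can be recovered symbolically; a small sketch (ours) using sympy:
\begin{verbatim}
# Sketch: the equation K(G) = u defining the Cauchy transform of
# SMP_{x,y}, with K(z) = 1/(1 - z/x) - 1/(1 + z/y) + 1/z, is cubic.
import sympy as sp

z, u, x, y = sp.symbols('z u x y')
K = 1 / (1 - z / x) - 1 / (1 + z / y) + 1 / z
numer = sp.together(K - u).as_numer_denom()[0]
print(sp.Poly(sp.expand(numer), z).degree())   # prints 3
\end{verbatim}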
In the case where $x=y$, some of the formulas from the result above become simpler (see also \cite{ppz}). The distribution $\mathcal{SMP}_{x,x}$ is supported between $u_\pm = \pm
\frac{1}{\sqrt{2}} \sqrt{10x - x^2 + (x+4)^\frac32 \sqrt{x} + 2}$.
Finally, in the case when $x=y=1$, which corresponds to a
flat Hilbert-Schmidt measure on the set of quantum channels, we get that
$\Delta(1,1) = \frac12 + \frac{2}{\pi}$.
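This value is easily confirmed numerically via Proposition \ref{prop:SMP-Wishart}; a quick Monte Carlo sketch (ours):
\begin{verbatim}
# Sketch: numerical check of Delta(1,1) = 1/2 + 2/pi ~ 1.1366, from the
# eigenvalues of a difference of two normalized Wishart matrices.
import numpy as np

d = 400
rng = np.random.default_rng(4)

def wishart(d, s):   # GG* with i.i.d. standard complex Gaussian entries
    G = (rng.normal(size=(d, s)) + 1j * rng.normal(size=(d, s))) / np.sqrt(2)
    return G @ G.conj().T

A = (wishart(d, d) - wishart(d, d)) / d ** 2          # x = y = 1
print(np.abs(np.linalg.eigvalsh(A)).sum(), 0.5 + 2 / np.pi)
\end{verbatim}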
\begin{figure}[ht]
\centering
\subfloat[$x=1$, $y=1$]{\includegraphics{fig1a}}
\subfloat[$x=1$, $y=2$]{\includegraphics{fig1b}}\\
\subfloat[$x=0.5$, $y=1$]{\includegraphics{fig1c}}
\subfloat[$x=0.2$, $y=0.5$]{\includegraphics{fig1d}}
\caption{Subtracted Mar\u{c}enko--Pastur distribution for (x, y)=(1,1) (a), (1,
2) (b), (0.5, 1) (c) and (0.2, 0.5) (d). The red curve is the plot of
\eqref{eq:smp-density}, while the black histogram corresponds to Monte Carlo
simulation.}
\label{fig:SMP-xy}
\end{figure}
\section{The asymptotic diamond norm of the difference of two independent random quantum channels}
We state here the main result of the paper. For the proof, see the following two subsections, each providing one of the bounds needed to conclude.
\begin{theorem}\label{thm:main}
Let $\Phi$, resp.~$\Psi$, be two \emph{independent} random quantum channels from $\Theta(d_1, d_2)$ having $\gamma^W$ distribution with parameters $(d_1,d_2,s_x)$, resp.~$(d_1,d_2,s_y)$. Then, almost surely as $d_{1,2} \to \infty$ in such a way that $s_x/(d_1d_2) \to x$, $s_y/(d_1d_2) \to y$ (for some positive constants $x,y$), and $d_1 \ll d_2^2$,
$$\lim_{d_{1,2} \to \infty} \|\Phi - \Psi \|_\diamond = \Delta(x,y) = \int |u| \, d\mathcal{SMP}_{x,y}(u).$$
\end{theorem}
\begin{theproof}
The proof follows from Theorems \ref{thm:lower} and \ref{thm:upper}, which give the same asymptotic value.
\end{theproof}
\begin{remark}
We think that the condition $d_1 \ll d_2^2$ in the statement is purely technical, and could be replaced by a much weaker condition.
\end{remark}
\begin{corollary}
Combining Theorem~\ref{thm:main} (in the Hilbert-Schmidt case $x=y=1$) with Helstrom's theorem for quantum
channels, we get that the optimal probability $p$ of distinguishing two random quantum
channels converges, almost surely, to
\begin{equation}
p = \frac58 + \frac{1}{2\pi}.
\end{equation}
Additionally, any maximally entangled state may be used to achieve this value.
\end{corollary}
\subsection{The lower bound}\label{sec:lower-bound}
In this section we compute the asymptotic value of the lower bound in Theorem \ref{thm:main}. Given two random quantum channels $\Phi,\Psi$, we are interested in the asymptotic value of the quantity $d_1^{-1}\|J(\Phi-\Psi)\|_1$.
\begin{theorem}\label{thm:lower}
Let $\Phi$, resp.~$\Psi$, be two \emph{independent} random quantum channels from
$\Theta(d_1, d_2)$ having $\gamma^W$ distribution with parameters
$(d_1,d_2,s_x)$, resp.~$(d_1,d_2,s_y)$. Then, almost surely as $d_{1,2} \to
\infty$ in such a way that $s_x/(d_1d_2) \to x$ and $s_y/(d_1d_2) \to y$ for
some positive constants $x,y$,
$$\lim_{d_{1,2} \to \infty} \frac{1}{d_1} \|J(\Phi-\Psi)\|_1 = \Delta(x,y) = \int |u| \, d\mathcal{SMP}_{x,y}(u).$$
\end{theorem}
The proof of this result (as well as the proof of Theorem \ref{thm:upper}) uses
in a crucial manner the following approximation result for partially normalized Wishart
matrices.
\begin{proposition}\label{prop:approximation-partial-normalization}
Let $W\in M_{d_1}(\mathbb C) \otimes M_{d_2}(\mathbb C)$ be a random Wishart matrix of parameters $(d_1d_2,s)$, and consider its ``partial normalization'' $D$ as in \eqref{eq:def-partial-normalization}. Then, almost surely as $d_{1,2} \to \infty$ in such a way that $s \sim t d_1d_2$ for a fixed parameter $t>0$,
$$\big\| D - (td_1d_2^2)^{-1}W \big\|_{\infty} = O(d_2^{-2}).$$
\end{proposition}
Note that in the statement above, the matrix $W$ is not normalized; we have
$$\frac{1}{d_1d_2}\sum_{i=1}^{d_1d_2} \delta_{\lambda_i((d_1d_2)^{-1}W)} \to \mathcal{MP}_t,$$
the Mar\u{c}enko--Pastur distribution of parameter $t$. In other words, $W = GG^*$, where $G$ is a random matrix of size $d_1d_2 \times s$ having i.i.d.~standard complex Gaussian entries.
Let us introduce the random matrices
$$X = (td_1d_2^2)^{-1} \operatorname{Tr}_2 W \quad \text{and} \quad Y =X^{-1/2} \otimes I_{d_2}.$$
The first observation we make is that the random matrix $X$ is also a (rescaled) Wishart matrix. Indeed, the partial trace operation can be seen, via duality, as a matrix product, so we can write
$$X = \frac{1}{td_1d_2^2} \tilde G \tilde G^*,$$
where $\tilde G$ is a complex Gaussian matrix of size $d_1 \times d_2s$; remember that $s$ scales like $td_1d_2$, so that $d_2 s$ scales like $td_1d_2^2$. Since, in our model, both $d_1$ and $d_2$ grow to infinity, the behavior of the random matrix $X$ follows from \cite{cny}.
\begin{lemma}
As $d_{1,2} \to \infty$, the random matrix $\sqrt{t}d_2(X - I_{d_1})$ converges in moments toward a standard semicircular distribution. Moreover, almost surely, the limiting eigenvalues converge to the edges of the support of the limiting distribution:
\begin{align*}
\sqrt{t} d_2 \lambda_{\min}(X- I_{d_1}) &\rightarrow -2\\
\sqrt{t} d_2 \lambda_{\max}(X - I_{d_1}) &\rightarrow 2.
\end{align*}
\end{lemma}
\begin{theproof}
The proof is a direct application of \cite[Corollary 2.5 and Theorem 2.7]{cny}; we just need to check the normalization factors. In the setting of \cite[Section 2]{cny}, the Wishart matrices are not normalized, so the convergence result deals with the random matrices (here $d = d_1$ and $s = td_1d_2^2$)
$$\sqrt{t} d_1 d_2 \left(\frac{\tilde G \tilde G^*}{td_1^2d_2^2} - \frac{I_{d_1}}{d_1} \right) = \sqrt{t}d_2(X - I_{d_1}).$$
\end{theproof}
We look now for a similar result for the matrix $Y$; the result follows by functional calculus.
\begin{lemma}\label{lem:convergence-Y}
Almost surely as $d_{1,2} \to \infty$, the limiting eigenvalues of the random matrix $\sqrt{t} d_2(Y - I_{d_1d_2})$ converge respectively to $\pm 1$:
\begin{align*}
\sqrt{t} d_2 \lambda_{\min}(Y - I_{d_1d_2}) &\rightarrow -1\\
\sqrt{t} d_2 \lambda_{\max}(Y - I_{d_1d_2}) &\rightarrow 1.
\end{align*}
\end{lemma}
\begin{theproof}
By functional calculus, we have $\lambda_{\max}(Y) = [\lambda_{\min}(X)]^{-1/2}$, so, using the previous lemma, we get
$$\lambda_{\max}(Y) = \left[ 1-\frac{2}{\sqrt{t} d_2} + o(d_2^{-1}) \right]^{-1/2} = 1 + \frac{1}{2} \frac{2}{\sqrt{t} d_2} + o(d_2^{-1}),$$
and the conclusion follows. The case of $\lambda_{\min}(Y)$ is similar.
\end{theproof}
We have now all the ingredients to prove Proposition \ref{prop:approximation-partial-normalization}.
\begin{theproof}[Proof of
Proposition~\ref{prop:approximation-partial-normalization}]
We have
\begin{align*}
\big\| D - (td_1d_2^2)^{-1}W \big\|_{\infty} &= \big\| (td_1d_2^2)^{-1}\left(
YWY - W \right)\big\|_{\infty}\\
&= (td_1d_2^2)^{-1} \big\| (Y-I)WY + W(Y-I)\big\|_{\infty}\\
&\leq (td_1d_2^2)^{-1} \| Y-I \|_{\infty} \| W \|_{\infty} \left(
1+\|Y\|_{\infty} \right)\\
&= \frac{t^{-3/2}}{d_2^2}\cdot \sqrt{t} d_2 \| Y-I \|_{\infty} \cdot
(d_1d_2)^{-1}\| W \|_{\infty} \cdot \left(1+\|Y\|_{\infty} \right).
\end{align*}
Note that, almost surely, the three random matrix norms in the last line above converge to the following finite quantities:
\begin{align*}
\sqrt{t} d_2 \| Y-I \|_{\infty} &\to 1 \\
(d_1d_2)^{-1}\| W \|_{\infty} &\to (\sqrt{t} +1 )^2 \\
1+ \|Y\|_{\infty} &\to 2.
\end{align*}
The first and the third limit above follow from Lemma \ref{lem:convergence-Y}, while the second one is the Bai-Yin theorem \cite[Theorem 2]{byi} or \cite[Theorem 5.11]{bsi}.
\end{theproof}
Let us now prove Theorem \ref{thm:lower}.
\begin{theproof}[Proof of Theorem~\ref{thm:lower}]
The result follows easily by approximating the partially normalized Wishart matrices with scalar normalizations. By the triangle inequality, with $D_x:= J(\Phi)$ and $D_y := J(\Psi)$, we have
\begin{align*}
&\left| \frac{1}{d_1} \|D_x- D_y\|_1 - \frac{1}{d_1} \|(xd_1d_2^2)^{-1}W_x- (yd_1d_2^2)^{-1}W_y\|_1 \right | \\
& \qquad \qquad \leq \frac{1}{d_1} \|D_x- (xd_1d_2^2)^{-1}W_x\|_1 + \frac{1}{d_1} \|D_y- (yd_1d_2^2)^{-1}W_y\|_1 \\
& \qquad \qquad \leq d_2 \|D_x- (xd_1d_2^2)^{-1}W_x\|_\infty + d_2 \|D_y- (yd_1d_2^2)^{-1}W_y\|_\infty.
\end{align*}
The conclusion follows from Propositions \ref{prop:SMP-Wishart} and \ref{prop:approximation-partial-normalization}.
\end{theproof}
\subsection{The upper bound}\label{sec:upper-bound}
The core technical result of this work consists of deriving the asymptotic value of the
upper bound in Theorem \ref{thm:main}. Given two random quantum channels
$\Phi,\Psi$, we are interested in the asymptotic value of the quantity
$\|\operatorname{Tr}_2 | J(\Phi-\Psi)|\|_\infty$.
\begin{theorem}\label{thm:upper}
Let $\Phi$, resp.~$\Psi$, be two \emph{independent} random quantum channels from $\Theta(d_1, d_2)$ having $\gamma^W$ distributions with parameters $(d_1,d_2,s_x)$, resp.~$(d_1,d_2,s_y)$. Then, almost surely as $d_{1,2} \to \infty$ in such a way that $s_x/(d_1d_2) \to x$, $s_y/(d_1d_2) \to y$ (for some positive constants $x,y$), and $d_1 / d_2^2 \to 0$,
$$\lim_{d_{1,2} \to \infty} \|\operatorname{Tr}_2|J(\Phi-\Psi)|\|_\infty = \Delta(x,y) = \int |u| \, d\mathcal{SMP}_{x,y}(u).$$
\end{theorem}
The proof of Theorem~\ref{thm:upper} is presented at the end of this section.
It is based on the following lemma which appears in \cite{dav88}; see also
\cite[Eq.~(5.10)]{bha94} or \cite[Chapter X]{bha}.
\begin{lemma}\label{lem:absolute-value-bound}
For any matrices $A,B$ of size $d$, the following holds:
\begin{equation}
\|\ |A|-|B|\ \|_{\infty} \leq C \log d \ \|A - B\|_{\infty},
\end{equation}
for a universal constant $C$ which does not depend on the dimension $d$.
\end{lemma}
For the sake of completeness, we give here a proof, relying on a similar
estimate for the Schatten classes proved in \cite{dav88}.
\begin{theproof}
Using \cite[Theorem 8]{dav88}, we have, for any $p \in [2,\infty)$:
\begin{align*}
\|\ |A|-|B|\ \|_\infty &\leq \|\ |A|-|B|\ \|_p \\
&\leq 4(1+cp) \| A-B \|_p \\
&\leq 4(1+cp)d^{1/p} \| A-B \|_\infty,
\end{align*}
for some universal constant $c \geq 1$. Choosing $p=\log d$ gives the desired bound, for $d$ large enough. The case of small values of $d$ is obtained by a standard embedding argument.
\end{theproof}
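A quick numerical illustration of the lemma (in no way a substitute for the proof) is given below in Python; the helper \texttt{matrix\_abs} is ours, and the sizes and the perturbation are arbitrary choices:
\begin{verbatim}
import numpy as np

def matrix_abs(M):
    # |M| = (M^* M)^(1/2), computed via the SVD of M
    U, s, Vh = np.linalg.svd(M)
    return (Vh.conj().T * s) @ Vh

rng = np.random.default_rng(2)
for d in (10, 100, 1000):
    A = rng.standard_normal((d, d))
    B = A + 1e-3 * rng.standard_normal((d, d))
    ratio = (np.linalg.norm(matrix_abs(A) - matrix_abs(B), 2)
             / np.linalg.norm(A - B, 2))
    print(d, ratio / np.log(d))      # remains bounded as d grows
\end{verbatim}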
\begin{theproof}[Proof of Theorem \ref{thm:upper}]
Using the triangle inequality and Lemma \ref{lem:absolute-value-bound}, we first prove an approximation result (as before, we write $D_x := J(\Phi)$ and $D_y :=J(\Psi)$):
\begin{align*}
\big| \ \|\ \Tr_2|D_x&-D_y|\ \|_\infty - \|\ \Tr_2|(xd_1d_2^2)^{-1}W_x-(yd_1d_2^2)^{-1}W_y| \ \|_\infty \ \big| \\
&\qquad \leq \big\| \Tr_2|D_x-D_y| -
\Tr_2|(xd_1d_2^2)^{-1}W_x-(yd_1d_2^2)^{-1}W_y| \big\|_{\infty} \\
&\qquad = \big\| \Tr_2\left(|D_x-D_y| -
|(xd_1d_2^2)^{-1}W_x-(yd_1d_2^2)^{-1}W_y| \right) \big\|_{\infty} \\
&\qquad \leq d_2 \big\| \ |D_x-D_y| -
|(xd_1d_2^2)^{-1}W_x-(yd_1d_2^2)^{-1}W_y| \ \big\|_{\infty} \\
&\qquad \leq C d_2 \log(d_1d_2) \big\| (D_x-D_y) -
((xd_1d_2^2)^{-1}W_x-(yd_1d_2^2)^{-1}W_y) \big\|_{\infty} \\
&\qquad \leq C d_2 \log(d_1d_2) \left(\big\| D_x - (xd_1d_2^2)^{-1}W_x \big\|_\infty + \big\| D_y - (yd_1d_2^2)^{-1}W_y \big\|_\infty \right)\\
&\qquad = \frac{\log(d_1d_2)}{d_2} O(1) \to 0,
\end{align*}
where we have used Proposition \ref{prop:approximation-partial-normalization} and the fact that $d_1 \ll d_2^2 \implies \log(d_1) \ll d_2$. This proves the approximation result, and we focus now on the simpler case of Wishart matrices.
Let us define
\begin{align*}
Z&:=(xd_1d_2)^{-1}W_x-(yd_1d_2)^{-1}W_y \\
\tilde Z_1&:= \operatorname{tr}_2(|Z|) = \operatorname{Tr}_2|(xd_1d_2^2)^{-1}W_x-(yd_1d_2^2)^{-1}W_y|,
\end{align*}
where we recall that $\operatorname{tr}_2 = d_2^{-1}\operatorname{Tr}_2$ denotes the normalized partial trace; with this convention the two expressions for $\tilde Z_1$ agree.
It follows from \cite[Proposition 4.4.9]{hpe} that the random matrix $Z$ converges almost surely (see Appendix \ref{app:partial-trace-unitarily-invariant} for the definition of almost sure convergence for a sequence of random matrices) to a non-commutative random variable having distribution $\mathcal{SMP}_{x,y}$, see \eqref{eq:def-SMP-xy}. Moreover, using a standard strong convergence argument \cite{mal}, the extremal eigenvalues of $Z$ converge almost surely to the extremal points of the support of the limiting probability measure $\mathcal{SMP}_{x,y}$. Hence, the almost sure convergence extends from the traces of the powers of $Z$ to any continuous bounded function (on the support of $\mathcal{SMP}_{x,y}$), in particular to the absolute value, i.e.~to $|Z|$.
From Proposition \ref{prop:partial-trace-flat}, the asymptotic spectrum of the random matrix $\tilde Z_1$ is flat, with all the eigenvalues being equal to
$$ a = \lim_{d_1,d_2 \to \infty} \mathbb E \frac{\operatorname{Tr} |(xd_1d_2)^{-1}W_x-(yd_1d_2)^{-1}W_y|}{d_1d_2} = \int |u| d\mathcal{SMP}_{x,y}(u),$$
which, by Proposition \ref{prop:SMP-Wishart}, is equal to $\Delta(x,y)$, finishing the proof.
\end{theproof}
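As a sanity check of the limiting value, $\int |u|\,d\mathcal{SMP}_{x,y}(u)$ can be estimated by direct sampling. In the Python sketch below (illustrative; $N$ plays the role of $d_1d_2$, and all sizes are arbitrary choices), the output for $x=y=1$ should be close to $\Delta(1,1) = 1/2+2/\pi \approx 1.1366$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N, x, y = 1000, 1.0, 1.0

def wishart(n, s):
    G = (rng.standard_normal((n, s))
         + 1j * rng.standard_normal((n, s))) / np.sqrt(2)
    return G @ G.conj().T

Z = wishart(N, int(x * N)) / (x * N) - wishart(N, int(y * N)) / (y * N)
print(np.abs(np.linalg.eigvalsh(Z)).mean())
\end{verbatim}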
\section{Distance to the depolarizing channel}\label{sec:depol}
In this section we compute the asymptotic distance between a random quantum channel $\Phi$ and the \emph{maximally depolarizing channel}
$$\Psi_\text{dep}: M_{d_1}(\mathbb C) \to M_{d_2}(\mathbb C), \qquad \Psi_\text{dep}(X) = \frac{\operatorname{Tr}(X)}{d_2} I_{d_2}.$$
Let us define the function $g:(1/4, \infty) \to (0,\infty)$ by
\begin{align*}
g(x)&:= \frac{3}{2}-x +\frac{\sqrt{4 x-1} (2 x+1)}{2 \pi x} \\
&\qquad - \frac{1}{\pi} \left( (x-1) \operatorname{arctan}\left(\frac{3 x-1}{(x-1) \sqrt{4 x-1}}\right)+ \operatorname{arctan}\left(\sqrt{4 x-1}\right)+ x \operatorname{arctan}\left(\frac{1}{\sqrt{4 x-1}}\right)\right).
\end{align*}
\begin{theorem}
Let $\Phi$ be a random quantum channel from $\Theta(d_1,d_2)$ having distribution $\gamma^W$ with parameters $(d_1,d_2,s_x)$. Then, almost surely as $d_1, d_2 \to \infty$ and $s_x \sim xd_1d_2$, we have
\begin{equation}\label{eq:distance-depol}
\lim_{d_1,d_2 \to \infty} \|\Phi - \Psi_\text{dep}\|_\diamond = \int \left|
\frac{u}{x} - 1 \right| d\mathcal{MP}_x(u) = \begin{cases}
2(1-x) &\qquad \text{ if } x \in (0,1/4]\\
g(x) &\qquad \text{ if } x \in (1/4,1)\\
g(x)+x-1 &\qquad \text{ if } x \in [1,\infty).
\end{cases}
\end{equation}
In the case $x=1$, the limit above reads $3\sqrt 3/(2\pi)$.
\end{theorem}
\begin{remark}
We plot in Figure \ref{fig:distance-depol} the value of the limit in \eqref{eq:distance-depol} as a function of $x$. One can show that the limit is a decreasing function of $x$, converging to $0$ as $x \to \infty$; more precisely, it behaves as $8/(3\pi)\, x^{-1/2}$ as $x \to \infty$.
\begin{figure}[ht]
\centering\includegraphics[scale=0.8]{distance-depol}
\caption{The asymptotic diamond-norm distance between a random quantum channel and the maximally depolarizing channel as a function of the channel parameter $x$.}\label{fig:distance-depol}
\end{figure}
\end{remark}
\begin{theproof}
We analyze separately the lower bound and the upper bound from Proposition \ref{prop:bound-diamond}. First, let us denote by $D_x$ the Choi-Jamio{\l}kowski matrix of the channel $\Phi$, and note that $J(\Psi_\text{dep}) = d_2^{-1}I_{d_1d_2}$. For the lower bound, we first show that we can approximate the random matrix $D_x$ by a rescaled Wishart matrix:
\begin{align*}\left| \frac{1}{d_1} \|D_x - d_2^{-1}I_{d_1d_2}\|_1 - \frac{1}{d_1} \|(xd_1d_2^2)^{-1}W_x - d_2^{-1}I_{d_1d_2}\|_1 \right| &\leq \frac{1}{d_1} \|D_x-(xd_1d_2^2)^{-1}W_x \|_1 \\
&\leq d_2 \|D_x-(xd_1d_2^2)^{-1}W_x \|_\infty,
\end{align*}
which converges almost surely to 0, by Proposition \ref{prop:approximation-partial-normalization}. The quantity with which we approximate is then
\begin{equation}\label{eq:distance-depol-LB}
\frac{1}{d_1} \|(xd_1d_2^2)^{-1}W_x - d_2^{-1}I_{d_1d_2}\|_1 = \frac{1}{d_1d_2} \sum_{i=1}^{d_1d_2} |\lambda_i[(xd_1d_2)^{-1}W_x - I_{d_1d_2}]|.
\end{equation}
The quantity above converges almost surely, as $d_1d_2 \to \infty$, towards
$$\int \left| \frac{u}{x} - 1 \right| d\mathcal{MP}_x(u).$$
Let us now show that the upper bound from Proposition \ref{prop:bound-diamond} converges to the same quantity. We follow the same steps as in the proof of Theorem \ref{thm:upper}: we first approximate the matrix $D_x$ by a rescaled Wishart random matrix, and then we argue that the partial trace appearing in the bound has ``flat'' eigenvalues, allowing us to replace the operator norm by the normalized trace. For the approximation step, we get, using again Proposition \ref{prop:approximation-partial-normalization},
\begin{align*}
\big| \ \|\ \Tr_2|D_x&-d_2^{-1}I_{d_1d_2}|\ \|_\infty - \|\ \Tr_2|(xd_1d_2^2)^{-1}W_x-d_2^{-1}I_{d_1d_2}| \ \|_\infty \ \big| \\
&\qquad \leq \big\| \Tr_2\left(|D_x-d_2^{-1}I_{d_1d_2}| - |(xd_1d_2^2)^{-1}W_x-d_2^{-1}I_{d_1d_2}| \right) \big\|_\infty \\
&\qquad \leq d_2 \big\| \ |D_x-d_2^{-1}I_{d_1d_2}| - |(xd_1d_2^2)^{-1}W_x-d_2^{-1}I_{d_1d_2}| \ \big\|_\infty \\
&\qquad \leq C d_2 \log(d_1d_2)\big\| D_x - (xd_1d_2^2)^{-1}W_x \big\|_\infty \\
&\qquad = \frac{\log(d_1d_2)}{d_2} O(1) \to 0 \quad \text{almost surely.}
\end{align*}
We focus now on the quantity $\|\ \Tr_2|(xd_1d_2^2)^{-1}W_x-d_2^{-1}I_{d_1d_2}| \ \|_\infty$. From Proposition \ref{prop:partial-trace-flat}, the spectrum of the random matrix $\Tr_2|(xd_1d_2^2)^{-1}W_x-d_2^{-1}I_{d_1d_2}|$ is flat, so its operator norm has the same limit as $d_1^{-1} \operatorname{Tr} |(xd_1d_2^2)^{-1}W_x-d_2^{-1}I_{d_1d_2}|$,
which is the same as \eqref{eq:distance-depol-LB}, finishing the proof.
\end{theproof}
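The closed form can be cross-checked numerically (illustrative only; Python with NumPy and SciPy). Here $\mathcal{MP}_x$ is taken in its free-Poisson form, with an atom of mass $(1-x)_+$ at the origin and density $\sqrt{(b-u)(u-a)}/(2\pi u)$ on $[a,b]$, $a,b = (1\mp\sqrt{x})^2$; up to numerical error, the two printed values should agree:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def g(x):   # the closed form from the theorem
    r = np.sqrt(4 * x - 1)
    return (1.5 - x + r * (2 * x + 1) / (2 * np.pi * x)
            - ((x - 1) * np.arctan((3 * x - 1) / ((x - 1) * r))
               + np.arctan(r) + x * np.arctan(1 / r)) / np.pi)

def mp_integral(x):   # \int |u/x - 1| dMP_x(u)
    a, b = (1 - np.sqrt(x))**2, (1 + np.sqrt(x))**2
    dens = lambda u: np.sqrt((b - u) * (u - a)) / (2 * np.pi * u)
    val, _ = quad(lambda u: abs(u / x - 1) * dens(u), a, b,
                  points=(x,))
    return val + max(1.0 - x, 0.0)    # atom at 0 for x < 1

for x in (0.5, 2.0):
    print(x, mp_integral(x), g(x) if x < 1 else g(x) + x - 1)
\end{verbatim}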
\section{Distance to the nearest unitary channel}\label{sec:nearest-unitary}
In this section we consider an asymptotic distance between a random quantum
channel $\Phi:M_{d }(\mathbb C) \to M_{d}(\mathbb C)$ and a unitary channel.
First we note that if a quantum channel $\Phi$ is an interior point of the set
of channels, then the channel $\Psi$ from which it is best distinguishable is some unitary
channel~\cite{puchala2015exploring}. Below we show that, in the limit $d \to
\infty$, almost all quantum channels are perfectly distinguishable from any
unitary channel. To see this we write
\begin{equation}
\begin{split}
\min_{\Psi_U} \| \Phi - \Psi_U \|_\diamond
&\geq \frac{1}{d} \min_{\Psi_U} \| J(\Phi) - J(\Psi_U) \|_1
= \min_{U} \frac{1}{d} \| J(\Phi) - \ketbra{U}{U} \|_1
\geq \min_{ \ket{x} } \frac{1}{d} \| J(\Phi) - d \ketbra{x}{x} \|_1 \\
&\geq \min_{ \ket{x} } 2(1 - F(J(\Phi)/d,\ketbra{x}{x}) )
= 2 - 2 \|J(\Phi)/d \|_{\infty}.
\end{split}
\end{equation}
In the above we have used the inequality between the diamond norm and the trace
norm of Choi-Ja\-mioł\-kow\-ski matrices, see
Proposition~\ref{prop:bound-diamond}, and then the Fuchs--van de Graaf
inequality~\cite{fuchs1999cryptographic}
involving the trace norm and the fidelity function $F(\rho,\sigma) = (\Tr
\sqrt{\sqrt{\rho} \sigma \sqrt{\rho}} )^2$.
Next we use the fact that the largest eigenvalue of the matrix $J(\Phi)/d$
tends to $0$ almost surely.
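A minimal numerical illustration of this bound is given below in Python (a Hilbert--Schmidt random state on $\mathbb{C}^{d^2}$ is used as a convenient stand-in for $J(\Phi)/d$, and the dimensions are arbitrary choices); the printed values approach $2$ as $d$ grows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
for d in (4, 8, 16):
    n = d * d
    G = (rng.standard_normal((n, n))
         + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    rho = G @ G.conj().T
    rho /= np.trace(rho).real
    lam_max = np.linalg.eigvalsh(rho)[-1]
    print(d, 2 - 2 * lam_max)
\end{verbatim}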
\section{Distance between random unitary channels}\label{sec:unitary}
We consider in this section the problem of distinguishing two unitary channels,
\begin{equation}\label{eq:unitary-channels}
\Phi(X) = UXU^* \qquad \text{and} \qquad \Psi(X) = VXV^*,
\end{equation}
where $U,V$ are two $d \times d$ unitary operators. The diamond norm of the
difference $\|\Phi - \Psi\|_\diamond$ has already been considered in the
literature, and we gather below the results from~\cite[Theorem 3.57]{wat} and
\cite[Theorem 12]{jkp}.
\begin{proposition}
For any two unitary operators $U,V$, the diamond norm of the difference of the unitary channels induced by $U,V$ is given by
\begin{itemize}
\item $\|\Phi - \Psi\|_\diamond = 2\sqrt{1-\nu(U^*V)^2}$, where $\nu(U^*V)$ is the smallest absolute value of an element in the numerical range of the unitary operator $U^*V$. In other words, $\nu(U^*V)$ is the radius of the largest open disc centered at the origin which does not intersect the convex hull of the eigenvalues of $U^*V$ (i.e.~the numerical range).
\item $\|\Phi - \Psi\|_\diamond = 2R(U^*V)$, where $R(U^*V)$ is the radius of the smallest disc (not necessarily centered at the origin) containing all the eigenvalues of $U^*V$.
\item Let $2\alpha$ be the smallest arc containing the spectrum of $U^*V$. Then,
$$\|\Phi - \Psi\|_\diamond = \begin{cases}
2 \sin \alpha, \qquad &\text{ if } \alpha < \pi/2\\
2, \qquad &\text{ if } \alpha \geq \pi/2.
\end{cases}$$
\end{itemize}
\end{proposition}
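The first and third characterizations are easy to check against each other numerically. In the Python sketch below (illustrative; the QR factorization of a Ginibre matrix is used merely as a convenient way to produce unitary operators), the smallest covering arc is read off from the largest gap between consecutive eigenphases, and the two printed values should coincide:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
d = 5
ginibre = lambda: (rng.standard_normal((d, d))
                   + 1j * rng.standard_normal((d, d)))
U, _ = np.linalg.qr(ginibre())
V, _ = np.linalg.qr(ginibre())
phases = np.sort(np.angle(np.linalg.eigvals(U.conj().T @ V)))
gaps = np.diff(np.concatenate([phases, [phases[0] + 2 * np.pi]]))
alpha = (2 * np.pi - gaps.max()) / 2   # half the covering arc
nu = np.cos(alpha) if alpha < np.pi / 2 else 0.0
print(2 * np.sin(alpha) if alpha < np.pi / 2 else 2.0,  # arc formula
      2 * np.sqrt(1 - nu**2))                           # nu formula
\end{verbatim}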
We represent in Figure \ref{fig:nu-R} the eigenvalues of the operator $W := U^*V$ and the numerical range of $W$. Recall that the \emph{numerical range} of an operator $A$ is the set
$$\mathcal N(A) = \{\langle x, Ax \rangle \, : \, x \in \mathbb C^d, \, \|x\| = 1\}.$$
The numerical range is a convex body \cite[Chapter 1]{hjo91}, and in the case
where $A$ is a normal operator ($AA^* =A^*A$) it coincides with the convex hull
of the spectrum. One remarkable fact about the results in the proposition above
is that two unitary operations $\Phi$ and $\Psi$ become \emph{perfectly}
distinguishable as soon as the convex hull of the eigenvalues of $U^*V$
contains the origin~\cite[Theorem 3.57]{wat}, \cite[Theorem 12]{jkp}.
\begin{figure}[ht]
\centering\includegraphics[scale=0.7]{nu-R-small} \qquad\qquad\qquad\qquad
\centering\includegraphics[scale=0.7]{nu-R-big}
\caption{The eigenvalues of the unitary operator $A=U^*V$ (red dots, here
$d=5$) and its numerical range $\mathcal{N}(A)$ (gray filled area). On the left,
the eigenvalues span an arc of length smaller than $\pi$, so the quantities
$\nu$ and $R$ are non-trivial. On the right, the eigenvalues span an arc larger
than half a circle, so the origin belongs to the numerical range; here, $\nu =
0$ and $R=1$.}\label{fig:nu-R}
\end{figure}
We consider next random unitary operators $U,V$. We analyze Haar-distributed operators and then the case where $U$ and $V$ are sampled from the distribution of two independent unitary Brownian motions stopped at different times. For independent, Haar-distributed unitary operators, in the limit of large dimension, the corresponding channels become perfectly distinguishable.
\begin{proposition}
Let $U,V \in \mathcal U(d)$ be two independent random variables, at least one of them being Haar-distributed. Then, with overwhelming probability as $d \to \infty$, the quantum channels $\Phi$ and $\Psi$ from \eqref{eq:unitary-channels} become perfectly distinguishable: for $d$ large enough,
$$\mathbb P\left[ \|\Phi - \Psi\|_\diamond = 2 \right] \geq 1-\exp\left(-\frac{\log 2}{2}d^2 \right).$$
\end{proposition}
\begin{remark}
The statement above includes the case where $U$ is a Haar-distributed random unitary matrix, and $V$ is the identity operator (hence, $\Psi$ is the identity channel).
\end{remark}
\begin{theproof}
From the hypothesis and the left/right invariance of the Haar distribution, it follows that the random matrix $W = U^*V$ is Haar-distributed. The estimate follows from \cite[Section 3.1]{bab}, where the probability of a Haar unitary matrix not having any eigenvalues in a given arc is related to a Toeplitz determinant, see equation (3.1) in \cite{bab}.
\end{theproof}
Let us now consider the case where the operators $U$ and $V$ are elements of two independent unitary Brownian motion processes. We shall not give the definition of this process, referring the reader to e.g.~\cite{bia95,bia97,rai97}. We shall only need here the following result of Biane, giving the asymptotic support of a unitary Brownian motion stopped at time $t$.
\begin{proposition}\cite[Proposition 10]{bia97}
Let $(U_t)_{t \geq 0}$ be a unitary Brownian motion on $\mathcal U(d)$ starting at the identity. Then, asymptotically as $d \to \infty$, the support of the eigenvalue distribution (on the unit circle) of the operator $U_t$ is the full circle if $t \geq 4$ and the arc
$$\left\{\exp(i\alpha) \, : \, |\alpha| \leq \frac{1}{2}\sqrt{t(4-t)} + \arccos(1-t/2) \right\} \qquad \text{ if }0 \leq t <4.$$
\end{proposition}
As a direct application of this result, we obtain the diamond norm of the difference of two unitary quantum channels stemming from independent unitary Brownian motions.
\begin{proposition}
Let $(U_s)_{s \geq 0}$ and $(V_t)_{t \geq 0}$ be two independent unitary Brownian motions and consider the random unitary quantum channels $\Phi_s$ and $\Psi_t$ from \eqref{eq:unitary-channels} obtained from the operators $U_s$ and $V_t$, respectively. Then, almost surely,
$$\lim_{d \to \infty} \|\Phi_s - \Psi_t\|_\diamond = \begin{cases}
2 \sin \left[ \frac{1}{2}\sqrt{(s+t)(4-s-t)} + \arccos(1-(s+t)/2)\right] , \quad &\text{ if } s+t < \tau\\
2, \quad &\text{ if } s+t \geq \tau,
\end{cases}$$
where $\tau \approx 0.6528$ is the unique solution of the equation
$$\frac{1}{2}\sqrt{t(4-t)} + \arccos(1-t/2) = \pi/2$$
on $(0, 4)$.
\end{proposition}
\begin{theproof}
The proof is an easy consequence of Biane's result (more precisely, of its ``strong'' formulation from \cite[Theorem 1.1]{cdk}), once we notice that the random unitary matrix $U_s^*V_t$ has the same distribution as $W_{s+t}$, where $W_\cdot$ is another unitary Brownian motion. We plot the diamond norm as a function of $s+t$ in Figure \ref{fig:uBm}.
\end{theproof}
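The constant $\tau$ is readily recovered numerically (Python with NumPy and SciPy; the bracketing interval is an arbitrary choice):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

f = lambda t: (0.5 * np.sqrt(t * (4 - t))
               + np.arccos(1 - t / 2) - np.pi / 2)
print(brentq(f, 0.01, 3.99))   # ~ 0.6528
\end{verbatim}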
\begin{figure}[ht]
\centering\includegraphics[scale=0.7]{uBm}
\caption{The diamond norm of a difference of two random unitary channels coming from two independent unitary Brownian motions stopped at times $s$ and $t$, as a function of $s+t$.}
\label{fig:uBm}
\end{figure}
\section{Concluding remarks} In this work we analyzed properties of generic
quantum channels, concentrating on the case of large system size. Using tools
provided by the theory of random matrices and the free probability calculus,
we showed that the diamond norm of the difference between two random channels
asymptotically tends to a constant specified in Theorem \ref{thm:main}. In the
simplest case $x=y=1$, the limit value of the diamond norm of the difference is
$\Delta(1,1) = 1/2 + 2/\pi$. Based on these results, in
Fig.~\ref{fig:set-of-channels} we provide a sketch of the set of quantum
channels. In Fig.~\ref{fig:convergence} we illustrate the convergence of the
upper and lower bounds to the value $1/2 + 2/\pi$. These statements allow us to
quantify the mean distinguishability between two random channels.
\begin{figure}[!h]
\centering\input{set-of-channels} \caption{Sketch of the set $\Theta(d, d)$ of
all channels acting on $d$-dimensional states. A generic channel $\Phi$ belongs
to a sphere of radius $r=3\sqrt{3}/(2\pi)$, centered at the maximally
depolarizing channel $\Psi_\mathrm{dep}$, in the metric induced by the diamond
norm. The distance between generic channels $\Phi, \Psi$ is $\Delta=1/2 +
2/\pi$, while the distance to the nearest unitary channel reads $a=2$.}
\label{fig:set-of-channels}
\end{figure}
\begin{figure}[!h]
\centering\includegraphics{convergence} \caption{The convergence of upper
(circles) and lower (triangles) bounds on the distance between two random
quantum channels sampled from the Hilbert-Schmidt distribution ($d_1=d_2=d$).
The results were obtained via Monte Carlo simulation with 100 samples for each
data point.}\label{fig:convergence}
\end{figure}
To arrive at this result we considered an ensemble of normalized random density
matrices, acting on a bipartite Hilbert space ${\mathcal H}_A \otimes
{\mathcal H}_B$, and distributed according to the flat (Hilbert-Schmidt)
measure. Such matrices can be generated with the help of a complex Ginibre
matrix $G$ as $\rho=GG^{*}/{\rm Tr} GG^{*}$. In the simplest case of square
matrices $G$ of order $d=d_1^2$, the average trace distance of a random state
$\rho$ from the maximally mixed state $\rho_*={I }/d$ behaves asymptotically as
$ ||\rho - \rho_*||_1 \to 3\sqrt{3}/4\pi$ \cite{ppz}. However, analyzing both
reduced matrices $\rho_A={\rm Tr}_B \rho$ and $\rho_B={\rm Tr}_A \rho$ we can
show that they become $\epsilon$-close to the maximally mixed state in the
sense of the operator norm, so that their smallest and largest eigenvalues
asymptotically coincide. This is visualized in Fig.~\ref{fig:set-of-states}.
\begin{figure}[!h]
\centering\input{set-of-states} \caption{Set of all bipartite quantum states of
dimension $d^2$, $\Omega_{d^2}$, (a) and its partial traces (b) and (c)
containing states of dimension $d$. A generic bipartite state $\sigma_{AB}$,
distant $r=3\sqrt{3}/4\pi$ from the maximally mixed state $I/d^2$, is mapped
into $\sigma_A \approx \sigma_B \approx I/d$, while a typical pure state
$\ket{\phi_{AB}}$ is sent into a generic mixed state $\rho_A \equiv \rho_B$
distant $r$ from $I/d$.}\label{fig:set-of-states}
\end{figure}
This observation implies that the state $\rho$ can be directly interpreted as
a Jamio{\l}kowski state $J$ representing a stochastic map $\Phi$,
as its partial trace $\rho_A$ is proportional to the identity. Furthermore,
as it becomes asymptotically equal to the other partial trace $\rho_B$,
it follows that a generic quantum channel (stochastic map) becomes
unital and thus bistochastic.
The partial trace of a random bipartite state is shown to be close to the identity
provided the support of the limiting measure characterizing the bipartite state
is bounded. In particular, this holds for the family of subtracted Mar\v{c}enko--Pastur
distributions defined in Eq.~\eqref{eq:def-SMP-xy} as a free additive convolution
of two rescaled Mar\v{c}enko--Pastur distributions with different parameters,
which determines the density of a difference of two random density matrices.
In this way we could establish the upper bound for the average
diamond norm between two channels and show
that it asymptotically converges to the lower bound $\Delta(x,y)$ given in Theorem \ref{thm:lower}.
The results obtained can be understood as an application
of the measure concentration paradigm \cite{asz} to the space of quantum
channels.
\noindent \emph{Acknowledgments.} I.N.~would like to thank Anna Jen\v{c}ov\'a
and David Reeb for very insightful discussion regarding the diamond norm, which
led to several improvements and simplifications of the proof of Proposition
\ref{prop:bound-diamond}. I.N.'s research has been supported by the ANR
projects {RMTQIT} {ANR-12-IS01-0001-01} and {StoQ} {ANR-14-CE25-0003-01}, as
well as by a von Humboldt fellowship. Financial support by the Polish National
Science Centre under projects number 2016/22/E/ST6/00062 (ZP),
2015/17/B/ST6/01872 ({\L}P) and 2011/02/A/ST1/00119 (K{\.Z}) is gratefully
acknowledged.
\section{The hardest logic puzzle ever}
\begin{defn}[The Hardest Logic Puzzle Ever]
\label{def:3GodsPuzzle}
Three gods ($\gamma_1,\gamma_2,\gamma_3$) will answer three yes or no questions.
Each question is to be directed at one god at a time.
The gods answer with the word `$\chi$' (or `\_') but we don't know what `$\chi$' (or `\_') means.
One god ($\mathcal{T}$) always tells the truth, one ($\mathcal{F}$) always lies,
and one ($\mathcal{R}$) answers
randomly\footnote{The
\label{fn:R}
puzzle has been interpreted as to allow for the random god to not answer randomly but instead
randomly function as a god who either tells the truth or lies.\cite{rab08}
Since this renders the random god pointless,
we'll stick to the interpretation that the random god answers truly randomly.
Besides, the truly random interpretation seems to be
what
Boolos
had in
mind,\cite{bool96}
e.g.\ with explanations like
``will answer your question yes or no, completely at random''\cite[\hspace{-0.25em}p.\,2]{bool96}.
It is also how \cite{rob01} interpreted the puzzle.}\nolinebreak[3]\hspace{-0.17em}.
The challenge is to figure out which god is which.\cite{bool96}
\end{defn}
\section{Groundwork}
\label{sec:sol}
We'll use $0$, $\bot$, no, and false interchangeably, when there is no risk of confusion.
And similarly for $1$, $\top$, yes, and true.
And $\chi$, whatever it means.
We'll also e.g.\ use '$=$' as a boolean function.
With $3$ gods there are $3\cdot 2=6$ possibilities for the gods.
This doubles to $12$ if we are also to figure out the meaning of $\chi$.
With $3$ questions we can discern at most $2^3=8$ outcomes.
Hence, since $12>8$, we had better remain ignorant of the meaning of $\chi$.
\subsection{A question template}
Let $\gamma(q)$ be god $\gamma$'s answer to the question $q$.
Given a yes or no question $q$ and a god $\gamma$,
we want a function $t(q,\gamma) \rightarrow\left\lbrace 0,1\right\rbrace$ that gives
the truth value of $q$
when $\gamma$ isn't the random god.
Fortunately, one of the first and simplest attempts at such a $t$ yields one that works:
\begin{defn}
\label{def:t}
\(
t(q,\gamma)\coloneqq\;\;
\gamma( \mbox{``}\gamma(q)=\chi\mbox{''}) = \chi
\)
\end{defn}
\begin{theorem}
\label{theorem:templateWorks}
If $\gamma\neq\mathcal{R}$, then $t(q,\gamma)\leftrightarrow q$
\end{theorem}
\vspace{-0.95em}
\begin{proof}
We'll go through all possible cases:
\begin{description}[itemsep=-3pt,topsep=1pt]
\item[\(\bm{q\!=\!1, \gamma\!=\!\mathcal{T}, \chi\!=\!1}\):] \hspace{-0.0em}Then
$\gamma(q)\!=\!\chi$, and
\( \gamma( \mbox{``}\gamma(q)\!=\!\chi\mbox{''}) \!=\! \chi \).
\item[\(\bm{q\!=\!1, \gamma\!=\!\mathcal{T}, \chi\!=\!0}\):] \hspace{-0.0em}Then
$\gamma(q)\!\neq\!\chi$, and
\( \gamma( \mbox{``}\gamma(q)\!=\!\chi\mbox{''}) \!=\! \chi \).
\item[\(\bm{q\!=\!1, \gamma\!=\!\mathcal{F}, \chi\!=\!1}\):] \hspace{-0.0em}Then
$\gamma(q)\!\neq\!\chi$, and
\( \gamma( \mbox{``}\gamma(q)\!=\!\chi\mbox{''}) \!=\! \chi \).
\item[\(\bm{q\!=\!1, \gamma\!=\!\mathcal{F}, \chi\!=\!0}\):] \hspace{-0.0em}Then
$\gamma(q)\!=\!\chi$, and
\( \gamma( \mbox{``}\gamma(q)\!=\!\chi\mbox{''}) \!=\! \chi \).
\item[\(\bm{q\!=\!0, \gamma\!=\!\mathcal{T}, \chi\!=\!1}\):] \hspace{-0.0em}Then
$\gamma(q)\!\neq\!\chi$, and
\( \gamma( \mbox{``}\gamma(q)\!=\!\chi\mbox{''}) \!\neq\! \chi \).
\item[\(\bm{q\!=\!0, \gamma\!=\!\mathcal{T}, \chi\!=\!0}\):] \hspace{-0.0em}Then
$\gamma(q)\!=\!\chi$, and
\( \gamma( \mbox{``}\gamma(q)\!=\!\chi\mbox{''}) \!\neq\! \chi \).
\item[\(\bm{q\!=\!0, \gamma\!=\!\mathcal{F}, \chi\!=\!1}\):] \hspace{-0.0em}Then
$\gamma(q)\!=\!\chi$, and
\( \gamma( \mbox{``}\gamma(q)\!=\!\chi\mbox{''}) \!\neq\! \chi \).
\item[\(\bm{q\!=\!0, \gamma\!=\!\mathcal{F}, \chi\!=\!0}\):] \hspace{-0.0em}Then
$\gamma(q)\!\neq\!\chi$, and
\( \gamma( \mbox{``}\gamma(q)\!=\!\chi\mbox{''}) \!\neq\! \chi \).
\end{description}
(Note that the cases pair up: swapping $\mathcal{T}$ for $\mathcal{F}$, or flipping $\chi$, amounts to a double negation.)
\end{proof}
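The case analysis can also be verified mechanically. The Python snippet below (ours, purely illustrative) enumerates all eight cases, encoding $\chi$ as the bit that means yes:
\begin{verbatim}
from itertools import product

def answer(god, statement, chi):
    honest = chi if statement else 1 - chi   # honest reply
    return honest if god == 'T' else 1 - honest

def t(q, god, chi):
    inner = answer(god, q, chi)              # gamma(q)
    return answer(god, inner == chi, chi) == chi

assert all(t(q, god, chi) == q
           for q, god, chi in product([True, False], 'TF', [0, 1]))
print("t(q, gamma) = q in all 8 cases")
\end{verbatim}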
For convenience we'll also introduce a way to refer to the meta-question put to the god
in definition \ref{def:t}:
\begin{defn}
\label{def:tq}
\(
t_q(q,\gamma)\coloneqq\;\;
\mbox{``}\gamma(q)=\chi\mbox{''}
\)
\end{defn}
Boolos does not use something like definition \ref{def:tq} in his solution to the
puzzle,\cite{bool96}
but Tim Roberts does.\cite{rob01}
\subsection{How to find questions}
Instead of presenting ``top-down'' solutions,
we'll try to develop solutions from the ground up
that are guaranteed to work and are optimal.
Essential for efficient searches is to split the search space into equally large parts.
This means that we want the possible answers to a question to be equally strong.
\subsubsection{Finding a solution to the non-random interpretation}
Let's suppose,
as preparation,
that $\mathcal{R}$ functions like either
$\mathcal{T}$ or $\mathcal{F}$ (cf.\ fn.\,\ref{fn:R});
in particular, theorem \ref{theorem:templateWorks} then also holds for $\mathcal{R}$.
Then
an optimal split of the $6$ possibilities would be provided
by asking about:
\begin{defn}[$q_{\bar{\mathcal{R}}}$]
\label{def:qRBar}
\begin{equation}
\label{eq:qRBar}
q_{\bar{\mathcal{R}}}\coloneqq\;\;
\bigvee
\left\lbrace
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{F}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{F},\gamma_3=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{F},\gamma_3=\mathcal{R}\}\;\\
\end{matrix}
\right\rbrace
\end{equation}
\end{defn}
We'll also need to reason about $\neg q_{\bar{\mathcal{R}}}$.
\begin{equation}
\neg q_{\bar{\mathcal{R}}} \leftrightarrow
\bigwedge
\left\lbrace
\begin{matrix}
\lor\{\gamma_1\neq\mathcal{R},\gamma_2\neq\mathcal{T},\gamma_3\neq\mathcal{F}\},\\
\lor\{\gamma_1\neq\mathcal{R},\gamma_2\neq\mathcal{F},\gamma_3\neq\mathcal{T}\},\\
\lor\{\gamma_1\neq\mathcal{T},\gamma_2\neq\mathcal{F},\gamma_3\neq\mathcal{R}\}\;\\
\end{matrix}
\right\rbrace
\end{equation}
which in disjunctive normal form (DNF) becomes
\begin{equation}
\label{eq:notqRBar}
\neg q_{\bar{\mathcal{R}}} \leftrightarrow
\bigvee
\left\lbrace
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{F}\},\\
\land\{\gamma_1=\mathcal{F},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{F},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T}\}\;\\
\end{matrix}
\right\rbrace
\end{equation}
\begin{solution}
\label{solution:nonR}
There is a solution to the non-random interpretation of ``The Hardest Logic Puzzle Ever''
using $2.67$ questions on average.
(It's also enough to assume non-randomness for only the first question.)
\end{solution}
\begin{proof}
Put $t_q(q_{\bar{\mathcal{R}}},\gamma_1)$ to $\gamma_1$ and consider the possible cases:
\begin{description}[itemsep=2pt,topsep=1pt,leftmargin=1.1em]
\item[Case \(t(q_{\bar{\mathcal{R}}},\gamma_1)\):] \hspace{0.0em}Then we know
from theorem \ref{theorem:templateWorks}, and from the supposed non-randomness of $\mathcal{R}$,
that $q_{\bar{\mathcal{R}}}$ holds.
Next we'll ask $\gamma_2$ about $\gamma_2=\mathcal{T}$.
\textbf{Suppose $t(\gamma_2=\mathcal{T},\gamma_2)$.}
Then we know from equation \eqref{eq:qRBar} that
$\gamma_1=\mathcal{R}$, $\gamma_2=\mathcal{T}$, and $\gamma_3=\mathcal{F}$.
This case used $2$ questions.
\textbf{Suppose $\neg t(\gamma_2=\mathcal{T},\gamma_2)$.}
Then we know from equation \eqref{eq:qRBar} that $\gamma_2=\mathcal{F}$.
$t(\gamma_1=\mathcal{T},\gamma_2)$ determines which is which of $\gamma_1$ and $\gamma_3$.
\item[Case \(\neg t(q_{\bar{\mathcal{R}}},\gamma_1)\):] \hspace{0.0em}Then
$\neg q_{\bar{\mathcal{R}}}$.
Next we'll ask $\gamma_1$ about $\gamma_1=\mathcal{T}$.
\textbf{Suppose $t(\gamma_1=\mathcal{T},\gamma_1)$.}
Then we know from equation \eqref{eq:notqRBar} that
$\gamma_1=\mathcal{T}$, $\gamma_2=\mathcal{R}$, and $\gamma_3=\mathcal{F}$.
This case used $2$ questions.
\textbf{Suppose $\neg t(\gamma_1=\mathcal{T},\gamma_1)$.}
Then we know from equation \eqref{eq:notqRBar} that $\gamma_1=\mathcal{F}$.
$t(\gamma_2=\mathcal{T},\gamma_1)$ determines which is which of $\gamma_2$ and $\gamma_3$.
\end{description}
\vspace{-0.65em}
\end{proof}
\vspace{-0.90em}
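The strategy can likewise be replayed mechanically. The Python sketch below (ours, illustrative) runs it for all $6$ assignments, both meanings of $\chi$, and both behaviors of $\mathcal{R}$ (truth-teller or liar, per the non-random interpretation), verifies the conclusions, and reproduces the $2.67$ average:
\begin{verbatim}
from itertools import permutations, product

def answer(kind, statement, chi, rmode):
    k = rmode if kind == 'R' else kind
    honest = chi if statement else 1 - chi
    return honest if k == 'T' else 1 - honest

def t(q, kind, chi, rmode):
    inner = answer(kind, q, chi, rmode)
    return answer(kind, inner == chi, chi, rmode) == chi

def run(gods, chi, rmode):
    ask = lambda q, i: t(q, gods[i], chi, rmode)
    q_rbar = gods in [('R','T','F'), ('R','F','T'), ('T','F','R')]
    if ask(q_rbar, 0):
        if ask(gods[1] == 'T', 1):
            return ('R','T','F'), 2
        return (('T','F','R') if ask(gods[0] == 'T', 1)
                else ('R','F','T')), 3
    if ask(gods[0] == 'T', 0):
        return ('T','R','F'), 2
    return (('F','T','R') if ask(gods[1] == 'T', 0)
            else ('F','R','T')), 3

total = 0
for gods in permutations('TFR'):
    for chi, rmode in product([0, 1], 'TF'):
        guess, n = run(gods, chi, rmode)
        assert guess == gods
        total += n
print(total / 24)   # 2.666...
\end{verbatim}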
\subsubsection{Managing randomness}
For the full
version of the puzzle (def.\,\ref{def:3GodsPuzzle}),
each question that can be answered by the random god weakens the
narrowing of the search space by up to $2$ possibilities,
adding $(\gamma_1\!=\!\mathcal{R},\gamma_2\!=\!\mathcal{F},\gamma_3\!=\!\mathcal{T})$
and $(\gamma_1\!=\!\mathcal{R},\gamma_2\!=\!\mathcal{T},\gamma_3\!=\!\mathcal{F})$
to the list of possibilities, say.
So it also seems important to find, as quickly as possible,
a god that isn't $\mathcal{R}$, in order to get reliable answers and minimize waste.
Hence, the problem with asking about something like $q_{\bar{\mathcal{R}}}$ for the full
puzzle (def.\,\ref{def:3GodsPuzzle})
is that the conclusion we are able to draw from $\neg t(q_{\bar{\mathcal{R}}},\gamma_1)$
is given by
\[
\bigvee
\left\lbrace
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{F}\},\\
\land\{\gamma_1=\mathcal{F},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{F},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{F},\gamma_3=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{F}\}\;\\
\end{matrix}
\right\rbrace
\]
Since this has $5$ possibilities, it's not solvable with only the remaining $2$ questions.
(The case $t(q_{\bar{\mathcal{R}}},\gamma_1)$ remains as in solution \ref{solution:nonR}, though,
since that case already includes the possibilities
$(\mathcal{R},\mathcal{F},\mathcal{T})$ and $(\mathcal{R},\mathcal{T},\mathcal{F})$.)
Instead we'll again balance the partitions, by moving $1$ of the added random possibilities from
the negative side to the positive:
\begin{defn}[$q_1$]
\label{def:q1}
\begin{equation}
\label{eq:q1}
q_1\coloneqq\;\;
\bigvee
\left\lbrace
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{F}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{F},\gamma_3=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{F},\gamma_3=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{F},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R}\}\;\\
\end{matrix}
\right\rbrace
\end{equation}
\end{defn}
We'll also need to reason about $\neg q_1$.
\begin{equation}
\neg q_1 \leftrightarrow
\bigwedge
\left\lbrace
\begin{matrix}
\lor\{\gamma_1\neq\mathcal{R},\gamma_2\neq\mathcal{T},\gamma_3\neq\mathcal{F}\},\\
\lor\{\gamma_1\neq\mathcal{R},\gamma_2\neq\mathcal{F},\gamma_3\neq\mathcal{T}\},\\
\lor\{\gamma_1\neq\mathcal{T},\gamma_2\neq\mathcal{F},\gamma_3\neq\mathcal{R}\},\\
\lor\{\gamma_1\neq\mathcal{F},\gamma_2\neq\mathcal{T},\gamma_3\neq\mathcal{R}\}\;\\
\end{matrix}
\right\rbrace
\end{equation}
which in disjunctive normal form becomes
\begin{equation}
\label{eq:notq1}
\neg q_1 \leftrightarrow
\bigvee
\left\lbrace
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{F}\},\\
\land\{\gamma_1=\mathcal{F},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T}\}\;\\
\end{matrix}
\right\rbrace
\end{equation}
which with added $\mathcal{R}$ possibilities gives
\begin{defn}[$\bar{q}_1^R$]
\label{def:notq1R}
\begin{equation}
\label{eq:notq1R}
\bar{q}_1^R \coloneqq\;\;
\bigvee
\left\lbrace
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{F}\},\\
\land\{\gamma_1=\mathcal{F},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{F},\gamma_3=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{F}\}\;\\
\end{matrix}
\right\rbrace
\end{equation}
\end{defn}
\section{A bottom-up solution to the puzzle}
\begin{solution}
\label{solution:bottomUp}
A solution to ``The Hardest Logic Puzzle Ever'' exists.
\end{solution}
\begin{proof}
Put $t_q(q_1,\gamma_1)$ to $\gamma_1$ and consider the possible cases:
\begin{description}[itemsep=2pt,topsep=3pt,leftmargin=1.1em]
\item[Case \(t(q_1,\gamma_1)\):] \hspace{0.2em}Since equation \eqref{eq:q1}
already includes all the possibilities where $\gamma_1=\mathcal{R}$,
$q_1$ holds.
Hence $\gamma_2\neq\mathcal{R}$ and is safe to question.
Next we'll ask $\gamma_2$ about $\gamma_2=\mathcal{T}$.
After that, $t(\gamma_1=\mathcal{R},\gamma_2)$ determines which is which of $\gamma_1$ and
$\gamma_3$.
\item[Case \(\neg t(q_1,\gamma_1)\):] \hspace{0.2em}Then $\gamma_1=\mathcal{R}$ or
$\neg q_1$ holds. Adding the $\gamma_1=\mathcal{R}$ possibilities to $\neg q_1$ results in
$\bar{q}_1^R$, which must hold. Inspecting equation \eqref{eq:notq1R} shows that
$\gamma_3\neq\mathcal{R}$ and is safe to question.
Next we'll ask $\gamma_3$ about $\gamma_3=\mathcal{T}$.
After that, $t(\gamma_1=\mathcal{R},\gamma_3)$ determines which is which of $\gamma_1$ and
$\gamma_2$.
\end{description}
\vspace{-1.1em}
\end{proof}
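As before, the solution can be checked exhaustively. In the Python sketch below (ours, illustrative), a reply from $\gamma_1$ carries no information when $\gamma_1=\mathcal{R}$, so both replies are branched over; the later questions go to provably non-random gods, whose $t$-values are the true truth values:
\begin{verbatim}
from itertools import permutations

Q1    = {('R','T','F'), ('R','F','T'), ('T','F','R'), ('F','T','R')}
Q1BAR = {('T','R','F'), ('F','R','T'), ('R','F','T'), ('R','T','F')}

def solve(gods, t1):
    if t1:   # q1 (or gamma1 = R): gamma2 is non-random
        cands = [g for g in Q1
                 if (g[1] == 'T') == (gods[1] == 'T')
                 and (g[0] == 'R') == (gods[0] == 'R')]
    else:    # gamma3 is non-random
        cands = [g for g in Q1BAR
                 if (g[2] == 'T') == (gods[2] == 'T')
                 and (g[0] == 'R') == (gods[0] == 'R')]
    assert len(cands) == 1
    return cands[0]

for gods in permutations('TFR'):
    replies = [True, False] if gods[0] == 'R' else [gods in Q1]
    for t1 in replies:
        assert solve(gods, t1) == gods
print("all 6 assignments identified with 3 questions")
\end{verbatim}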
Note that $q_1\leftrightarrow (\gamma_2\!\neq\!\mathcal{R})$. A first question $q$ such that
$q$ and $\neg q$
are equally strong, and where equally many $\mathcal{R}$ possibilities are added to each side
of the search split, works too --- for example
$\gamma_3\!=\!\mathcal{R} \lor (\gamma_1\!=\!\mathcal{R} \land \gamma_2\!=\!\mathcal{T} \land
\gamma_3\!=\!\mathcal{F})$.
Note too that there is no solution using fewer than $3$ questions to the full puzzle.
\section{The n gods puzzle class}
\label{sec:nGods}
\begin{defn}[The Hardest Logic Puzzle Ever with $n$ Gods, $m$ Random Gods, and
$k$ Truthful Gods]
\label{def:nGodsPuzzle}
Let the $(n,m,k)$ gods puzzle be like ``The Hardest Logic Puzzle Ever'' (def.\,\ref{def:3GodsPuzzle})
but with $n$ gods, $m$ random gods, $k$ truthful gods,
$n-m-k$ lying gods,
and with no restriction on the number of questions allowed.
\end{defn}
\begin{theorem}
\label{theorem:nGodsSolvability}
An $n$ gods puzzle is solvable if and only if the number of random gods is strictly less than
the number of non-random gods.
\end{theorem}
\begin{lemma}
\label{lemma:findingNonR}
If an $n$ gods puzzle has strictly more non-random gods than random gods,
then a non-random god can be found.
\end{lemma}
\vspace{-1.2em}
\begin{proof}
We'll prove the lemma by induction.
The lemma holds for puzzles with $1$ and $2$ gods.
Assume that the lemma holds for $k<n$,
and that there are more non-random gods than random gods.
We'll then find a non-random god for the $n$ case.
Ask $\gamma_1$ about $\gamma_i\!=\!\mathcal{R}$ for $2\leq i\leq n$ until
$t(\gamma_i\!=\!\mathcal{R},\gamma_1)$, or
all gods have been checked.
If $\gamma_1\!\neq\! \mathcal{R}$, then $\gamma_i\!=\!\mathcal{R}$ by
theorem \ref{theorem:templateWorks}, or there are no
more random gods at all.
Hence, the subproblem for
$(\gamma_2,\ldots,\gamma_{i-1},\gamma_{i+1},\ldots,\gamma_n)$
has at least one fewer random god, and at most one fewer non-random god,
if there is any random god at all.
Thus, a non-random god $\gamma_j$ can be found for the subproblem.
And $\gamma_j$ suffices for the $n$ case too.
\end{proof}
\begin{lemma}
\label{lemma:RsLTNonRsSolvable}
If an $n$ gods puzzle has strictly more non-random gods than random gods,
then it is solvable.
\end{lemma}
\vspace{-1.2em}
\begin{proof}
Assume more non-random gods than random gods. Then, by lemma \ref{lemma:findingNonR},
a non-random god $\gamma_j$ can be found.
After that it's straightforward to go through all gods and ask $\gamma_j$ about their
identity. This determines all the gods, according to theorem \ref{theorem:templateWorks}.
\end{proof}
\begin{proof}[Proof of theorem \ref{theorem:nGodsSolvability}]
The $\Leftarrow$ case is covered by lemma \ref{lemma:RsLTNonRsSolvable}.
For the $\Rightarrow$ case,
to see why puzzles with at least as many random gods
as non-random gods aren't solvable, consider the easiest of such problems to solve:
assume that $n$ is even, that the number of random gods equals the number of non-random gods,
and that all non-random gods are truthful.
Let
\begin{defn}
\label{def:pnUns}
$p_n \coloneqq $
\begin{equation}
\label{eq:nonSolvable}
\bigvee\!\!
\left\lbrace
\begin{matrix}
\land\{\gamma_1\!=\!\mathcal{T},\gamma_2\!=\!\mathcal{R},\gamma_3\!=\!\mathcal{T},
\gamma_4\!=\!\mathcal{R},\ldots,\gamma_n\!=\!\mathcal{R}\},\\
\land\{\gamma_1\!=\!\mathcal{R},\gamma_2\!=\!\mathcal{T},\gamma_3\!=\!\mathcal{R},
\gamma_4\!=\!\mathcal{T},\ldots,\gamma_n\!=\!\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!
\end{equation}
\end{defn}
Let $p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$ and
$p_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$ be the
two disjuncts of $p_n$:
\begin{defn}
\label{pTR}
$p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}\coloneqq $
\begin{equation}
\land\{\gamma_1\!=\!\mathcal{T},\gamma_2\!=\!\mathcal{R},\gamma_3\!=\!\mathcal{T},
\gamma_4\!=\!\mathcal{R},\ldots,\gamma_n\!=\!\mathcal{R}\}
\end{equation}
\end{defn}
\begin{defn}
\label{pRT}
$p_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}\coloneqq $
\begin{equation}
\hspace{3.0em}\land\{\gamma_1\!=\!\mathcal{R},\gamma_2\!=\!\mathcal{T},
\gamma_3\!=\!\mathcal{R},
\gamma_4\!=\!\mathcal{T},\ldots,\gamma_n\!=\!\mathcal{T}\}
\end{equation}
\end{defn}
If the gods are as described by
$p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$ or
$p_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$, and only
$p_n$ is known, then that puzzle instance is unsolvable if the random gods are
unhelpful (we'll show this next).
So let's assume that the gods are as described by $p_n$.
To get to an unsolvable position, we'll have the random gods ``happen''
to force the puzzle there. To that end,
if both $q$ and $\neg q$ are consistent with $p_n$,
(i.e.\ not $p_n\!\rightarrow\!\neg q$ and not $p_n\!\rightarrow\!q$),
assume that the random gods always happen to give the incorrect answer
to a question about $q$.
If only one of $q$ and $\neg q$ is consistent with $p_n$,
assume that the random gods always happen to give the correct answer
to a question about $q$.
Call a question `trivial' if it or its negation follows from the already
asked questions.
Given this setup,
and if non-trivial questions are asked,
then it will be known that $p_n$ holds.
But once there,
no more conclusion can be drawn from the answer to a question $q$.
And it is not possible to determine which of
$p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$ and
$p_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$
holds.
More specifically, without loss of generality we will assume that $\gamma_1$ is asked about $q$.
An easy way to see that we
are able to
conclude $p_n$ is to note that we can ask
$\frac{n}{2}+1$ gods
about $p_n$ explicitly.
Since all gods answer that $p_n$ holds, $p_n$ can be concluded since at least
$1$ of the $\frac{n}{2}+1$ gods must be non-random.
Note that
if $p_n$
is known
and $q$ is non-trivial, then
$q\leftrightarrow p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$ or
$q\leftrightarrow p_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$.
To see this, suppose that $q$ is non-trivial. Then it has a model $\mathcal{M}$ (i.e.\ an assignment of the gods)
where $q$ and $p_n$ are both true. Suppose, without loss of generality, that it's
the $p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$
disjunct that's true in $\mathcal{M}$.
Then, since $p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$
completely determines a model, $q$ holds whenever
$p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$ does, i.e.\
$p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}\rightarrow q$.
Suppose $\neg p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$.
Then $p_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$.
Since $p_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$
determines its model completely, if $q$ were to hold, then $q$ would
hold in all models of $p_n$, contradicting that $q$ is non-trivial.
Hence $\neg q$ holds, i.e.\
$\neg p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}\rightarrow \neg q$.
Assume that $p_n$ is known.
To see that once $p_n$ is known, no more conclusions can be made,
suppose $\gamma_1\!=\!\mathcal{T}$,
and that $q$ is non-trivial.
Then an answer that $q$ (or $\neg q$) holds
(which would imply that
$p_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$ holds)
is undone because we can only conclude that
$q \vee \gamma_1\!=\!\mathcal{R}$ (or $\neg q \vee \gamma_1\!=\!\mathcal{R}$),
which is equivalent to $p_n$.
Similarly, if instead $\gamma_1=\mathcal{R}$, then we are able to conclude only
$q \vee \gamma_1\!=\!\mathcal{R}$ (or $\neg q \vee \gamma_1\!=\!\mathcal{R}$).
If $q$ is non-trivial,
then both $q$ and $\neg q$ are consistent with $p_n$.
Hence, the false answer that $q$ (or $\neg q$) holds
undoes the disjunct $\gamma_1=\mathcal{R}$.
If $q$ is trivial, the concluded disjunction,
$q \vee \gamma_1\!=\!\mathcal{R}$ (or $\neg q \vee \gamma_1\!=\!\mathcal{R}$),
will already be known, and nothing new can be concluded.
The remaining, even harder, puzzles are unsolvable too: the restriction that
the non-random gods are all truthful is immaterial, and if more random gods
are added, the proof still works with minor alterations, e.g.\
to eq.\,\eqref{eq:nonSolvable}.
\end{proof}
\section{A solution to the 5 gods puzzle with 2 random and 3 truthful gods}
We will solve the puzzle where the non-random gods are the same
(i.e.\ all $\mathcal{T}$, or all $\mathcal{F}$).
This restriction is inessential, although it reduces the number of possibilities;
the other variants are similar.
There are $\frac{5\cdot 4}{2}=10$ possibilities for the gods.
We'll address $\gamma_1$ first, without loss of generality.
Included in any conclusions drawn from the first answer is that $\gamma_1$ could be $\mathcal{R}$:
$\gamma_1\!=\!\mathcal{R} \;\leftrightarrow$
\begin{equation}
\label{eq:gamma1IsR}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\vspace{0pt}\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!\!
\end{equation}
For the first question we'll take half of the conjunctions from
$\gamma_1=\mathcal{R}$ (eq.\,\eqref{eq:gamma1IsR}),
and add half of the remaining possibilities,
aiming to get $\gamma_2$ likely to be non-random in the positive case,
and $\gamma_3$ likely non-random in the negative case:
\begin{defn}
\label{def:q15}
$q_1^5\coloneqq$
\begin{equation}
\label{eq:q15}
\bigvee\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{R}\},\vspace{0pt}\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!
\end{equation}
\end{defn}
We'll also need to reason about $\neg q_1^5$.
Using disjunctive normal form we have:
$\neg q_1^5 \leftrightarrow$
\begin{equation}
\label{eq:notq15}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\vspace{0pt}\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!
\end{equation}
Adding $\gamma_1=\mathcal{R}$ possibilities (eq.\,\eqref{eq:gamma1IsR}) to $q_1^5$ and
$\neg q_1^5$ gives
\begin{defn}
\label{def:q15R}
$q_{15}^R \coloneqq$
\begin{equation}
\label{eq:q15R}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{R}\},\vspace{0pt}\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!
\end{equation}
\end{defn}
\begin{defn}
\label{def:notq15R}
$\bar{q}_{15}^R \coloneqq$
\begin{equation}
\label{eq:notq15R}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\vspace{0pt}\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!
\end{equation}
\end{defn}
$q_{15}^R$ and $\bar{q}_{15}^R$ are then equally strong, and their construction provides a
solution:
\begin{solution}
\label{solution:5gods}
A solution to the $5$ gods puzzle with 2 random and 3 truthful gods
using $4.15$ questions on average exists.
\end{solution}
\begin{proof}
Put $t_q(q_1^5,\gamma_1)$ to $\gamma_1$ and consider the possible cases:
\begin{description}[itemsep=2pt,topsep=3pt,leftmargin=0.6em]
\item[Case \(t(q_1^5,\gamma_1)\):] \hspace{0.0em}Then $q_{15}^R$ (eq.\,\eqref{eq:q15R})
holds by its construction,
and theorem \ref{theorem:templateWorks}.
Given
$q_{15}^R$,
$\gamma_2$ is most likely to be non-random,
and we'll ask her next.
We'll again aim to split the remaining possibilities in two equally large parts,
and with gods likely to be non-random on both sides.
Let
\begin{defn}
\label{def:q25}
$q_{2}^5 \coloneqq$
\begin{equation}
\label{eq:q25}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{R}\}\;\vspace{0pt}\\
\end{matrix}\!
\right\rbrace\!\!\!\!
\end{equation}
\end{defn}
Then $\neg q_{2}^5 \leftrightarrow$
\begin{equation}
\label{eq:notq25}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!\!\!
\end{equation}
There is no more $\gamma_2=\mathcal{R}$ possibility to add to $q_2^5$,
but there is one for $\neg q_2^5$:
\begin{defn}
\label{def:notq25R}
$\bar{q}_{25}^R \coloneqq$
\begin{equation}
\label{eq:notq25R}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!\!\!\!
\end{equation}
\end{defn}
$q_2^5$ and $\neg q_2^5$ are balanced, and we'll ask $\gamma_2$ about $q_2^5$ next.
\begin{description}[itemsep=2pt,topsep=1pt,leftmargin=0.6em]
\item[Case $t(q_2^5,\gamma_2)$:]
Then $q_{2}^5$ (eq.\,\eqref{eq:q25}) holds, by construction.
Therefore $\gamma_3\neq\mathcal{R}$ and is safe to ask.
Asking $\gamma_3$ about $\gamma_4\neq\mathcal{R}$ and $\gamma_5\neq\mathcal{R}$
determines the rest of the gods.
This case used $4$ questions, and covered $4$ possibilities.
\item[Case $\neg t(q_2^5,\gamma_2)$:]
Then $\bar{q}_{25}^R$ (eq.\,\eqref{eq:notq25R}) holds, by construction.
Next we'll go after $\gamma_4$, since it's likely that she isn't
$\mathcal{R}$.
We'll again aim to split the remaining possibilities in two equally large parts.
Let
\begin{defn}
\label{def:q35}
$q_{3}^5 \coloneqq$
\begin{equation}
\label{eq:q35}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!
\end{equation}
\end{defn}
Then $\neg q_{3}^5 \leftrightarrow$
\begin{equation}
\label{eq:notq35}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!
\end{equation}
There is no more $\gamma_4=\mathcal{R}$ possibility to add to $q_3^5$,
but there is one for $\neg q_3^5$:
\begin{defn}
\label{def:notq35R}
$\bar{q}_{35}^R \coloneqq$
\begin{equation}
\label{eq:notq35R}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!
\end{equation}
\end{defn}
$q_3^5$ and $\neg q_3^5$ are as balanced as possible, and we'll ask $\gamma_4$ about $q_3^5$ next.
\begin{description}[itemsep=2pt,topsep=1pt,leftmargin=0.6em]
\item[Case $t(q_3^5,\gamma_4)$:]
Then $q_{3}^5$ (eq.\,\eqref{eq:q35}) holds, by construction.
Therefore $\gamma_2\neq\mathcal{R}$ and is safe to ask.
Asking $\gamma_2$ about $\gamma_4\neq\mathcal{R}$
determines the rest of the gods.
This case used $4$ questions, and covered $2$ possibilities.
\item[Case $\neg t(q_3^5,\gamma_4)$:]
Then $\bar{q}_{35}^R$ (eq.\,\eqref{eq:notq35R}) holds, by construction.
Therefore $\gamma_5\neq\mathcal{R}$ and is safe to ask.
Asking $\gamma_5$ about $\gamma_4\neq\mathcal{R}$, and if needed
asking $\gamma_5$ about $\gamma_3\neq\mathcal{R}$,
determines the rest of the gods.
This case used $5$ questions for $2$ possibilities,
and $4$ questions for $1$ possibility.
\end{description}
\end{description}
\item[Case \(\neg t(q_1^5,\gamma_1)\):]
Then $\bar{q}_{15}^R$ (eq.\,\eqref{eq:notq15R}) holds, by construction.
We'll go after $\gamma_3$ next. Let
\begin{defn}
\label{def:q2bar5}
$q_{2}^{\bar{5}} \coloneqq$
\begin{equation}
\label{eq:q2bar5}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!\!
\end{equation}
\end{defn}
Then $\neg q_{2}^{\bar{5}} \leftrightarrow$
\begin{equation}
\label{eq:notq2bar5}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\vspace{0pt}\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!\!\!
\end{equation}
Adding $\gamma_3=\mathcal{R}$ possibilities to $q_2^{\bar{5}}$ and
$\neg q_2^{\bar{5}}$ gives
\begin{defn}
\label{def:q2bar5R}
$q_{2{\bar{5}}}^R \coloneqq$
\begin{equation}
\label{eq:q2bar5R}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{R}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!\!\!\!
\end{equation}
\end{defn}
\begin{defn}
\label{def:notq2bar5R}
$\bar{q}_{2{\bar{5}}}^R \coloneqq$
\begin{equation}
\label{eq:notq2bar5R}
\bigvee\!\!
\left\lbrace\!
\begin{matrix}
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\vspace{0pt}\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{T},\gamma_4=\mathcal{R},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{R},\gamma_3=\mathcal{T},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{R},\gamma_2=\mathcal{T},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\},\\
\land\{\gamma_1=\mathcal{T},\gamma_2=\mathcal{R},\gamma_3=\mathcal{R},\gamma_4=\mathcal{T},
\gamma_5=\mathcal{T}\}\;\\
\end{matrix}\!
\right\rbrace\!\!\!\!\!\!
\end{equation}
\end{defn}
$q_2^{\bar{5}}$ and $\neg q_2^{\bar{5}}$ are as balanced as possible, and we'll ask $\gamma_3$ about
$q_2^{\bar{5}}$ next.
\begin{description}[itemsep=2pt,topsep=1pt,leftmargin=0.6em]
\item[Case $t(q_2^{\bar{5}},\gamma_3)$:]
Then $q_{2{\bar{5}}}^R$ (eq.\,\eqref{eq:q2bar5R}) holds, by construction.
Therefore $\gamma_4\neq\mathcal{R}$ and is safe to ask.
Asking $\gamma_4$ about $\gamma_2\neq\mathcal{R}$ and $\gamma_3\neq\mathcal{R}$
determines the rest of the gods.
This case used $4$ questions, and covered $4$ possibilities.
\item[Case $\neg t(q_2^{\bar{5}},\gamma_3)$:]
Then $\bar{q}_{2{\bar{5}}}^R$ (eq.\,\eqref{eq:notq2bar5R}) holds, by construction.
Therefore $\gamma_5\neq\mathcal{R}$ and is safe to ask.
Ask $\gamma_5$ about $\gamma_4\neq\mathcal{R}$:
\begin{description}[itemsep=2pt,topsep=1pt,leftmargin=0.6em]
\item[If $\gamma_4\neq\mathcal{R}$,] then asking
$\gamma_5$ about $\gamma_1\neq\mathcal{R}$, and if needed asking
$\gamma_5$ about $\gamma_3\neq\mathcal{R}$,
determines all the gods.
This case used $5$ questions for $2$ possibilities,
and $4$ questions for $1$ possibility.
\item[If $\gamma_4=\mathcal{R}$,] then asking
$\gamma_5$ about $\gamma_1\neq\mathcal{R}$
determines all gods.
This case used $4$ questions, and covered $2$ possibilities.
\end{description}
\end{description}
\end{description}
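The safety claims in the case analysis above are mechanical to check. The following Python sketch (an illustration, not part of the argument) enumerates the ten possibilities, exactly two random gods among the five, and verifies, e.g., that every possibility satisfying $q_{2{\bar{5}}}^R$ (eq.\,\eqref{eq:q2bar5R}) has $\gamma_4\neq\mathcal{R}$, so that $\gamma_4$ is indeed safe to ask.
\begin{verbatim}
from itertools import combinations

# The ten possibilities: exactly two of the five gods are random.
worlds = [tuple('R' if i in rs else 'T' for i in range(5))
          for rs in combinations(range(5), 2)]

# The disjuncts of q_{2 5bar}^R above, as (g1, ..., g5) tuples.
q2bar5R = {('R','T','R','T','T'), ('T','R','T','T','R'),
           ('T','R','R','T','T'), ('R','T','T','T','R')}

assert q2bar5R <= set(worlds)
assert all(w[3] == 'T' for w in q2bar5R)   # gamma_4 is never random
\end{verbatim}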
\vspace{-0.75em}
\end{proof}
\vspace{-1.85em}
\subsection{Average number of questions used}
The number of questions used for each possibility is shown below
(eq.\,\eqref{nOfQ}).
Some possibilities can be detected in more than one way.
When that happens, and when
the numbers of questions differ,
the probability of each
question sequence is listed as well.
As can be seen, it's slightly better to try to hide
cases that can take $5$ questions among possibilities that have multiple ways to get detected
(e.g., $(\frac{1}{4}4+\frac{1}{4}5+\frac{1}{2}5) + 4 < \frac{4+5}{2}+\frac{4+5}{2}$).
\vspace{-0.2em}
\begin{equation}
\label{nOfQ}
\begin{matrix}
P\,|Qs| & \gamma_1 & \gamma_2 & \gamma_3 & \gamma_4 & \gamma_5 \\
\frac{1}{4}4,\frac{1}{4}5,\frac{1}{2}5
& \mathcal{R} & \mathcal{R} & \mathcal{T} & \mathcal{T} & \mathcal{T} \\
4,4 & \mathcal{R} & \mathcal{T} & \mathcal{T} & \mathcal{T} & \mathcal{R} \\
4,4 & \mathcal{R} & \mathcal{T} & \mathcal{T} & \mathcal{R} & \mathcal{T} \\
4 & \mathcal{T} & \mathcal{T} & \mathcal{T} & \mathcal{R} & \mathcal{R} \\
4 & \mathcal{T} & \mathcal{T} & \mathcal{R} & \mathcal{T} & \mathcal{R} \\
4,4 & \mathcal{T} & \mathcal{T} & \mathcal{R} & \mathcal{R} & \mathcal{T} \\
\frac{1}{2}5,\frac{1}{4}4,\frac{1}{4}5
& \mathcal{R} & \mathcal{T} & \mathcal{R} & \mathcal{T} & \mathcal{T} \\
4 & \mathcal{T} & \mathcal{R} & \mathcal{T} & \mathcal{T} & \mathcal{R} \\
4,4 & \mathcal{T} & \mathcal{R} & \mathcal{R} & \mathcal{T} & \mathcal{T} \\
4 & \mathcal{T} & \mathcal{R} & \mathcal{T} & \mathcal{R} & \mathcal{T} \\
\end{matrix}
\end{equation}
Thus, the average number of questions used to find a solution is
$ ( 2(\frac{1}{4}4 + \frac{1}{4}5 + \frac{1}{2}5) + 8\cdot 4 ) \;\div\; 10 \;=\; 4.15 $.
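For the record, the average is easy to recompute from eq.\,\eqref{nOfQ}; a minimal Python check:
\begin{verbatim}
from fractions import Fraction

# Two possibilities mix 4- and 5-question runs; the other eight
# always use 4 questions.
mixed = Fraction(1, 4)*4 + Fraction(1, 4)*5 + Fraction(1, 2)*5
average = (2*mixed + 8*4) / Fraction(10)
print(average)   # 83/20 = 4.15
\end{verbatim}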
\section{A top-down solution to the puzzle}
\begin{solution}[Tim Roberts's solution]
\label{solution:nonRPrio}
Here is a solution to ``The Hardest Logic Puzzle Ever'' that is similar to Tim Roberts's
solution.\cite{rob01}
\vspace{-0.4em}
\end{solution}
\begin{proof}
We'll start by asking $\gamma_1$ the question $t_q(\gamma_3=\mathcal{R},\gamma_1)$.
Consider the possible cases:
\begin{description}[itemsep=0pt,topsep=1pt,leftmargin=1.1em]
\item[Case \(t(\gamma_3=\mathcal{R},\gamma_1)\):] \hspace{0.0em}If $\gamma_1\neq\mathcal{R}$,
then $\gamma_3=\mathcal{R}$ by theorem \ref{theorem:templateWorks}, and
$\gamma_2\neq\mathcal{R}$.
If $\gamma_1=\mathcal{R}$,
then $\gamma_2\neq\mathcal{R}$ again. Hence we now know that $\gamma_2\neq\mathcal{R}$ and
is safe to question.
Next we'll ask $\gamma_2$ about $\gamma_1\neq\mathcal{R}$ (because there are $4$
possibilities left, half of which have $\gamma_1=\mathcal{R}$).
If \(t(\gamma_1\neq\mathcal{R},\gamma_2)\), then $\gamma_3=\mathcal{R}$,
and e.g.\ $t(\gamma_2=\mathcal{T},\gamma_2)$ determines which is which of $\gamma_1$ and
$\gamma_2$.
If \(\neg t(\gamma_1\neq\mathcal{R},\gamma_2)\),
then $\gamma_1=\mathcal{R}$, and
$t(\gamma_2=\mathcal{T},\gamma_2)$ determines which is which of $\gamma_2$ and $\gamma_3$.
\item[Case \(\neg t(\gamma_3=\mathcal{R},\gamma_1)\):] \hspace{0.0em}If
$\gamma_1\neq\mathcal{R}$,
then $\gamma_3\neq\mathcal{R}$ by theorem \ref{theorem:templateWorks}.
If $\gamma_1=\mathcal{R}$,
then $\gamma_3\neq\mathcal{R}$ again. Hence $\gamma_3\neq\mathcal{R}$ and is safe to question.
Next we'll ask $\gamma_3$ about $\gamma_1\neq\mathcal{R}$.
If \(t(\gamma_1\neq\mathcal{R},\gamma_3)\), then $\gamma_2=\mathcal{R}$,
and e.g.\ $t(\gamma_3=\mathcal{T},\gamma_3)$ determines which is which of $\gamma_1$ and
$\gamma_3$.
If \(\neg t(\gamma_1\neq\mathcal{R},\gamma_3)\),
then $\gamma_1=\mathcal{R}$, and
$t(\gamma_3=\mathcal{T},\gamma_3)$ determines which is which of $\gamma_2$ and $\gamma_3$.
\end{description}
\vspace{-1.15em}
\end{proof}
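The case analysis above can also be checked exhaustively. The following Python sketch (an illustration, assuming the three gods are True, False, and Random as in the original puzzle) models the meta-question template as an oracle that answers truthfully unless the asked god is Random, and verifies that the three-question strategy always recovers the assignment:
\begin{verbatim}
import itertools, random

def t(q, god, types):
    # Meta-question template: a non-random god effectively answers
    # truthfully; a random god answers arbitrarily.
    return random.random() < 0.5 if types[god] == 'R' else q

def solve(types):
    if t(types[2] == 'R', 0, types):        # ask gamma_1: gamma_3 = R?
        if t(types[0] != 'R', 1, types):    # gamma_2 is safe to ask
            g2 = 'T' if t(types[1] == 'T', 1, types) else 'F'
            return ('F' if g2 == 'T' else 'T', g2, 'R')
        g2 = 'T' if t(types[1] == 'T', 1, types) else 'F'
        return ('R', g2, 'F' if g2 == 'T' else 'T')
    if t(types[0] != 'R', 2, types):        # gamma_3 is safe to ask
        g3 = 'T' if t(types[2] == 'T', 2, types) else 'F'
        return ('F' if g3 == 'T' else 'T', 'R', g3)
    g3 = 'T' if t(types[2] == 'T', 2, types) else 'F'
    return ('R', 'F' if g3 == 'T' else 'T', g3)

for perm in itertools.permutations('TFR'):
    assert all(solve(perm) == perm for _ in range(100))
\end{verbatim}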
\vspace{-0.95em}
\section{A note on mathematical vs.\ computational thinking}
Donald Knuth has distinguished
between
mathematical and computational thinking.\cite{knuthInt}
G\"{o}del's incompleteness theorems
provide a particular type of mathematical thinking.
Their proofs
consist of
straightforward computational reasoning,
except for the fixed-point theorem,
which
requires a certain kind of mathematical thinking.
Similarly, the foundations of computability are straightforwardly computational,
except for the fixed-point result, Kleene's recursion theorem,
which is quite mathematical.\cite{fra13,10.1305/ndjfl/1093890812,owi89,west10,ham10}
In this article,
the meta-question template,
def.\,\ref{def:tq},
is perhaps more of the mathematical
kind,
while the rest is mostly straightforward computational reasoning.
\section{Infinite number of gods}
\label{sec:infGods}
The results in section \ref{sec:nGods} about when a puzzle is solvable
hold also when the number of gods is infinite.
Let $\nu$ be the number of gods.
Regard $\nu$ as an ordinal and
let the gods be
$\Gamma\!\coloneqq\bigcup_{\alpha<\nu}\{\gamma_\alpha\}$.
\subsection{Finding a non-random god}
For finding which type each god is, we'll define a function
$\bar{\varrho}$ that
takes a well-ordered set of gods and
returns a non-random god, if there are more non-random than
random gods.
Let $\bar{\varrho}(\{\gamma\})\coloneqq\gamma$.
Let $\bar{\varrho}(\{\gamma,\_\})\coloneqq\gamma$,
with $\gamma$ least,
say.
For the successor case,
we'll ask the last god, $\gamma'$.
Go through the gods, from least to greatest,
until a random god $\gamma_\mathcal{\scriptscriptstyle R}$ is found,
i.e.\ until
$t(\gamma_\mathcal{\scriptscriptstyle R}=\mathcal{R},\gamma')$,
or the search has reached the last god,
in which case we'll set $\gamma_\mathcal{\scriptscriptstyle R}$ to the least god.
Then remove the last god, $\gamma'$;
replace $\gamma_\mathcal{\scriptscriptstyle R}$ with the least god;
and update the well-ordering accordingly, by removing the last god,
and handling the $\gamma_\mathcal{\scriptscriptstyle R}$ replacement.
Then we can recursively apply $\bar{\varrho}$ to the new set of gods and
the updated well-ordering;
the result of this recursive application will be the result of the successor case too.
For the limit case,
the result of $\bar{\varrho}$ is the limit of $\bar{\varrho}$ on the smaller sets.
The limit exists if there are more non-random gods than random gods.
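For finitely many gods, the successor step can be written out explicitly. The following Python sketch is one reading of the description above (it interprets ``replace $\gamma_\mathcal{\scriptscriptstyle R}$ with the least god'' as discarding $\gamma_\mathcal{\scriptscriptstyle R}$ and moving the least god into its slot); the function ask models the meta-question template, answering truthfully unless the asked god is random:
\begin{verbatim}
import random

def ask(statement, god):
    # Template: non-random gods effectively answer truthfully;
    # a random god answers arbitrarily.
    return random.random() < 0.5 if god['type'] == 'R' else statement

def rho_bar(gods):
    # gods: list ordered least to greatest, with strictly more
    # non-random ('T') than random ('R') gods.
    if len(gods) <= 2:
        return gods[0]
    last, rest = gods[-1], gods[:-1]
    i, flagged = next(((j, g) for j, g in enumerate(rest)
                       if ask(g['type'] == 'R', last)), (0, rest[0]))
    # discard the flagged god; the least god takes its place
    new = rest[1:] if i == 0 else rest[1:i] + [rest[0]] + rest[i+1:]
    return rho_bar(new)

gods = [{'type': c} for c in 'RTTRT']
assert rho_bar(gods)['type'] == 'T'   # holds on every run
\end{verbatim}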
\begin{lemma}
\label{lemma:RsLTNonRsSolvableInf}
If a $\nu$ gods puzzle has strictly more non-random gods than random gods,
then it is solvable.
\end{lemma}
\vspace{-1.2em}
\begin{proof}
Assume there are more non-random gods than random gods.
Then we can use $\bar{\varrho}$ to find a non-random god
$\gamma$.
After that we can go through all the gods and ask $\gamma$ about their type.
\end{proof}
\subsection{Finding an unsolvable puzzle}
Let $\mathscr{P}(\nu)$ be the set of all subsets of ordinals less than $\nu$.
\begin{theorem}
\label{theorem:nuGodsSolvability}
A $\nu$ gods puzzle is solvable if and only if the number of random gods is strictly less than
the number of non-random gods.
\end{theorem}
\begin{proof}
The $\Leftarrow$ case is covered by lemma \ref{lemma:RsLTNonRsSolvableInf}.
For the $\Rightarrow$ case,
consider the easiest such puzzles to solve:
Assume
that the number of random gods equals the number of non-random gods,
and that all non-random gods are truthful.
$p_n$ (def.\,\ref{def:pnUns}) from section \ref{sec:nGods} becomes
\begin{defn}
\label{def:PnuUns}
$P_\nu \coloneqq $
\begin{equation}
\label{eq:nonSolvable2}
\bigcup_{\underset{|\alpha|<\aleph_0}{\alpha\in\mathscr{P}(\nu),}}
\{\,
\bigvee
\left\lbrace
\begin{matrix}
\bigwedge\limits_{\beta\in\alpha}
\left\lbrace
\begin{cases}
\gamma_\beta\!=\!\mathcal{T},&\text{if } \beta \text{ is even}\\
\gamma_\beta\!=\!\mathcal{R},&\text{if } \beta \text{ is odd}
\end{cases}
\right\rbrace\!\!,\\
\bigwedge\limits_{\beta\in\alpha}
\left\lbrace
\begin{cases}
\gamma_\beta\!=\!\mathcal{R},&\text{if } \beta \text{ is even}\\
\gamma_\beta\!=\!\mathcal{T},&\text{if } \beta \text{ is odd}
\end{cases}
\right\rbrace
\end{matrix}
\right\rbrace
\}
\end{equation}
\end{defn}
Let
$P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$ and
$P_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$ be the
sets of conjunctions occurring in $P_\nu$, similarly to section~\ref{sec:nGods}.
\begin{defn}
\label{def:PTR}
$P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}} \coloneqq $
\begin{equation}
\label{eq:PTR}
\bigcup_{\underset{|\alpha|<\aleph_0}{\alpha\in\mathscr{P}(\nu),}}
\{
\bigwedge\limits_{\beta\in\alpha}
\left\lbrace
\begin{cases}
\gamma_\beta\!=\!\mathcal{T},&\text{if } \beta \text{ is even}\\
\gamma_\beta\!=\!\mathcal{R},&\text{if } \beta \text{ is odd}
\end{cases}
\right\rbrace
\}
\end{equation}
\end{defn}
\begin{defn}
\label{def:PRT}
$P_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}} \coloneqq $
\begin{equation}
\label{eq:PRT}
\bigcup_{\underset{|\alpha|<\aleph_0}{\alpha\in\mathscr{P}(\nu),}}
\{
\bigwedge\limits_{\beta\in\alpha}
\left\lbrace
\begin{cases}
\gamma_\beta\!=\!\mathcal{R},&\text{if } \beta \text{ is even}\\
\gamma_\beta\!=\!\mathcal{T},&\text{if } \beta \text{ is odd}
\end{cases}
\right\rbrace
\}
\end{equation}
\end{defn}
Let
$P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}^\nu$ and
$P_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}^\nu$ be the
full descriptions of the puzzle instances:
\begin{defn}
\label{def:PTRnu}
$P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}^\nu \coloneqq $
\begin{equation}
\label{eq:PTRnu}
\bigcup_{\beta<\nu}
\left\lbrace
\begin{cases}
\gamma_\beta\!=\!\mathcal{T},&\text{if } \beta \text{ is even}\\
\gamma_\beta\!=\!\mathcal{R},&\text{if } \beta \text{ is odd}
\end{cases}
\right\rbrace
\end{equation}
\end{defn}
\begin{defn}
\label{def:PRTnu}
$P_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}^\nu \coloneqq $
\begin{equation}
\label{eq:PRTnu}
\bigcup_{\beta<\nu}
\left\lbrace
\begin{cases}
\gamma_\beta\!=\!\mathcal{R},&\text{if } \beta \text{ is even}\\
\gamma_\beta\!=\!\mathcal{T},&\text{if } \beta \text{ is odd}
\end{cases}
\right\rbrace
\end{equation}
\end{defn}
Assume that the gods are as described by
$P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}^\nu$ or
$P_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}^\nu$, and
and hence that $P_\nu$ holds.
If both $q$ and $\neg q$ are consistent with $P_\nu$,
assume that the random gods always happen to give the incorrect answer
to a question about $q$.
If only one of $q$ and $\neg q$ is consistent with $P_\nu$,
assume that the random gods always happen to give the correct answer
to a question about $q$.
We should be able to conclude that $P_\nu$ (eq.\,\eqref{eq:nonSolvable2}) holds,
if we reason in e.g.\ set theory,
because we can ask more gods than there are random gods about
$P_\nu$.
Since everyone will answer that
$P_\nu$ holds, and we have asked at least one truthful god,
we can conclude that $P_\nu$ must hold.
Next we'll show that if
$P_\nu\!\nvdash q$ and
$P_\nu\!\nvdash \neg q$,
then
$P_\nu\!\vdash q \!\leftrightarrow\!
\phi_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$
for all
$\phi_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}\!\in\!
P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$, or
$P_\nu\!\vdash q \leftrightarrow
\phi_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$
for all
$\phi_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}\!\in\!
P_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$.
Assume $P_\nu$,
$P_\nu\!\nvdash q$ and
$P_\nu\!\nvdash \neg q$.
Then $q$ has a model $\mathcal{M}$ (i.e.\ an assignment of the gods)
in which both $q$ and $P_\nu$ are true. Suppose, without loss of generality, that it's
$P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$
that's true in $\mathcal{M}$.
Then, since $P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$
completely determines a model, $q$ holds whenever
$P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$ does, i.e.\
$\phi_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}\!\rightarrow\! q$
for
$\phi_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}\!\in\!
P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$.
Suppose
$\neg\phi_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$
for
$\phi_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}\!\in\!
P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$.
Then $P_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$.
Since $P_{\mathcal{\scriptscriptstyle R}\mathcal{\scriptscriptstyle T}}$
determines its models completely, if $q$ were to hold, then $q$ would
hold in all models of $P_\nu$, contradicting
$P_\nu\!\nvdash q$.
Hence $\neg q$ holds, i.e.\
$\neg\phi_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}\!\rightarrow\!\neg q$.
Assume $P_\nu$ (eq.\,\eqref{eq:nonSolvable2}).
To see that if $P_\nu$ is known, no more conclusions can be made,
suppose that $\gamma_{\alpha}\!=\!\mathcal{T}$, with $\alpha$ even, say.
Suppose $\gamma_\alpha$ is asked about $q$.
Suppose
$P_\nu\!\nvdash q$ and
$P_\nu\!\nvdash \neg q$.
Then an answer that $q$ (or $\neg q$) holds
(with implications that
$\phi_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$ holds, for
$\phi_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}\!\in\!
P_{\mathcal{\scriptscriptstyle T}\mathcal{\scriptscriptstyle R}}$)
is undone because we can only conclude that
$q \vee \gamma_{\alpha}\!=\!\mathcal{R}$ (or
$\neg q \vee \gamma_{\alpha}\!=\!\mathcal{R}$),
which is equivalent to
formulas in $P_\nu$.
Similarly, if instead $\gamma_\alpha=\mathcal{R}$, then we are able to conclude only
$q \vee \gamma_\alpha\!=\!\mathcal{R}$ (or $\neg q \vee \gamma_\alpha\!=\!\mathcal{R}$).
If
both $q$ and $\neg q$ are consistent with $P_\nu$,
the false answer that $q$ (or $\neg q$) holds
undoes the disjunct $\gamma_\alpha=\mathcal{R}$.
If $q$ is already known,
the concluded disjunction,
$q \vee \gamma_\alpha\!=\!\mathcal{R}$ (or $\neg q \vee \gamma_\alpha\!=\!\mathcal{R}$),
will already be known too, and nothing new can be concluded.
\end{proof}
\nocite{gend17}
\section{Introduction}\label{intro}
Polar Ring/Disk Galaxies (PRGs) are multi-spin systems. The polar structure is made by dust and gas that rotates in a perpendicular plane with respect to the stars of the central galaxy \citep[see][as a review]{Iod2014}. A "second event" is invoked in the formation history of PRGs in order to explain the decoupling of the angular momentum. Thus, PRGs are among the best galaxies to study the physics of accretion/interaction mechanisms, the disk formation and the dark halo shape. New deep surveys have shown that multi-spin galaxies are quite common at high redshift and studying these systems at increasing redshift gives fundamental information on the physical processes at work (i.e. merging and accretion) during the formation of galaxies \citep[see][as a review]{Cons2014}.
In the literature there are few studies on PRGs at high redshifts ($z\ge0.05$), where the dominant component, the central galaxy, is easily visible, while detection of the faint, bluer and dusty polar structure requires deep imaging, with high spatial resolution.
The most distant kinematically confirmed PRG is at $z\sim0.06$ \citep{Brosch2010}. At higher redshifts ($z\sim 0.06 -1.3$), there are only a few PRG candidates still to be confirmed \citep{Resh97, Resh07}. The photometric analysis performed on PRGs at $z\ge0.05$ shows that in all of them the polar structure has an almost regular ring-like shape, with some clumps of light due to star forming regions. To date there is no detailed study of a {\it forming} PRG, in which the build-up of the polar structure is still at an intermediate stage, like the well-studied galaxies NGC~3808B and NGC6286 at $z\sim0.02$ \citep{Resh96}. In the new SDSS-based Polar Ring Catalogue (SPRC) compiled by \citet{Moi11}, which reports several new PRG candidates over a large range of redshift, only two objects resemble an ongoing interaction to form a PRG at $z\ge0.05$ (SPRC199 and SPRC226), but both of them are neither kinematically confirmed nor studied in detail.
At $z\sim0.02$ a forming polar disk has been found in the wall between voids, resulting from the slow accretion of gas \citep{Stan09}: the polar structure is made only of neutral hydrogen (it was detected in the HI emission) and the gas density is still too low to form the stellar counterpart.
In this letter we present the detailed photometric analysis of the background galaxy FCSS J033710.0-354727 at $z\sim0.05$, which is a good candidate to be the first forming wide PRG at this redshift.
\section{Observations and Data Reduction}\label{data}
As part of the {\it VST Survey of Elliptical Galaxies in the Southern hemisphere} \citep[VEGAS, see][]{Cap2011}, which is a Guaranteed Time Observation survey being performed at the ESO VLT Survey Telescope (VST), we have obtained a mosaic of $1.75 \times 1.59$~degrees of the Deep Field of the Fornax Cluster, around the cD galaxy NGC~1399. Images were collected in the $g$ and $i$ bands in October 2011.
VST is a 2.6-m wide field optical survey telescope, located at Cerro Paranal in Chile.
The VST is currently the largest telescope in the world specially designed for surveying the sky in visible light; it is the ESO work-horse totally dedicated to visible survey programmes. The telescope is an F/5.5 with an alt-azimuth mount, equipped with an active optics system \citep{Schipani2010, Schipani2012}.
VST is equipped with the wide field camera OmegaCAM, spanning a $1 \times 1$~degree$^2$ field of view, in the optical wavelength range from 0.3 to 1.0 micron \citep{Kui2011}. The mean pixel scale is 0.21~arcsec/pixel.
The region of maximum overlap of all pointings is around the galaxy NGC~1389, where the total integration time is 3 hours in the $g$ band and 1.4 hours in the $i$ band. The average seeing is about 1~arcsec.
The data reduction has been performed with the {\it VST-Tube} imaging pipeline \citep{Grado2012}. Starting from the raw data, it provides fully calibrated images through the following steps: 1) overscan, bias and flat-field correction; 2) CCD gain equalization and illumination correction; 3) astrometric and photometric calibration, applied before stacking for the final co-added image. For a detailed description of the data reduction procedure see \citet{Ripepi2014}.
In order to perform deep surface photometry of the extended galaxies in the VEGAS Survey, the VST-Tube pipeline also includes a task to remove the background patterns. This will be described in a forthcoming paper dedicated to the first results of the VEGAS Survey (Capaccioli et al. in preparation). In the present work, since the studied object covers a very small area (its diameter is $D\la 30$~arcsec, see Sec.~\ref{phot}), the background has been estimated locally, in the regions surrounding the galaxy.
\section{The Galaxy FCSS J033710.0-354727: morphology and light distribution}\label{phot}
The background galaxy FCSS~J033710.0-354727 at $z\sim0.051$ is located South of the bright S0 galaxy NGC~1389 in the Fornax cluster (see Fig.~\ref{field}). It is characterized by a central component, which we named "host galaxy" (HG), surrounded by a warped ring-like structure (see left panel of Fig.~\ref{PRG} and Fig.~\ref{PRG_i}). The main properties of FCSS~J033710.0-354727 are listed in Table~\ref{PRG_prop}.
According to \citet{Brocca97}, inside a distance of about five times its diameter, i.e. $R\le200$~kpc (see Table~\ref{PRG_prop}), no companion galaxies at comparable measured redshift are found around FCSS~J033710.0-354727, which could thus be an isolated object.
We have analysed images in the $g$ and $i$ bands to derive the galaxy structure and colors.
\begin{figure*}
\centering
\includegraphics[width=17cm]{NGC1389_field.jpg}
\caption{ Field around the bright S0 galaxy NGC~1389, in the $g$-band VST Deep Field of the Fornax cluster. The arrow locates the background galaxy FCSS~J033710.0-354727.}
\label{field}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=17cm]{PRG_g_ratio.jpg}
\caption{ Left panel - VST image of FCSS~J033710.0-354727 in the $g$-band with the isophote contours (green lines). Labels indicate the center of the galaxy, {\bf X}, the two bright knots inside the polar structure, {\bf Y} and {\bf Z}, and several bright features, from {\bf A} to {\bf G}, which are apparently distributed on an elliptical orbit (blue dashed line) around the polar direction. Right panel - The {\it high frequency residual image} of FCSS~J033710.0-354727 in the $g$-band (see Sec.~\ref{phot} for details). North is up and East is on the left.}
\label{PRG}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{PRG_i_ratio.jpg}
\caption{ Left panel - VST image of FCSS~J033710.0-354727 in the $i$-band with the isophote contours (green lines). Labels are the same as in Fig.~\ref{PRG}. Right panel - The {\it high frequency residual image} of FCSS~J033710.0-354727 in the $i$-band. North is up and East is on the left.}
\label{PRG_i}
\end{figure*}
\begin{table}
\begin{minipage}[t]{\columnwidth}
\caption{Global properties of FCSS J033710.0-354727.}
\label{PRG_prop}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{lccc}
\hline\hline
Parameter&Value&Ref.\\
\hline
Morphological type&Sc peculiar&NED\footnote{NASA/IPAC Extragalactic Database}\\
R.A. (J2000) &03h37m09.94s &NED \\
Decl. (J2000) &-35d47m27.2s&NED \\
Helio. radial velocity & 15447 km/s&NED\\
Redshift & 0.051 &FCSS\footnote{{\it The Fornax Spectroscopic Survey} \citep{FCSS}}\\
Distance & 206 Mpc & \\
Scale & 1 kpc/arcsec & \\
Diameters & $3.4 \times 40$ kpc & this work\\
Magnitudes\footnote{Absolute magnitudes are corrected for both Galactic Extinction and K-correction, while apparent magnitudes take into account only the Galactic Extinction, see text for details.} & & this work\\
$m_{g}$ & 18.59 mag& \\
$m_{i}$ & 18.03 mag& \\
$M_{g}$ & -18.00 mag& \\
$M_{i}$ & -18.48 mag& \\
& {\it Central galaxy}:&\\
$m_{g}$ & 20.47 mag& \\
$m_{i}$ & 19.90 mag& \\
$M_{g}$ & -16.12 mag& \\
$M_{i}$ & -16.61 mag& \\
$g-i$& 0.6 mag & \\
& {\it Polar structure}:&\\
$m_{g}$ & 19.09 mag& \\
$m_{i}$ & 18.53 mag& \\
$M_{g}$ & -17.54 mag& \\
$M_{i}$ & -17.99 mag& \\
$g-i$& $0.56$ mag & \\
\hline
\end{tabular}
\end{minipage}
\end{table}
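As a consistency check, the distance and scale quoted in Table~\ref{PRG_prop} follow from the low-redshift approximation $D \simeq cz/H_0$; the sketch below assumes $H_0 = 75$~km~s$^{-1}$~Mpc$^{-1}$, since the adopted cosmology is not stated explicitly.
\begin{verbatim}
import math

cz = 15447.0   # heliocentric radial velocity [km/s]
H0 = 75.0      # assumed Hubble constant [km/s/Mpc]

D = cz / H0                                  # ~206 Mpc
scale = D * 1e3 * math.radians(1 / 3600.0)   # ~1 kpc per arcsec
print(f"D = {D:.0f} Mpc, scale = {scale:.2f} kpc/arcsec")
\end{verbatim}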
{\it Morphology -} The VST images of FCSS~J033710.0-354727 in the $g$ and $i$ bands are shown in the left panels of Fig.~\ref{PRG} and Fig.~\ref{PRG_i}. The center of the galaxy, labelled as {\bf X}, contributes most of the light. Towards the North, there are two other bright knots, labelled as {\bf Y} and {\bf Z}, which are inside the polar structure. At larger distances from the galaxy center, we observe several bright features, which are apparently distributed on an elliptical orbit around the polar direction; one of these (labelled as {\bf A}) is likely a disk galaxy. None of the above objects has a redshift measurement.
We have derived the {\it high frequency residual images} of FCSS~J033710.0-354727 in the $g$ and $i$ bands (see right panels of Fig.~\ref{PRG} and Fig.~\ref{PRG_i}), as the ratio of the original reduced image to a smoothed\footnote{We used the IRAF task {\small FMEDIAN} to smooth the original reduced image, by adopting a box of $25\times25$~pixels.} one, in which each original pixel value is replaced with the median value in a rectangular window.
The {\it high frequency residual images} show a disk-like structure along the major axis of the central component.
The polar structure extends up to the galaxy center, crossing it and tracing an "S-shaped" pattern. It is characterized by a very bright peak at the end of the North arm.
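The same unsharp-masking step can be reproduced outside IRAF; a minimal NumPy/SciPy sketch (the median filter and the $25\times25$~pixel window are chosen to mimic {\small FMEDIAN}):
\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter

def high_frequency_residual(image, box=25):
    # Ratio of the image to a median-smoothed version of itself.
    smooth = median_filter(image, size=box)
    return image / np.where(smooth != 0, smooth, 1.0)
\end{verbatim}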
Fig.~\ref{PRG_zoom} shows the $g$-band image of FCSS~J033710.0-354727 at different contrast levels: close to the center, the light of the HG is not symmetric and is perturbed by the two arms of the polar structure that are approaching the nucleus. In particular, on the NE side there is a third bright knot, whose light is blended with that coming from the central HG.
Even if the morphology of FCSS~J033710.0-354727 could also resemble that of a late-type barred galaxy (where the central disk could be the bar, with loosely wound and highly-inclined arms), the qualitative analysis of the structure observed for FCSS~J033710.0-354727 described above suggests classifying this object as a PRG. In fact, the two arms of the polar structure cross the center of the galaxy; they do not start from the edge of the central disk/bar. Moreover, the "accreting loop" observed towards the galaxy center was also observed in other forming polar rings at lower redshifts, such as ESO~474-G26 \citep[see Fig.~14 of][]{Spav12} and VGS31b \citep[see Fig.~4 of][]{Spav13}.
{\it Surface photometry -} We used the {\small ELLIPSE} task in IRAF on both $g$ and $i$ band images to perform the isophotal analysis. All the bright features around FCSS~J033710.0-354727, including background objects and bright stars in the field, are masked. The azimuthally averaged surface brightness profiles, the Position Angle (P.A.) and the ellipticity ($\epsilon$) are shown in Fig.~\ref{ellipse}. The limits of the surface photometry presented in this work are derived as the distance from the center where the galaxy's light blends into the background level, which are found to be 23~arcsec ($\sim$23~kpc) in both $g$ and $i$ band.
The limiting magnitudes corresponding to the limiting radii given above are $\mu_{g} = 29.3 \pm 0.3$~mag~arcsec$^{-2}$ for the $g$ band, and $\mu_{i} = 27.7 \pm 0.3$~mag~arcsec$^{-2}$ for the $i$ band.
The error estimates on the magnitudes take the uncertainties on the photometric calibration ($\sim0.02$~mag) and sky subtraction ($\simeq 0.06$~ADU in the $g$ band and $\simeq 0.2$~ADU in the $i$ band) into account.
The azimuthally averaged surface brightness profiles in the $g$ and $i$ bands (see Fig.~\ref{ellipse}, left panel) are quite smooth over the whole range of radii, except for the two peaks of light at $R\sim 3.5$~arcsec ($\sim3.5$~kpc) and $R\sim 7$~arcsec ($\sim7$~kpc), which correspond to the bright knots labeled {\bf Y} and {\bf Z}, respectively, in Fig.~\ref{PRG} (left panel).
Both the P.A. and ellipticity profiles (Fig.~\ref{ellipse}) show an abrupt change in shape at $R\le1.69$~arcsec ($\sim1.7$~kpc): by looking at the isophotes in the left panel of Fig.~\ref{PRG}, this radius corresponds to the "transition" from the HG to the polar ring (PR) and sets a constraint on the radial extent of the two components, being $R_{HG} \sim 2$~kpc and $R_{PR} \sim 20$~kpc.
For $R\ge1.69$~arcsec, i.e. along the polar structure major axis, and up to $R \sim 20$~arcsec ($\sim 1.7 - 20$~kpc), a strong twisting is observed, as the P.A. varies by about $70^\circ$. In this range of radii, the flattening increases from $0.1$ to $0.4$.
We obtained the 2-dimensional (2D) model from the fit of the isophotes\footnote{The 2-dimensional model of the fitted isophotes was obtained by using the IRAF task {\small BMODEL}.} in the $g$ band and subtracted it from the original image. The 2D residual is shown in Fig.~\ref{res}. As already found from the high-frequency residual image (right panels of Fig.~\ref{PRG} and Fig.~\ref{PRG_i}) and from the high-level contrast in the $g$ band (see Fig.~\ref{PRG_zoom}), an "S-shaped" structure is clearly detectable along the polar direction. It extends from the north to the south crossing the galaxy center, drawing a spiral loop throughout the nucleus, and it is characterised by two bright knots in the northern arm.
We derived the integrated magnitudes in the $g$ and $i$ bands inside the elliptical aperture corresponding to the last fitted isophote, for the whole galaxy and for both components (HG and polar structure). Values are corrected for the extinction within the Milky Way, by using the absorption coefficients $A_{g}=0.035$ and $A_{i}=0.018$ derived according to \citet{Schlegel98}. The K-correction was applied to the absolute magnitudes, where the correction factors in the $g$ and $i$ bands are $K_g=0.02$ and $K_i=-0.06$ respectively \citep{Chi2010, Chi2012}.
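For reference, the absolute magnitudes in Table~\ref{PRG_prop} follow from the quoted (extinction-corrected) apparent magnitudes through the distance modulus for $D=206$~Mpc, with the K-correction subtracted; a minimal sketch:
\begin{verbatim}
import math

DM = 5 * math.log10(206e6 / 10.0)    # distance modulus, ~36.57 mag

for band, m, K in [("g", 18.59, 0.02), ("i", 18.03, -0.06)]:
    M = m - DM - K
    print(f"M_{band} = {M:.2f}")     # -18.00 and -18.48
\end{verbatim}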
{\it Light profiles -} We have extracted the light profiles along the major axes of the two main components of FCSS~J033710.0-354727 (see Fig.~\ref{profili}). The P.A.s of these directions are defined from the fit of the isophotes (see right panels of Fig.~\ref{ellipse}). Due to the "S-shape" of the polar structure, which generates the observed twisting, we adopted the two directions at $P.A.= 8^\circ$ and $P.A.= 30^\circ$ that intersect the two bright knots {\bf Y} and {\bf Z}.
The surface brightness of the central component extends up to about 8~arcsec ($\sim 8$~kpc). It is not symmetric with respect to the center of the galaxy. For $R\ge 2$~arcsec, the NW part of the light profile is brighter than the SE part. We performed a least-squares fit of the HG light profiles (NW and SE sides) by using an exponential law given by
$$\mu(R)= \mu_{0} + 1.086 \times R/r_{h}$$
where $R$ is the galactocentric distance, and $\mu_{0}$ and $r_{h}$ are the central surface brightness and the scale length of the disk. The best-fit values of the structural parameters are $\mu^{NW}_{0}=22.21 \pm 0.05$~mag/arcsec$^2$ and $r^{NW}_{h}=1.89 \pm 0.04$~arcsec for the NW profile, and $\mu^{SE}_{0}=22.07 \pm 0.10$~mag/arcsec$^2$ and $r^{SE}_{h}=1.41 \pm 0.05$~arcsec for the SE profile. For $R\le 2$~arcsec there is an excess of light of about 0.2~mag, which is quite symmetric with respect to the center, contrary to what is observed at larger radii. This feature could be related to a small bulge.
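A fit of this kind is straightforward to reproduce; a minimal SciPy sketch (the $R$ and $\mu$ arrays below are placeholders generated from the NW best-fit values, not the measured profile):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def exp_disk(R, mu0, rh):
    # mu(R) = mu0 + 1.086 * R / rh, in mag/arcsec^2
    return mu0 + 1.086 * R / rh

R = np.linspace(2.0, 8.0, 13)    # placeholder radii [arcsec]
mu = exp_disk(R, 22.21, 1.89)    # placeholder NW profile
(mu0, rh), cov = curve_fit(exp_disk, R, mu, p0=[22.0, 2.0])
print(mu0, rh)                   # recovers 22.21 and 1.89
\end{verbatim}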
Along the polar structure the surface brightness profile is twice as extended as that along the HG, out to $R\sim20$~arcsec ($\sim20$~kpc). The peak of light corresponding to {\bf Z} is less than one magnitude fainter than the galaxy center.
\begin{figure*}
\centering
\includegraphics[width=17cm]{PRG_g_zoom.jpg}
\caption{VST images of FCSS~J033710.0-354727 in the $g$-band shown with different contrasts in order
to emphasize the morphology of the galaxy close to the center. The maximum level increases from left to right. Labels are the same as in Fig.~\ref{PRG}. The three circles indicate the bright blobs of light along the northern arm of the polar structure. The crosses in each panel "follow" the bright path of the polar structure up to the galaxy center (see Sec.~\ref{phot} for details). North is up and East is on the left.}
\label{PRG_zoom}
\end{figure*}
\begin{figure*}
\includegraphics[width=9cm]{ellipse_mu.jpg}
\includegraphics[width=9cm]{ellipse_fit1.jpg}
\caption{Left panel - Azimuthally averaged surface brightness profiles as a function of log(R), derived from the isophote fit. $R$ is the isophote major axis. Data are for the $g$-band image (blue dots) and the $i$-band (red dots). The dashed line delimits the regions where the main components (HG and polar ring) of the galaxy structure are located. Right panel - Average profiles of P.A. (top panel) and ellipticity (bottom panel) as a function of log(R).}
\label{ellipse}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=8cm]{res_g_ex.jpg}
\caption{ Residual image obtained by subtracting off the 2D model derived from the fit of isophotes to the original image in the $g$ band. The image size is $34\times42$~arcsec ($\sim 34\times42$~kpc).}
\label{res}
\end{figure}
\begin{figure*}
\includegraphics[width=9cm]{profg_HG_fit.jpg}
\includegraphics[width=9cm]{profg_PR.jpg}
\caption{Left panel - Folded light profiles in the $g$ band along the major axis of the central spheroid. The continuous and dashed lines are the results of the exponential fits to the light distribution for the NW (circles) and SE (triangles) sides respectively (see Sec.~\ref{phot} for details.)
Right panel - Light profiles in the $g$ band along the polar structure.
The two directions at $P.A.= 8^\circ$ and $P.A.= 30^\circ$ are chosen to intersect the two bright knots {\bf Y} and {\bf Z}, observed on the north arm, respectively (see also Fig.~\ref{PRG} left panel).}
\label{profili}
\end{figure*}
{\it Integrated magnitudes and colors - } Fig.~\ref{ellipse_col} shows the isophotal $g-i$ color profile. The central spheroid, for $R\le1.6$~arcsec, is redder than the polar structure, having an average color $g-i \sim 0.58$~mag. For $R >1.6$~arcsec, the color profile has a steep gradient towards bluer colors, reaching a value of $g-i \sim 0.2$~mag at $R\sim 10$~arcsec.
We have derived the integrated magnitudes and $(g-i)$ colors in circular apertures\footnote{The radius of each circular aperture is set to include two times the peak of light.}
of the center of the galaxy, of the two bright knots {\bf Y} and {\bf Z}, and of all the bright objects around the galaxy (see left panel of Fig.~\ref{PRG}). Values are listed in Table~\ref{mag}. As already shown by the light profiles, the bright knot {\bf Z} on the NE side is only 0.5 magnitude fainter than the galaxy center.
The center of the galaxy is redder than the two knots on the North side of the polar structure, and it has $(g-i) = 0.55$~mag.
The closest knot to the central spheroid, labelled as {\bf Y}, has $(g-i) = 0.35$~mag, while the outer and more luminous knot {\bf Z} of the polar structure has bluer color $(g-i) = 0.13$~mag.
The integrated colors of the bright objects observed around FCSS~J033710.0-354727 have a "bimodal" distribution. The galaxy labelled as {\bf A} and the two bright objects {\bf B} and {\bf G} have $(g-i) \ge 1$~mag, while for all the other sources we measured $0.3 \le (g-i) \le 1$~mag. In particular, the objects {\bf D}, {\bf E} and {\bf F}, located on NW region of FCSS J033710.0-354727, have $(g-i)$ colors comparable with the range of colors derived for the polar structure.
From the $g-i$ colors, by using the stellar population synthesis model GISSEL\footnote{\it Galaxies Isochrone Synthesis Spectral Evolution Library} \citep{Bru03}, we have estimated the mass-to-light ratio (M/L) for the central galaxy and the polar structure, in order to constrain the total stellar mass of each component. Assuming a simple stellar population with solar metallicity, the models predict $M/L \sim 0.6$ for the central galaxy and $M/L \sim 0.3 - 0.6$ for the polar structure. From the total magnitudes in the $g$ band (see Table~\ref{mag}), the stellar mass in the HG is $M^{HG} \sim 2 \times\ 10^{8}$~M$_{\odot}$ and in the polar structure it is $M^{PR} \sim 1 \times\ 10^{8}$~M$_{\odot}$. These should be considered as lower limits for the total baryonic mass, since we do not have any information about the gas content of this galaxy, which is typically large in PRGs, from $10^{8}$ to $10^{10}$~M$_{\odot}$ \citep[see][and references therein]{Iod2014}. In particular, since the gas is usually associated with the polar structure, the baryonic mass of this component in FCSS~J033710.0-354727 could be even higher. Thus, as a lower limit for the stellar mass ratio between the central disk and the polar structure we find 3:1 to 2:1.
\begin{figure}
\centering
\includegraphics[width=8cm]{ellipse_col.jpg}
\caption{Azimuthally averaged $g-i$ color profile as a function of log(R), derived from the isophote fit. $R$ is the isophote major axis. The dashed line delimits the regions where the main components (HG and polar ring) of the galaxy structure are located.}
\label{ellipse_col}
\end{figure}
\begin{table*}
\begin{minipage}[t]{180mm}
\caption{\label{mag}Integrated magnitudes and colors for regions in FCSS J033710.0-354727 in circular apertures.}
\begin{tabular}{ccccccc}
\hline\hline
Region & $\alpha$ & $\delta$ & R & $m_{g}$ & $m_{i}$ & $g-i$ \\
& [sec] & [arcsec] & [arcsec] & [mag] & [mag] & [mag]\\
(1) & (2) & (3) & (4) & (5) & (6) & (7)\\
\hline
\hline
{\bf X} & 09.93 & 27.15 & 1.05 & $21.14\pm 0.02$ & $20.61\pm 0.02$ & $0.53\pm0.04$\\
{\bf Y} & 10.07 & 24.71 & 1.05 & $21.98\pm0.03$ & $21.63\pm0.03$ & $0.35\pm0.06$\\
{\bf Z} & 09.94 & 20.17 & 1.47 & $21.64\pm0.02$ & $21.53\pm0.03$ & $0.11\pm0.05$\\
{\bf A} & 09.67 & 43.39 & 2.31 & $22.57\pm0.04$ & $21.30\pm0.02$ & $1.27\pm0.05$\\
{\bf B} & 10.38 & 35.53 & 1.89 & $23.68\pm0.07$ & $22.65\pm0.06$ & $1.03\pm0.13$\\
{\bf C} & 10.41 & 19.29 & 1.47 & $23.95\pm0.08$ & $23.01\pm0.07$ & $0.94\pm0.15$\\
{\bf D} & 09.69 & 11.61 & 1.89 & $23.75\pm0.08$ & $23.49\pm0.12$ & $0.26\pm0.2$\\
{\bf E} & 09.55 & 15.28 & 1.89 & $23.16\pm0.06$ & $22.48\pm0.06$ & $0.68\pm0.12$\\
{\bf F} & 09.30 & 23.13 & 1.47 & $24.00\pm0.08$ & $23.41\pm0.10$ & $0.60\pm0.2$\\
{\bf G} & 09.00 & 24.79 & 2.31 & $23.00\pm0.05$ & $21.93\pm0.04$ & $1.07\pm0.09$\\
\hline
\end{tabular}
\end{minipage}
\smallskip
{\em Col.~1}: Region of FCSS J033710.0-354727 labelled in Fig.~\ref{PRG}. {\em Col.~2} and {\em Col.~3}: Celestial coordinates of the center of each circular aperture, see left panel of Fig.~\ref{PRG} for reference. {\em Col.~4} Radius of the circular aperture in arcsec. {\em Col.~5} and {\em Col.~6}: Integrated magnitudes in the $g$ and $i$ bands corrected for the Galactic Extinction. {\em Col.~7} Integrated $g-i$ color.
\end{table*}
\section{Results: the Ongoing Formation of a Wide Polar Ring/Disk}
The deep exposures in the $g$ and $i$ bands, combined with the high angular resolution of the OmegaCAM at VST, allow us to carry out the first detailed photometric analysis of the background galaxy FCSS~J033710.0-354727 at $z\sim0.05$, in the field of the Fornax cluster.
The main results obtained in the present work (see Sec.~\ref{phot}) show that:
\begin{itemize}
\item the system is characterized by a central component, surrounded by a warped ring-like structure (see Fig.~\ref{PRG} and Fig.~\ref{PRG_zoom});
\item the central component is a disk with an exponential surface brightness profile;
\item the polar structure is 2 times more extended than the central disk. It crosses the galaxy center, along the North-South direction, drawing a spiral loop throughout the nucleus. It is characterised by two bright knots, one of them ({\bf Z}) having a luminosity almost comparable to that of the galaxy center;
\item the central galaxy is redder ($g-i \sim 0.55$~mag) than the polar structure ($g-i \sim 0.13 - 0.4$~mag);
\item the integrated colors of the bright objects detected around FCSS~J033710.0-354727 have a "bimodal" distribution. Only those located on the NW region have $(g-i)$ colors comparable with those derived for the polar structure (see Table~\ref{mag}), thus they can be considered as features "related" to the galaxy.
\end{itemize}
The new observations and analysis are in favor of the classification of this galaxy as a PRG. In particular, the whole morphology and the large polar extension suggest that this system can be considered a good candidate for a wide polar ring/disk galaxy, like NGC~4650A \citep{Iod02}. In FCSS~J033710.0-354727, the warped geometry of the polar structure and the presence of bright knots along its light distribution, as well as the several debris observed on the NW side with similar colors, suggest that the polar structure is still forming.
Given their high luminosity, comparable with that of the galaxy center, the two knots {\bf Y} and {\bf Z} (see Fig.~\ref{PRG}) could be the remnant of a companion galaxy that is disrupting in the potential of the central disk, which is 2-3 times more massive than the accreting object. This mechanism, i.e. the tidal accretion of material from outside, is one of the possible formation scenarios proposed for PRGs \citep[see][as a review]{Combes2014}. In this framework, taking into account the small stellar mass ratio between the central disk and the polar structure (3:1 - 2:1), and the high inclination of the accreting material, numerical simulations \citep{Bou03} are able to form a massive and extended polar ring/disk as observed in FCSS~J033710.0-354727. By comparing the snapshots of the simulation with the observed morphology of FCSS~J033710.0-354727, we suggest that this kind of gravitational interaction is still at work; thus we are looking at the intermediate stage of the PRG formation, at an epoch of about 2~Gyr.
Alternatively, FCSS~J033710.0-354727 might also be a post-merger system, where two big galaxies have already coalesced into a single system, with two tidal tails around it and an accumulation of mass at the tip of the northern tail, similar to NGC~7252 \citep{Hibb95}.
One observational fact in favour of the tidal accretion process, instead of the major merger, is the large mass of the bright knot {\bf Z}, not typically found in tidal tails, together with the "S-shape" feature that connects the outer parts of the polar structure to the galaxy center (see right panel of Fig.~\ref{PRG}), which is more similar to an accreting loop than to a tidal tail remnant.
Kinematic measurements are needed, not only to confirm FCSS~J033710.0-354727 as a PRG, but also to discriminate between the two formation mechanisms. In particular, if this galaxy is the remnant of a major merger between two disk galaxies, we expect to find the central object dominated by random motions \citep{Bou03}.
In conclusion, FCSS~J033710.0-354727 is a peculiar system at $z\sim0.05$ resulting from the interaction of two massive galaxies, which could form a wide polar ring/disk galaxy.
As discussed in Sec.~\ref{intro}, the few PRGs studied at $z \ge 0.05$ show a well-formed polar structure, without any tail or ripple that could suggest an ongoing interaction event.
The main result of this work is that, to date, FCSS~J033710.0-354727 is the most distant {\it PRG in formation} for which a detailed photometric analysis has been performed: the intermediate stage "captured" for this object allows us to derive an important constraint on the mass ratio of the two interacting galaxies, which is hard to estimate, or quite uncertain, in a final remnant galaxy.
\begin{acknowledgements}
We are very grateful to the anonymous referee for his/her comments and
suggestions which helped us to improve and clarify our work.
This work was supported by the PRIN-INAF "Galaxy Evolution with the VLT Survey Telescope (VST)" (PI A. Grado). M. Cantiello acknowledges support from PO FSE Abruzzo 2007-2013 (PO 2012/2013).
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Let $\Gamma$ be
a finitely generated group
and let $S$ be
a finite set of generators
for $\Gamma$.
The group $\Gamma$
embeds
as a cocompact,
discrete subgroup
in the automorphism group
of its Cayley graph
with respect to $S$,
which is
a totally disconnected, locally compact group.
This note
begins the examination of
common features of
the class $\mathcal{F}(\Gamma)$
of all embeddings
of a fixed finitely generated group $\Gamma$
as a discrete, cocompact subgroup
in some locally compact group.
By the previous paragraph,
the automorphism group
of any
Cayley graph
of $\Gamma$
belongs to $\mathcal{F}(\Gamma)$.
In this article
we treat
the case
where
$\Gamma$
is virtually free
of finite rank
at least $2$.
Most of our results
will be about
the subclass $\mathcal{F}_{td}(\Gamma)$
of $\mathcal{F}(\Gamma)$
consisting of
cocompact embeddings
of $\Gamma$
into totally disconnected, locally compact groups;
see however Corollary~\ref{cor:tdlcG-envelope(freeG)}.
This program
was suggested
(in a less general form)
by George Willis
in~\cite[penultimate topic in Section~6]{can_form(aut(tdlcGs))}.
One of the common features
of the class $\mathcal{F}_{td}(\Gamma)$
suggested there
for examination
is the set
of values of the scale function,
whose definition we recall below,
with respect to
members of $\mathcal{F}_{td}(\Gamma)$
evaluated on $\Gamma$.
Furman
undertook
a related project
for lattices in semisimple (connected real) Lie groups
in~\cite{Mostow-Margulis-rig-lc-targets}.
He
proposed
to classify
all second countable, locally compact groups
which admit
a lattice embedding
(not necessarily cocompact)
of a given lattice $\Gamma$
in a semisimple connected, real Lie group
(\cite[p.~31]{Mostow-Margulis-rig-lc-targets})
and
solved this problem
if $\Gamma$ is
a lattice
in a simple Lie group
of higher rank
(\cite{Mostow-Margulis-rig-lc-targets}, Theorem~A).
He was also able
to classify
second countable, locally compact groups
which admit
lattice embeddings
which are
cocompact,
if $\Gamma$ is
an irreducible lattice
in either
a semisimple, connected, real Lie group
not locally isomorphic to $\mathrm{SL}_2(\mathbb{R})$
or
a cocompact lattice
in a group
locally isomorphic to $\mathrm{SL}_2(\mathbb{R})$
(\cite{Mostow-Margulis-rig-lc-targets},
Theorem~B and Theorem~C).
We now
return to
the question of
how one might
characterize
common features
of discrete, cocompact embeddings
of a given group $\Gamma$
into some
totally disconnected, locally compact group.
One way
in which
this may be done,
following the suggestion
by Willis
mentioned above,
is by restricting
the values
taken by
the scale functions
with respect to
the codomain
of the embeddings
on the image of $\Gamma$.
For
a totally disconnected, locally compact group $G$,
the value of
the \emph{scale function $s_G$}
at an automorphism,
$\alpha$,
of $G$
measures
minimal distortion
of compact, open subgroups
of $G$
under $\alpha$.
The scale function
is defined on
the set of automorphisms of $G$ by
the formula
\[
s_G(\alpha):=\min\{|\alpha(V)\colon\alpha(V)\cap V|\colon V\text{ a compact, open subgroup of }G\}\,.
\]
Note that
the minimum
above
is attained,
because
it is
taken over
a set of
positive integers.
A compact, open subgroup $O$
of $G$
is \emph{tidy for $\boldsymbol\alpha$}
if this minimum is attained at $O$.
The scale of
a group element,
$x$,
of $G$
is defined to be
the value of
the scale function
with respect to $G$
at conjugation by $x$.
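As a standard illustration, take $G=(\mathbb{Q}_p,+)$ and the automorphism $\alpha\colon x\mapsto p^{-1}x$. Every compact, open subgroup of $G$ is of the form $V=p^k\mathbb{Z}_p$, and $|\alpha(V)\colon\alpha(V)\cap V|=|p^{k-1}\mathbb{Z}_p\colon p^k\mathbb{Z}_p|=p$; hence every such $V$ is tidy for $\alpha$ and $s_G(\alpha)=p$, while $\alpha^{-1}(V)\subseteq V$ gives $s_G(\alpha^{-1})=1$.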
The collection of tidy subgroups
for an automorphism
and the scale function
are invariants,
which have been used
to answer various questions
on totally disconnected, locally compact groups;
see~\cite{can_form(aut(tdlcGs))}
for a survey.
The scale of an automorphism
of a locally compact, totally disconnected group
is an analogue
of the set of eigenvalues
of a linear transformation.
That there should be
any uniform bound
on the primes
dividing
values
of the scale
with respect to
the elements of $\mathcal{F}_{td}(\Gamma)$
is not clear,
even though
a single element of $\mathcal{F}_{td}(\Gamma)$
contributes only
a finite number
of prime factors
by Theorem~3.4
in~\cite{prime-factors(sF(cp.gen))=finite}.
The author
was surprised
to discover that
even a slightly stronger result
can be proved
quite easily
in the case when
$\Gamma$ is virtually
a free group
of rank at least $2$,
by putting together
work by
Lee Mosher, Michah Sageev, and Kevin Whyte,
and by Alexander Lubotzky;
see Corollary~\ref{cor:bound(div(scales))_virt-freeGs}.
It can be shown that
virtually abelian groups
embed cocompactly
only
in totally disconnected, locally compact groups
whose scale function
is identically $1$;
hence
a result
analogous to Corollary~\ref{cor:bound(div(scales))_virt-freeGs}
does hold
for virtually infinite cyclic groups
also.
As a further remark,
by the results
of Furman
mentioned above,
groups $\Gamma$
to which
one of his Theorems~A, B, or~C
applies
also
embed cocompactly only
in a totally disconnected, locally compact group $G$
whose scale function
is identically~$1$,
\underline{provided}
the group~$G$
is second countable.
The method of proof used
in this note
suggests that
replacing
the use of Theorem~\ref{thm:virtF,cocp latt}
by
an appeal to the main result of~\cite{large_scale-Geo(prod(trees))}
will prove
a generalization
of Corollary~\ref{cor:bound(div(scales))_virt-freeGs}
to groups $\Gamma$
which are quasi-isometric to
a product of finitely many trees,
once
a replacement for Lemma~\ref{lem:ramification_bound(SchottkyGs)}
can be found,
to bound
the quotient
of an action
of a group $\Gamma$
on a product of trees
in terms of properties of $\Gamma$.
Note that
the class of groups
which are quasi-isometric to
products of finitely many trees
contains not only
products of virtually free groups
but also
the finitely presented
simple groups
constructed by
Burger and Mozes;
see~\cite{fp-simpleGs+prod(trees)}.
\section{Conventions and outline of the paper}
In what follows,
$\Gamma$ will denote
a group
which is virtually free
of finite rank
at least $2$,
while
$F$ will denote
a free group
of finite rank
at least $2$.
We will
first
treat
the case of free groups
to obtain Theorem~\ref{thm:volume_bound(freeG-envelope)},
which gives more precise information
in this special case.
Then
we will deduce
the announced result
for virtually free groups,
Corollary~\ref{cor:bound(div(scales))_virt-freeGs},
as a corollary.
To ease discussion
of the circle of ideas
which form the topic of this paper,
we adopt
the following terminology.
Let $\Lambda$ and $G$
be locally compact groups.
In our applications,
$\Lambda$ will usually be
finitely generated
and discrete.
An injective homomorphism
$\varphi\colon \Lambda \to G$
such that
$\varphi(\Lambda)$ is
a closed, cocompact subgroup
in $G$
will be called an \emph{envelope of $\Lambda$}.
The group $G$
will also be called
an envelope of $\Lambda$
by abuse of language;
any envelope
of a compactly generated group
is compactly generated.
In Section~\ref{sec:quasi-action->action}
we will use
a quasi-isometric rigidity result
by
Lee Mosher, Michah Sageev, and Kevin Whyte
to see that
any envelope of $\Gamma$
acts cocompactly on a locally finite tree.
From this
we deduce in Section~\ref{sec:tree-envelopes}
that,
in terms of scale values,
$\Gamma$ has
a larger envelope
which is an automorphism group
of a locally finite tree.
Specializing
to free groups
in Section~\ref{sec:Schottky-bounds}
and using
Lubotzky's results
on Schottky groups
of automorphisms
of trees,
we bound
the geometry of
the underlying trees.
The required results
are then
immediate consequences.
In this note
we adhere to
the following conventions:
$0$ is a natural number.
Graphs
may have loops and multiple edges,
but edges
will not be given
an orientation,
except
on occasion of
applications
of the Bass-Serre theory
of groups acting on trees.
The reader
may consult
\cite{trees}, section~2.1
for a formal setup
of the terminology
and notation
for graphs.
An isometry of a tree
will be called
elliptic or hyperbolic
according to
whether it admits a fixed point
(which may be a geometric edge)
or not.
Conjugation by a group element $g$
is understood to be
the map $x\mapsto gxg^{-1}$.
The relations $\subset$, $\vartriangleleft$ {\it etc.\/} always imply
strict inclusion.
Any automorphism of a topological group will be assumed to be a homeomorphism.
\section{Envelopes of virtually free groups of finite rank act on bushy trees}
\label{sec:quasi-action->action}
Any envelope $G$
of a finitely generated group $\Lambda$
quasi-acts
on each of the Cayley graphs
of $\Lambda$.
If $\Gamma$ is
virtually free
of finite rank
at least $2$,
more can be said,
thanks to
the following theorem
by Lee Mosher, Michah Sageev, and Kevin Whyte.
The theorem
shows that
any envelope $G$
of $\Gamma$
is
a compact extension of
a cocompact subgroup of
the automorphism group
of a locally finite bushy tree $T$
(a tree $T$
is \emph{bushy}
if
each point of $T$
is a uniformly bounded distance
from a vertex
having at least $3$
unbounded complementary components).
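For instance, a regular tree in which every vertex has degree at least $3$ is bushy, while the bi-infinite line, that is, the regular tree of degree $2$, is not.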
This will enable us
to reduce to
the case where
the envelope $G$
is
the group $\Aut T_.$,
for such a tree $T$
in the next section.
\begin{theorem}[{\cite[Theorem~9]{quasi-Act>T-bound_val}}]
\label{thm:virtF,cocp latt}
Let $G$ be a locally compact topological group
which contains
a cocompact lattice
which is
virtually free of finite rank
at least $2$.
Then there exists a cocompact action of $G$
on a bushy tree $T$
of bounded valence,
inducing
a continuous, closed homomorphism $\tau\colon G\to \Aut T_.$
with compact kernel
and cocompact image.
\end{theorem}
\begin{remark}\label{rem:virtF,cocp latt}
~\\[-4ex]
\begin{enumerate}
\item
\label{rem:virtF,cocp latt(1)}
The statement
of Theorem~9
on page~125
in~\cite{quasi-Act>T-bound_val}
uses the word `proper'
in place of `closed'.
The proof
of Theorem~9
on page~161
in~\cite{quasi-Act>T-bound_val}
makes it clear that
the intended meaning
of proper
in this context
is closed.
\item
\label{rem:virtF,cocp latt(2)}
In Theorem~\ref{thm:virtF,cocp latt}
we can assume that
the action of $G$ on $T$ via $\tau$
is minimal,
replacing $T$ with
the minimal $G$-invariant subtree
if necessary.
Indeed,
the $G$-action
on the tree
constructed
in the course of
the proof of
Theorem~\ref{thm:virtF,cocp latt}
in \cite{quasi-Act>T-bound_val}
is already minimal.
As we are only interested
in groups containing lattices of positive rank,
such a minimal tree
will have no vertex of degree $1$.
\item
\label{rem:virtF,cocp latt(3)}
By Theorem~16
in~\cite{analog(CayleyGphs(topGs))},
totally disconnected envelopes
of a group
which is
virtually free
of finite rank
can be characterized as
those groups
whose rough Cayley graphs
are quasi-isometric
to a tree.
That tree
is bushy
if and only if
the rank
of free subgroups
of the cocompact discrete subgroup
is at least $2$.
\end{enumerate}
\end{remark}
\begin{corollary}\label{cor:tdlcG-envelope(freeG)}
Let $G$ be
an envelope
of a virtually free group
of finite rank
at least $2$.
Then the connected component of the identity
of $G$ is compact.
\end{corollary}
\pf{}
By Theorem~\ref{thm:virtF,cocp latt},
$G$ has
a continuous
homomorphism $\tau$
into the automorphism group
of a locally finite tree $T$
with compact kernel.
Since
$\Aut T_.$ is totally disconnected,
the connected component $G_0$ of the identity in $G$
is contained in
the kernel of $\tau$.
We conclude that
$G_0$ is relatively compact,
being contained in $\ker(\tau)$.
Since $G_0$ is closed as well,
we conclude that
$G_0$ is compact
as claimed.
\hspace*{\fill}$\square$
\section{Scale function on envelopes which are automorphism groups of bushy trees}
\label{sec:tree-envelopes}
The next three results
will show that values of the
scale function
on totally disconnected
envelopes of
the free group $F$
can be bounded
in terms of
values of the scale function
on envelopes
of $F$
which are
automorphism groups
of some locally finite, bushy tree $T$.
First,
we show that
whenever $G$ is
an envelope of $F$,
then
the codomain of the map $\tau$,
introduced in Theorem~\ref{thm:virtF,cocp latt},
is also an envelope of $F$.
\begin{proposition}\label{prop:larger-envelop}
Let $\varphi\colon F\to G$ be
an envelope of
a free group $F$
of finite rank
at least $2$.
Let $\tau$ be
the homomorphism
from $G$
into the automorphism group
of the tree $T$
provided by Theorem~\ref{thm:virtF,cocp latt}
and
let $\theta\colon F \to \Aut T_.$
be the composite of $\varphi$ and $\tau$.
Then
$\theta$ is injective
and
$\theta(F )$ is
a cocompact lattice in $\Aut T_.$.
Hence $\Aut T_.$
is an envelope of $F $.
\end{proposition}
\pf{}
The group $\theta(F )$
is a discrete subgroup of $\Aut T_.$,
because the kernel of $\tau$ is compact.
The kernel of $\theta$ is
a compact subgroup
of the discrete group $F $,
hence is finite.
Since we are assuming that
$F $ is a free group,
it is torsion free
and we conclude that
the kernel of $\theta$
is trivial
and $\theta$ is injective.
Since $F $ is cocompact in $G$
and $\tau(G)$ is cocompact in $\Aut T_.$
we conclude that
$\theta(F )$ is cocompact in $\Aut T_.$.
We have verified
all parts of our claim.
\hspace*{\fill}$\square$
The next lemma
shows that
the scale
of an element
does not change
if we apply
a homomorphism
that is
a perfect map
(which,
for a homomorphism
of groups,
means
a continuous, open, surjective homomorphism
with compact kernel).
\begin{lemma}\label{lem:proper_maps&scale}
Let $\pi\colon G\to \widehat{G}$
be a continuous, open, surjective homomorphism
with compact kernel
between totally disconnected, locally compact groups.
Let $\alpha$ be
an automorphism
of $G$
preserving $\ker(\pi)$
and
$\widehat{O}$ a subgroup of $\widehat{G}$
tidy for
the automorphism $\widehat{\alpha}$
induced by $\alpha$ on $\widehat{G}$.
Then
the group
$\pi^{-1}(\widehat{O})$
is tidy for $\alpha$
and
$s_G(\alpha)=s_{\widehat{G}}(\widehat{\alpha})$.
\end{lemma}
\pf{}
Put $O:=\pi^{-1}(\widehat{O})$.
Since $\pi$ is continuous
and
the kernel of $\pi$ is compact,
the group $O$
is compact and open.
The conclusion follows from
the definition of the scale of $\alpha$
as a minimum,
the equation
\[
|\alpha(O)\colon \alpha(O)\cap O|=
|\widehat{\alpha}(\widehat{O})\colon \widehat{\alpha}(\widehat{O})\cap \widehat{O}|=
s_{\widehat{G}}(\widehat{\alpha})
\]
and Proposition~4.7 in~\cite{furtherP(s(tdG))}.
\hspace*{\fill}$\square$
The larger envelope
of $F$
obtained from Proposition~\ref{prop:larger-envelop}
does not have
smaller scale values,
as shown by
the next result.
\begin{corollary}\label{cor:upper_bound-Aut(T)}
Let $G$ be
a totally disconnected, locally compact group,
$T$ a locally finite, bushy tree
and
$\tau\colon G\to \Aut T_.$
a continuous, closed homomorphism
with compact kernel.
Then
\[
s_G(x)\leq s_{\Aut T_.}(\tau(x))\quad \text{for all }x\in G \,.
\]
\end{corollary}
\pf{}
Let $x$ be an element of $G$.
Put $\widehat{G}:=\tau(G)$,
let $\pi\colon G\to \widehat{G}$
be the map
induced by $\tau$
and
let $\alpha$ be conjugation by $x$.
Then
$\pi$ and $\alpha$
satisfy
the conditions
on the maps
with the same names
in Lemma~\ref{lem:proper_maps&scale}
and
$\widehat{\alpha}$
is conjugation by $\pi(x)$.
We conclude that
$s_G(x)= s_{\widehat{G}}(\pi(x))$.
Furthermore,
renaming $\alpha$ as
conjugation by $\tau(x)$
and
applying
Proposition~4.3 in \cite{furtherP(s(tdG))}
with $H:=\widehat{G}$,
we deduce that
$s_{\widehat{G}}(\pi(x))\le s_{\Aut T_.}(\tau(x))$.
Combining
these two relations
we obtain
$s_G(x)\le s_{\Aut T_.}(\tau(x))$.
Since the choice of $x$ was arbitrary,
the claim follows.
\hspace*{\fill}$\square$
The scale function
of a closed subgroup,
$G$,
of the automorphism group
of a locally finite tree,
$T$,
can be determined geometrically,
as seen in
the next result,
Lemma~\ref{lem:scale(hyp-iso(tree)); prod-formula}.
The geometric description
of the
value of
the scale function
with respect to $G$
at a hyperbolic isometry,
$h$ say,
in $G$
given in Lemma~\ref{lem:scale(hyp-iso(tree)); prod-formula}
uses
ramification indices
of a subtree $T_{G,\epsilon}$
of $T$,
which
depends on
the group $G$
and
the attracting end,
$\epsilon$,
of $h$.
The tree $T_{G,\epsilon}$
is defined
as follows:
Given an end $\epsilon$
of $T$,
the tree $T_{G,\epsilon}$\label{T_G,epsilon}
is the union
of the axes
of all hyperbolic isometries in $G_\epsilon$.
\begin{lemma}[{\cite[Lemmas~26 and~31]{direction(aut(tdlcG))}}]
\label{lem:scale(hyp-iso(tree)); prod-formula}
Let $G$ be a closed subgroup of the automorphism group of
a locally finite tree $T$.
\begin{enumerate}
\item
Let $g$ be an elliptic isometry of $T$ in $G$.
Then $g$ is topologically periodic
and $s_G(g)=1=s_G(g^{-1})$.
\item
Let $h$ be a hyperbolic isometry of $T$
in $G$
with attracting end $\epsilon$,
of translation length $n$ say.
Let $q_1+1,\ldots,q_n+1$
be the ramification indices of
$n$ consecutive vertices
on the axis of $h$
with respect to the tree $T_{G,\epsilon}$.
Then $s_G(h)= \prod_{i=1}^n q_i$.
\end{enumerate}
\end{lemma}
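To illustrate the product formula
in a special case
(our own example,
anticipating the computation of $T_{G,\epsilon}$
carried out in the proof of
Proposition~\ref{prop:explicit_bounds} below):
if $G=\Aut T_.$
for a homogeneous tree $T$
of degree $d\ge 3$
and $h$ is a hyperbolic isometry
of translation length $1$,
then $T_{G,\epsilon}=T$,
every vertex on the axis of $h$
has ramification index $d=q+1$,
and the lemma gives
\[
s_G(h)=q=d-1.
\]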
The geometric description
of the scale function
given by
Lemma~\ref{lem:scale(hyp-iso(tree)); prod-formula}
will be useful
in the next section.
\section{Upper bounds for scales on envelopes of Schottky lattices}
\label{sec:Schottky-bounds}
The isomorphic image
$\theta(F )$
of $F$
obtained
in Proposition~\ref{prop:larger-envelop},
is
a finitely generated, torsion free, discrete subgroup
of $\Aut T_.$.
A finitely generated, torsion free, discrete subgroup
of the automorphism group of a tree
will be called
a \emph{Schottky subgroup of $\Aut T_.$}.
This terminology
was coined by
Alexander Lubotzky
in section~1
of the paper~\cite{l.rank1},
where he describes
the structure
of such groups.
Let $\Gamma$ be a Schottky subgroup of $\Aut T_.$.
Then $\Gamma$
acts freely on $T$
and hence
is a free group;
it has finite rank,
because it is
finitely generated
by assumption.
Bass-Serre theory
(\cite{trees}, chapter~I or \cite{CovGG})
provides us
with a nice basis of $\Gamma$,
via its identification with
the fundamental group
of the trivial graph of groups
on the quotient graph $X:=\Gamma \backslash T$.
Any basis
of a Schottky subgroup,
$\Gamma$,
of $\Aut T_.$
that is
obtained from
Bass-Serre theory
in the manner explained below
will be called a \emph{Schottky basis}
in what follows.
We recall now
how these bases
of $\Gamma$
are obtained,
following the proof
of Proposition~1.7 in~\cite{l.rank1}.
This
construction
of Schottky bases
for the group $\Gamma$
will involve
several choices,
only some of which
will be of interest
for us
later;
see Lemma~\ref{lem:essential-prop(Schottky_bases)}.
Choose
an orientation
of the edges of $X$.
Any choice
of a maximal subtree,
$Y$ say,
in $X:=\Gamma\backslash T$
defines
the set of edges $X\smallsetminus Y$,
belonging to $X$
but not to $Y$,
which freely generate
the fundamental group
of the graph $X$.
The set of edges $X\smallsetminus Y$
corresponds to
some set of isometries
of the universal covering tree $T$
of $X$.
This correspondence
is defined
in two stages.
In the first stage
defining this correspondence,
choose
connected subgraphs
$Y_T\subseteq X_T$
in $T$
such that
the canonical projection,
$p\colon T\to X$,
is bijective on
the set of vertices of $Y_T$
and
the set of edges of $X_T$.
Make this choice
in such a way
that the origins of
all the edges of $X_T\smallsetminus Y_T$
belong to $Y_T$.
Such a pair $(Y_T,X_T)$
is called
a `\emph{lifting}' or `\emph{opening}'
of $X$
in the literature.
For later use,
denote
the inverse image
of a vertex,
$v$,
in $Y$
under $p$
that is
contained in $Y_T$
by $\widetilde{v}$
and
the inverse image
of an edge,
$e$,
in $X$
under $p$
that is
contained in $X_T$
by $\widetilde{e}$.
Also,
for an edge,
$e$,
denote by
$o(e)$ and $t(e)$
the origin and terminal vertices
of $e$.
For the second stage
defining
the set of
isometries
corresponding to $X\smallsetminus Y$
choose,
for each edge $e$
in $X\smallsetminus Y$,
an element,
$\gamma_e$,
of $\Gamma$
such that
$t(\widetilde{e})=\gamma_e.\widetilde{t(e)}$.
This construction
is illustrated
in a simple example
below,
where $\Gamma$
is the free group
on~$3$ generators.
To the left,
we see
a possible
quotient graph $\Gamma\backslash T$.
The choice
of a maximal subtree,
$Y$,
in $\Gamma\backslash T$
is indicated by
labeling
the edges
outside $Y$
by the symbols $e_1$, $e_2$ and~$e_3$.
To the right,
we see
what
the graph $X_T$
of an opening
of the graph $\Gamma\backslash T$
determined by
the choice
of orientation
and maximal subtree
looks like.
Only the orientation
of the edges
$e_1$, $e_2$ and $e_3$
matters,
and
is indicated by
arrows
in both pictures.
The vertices
of the graph $X_T$,
that do not belong to
the graph $Y_T$,
are indicated by
hollow circles.
All other vertices
in both pictures
are indicated by
filled circles.
\begin{picture}(160,80)(-10,10)
\thicklines
\put(30,55){\circle{30}}
\put(14,57){\vector(0,-1){5}}
\put(3,54){$e_1$}
\put(45,55){\circle*{5}}
\put(45,55){\line(1,0){30}}
\put(75,55){\circle*{5}}
\put(75,55){\line(1,0){30}}
\put(105,55){\circle*{5}}
\put(75,55){\oval(60,60)[b]}
\put(75,25){\vector(1,0){5}}
\put(71,17){$e_3$}
\put(90,55){\oval(30,30)[t]}
\put(89,70){\vector(1,0){3}}
\put(85,75){$e_2$}
\end{picture}
\begin{picture}(190,80)(0,15)
\thicklines
\put(0,55){$t(\widetilde{e}_1)$}
\put(25,58){\circle{5}}
\put(50,60){\oval(50,30)[t]}
\put(52,75){\vector(-1,0){5}}
\put(47,80){$\widetilde{e}_1$}
\put(50,55){$\widetilde{t(e_1)}$}
\put(75,60){\circle*{5}}
\put(75,60){\line(1,0){30}}
\put(105,60){\circle*{5}}
\put(105,60){\line(1,0){30}}
\put(135,60){\circle*{5}}
\put(140,55){$\widetilde{t(e_2)}=\widetilde{t(e_3)}$}
\put(105,60){\oval(60,60)[bl]}
\put(105,30){\line(1,0){30}}
\put(105,30){\vector(1,0){5}}
\put(101,19){$\widetilde{e}_3$}
\put(137,30){\circle{5}}
\put(140,19){$t(\widetilde{e}_3)$}
\put(135,60){\oval(60,30)[tl]}
\put(119,75){\vector(1,0){3}}
\put(137,75){\circle{5}}
\put(115,80){$\widetilde{e}_2$}
\put(140,80){$t(\widetilde{e}_2)$}
\end{picture}
\\
The set $\{\gamma_e\colon e\in X\smallsetminus Y\}$,
which we obtain
from all the choices
we made,
is a free set
of generators
of~$\Gamma$
by \cite[\S 5]{trees},
which we call
the Schottky basis of $\Gamma$
determined by these choices.
The Schottky basis $\{\gamma_e\colon e\in X\smallsetminus Y\}$
consists of
hyperbolic isometries of $T$.
This is true
because
the set of edges
of the graph $X_T$
is a fundamental domain
for the $\Gamma$-action
on the edges.
The latter argument
works
for any set $\{\gamma_e\colon e\in X\smallsetminus Y\}$
of elements
of a group $\Gamma$
constructed
in the way
described above,
as long as $\Gamma$
acts without inversion
of edges.
In the case at hand,
where $\Gamma$
is a free group,
it may
alternatively
be seen
by using that
$\Gamma$ must act
freely on $T$.
Below,
we will need
the following information
on the axes
(and translation lengths)
of elements
in the Schottky basis $\{\gamma_e\colon e\in X\smallsetminus Y\}$.
The axis
of the element $\gamma_e$
passes through
the vertices $\widetilde{t(e)}$ and $t(\widetilde{e})=\gamma_e.\widetilde{t(e)}$,
whose distance
is therefore
the translation length
of $\gamma_e$.
This may be seen
by applying
Lemma~1.2
in~\cite{l.rank1}
with
$\gamma$ equal to $\gamma_e$,
$x$ equal to $\widetilde{t(e)}$
and
$y$ equal to the first vertex
on the shortest path
joining $\widetilde{t(e)}$ to $t(\widetilde{e})$
(which, by the way,
is contained in $X_T$).
The axis of $\gamma_e$
also passes through $o(\widetilde{e})$,
because
by our choice
of a lift for $e$,
the vertex $t(\widetilde{e})$
is incident to
only one edge of $X_T$.
This finishes
the description
of the construction of
the set of
Schottky bases for $\Gamma$,
save for
two final remarks
for readers
familiar with~\cite{l.rank1}.
First,
in~\cite{l.rank1},
a Schottky basis
is supposed to
satisfy
an apparently stronger condition
than the one
we have verified here,
see Definition~1.4 in~\cite{l.rank1}.
But,
as stated in
the proof
of Proposition~1.7 in~\cite{l.rank1},
the required labeling
of the axes of
the elements
in the basis $\{\gamma_e\colon e\in X\smallsetminus Y\}$
is obtained
if we choose,
for every $e\in X\smallsetminus Y$,
the labeling
of the axis of $\gamma_e$
determined by
labeling $\widetilde{t(e)}$
with the symbol $x_1$.
Second,
the construction above
gives
all Schottky bases
for $\Gamma$
in the sense
defined in
Definition~1.4 in~\cite{l.rank1}.
To see this,
note that
the fundamental domain~$F$
from Proposition~1.6 in~\cite{l.rank1}
defines an opening
for $\Gamma\backslash T$,
such that
the given Schottky basis $\{\gamma_1,\ldots,\gamma_l\}$
is one of the possible Schottky bases
obtainable from
that opening
in the sense
and the way
described
above.
The next result
is obvious from
the discussion
of the construction
of a Schottky basis
above.
\begin{lemma}\label{lem:essential-prop(Schottky_bases)}
Let $T$ be a tree.
Suppose that
$\{\gamma_1,\ldots,\gamma_l\}$
and
$\{\gamma_1',\ldots,\gamma_l'\}$
are
two Schottky bases of
the same Schottky subgroup,
$\Gamma$,
in $\Aut T_.$,
obtainable from
the same choice
of maximal subtree,
$Y$ say,
in $\Gamma\backslash T$.
Let $\{e_1,\ldots, e_l\}$
be the set of edges
of $\Gamma\backslash T$
outside $Y$.
Then there are
permutations,
$\pi$ and $\pi'$,
of the set
$\{1,\ldots,l\}$
such that
for each integer $i$
with $1\le i\le l$
the restriction of
the canonical projection $p\colon T\to \Gamma\backslash T$
induces bijections
from
$(1)$ onto~$(0)$
and
from
$(1')$ onto~$(0)$
where:
\begin{itemize}
\item[$(0)$]
is
the set
of vertices
on the shortest path
in
$Y$
connecting
the two vertices
of the edge $e_i$;
\item[$(1)$]
is
the set
of vertices
of a fundamental domain
for the vertices
on the axis
of $\gamma_{\pi(i)}$
modulo $\langle\gamma_{\pi(i)}\rangle$;
\item[$(1')$]
is
the set
of vertices
of a fundamental domain
for the vertices
on the axis
of $\gamma'_{\pi'(i)}$
modulo $\langle\gamma'_{\pi'(i)}\rangle$.
\end{itemize}
\end{lemma}
Let $T$ be a tree
and
$\Gamma$ be
a Schottky subgroup of $\Aut T_.$.
In what follows,
we will only be interested in
those properties
of a Schottky basis,
that
are already determined by
the quotient graph $\Gamma\backslash T$
and
the set of
collections
of vertices described in~(0),
indexed by
the set $\{e_1,\ldots,e_l\}$ of edges
outside a maximal subtree,
$Y$,
of $\Gamma\backslash T$.
Therefore,
we will allow ourselves
to speak,
on occasion,
somewhat inaccurately,
of
`the Schottky basis determined by the maximal subtree $Y$'.
We now
return to
the Schottky subgroup $\theta(F )$
of $\Aut T_.$
obtained
in Proposition~\ref{prop:larger-envelop}.
The group $\theta(F )$
is also a (cocompact) lattice
in $\Aut T_.$.
It follows that
the quotient graph
$X:=\theta(F )\backslash T$
is finite.
Using
part~\ref{rem:virtF,cocp latt(2)}
of Remark~\ref{rem:virtF,cocp latt},
we may assume that
$T$ has no vertex of degree $1$.
This implies that
$X$ has no vertex of degree $1$ either.
We will call a Schottky subgroup of $\Aut T_.$
that is also a lattice
a \emph{Schottky lattice}.
We now derive our main lemma on Schottky lattices.
\begin{lemma}\label{lem:ramification_bound(SchottkyGs)}
Any tree $T$
with all vertices
having degree at least $3$,
which admits
a Schottky lattice on $n$ generators,
belongs to a finite list of
universal covering trees
of finite graphs.
In particular,
there are integers $K_n$ and $l_n$
such that
\begin{enumerate}
\item
the ramification indices of $T$
are at most $K_n$;
\item
every member of a Schottky basis
of a Schottky lattice on $T$
has translation length at most $l_n$.
\end{enumerate}
\end{lemma}
\pf{}
It suffices to show
that there are
only finitely many graphs
with fundamental group
free of rank~$n$
with all vertices of degree at least $3$.
We will show this
by deriving
upper bounds
for the number
of vertices
and edges
of such a graph.
Let $X$ be such a graph.
It has
$n$ edges outside a maximal subtree.
Denote by
$e$ the total number of edges
and by
$v$ the number of vertices
of $X$.
Since
the number of vertices
of a tree
exceeds
the number of its edges
by one,
we have $n=e-v+1$.
The sum
of the degrees
of vertices
in $X$
is at most $2e$,
because
at most $2$
different vertices
belong to any edge.
On the other hand
that sum
is at least $3v$
by our assumption
on the vertex degrees.
It follows that
$e\geq 3/2\,v$.
Substituting
this inequality
into the equation
stated in the last paragraph
we obtain
the upper bound $v\le 2(n-1)$
for $v$
in terms of $n$.
The equation $n=e-v+1$
implies that
$e=n+v-1$,
which is seen to be
bounded above
by $3(n-1)$
using the upper bound $v\le 2(n-1)$,
derived above.
This
completes the proof.
\hspace*{\fill}$\square$
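As an illustration
(not needed for the proof):
for $n=2$
the bounds read $v\le 2$ and $e\le 3$,
and exactly three graphs qualify,
namely
the wedge of two loops
at a single vertex
($v=1$, $e=2$),
the theta graph,
consisting of two vertices
joined by three parallel edges
($v=2$, $e=3$),
and the dumbbell,
consisting of two vertices
joined by one edge,
each carrying a loop
($v=2$, $e=3$);
in each case
$e-v+1=2$
and all vertex degrees
are at least $3$.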
Finally,
we derive our main result.
\begin{theorem}
\label{thm:volume_bound(freeG-envelope)}
Let $F$ be a free group
of finite rank
at least $2$.
There is a constant $V(F )$
such that
for all envelopes
$\varphi\colon F \to G$
of $F $
into a totally disconnected group,
there is a free generating set $S_\varphi$
in $F $
such that
\[
\prod_{t\in S_\varphi} s_G(\varphi(t))\leq V(F )\,.
\]
Furthermore,
the set of primes
dividing one of the numbers
\[
s_G(x);\quad \text{$G$ an $F $-envelope, $x\in G$}
\]
is finite.
\end{theorem}
\pf{}
Proposition~\ref{prop:larger-envelop}
and
Corollary~\ref{cor:upper_bound-Aut(T)}
show that,
for both claims,
we may assume that
$G$ runs through
automorphism groups
of bounded valence, bushy trees,
which are envelopes of $F$.
As pointed out
in Remark~\ref{rem:virtF,cocp latt},
we may assume that
the corresponding
tree actions
are minimal,
hence that
none of these trees has
a vertex of degree $1$.
Since
$F$ is torsion free,
it acts
on the trees
in an orientation preserving way.
Hence,
we may also assume that
none of the trees
has vertices
of degree $2$.
As explained
at the beginning of this section,
this implies that
the image of $F$ under $\varphi$
is a Schottky lattice.
We choose $S_\varphi$
to be a Schottky basis
of $\varphi(F)$.
It follows then
from
Lemma~\ref{lem:scale(hyp-iso(tree)); prod-formula}
and
Lemma~\ref{lem:ramification_bound(SchottkyGs)}
that
we have
$V(F )\leq (K_n-1)^{nl_n}$
and
that no prime divisor of $s_{\Aut T_.}(x)$
can exceed $K_n$.
This shows our claim.
\hspace*{\fill}$\square$
The second property
in the above theorem
can also be obtained
for groups
which are just virtually free
of finite rank at least $2$.
\begin{corollary}\label{cor:bound(div(scales))_virt-freeGs}
Let $\Gamma$ be
virtually a free group of finite rank
at least $2$.
Then
the set of primes
dividing one of the numbers
\[
s_G(x);\quad \text{$G$ a totally disconnected $\Gamma $-envelope, $x\in G$}
\]
is finite.
\end{corollary}
\pf{}
Let $F$ be
a free subgroup
of finite index
in $\Gamma$
and
let $\iota\colon F\hookrightarrow \Gamma$
be the inclusion of $F$ in $\Gamma$.
If
$\varphi\colon \Gamma\to G$
is a $\Gamma$-envelope,
then
the composite $\varphi\circ\iota\colon F\to G$
is an $F$-envelope.
Therefore
the claim
follows immediately
from the second statement
of Theorem~\ref{thm:volume_bound(freeG-envelope)}.
\hspace*{\fill}$\square$
We now
make
terminology
and
notation
available,
to describe
the phenomenon
established
for free groups
of finite rank
at least~$2$
in
the first part of
Theorem~\ref{thm:volume_bound(freeG-envelope)}.
This
terminology
and
notation
will be used
in the next section.
\begin{definition}\label{def:scale-volumes}
Let $\Lambda$ be a finitely generated group
and
let $\varphi\colon \Lambda\to G$
be an envelope of $\Lambda$
with
totally disconnected, locally compact codomain $G$.
\begin{enumerate}
\item
If $S$ is
a finite set
of generators
for $\Lambda$,
the number
$\mathop\mathrm{vol}_\varphi(S):=\prod_{t\in S} s_G(\varphi(t))$
will be called
\emph{the scale volume of $S$ with respect to $\varphi$}.
\item
The number
$\mathop\mathrm{vol}_\varphi\langle\Lambda\rangle:=
\min\{\mathop\mathrm{vol}_\varphi(S)\colon \text{$S$ is finite and generates $\Lambda$}\}$
will be called
\emph{the scale volume of $\Lambda$ with respect to $\varphi$}.
\item
The number
$\mathop{s\text{-}\mathrm{vol}}(\Lambda):=
\sup\{\mathop\mathrm{vol}_\varphi\langle\Lambda\rangle\colon
\text{$\varphi$ is an envelope of $\Lambda$ with totally disconnected codomain}\}$
(which may be infinite)
will be called
\emph{the scale volume of $\Lambda$}.
\end{enumerate}
\end{definition}
Using
the terminology
just introduced,
the first part
of Theorem~\ref{thm:volume_bound(freeG-envelope)}
may be
restated as
`the scale volume
of a free group
of finite rank
at least $2$
is finite'.
\section{Explicit bounds on scales}\label{sec:explicit bounds}
In this section
we will address
the problem
of getting
quantitative versions
of
Theorem~\ref{thm:volume_bound(freeG-envelope)}
and
Corollary~\ref{cor:bound(div(scales))_virt-freeGs}.
We start with
explicit bounds
on possible values
of the scale function
on a free group
of finite rank
at least $2$.
\begin{proposition}
\label{prop:explicit_bounds}
Let $n$ be an integer
which is at least $2$.
Let $\mathcal{B}(n)$
be the class
of all graphs
with all vertices of degree at least $3$
and fundamental group
free of rank $n$.
Then
\begin{enumerate}
\item
The maximal vertex degree
of graphs in $\mathcal{B}(n)$
is $2n$.
This maximal degree is achieved
for the unique graph,
$B_{2n}$,
in $\mathcal{B}(n)$
with $1$ vertex
and
is not achieved
for any other graph in $\mathcal{B}(n)$.
The fundamental group,
$F$,
of the trivial graph of groups over $B_{2n}$
admits,
via its action
on the universal covering tree,
$T_{2n}$,
of $B_{2n}$,
which is a homogeneous tree
of degree $2n$,
a discrete, cocompact embedding
into $\Aut {T_{2n}}_.$.
All elements
of the unique Schottky basis
of $F$
obtained using all edges of $B_{2n}$
have translation length~$1$
and
the scale function of $\Aut {T_{2n}}_.$
assumes the value $2n-1$
on all of these elements.
\item
For any odd integer $s$
with $3\le s\le 2n-1$,
there is a graph,
$B(s)$,
in $\mathcal{B}(n)$
such that
the scale function
on the automorphism group
of the covering tree,
$T(B(s))$,
of $B(s)$
assumes the value~$s$
on
some hyperbolic isometry
of translation length~$1$
which
belongs to
every Schottky basis
of the fundamental group,
$F$,
of the trivial graph of groups
over $B(s)$.
The group $F$
admits,
via its action
on $T(B(s))$,
a discrete, cocompact embedding
into $\Aut{T(B(s))}_.$.
\item
Let $n$ be an integer
which is at least $2$.
The maximal translation length
of a member of
a Schottky basis
for the image
of the free group
on $n$ generators
in a discrete, cocompact embedding
in the automorphism group
of a locally finite tree
all of whose vertices
have degree
at least $3$
is $2(n-1)$.
More precisely,
there is a graph,
$B^{2(n-1)}$,
in $\mathcal{B}(n)$
such that
every Schottky basis
of the fundamental group,
$F$,
of the trivial graph of groups
over $B^{2(n-1)}$
has an element
whose translation length is $2(n-1)$
and whose scale
with respect to
the automorphism group
of the covering tree,
$T(B^{2(n-1)})$,
of $B^{2(n-1)}$
is $2^{2(n-1)}$.
The group~$F$
admits,
via its action
on $T(B^{2(n-1)})$,
a discrete, cocompact embedding
into $\Aut{T(B^{2(n-1)})}_.$.
\end{enumerate}
\end{proposition}
\pf{}
\noindent\textbf{Proof of 1:\quad}
In proving
the first claim,
we start by
confirming
the statements
made
in the first two sentences
thereof.
Thanks to
the lower bound
on vertex degrees
for graphs in $\mathcal{B}(n)$,
the following holds
for any graph,
$B$ say,
in $\mathcal{B}(n)$.
Contracting an edge
of a maximal subtree
in $B$
to its initial vertex,
$o$ say,
creates another graph
in $\mathcal{B}(n)$,
whose vertex degree
at $o$
is strictly larger than
the vertex degree
of $o$ in $B$.
Therefore,
the maximal vertex degree
of a graph in $\mathcal{B}(n)$
is achieved
for a graph
with just one vertex.
Since
there is
a unique graph,
$B_{2n}$,
in $\mathcal{B}(n)$
with one vertex,
we have seen that
the first two sentences
are true.
A moment's thought shows
that
all the remaining statements
made in the first claim
save the value
taken by the scale function
of $\Aut{T_{2n}}_.$
on the elements
of the Schottky basis
of $F$
follow immediately from
the statements just proved.
To see that
this remaining statement
is true also,
it suffices
by Lemma~\ref{lem:scale(hyp-iso(tree)); prod-formula}
to show
that
for any end $\epsilon$
of $T_{2n}$
the tree $(T_{2n})_{\Aut{T_{2n}}_.,\epsilon}$
(introduced
in the paragraph
preceding Lemma~\ref{lem:scale(hyp-iso(tree)); prod-formula},
page~\pageref{T_G,epsilon})
is the whole tree $T_{2n}$.
Since
for any two distinct ends,
$\epsilon$ and $\epsilon'$ say,
in a homogeneous tree
such as $T_{2n}$
there is a hyperbolic isometry
(of translation length $1$)
whose axis is
the line joining
$\epsilon$ and $\epsilon'$,
we clearly have
$(T_{2n})_{\Aut{T_{2n}}_.,\epsilon}=T_{2n}$,
for any end $\epsilon$,
and we have finished
the proof
of our first claim.
\vspace{1ex}\noindent\textbf{Proof of 2:\quad}
To obtain
the statements
of the second claim,
we first introduce
the graph $B(s)$
for a given odd integer $s$
with $3\le s\le 2n-1$.
If $s$ is equal to $2n-1$,
then,
by the first claim,
which we already proved,
the graph $B_{2n}$
has
all the properties
the graph $B(2n-1)$
is required to satisfy.
Therefore,
we may assume
in what follows,
that $s$ is at most $2n-3$.
If $s\le 2n-3$,
the graph $B(s)$
has $2$ vertices,
$v_0$ and $v_1$.
The vertices $v_0$ and $v_1$
are joined by
one, respectively two edges
depending on whether
$s$ is different from or equal to $n-1$.
No matter what
the value of $s$ is,
attach $(s+1)/2\ge 2$ loops
to the vertex $v_0$.
If $s\neq n-1$,
attach $n-(s+1)/2\ge n-(2n-2)/2=1$ loops
to the vertex $v_1$.
If $s=n-1$,
attach $n-n/2-1=n/2-1\ge 1$ loops
to the vertex $v_1$.
For each odd integer $s$
with $3\le s\le 2n-3$
the graph $B(s)$
belongs to $\mathcal{B}(n)$
by construction.
The degrees $d_0$ and $d_1$
of the vertices $v_0$ and $v_1$
of $B(s)$
are different.
We claim that
the scale function
with respect to
$\Aut T(B(s))_.$
assumes the value $s$
on each of the elements of $F$
obtained from
the $(s+1)/2$ loops
based at $v_0$.
To see this,
let $h$ be one of
the elements of $F$
obtained from loops at $v_0$
and
let $\epsilon$
be the attracting end
of $h$.
We will determine
the tree $T(B(s))_{\Aut T(B(s))_.,\epsilon}$.
Note that
all vertices
on the axis of $h$
are translates of
a lift of $v_0$
and hence have
degree $d_0$.
The vertices
on the axis
of every hyperbolic isometry
of $T(B(s))$
that has $\epsilon$
as attracting end
must have degree $d_0$
also.
Let $k$ be
such a hyperbolic isometry.
By the above observation
on the degrees
of vertices
on the axis of $k$,
this axis
maps to
a closed edge path
at $v_0$
in $B(s)$
all of whose edges
belong to
the subgraph, $B(v_0)$,
of $B(s)$
consisting of
the vertex $v_0$
and
all the loops based at $v_0$.
Conversely,
every closed edge path
at $v_0$
can be obtained
as the image of the axis of
a hyperbolic isometry
of the tree $T(B(s))$
fixing the end $\epsilon$.
We conclude that
the tree $T(B(s))_{\Aut T(B(s))_.,\epsilon}$
equals
the lift
of $B(v_0)$
in $T(B(s))$
that contains $\epsilon$.
In particular,
the tree $T(B(s))_{\Aut T(B(s))_.,\epsilon}$
is regular
of degree $2(s+1)/2=s+1$.
Our claim
on the values of the scale function
follows from Lemma~\ref{lem:scale(hyp-iso(tree)); prod-formula}.
Since
the remaining statements made
are obvious,
the proof of
the second claim
is complete.
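The case analysis above
is easy to check mechanically.
The following minimal Python sketch
(our own illustration
with hypothetical helper names;
it is not part of
the construction in~\cite{l.rank1})
builds the edge data of $B(s)$
and verifies
the rank and degree constraints
used in the proof.
\begin{verbatim}
# Sketch: build the data of B(s) and check the constraints used
# above (rank n, both vertex degrees >= 3 and distinct).
def build_B(n, s):
    assert n >= 2 and s % 2 == 1 and 3 <= s <= 2 * n - 3
    loops0 = (s + 1) // 2             # loops at v_0
    if s != n - 1:
        conn, loops1 = 1, n - (s + 1) // 2
    else:                             # here n is even, since s is odd
        conn, loops1 = 2, n // 2 - 1
    return loops0, loops1, conn

def check_B(n, s):
    loops0, loops1, conn = build_B(n, s)
    v, e = 2, loops0 + loops1 + conn
    d0 = 2 * loops0 + conn            # a loop adds 2 to the degree
    d1 = 2 * loops1 + conn
    assert e - v + 1 == n             # rank of the fundamental group
    assert min(d0, d1) >= 3 and d0 != d1
    return d0, d1

for n in range(3, 8):
    for s in range(3, 2 * n - 2, 2):
        print(n, s, check_B(n, s))
\end{verbatim}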
\vspace{1ex}\noindent\textbf{Proof of 3:\quad}
We first establish
the upper bound
on the translation length
claimed.
Recall that
the upper bound
for the number
of vertices
of a graph
with fundamental group free
of rank $n$
and all vertices
of degree at least $3$
obtained in
the proof of Lemma~\ref{lem:ramification_bound(SchottkyGs)}
was $2(n-1)$.
The longest path
without backtracking
inside a maximal subtree
of a graph
with fundamental group free
of rank $n$
and all vertices
of degree at least $3$
therefore has at most $2(n-1)-1$ edges,
and
the longest translation length
an element of a Schottky basis
of the fundamental group
of the trivial graph of groups
over such a graph
can have
is at most $2(n-1)-1+1=2(n-1)$.
This establishes
the upper bound
for the translation length
claimed.
To prove
the remainder
of the claim,
it suffices
to prove
the existence
of the graph $B^{2(n-1)}$
with the properties stated
for every integer $n$
at least $2$,
since
the other statements
follow
from this.
The graph $B^{2(n-1)}$
is constructed as follows.
Take the $2(n-1)$ numbers
$0,\ldots ,2n-3$
as the set of vertices of $B^{2(n-1)}$.
The number of these vertices is even.
Choose,
once and for all,
a cyclic order
on the set
of vertices
of $B^{2(n-1)}$.
Traverse
the vertices
of $B^{2(n-1)}$
in that cyclic order
and connect
successive vertices,
alternating
between $1$ and $2$ edges.
By construction,
every vertex
of the graph $B^{2(n-1)}$
has degree $3$.
Since
the various Schottky bases
of the fundamental group
of the trivial graph of groups
over $B^{2(n-1)}$
are determined
by the choice
of a maximal subtree
in $B^{2(n-1)}$,
we determine next
all choices
of a maximal subtree
in $B^{2(n-1)}$.
If $T$ is
a maximal subtree
of $B^{2(n-1)}$,
then
not every vertex
in $B^{2(n-1)}$
can be connected
to the previous vertex
within $T$
with respect to
the chosen cyclic order
on the set of vertices,
because then
$T$ would contain
a cycle.
Let $v_0$ be
a vertex
of $B^{2(n-1)}$
which is not connected to
its predecessor,
$v_{2n-3}$,
in the chosen cyclic order.
Then
the set of edges of $T$
forms a path
in cyclic order
from $v_0$ to $v_{2n-3}$.
We therefore see that,
up to an automorphism
of $B^{2(n-1)}$,
the maximal subtrees
in $B^{2(n-1)}$
are of
at most
two kinds.
If $n$ is at least $3$
there are two kinds,
depending on
whether
$v_{2n-3}$ is connected to $v_0$
by $1$ or by $2$ edges
inside $B^{2(n-1)}$,
while
there is just one kind
if $n$ is equal to $2$.
Each of the edges
connecting $v_{2n-3}$ to $v_0$
that do not belong to $T$
defines an element of
the Schottky basis
determined by $T$
of translation length $2(n-1)$.
We have just seen that
there is
at least $1$ element
with such a translation length
in every Schottky basis
attached to
the graph $B^{2(n-1)}$.
Since
the covering tree $T(B^{2(n-1)})$
of $B^{2(n-1)}$
is $3$-regular,
the scale
of such an element
with respect to $\Aut T(B^{2(n-1)})_.$
is $2^{2(n-1)}$.
We have shown
all parts
of claim~3.
\hspace*{\fill}$\square$
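As with $B(s)$,
the properties of $B^{2(n-1)}$
used above
can be verified mechanically;
the following Python sketch
(again our own illustration,
not part of the proof)
builds the edge multiset of $B^{2(n-1)}$
and checks
$3$-regularity
and the rank of the fundamental group.
\begin{verbatim}
# Sketch: build B^{2(n-1)} and verify that it is 3-regular with
# fundamental group of rank n.
def build_ring(n):
    m = 2 * (n - 1)                   # vertices 0, ..., 2n-3
    edges = []
    for i in range(m):
        j = (i + 1) % m
        # successive vertices are joined alternately by 1 or 2 edges
        edges += [(i, j)] * (1 if i % 2 == 0 else 2)
    return m, edges

for n in range(2, 7):
    v, edges = build_ring(n)
    deg = [0] * v
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    assert len(edges) - v + 1 == n and set(deg) == {3}
\end{verbatim}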
The following corollary
follows from
Proposition~\ref{prop:explicit_bounds}.
\begin{corollary}\label{cor:estimates(scale-val(freeGs))}
Let $n$ be an integer
which is at least $2$
and
let $F_n$ be
the free group
of rank $n$.
Then,
for every prime number,
$p$,
such that $2\le p\le 2n-1$,
there is
a discrete, cocompact embedding,
$\varphi\colon F_n\to G$,
into a totally disconnected, locally compact group, $G$,
and an element,
$h_\varphi$,
of $F_n$
such that
$p\mid {s_G(\varphi(h_\varphi))}$.
Furthermore,
every prime,
$p$,
dividing
the value
of the scale function
on an element of $F_n$
with respect to any
envelope
of $F_n$
satisfies $2\le p\le 2n-1$.
\hspace*{\fill}$\square$
\end{corollary}
From Proposition~\ref{prop:explicit_bounds}
we also obtain
estimates
for the scale volume
of the free group
of rank
at least~$2$.
\begin{corollary}\label{cor:estimates(scale-vol(freeGs))}
Let $n$ be
an integer
which is at least $2$,
and
let $F_n$ be
the free group
of rank $n$.
Then
the scale volume,
$\mathop{s\text{-}\mathrm{vol}}(F_n)$,
of $F_n$
satisfies
the inequalities
\[
(2n-1)^n \le \mathop{s\text{-}\mathrm{vol}}(F_n) \le (2n-1)^{2n(n-1)}\,.
\]
\end{corollary}
\pf{}
The stated lower bound
follows from
part~1
of Proposition~\ref{prop:explicit_bounds}.
The upper bound
follows from
parts~1 and~3
of Proposition~\ref{prop:explicit_bounds}.
\hspace*{\fill}$\square$
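For instance,
for $n=2$
these bounds read
$3^2=9\le \mathop{s\text{-}\mathrm{vol}}(F_2)\le 3^4=81$,
and for $n=3$
they read
$5^3=125\le \mathop{s\text{-}\mathrm{vol}}(F_3)\le 5^{12}$.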
The gap between
the upper and lower bounds
in the estimate
for $\mathop{s\text{-}\mathrm{vol}}(F_n)$
in~Corollary~\ref{cor:estimates(scale-vol(freeGs))}
grows fast,
and both bounds are
very far from
the truth.
Some experimentation
led me
to believe that,
asymptotically,
as $n$ goes to infinity,
the value of
$\mathop{s\text{-}\mathrm{vol}}(F_n)$
is
$2^{n(2\log_2(n/3)+3)}$.
However,
I have been forced
to revise
my initial impression
that
it is straightforward
to determine
the asymptotics
of the sequence $(\mathop{s\text{-}\mathrm{vol}}(F_n))_{n\ge 2}$.
\section{Introduction}
Advanced Driver Assistance Systems (ADAS) and upcoming autonomous cars rely on an accurate perception of the environment to make the proper decisions concerning the trajectory of the vehicle. These systems must be endowed with an exceptional degree of robustness under different circumstances that can affect the driving decisions (e.g. illumination).
Consequently, the design of perception systems intended for automotive applications is usually oriented towards topologies with several complementary sensory modalities. Vision systems are frequent \cite{Franke2013}, due mainly to their ability to provide appearance information. Among the available vision devices, stereo-vision systems, which make use of a pair of cameras separated by a fixed distance to get depth information about the environment, stand out as a cost-effective solution able to provide additional dense 3D information to model the surroundings of the vehicle.
On the other hand, the rapid development of the technology behind 3D laser scanners in recent years has enabled its widespread use in both research and industry driving applications. In contrast to vision systems, lidar range measurements are characterized by their high accuracy; moreover, they can provide information in a full 360\si{\degree} field of view.
Due to the particular features of these two sensory technologies, they are suitable to be part of the same perception system, providing complementary information. In that kind of design, data from both sensors have to be appropriately combined before inference, making use of fusion techniques \cite{Garcia2017}. In the most usual setup, sensors have overlapping fields of view (as in Fig. \ref{fig:sidesetup}), and the advantages conferred by their joint use come from the ability to make correspondences between both data representations. To that end, the relative pose of the sensors, given by their respective extrinsic parameters, must be accurately determined through a calibration process.
\begin{figure}[t]
\centering
\subfloat[]{\includegraphics[height=1.2in]{figures/ivvi_apepinado.jpg}%
\label{fig:ivvi_outside}}
\hfil
\subfloat[]{\includegraphics[height=1.2in]{figures/itsc2017sensors.pdf}%
\label{fig:sensor_scheme}}
\caption{(a) Common sensor setup for vehicles using a laser scanner and a trinocular camera; (b) relative calibration between both sensors}
\label{fig:sidesetup}
\end{figure}
Existing calibration methods suffer from either the need for complex setups or the lack of generalization ability, so that the accuracy of the results is strongly dependent on the parameters of the sensors or the structuredness of the environment.
In this work, we present a calibration method tailored to automotive sensor setups composed of a stereo rig and a 360\si{\degree} multi-layer lidar scanner. Unlike existing methods, ours makes no strong assumptions, allowing its use with medium-resolution scanners (e.g. 16 layers) as well as with very different relative poses between the sensors. Our method can be performed within a reasonable time using a simple setup designed to exploit the correspondences in the data from both devices.
The remainder of this paper is organized as follows. In Section \ref{sec:related}, a brief review of related work is provided. Section \ref{sec:proposed} details a description of the proposed algorithm. In Section \ref{sec:results}, the experimental results that assess the method performance are discussed. Finally, in Section \ref{sec:conclusion}, conclusions are presented.
\section{Related work}
\label{sec:related}
The issue of calibration of the extrinsic parameters relating different sensors has been addressed by many researchers in the past, driven by its typical application in multi-sensory robotic and automotive platforms.
Among the variety of possible sensor setups, the camera-to-range problem is the most frequently considered in the literature. Due to the restrictions inherent to mobile platforms, devices used in these applications often provide data in a limited number of scan planes, typically one or four, resulting in very scarce range information \cite{Kwak2011}. Calibration is usually assumed to be a process performed in a controlled environment before the regular operation of the sensorial system. Traditional methods require manual annotation to some extent \cite{Scaramuzza2007}. However, since miscalibrations are frequent in robotic platforms, research effort has usually focused on automatic approaches. As the process aims to find the correspondence between data acquired from different points of view, unambiguous pattern instruments have been used as calibration targets, such as triangular boards \cite{Debattisti2013}, polygonal boards \cite{Park2014} or spheres \cite{Pereira2016}. The diversity of shapes used in the existing works reflects the need for the targets to be distinguishable in all the data representations from the sensors (i.e. range and appearance). Nonetheless, planar targets are particularly prevalent \cite{Li2011}, since they are easily detectable using range information and provide a characteristic shape that can be used to perform geometrical calculations. On the other hand, Scott et al. dealt with the particular case of non-overlapping fields of view \cite{Scott2015} using an accurate motion tracking system.
With the widespread introduction of range sensors providing a dense 3D point cloud in recent years, research interest has shifted to this type of device. Geiger et al. \cite{Geiger2012c} proposed a calibration method based on a single shot in the presence of a setup based on several planar checkerboards used as calibration targets. Velas et al. \cite{Velas2014} propose an approach enabling the estimation of the extrinsic parameters using a single point of view, based on the detection of circular features on a calibration pattern. These methods are targeted at dense range measurements, so 3D lidar scanners with lower resolution (e.g. the 16-layer scanner used in this work) entail particular issues that are addressed in this paper.
A large second group of approaches dispenses with any artificial calibration targets and use the features in the environment. Moghadam et al. \cite{Moghadam2013} use linear features extracted from natural scenes to determine the transformation between the coordinate frames. The method is suitable for indoor scenes populated with a large number of linear landmarks. In traffic environments, the ground plane and the obstacles have been used to perform camera-laser calibration \cite{Ponz}, although some parameters are assumed as known.
On the other hand, the assessment of calibration methods remains an open issue, given that an accurate ground-truth of the six parameters defining the relationship between the pose of the sensors cannot be obtained in practice. The lack of standard evaluation metrics has led to the use of custom schemes, which are difficult to extend to other domains and are often based on inaccurate manual annotations. In this regard, Levinson and Thrun \cite{Levinson2013} presented a method aimed to detect miscalibrations through the variations in an objective function computed from the discontinuities in the scene.
\section{Calibration Algorithm}
\label{sec:proposed}
We propose a calibration algorithm aimed to estimate the rigid-body transform relating the coordinate system $\left\{ C \right\}$, centered in one of the two cameras belonging to the stereo-vision system (hereafter assumed to be the leftmost one), and the coordinate frame $\left\{ L \right\}$, fixed in the laser range scanner, as depicted in Fig. \ref{fig:sensor_scheme}. This transformation is defined by a set of six parameters $\boldsymbol{\xi}_{CL} = (t_x, t_y, t_z, \phi, \theta, \psi)$ representing the translation along the $x$, $y$ and $z$ axes, and the rotation around $x$ (roll), $y$ (pitch) and $z$ (yaw). We compute the parameters with respect to the stereo camera frame $\left\{ C \right\}$, although the transformation is straightforwardly reversible. Using homogeneous coordinates, the set $\boldsymbol{\xi}_{CL}$ can be used to build a transformation matrix, $\mathbf{T}_{CL}$, which allows a 3D point in camera coordinates, $\mathbf{p}_c$, to be transformed into a point in laser range scanner coordinates, $\mathbf{p}_l = \mathbf{T}_{CL}\mathbf{p}_c$.
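As an illustration of this parametrization, the following minimal Python sketch (our own, not code from the actual implementation; in particular, the $\mathbf{R}_z\mathbf{R}_y\mathbf{R}_x$ rotation-order convention is an assumption) assembles the homogeneous matrix $\mathbf{T}_{CL}$ from the six parameters:
\begin{verbatim}
import numpy as np

def transform_from_params(tx, ty, tz, roll, pitch, yaw):
    # Build the 4x4 homogeneous matrix from the six parameters,
    # assuming the common R = Rz(yaw) @ Ry(pitch) @ Rx(roll) order.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# A homogeneous point p_c = [x, y, z, 1] in camera coordinates maps
# to lidar coordinates as p_l = T_CL @ p_c.
\end{verbatim}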
To obtain this transform, we use a short series of stereo-pair images (where both images are expected to be synchronized) and lidar scans. We assume that the intrinsic parameters relating the 3D scene points with the representations provided by both sensors are known, including the baseline of the stereo system, whose images are expected to be rectified beforehand.
A single calibration target is used, and it is intended to be perceived by the sensors from a unique point of view, avoiding the need for the presence of multiple targets or changes in the calibration scene during the process. Following \cite{Velas2014}, we use a custom-made planar target, with four circular holes symmetrically disposed, as shown in Fig. \ref{fig:escobar}. Those holes act as distinct features visible by the camera and the lidar; therefore, both sensors are assumed to be placed with a certain overlap in their field of views, so that the circular holes are intersected by at least two lidar beams and fully visible from the camera.
\begin{figure}[htb]
\centering
\subfloat[]{\includegraphics[height=1.2in]{figures/patron.png}%
\label{fig:patron_img}}
\hfil
\subfloat[]{\includegraphics[height=1.2in]{figures/patron_cloud.png}%
\label{fig:patron_pc}}
\caption{(a) Calibration target for the proposed method, as seen in the left image of the stereo system; (b) projection in a 16-layer laser point cloud.}
\label{fig:escobar}
\end{figure}
Beyond that reasonable constraint, no additional assumptions are made regarding either the relative rotation and translation of the sensors (i.e. large displacements are allowed) or the pose of the calibration target, which is not required to be perfectly aligned with any axis.
The calibration process involves two stages: a segmentation of reference points in both clouds, illustrated in Fig. \ref{fig:fulll} and Fig. \ref{fig:fullc}, and a registration to estimate the transform parameters.
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[height=1in]{figures/l0.png}%
\label{fig:full1l}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/l1.png}%
\label{fig:full2l}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/l2.png}%
\label{fig:full3l}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/l3.png}%
\label{fig:full4l}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/l4.png}%
\label{fig:full5l}}
\caption{Segmentation pipeline for extraction of the reference points from the lidar point cloud.}
\label{fig:fulll}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[height=1in]{figures/0_imgcomposition_v2.png}%
\label{fig:full1c}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/c1.png}%
\label{fig:full2c}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/c2.png}%
\label{fig:full3c}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/c3.png}%
\label{fig:full4c}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/c4.png}%
\label{fig:full5c}}
\caption{Segmentation pipeline for extraction of the reference points from the stereo point cloud.}
\label{fig:fullc}
\end{figure*}
\subsection{Data Representation}
The proposed method relies on the information provided by the sensors to be calibrated. For the laser scanner, the 3D point cloud $\mathcal{P}^l_0 = \left\{(x,y,z)\right\}$ representing the range measurements is considered. On the other hand, information from the stereo camera is employed in its two modalities: grayscale intensity and depth estimation. The latter is obtained by a stereo matching algorithm, which maps every pixel to its $(x,y,z)$ coordinates, thus producing an analogous point cloud $\mathcal{P}^c_0 = \left\{(x,y,z)\right\}$. The similarity between both clouds is exploited during the calibration process for, ultimately, determining the transformation between both sensors. Each of the clouds $\mathcal{P}^l_0$ and $\mathcal{P}^c_0$ is expressed in the coordinate system with origin in its sensor; that is, $\left\{ L \right\}$ and $\left\{ C \right\}$ respectively.
For the stereo matching stage, we use the Semi-Global Matching (SGM) method \cite{Hirschmuller2008}, which we found to be reasonably accurate in the depth estimation. The border localization problem typically present in stereo matching does not significantly affect the algorithm, since it is tackled by using the intensity information, as will be shown below. However, the calibration target is assumed to present a minimum of texture, allowing the resolution of the stereo correspondence problem.
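For reference, the reprojection from a dense disparity map to the cloud $\mathcal{P}^c_0$ follows the standard rectified-stereo model; a minimal sketch (our own, with hypothetical parameter names for the focal length $f$, principal point $(c_x,c_y)$ and baseline $b$) is given below.
\begin{verbatim}
import numpy as np

def disparity_to_cloud(disp, f, cx, cy, baseline):
    # Standard rectified-stereo reprojection: z = f * b / d.
    # Pixels with non-positive disparity are discarded.
    v, u = np.indices(disp.shape)
    valid = disp > 0
    z = f * baseline / disp[valid]
    x = (u[valid] - cx) * z / f
    y = (v[valid] - cy) * z / f
    return np.column_stack((x, y, z))  # cloud in the frame {C}
\end{verbatim}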
Pass through filters, based on the distance along the local axes, are then applied to the point clouds from the two sensors. This step is useful to limit the information processing to the area which is likely covered by both fields of views, where the calibration target must be placed.
\subsection{Target segmentation}
The first steps of the calibration algorithm are intended to extract the points in each cloud belonging to discontinuities in the calibration target. For this purpose, several successive segmentation processes are carried out over the clouds containing the data from the sensors. Segmentation is used to find subsets of points $\mathcal{P}_{i_0}$ representing a geometrical shape (e.g. a plane) in the original cloud. Thus, for the step $i_0$:
\begin{equation}
\mathcal{P}_{i_0}^j = \left\{(x,y,z)\right\} \subseteq \mathcal{P}_{i_0-1}^j, \;\; \forall j \in \left\{ l,c \right\}
\end{equation}
Taking advantage of the planar shape of the calibration target, we apply a sample consensus-based plane segmentation method to determine the plane models in each cloud: $\pi^c$ and $\pi^l$. Some restrictions are imposed to guarantee the segmentation of the proper plane in different types of real environments, including those where the ground plane or building walls could affect the segmentation. First, a tight threshold $\delta_{plane}$ is used in the RANSAC algorithm and, second, the plane model is required to be parallel to the vertical axis of the sensor reference frame within a specified angular deviation $\alpha_{plane}$.
When the plane model is available, points separated from the model by a distance greater than $\delta_{inliers,l}$, for the lidar cloud, and $\delta_{inliers,c}$, for the stereo cloud, are removed, resulting in the cloud segments $\mathcal{P}_1^l$ and $\mathcal{P}_1^c$. Examples of the results of the plane segmentation process are shown in Fig. \ref{fig:full1l} and Fig. \ref{fig:full1c}.
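A minimal sketch of this constrained plane search is given below (our own numpy-only illustration; an actual implementation would typically rely on a point-cloud library, and the assumption that the $z$ axis is the vertical axis of the sensor frame is ours).
\begin{verbatim}
import numpy as np

def ransac_vertical_plane(pts, delta, alpha, iters=1000, seed=0):
    # Sample-consensus plane fit on pts (N x 3), keeping only
    # hypotheses whose normal is nearly orthogonal to the vertical
    # axis (here assumed to be z), i.e. near-vertical planes.
    rng = np.random.default_rng(seed)
    best_model, best_count = None, 0
    for _ in range(iters):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < 1e-9:
            continue                   # degenerate sample
        n = n / np.linalg.norm(n)
        if abs(n[2]) > np.sin(alpha):
            continue                   # plane not vertical enough
        d = -n @ a
        count = int((np.abs(pts @ n + d) < delta).sum())
        if count > best_count:
            best_model, best_count = (n, d), count
    return best_model
\end{verbatim}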
After that, the segmented clouds undergo a process aimed to filter out the points in the calibration target not belonging to discontinuities. Due to the nature of the data from the two sensors, this stage is sensor-specific.
Regarding the laser point cloud, we follow the method in \cite{Levinson2013} to find depth discontinuities. Each point in the plane model cloud, $\mathbf{p}^i \in \mathcal{P}_1^l$, is assigned a magnitude representing the depth difference with respect to its neighbors:
\begin{equation}
p^i_\Delta = \max(p^{i-1}_r-p^{i}_r,p^{i+1}_r-p^{i}_r,0)
\end{equation}
where $p_r^i$ is the range measurement given by the sensor for the point $\mathbf{p}^i$, and $\mathbf{p}^{i-1}$ and $\mathbf{p}^{i+1}$ are the points adjacent to $\mathbf{p}^{i}$ in the same scan plane. Then, we filter out all points with a discontinuity value $p_\Delta < \delta_{discont,l}$ (Fig. \ref{fig:full2l}).
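A compact per-ring sketch of this filter (our own; the wrap-around at the ring boundary assumes a full 360\si{\degree} ring) is:
\begin{verbatim}
import numpy as np

def ring_discontinuities(ranges, delta):
    # Keep the indices on one scan ring whose range is at least
    # delta closer than one of the two neighboring measurements.
    prev = np.roll(ranges, 1) - ranges
    nxt = np.roll(ranges, -1) - ranges
    p_delta = np.maximum(np.maximum(prev, nxt), 0.0)
    return np.nonzero(p_delta >= delta)[0]
\end{verbatim}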
On the other hand, for the stereo-camera cloud, we keep the points that map to edges in the intensity image. To that end, a Sobel filter is applied over the left image of the stereo pair. Points whose projection on the image corresponds to a low value in the Sobel image (smaller than $\tau_{sobel,c}$) are filtered out, as shown in Fig. \ref{fig:full2c}.
\subsection{Circle segmentation}
Next steps are intended to segment the four circles on the calibration target, whose centers will be used as correspondence keypoints between the two clouds in the registration.
To enhance the circle segmentation accuracy, we previously perform a filtering process aimed to get rid of the points not belonging to the circles. As only discontinuities are present in the clouds at this stage, outliers to be removed are, mostly, from the outer boundaries of the calibration target. The cloud from the lidar is processed to keep only the rings with a number of points compatible with the presence of a circle and subsequently to remove the outer points in these rings. On the other hand, the camera cloud is subjected to a filtering process based on the elimination of lines, since the borders of the calibration target are densely represented in this case. Lines are found using a sample consensus segmentation and selected according to their orientation and the known dimensions of the calibration pattern, to prevent the removal of useful information from the circles themselves. Notwithstanding these considerations, our experiments proved that our segmentation method was largely insensitive to the presence of these borders, except for some very specific poses of the calibration target. Filtered clouds, $\mathcal{P}_2^l$ and $\mathcal{P}_2^c$, are shown in Fig. \ref{fig:full3l} and Fig. \ref{fig:full3c}.
Afterward, points representing the holes of the calibration target in both clouds are detected. To that end, a circle segmentation process is performed in the 2D space determined by the plane models $\pi^c$ and $\pi^l$. This is effectively implemented by rotating the clouds $\mathcal{P}_2$ until the points are aligned with the XY plane and then adjusting their $z$ coordinate to that enforced by the plane equation. Later, circles are segmented in the XY subspace through sample consensus, imposing the known circle radius as a constraint. To avoid spurious detections, the distance between the circle centers in the physical calibration target is also taken into account. Finally, the obtained centers are transformed back to the 3D sensor's coordinate frame, resulting in the point clouds depicted in green in Fig. \ref{fig:full4l} and Fig. \ref{fig:full4c}. Note that, since the circle segmentation stage is performed in a bidimensional space, the proposed method is suitable for medium-resolution range sensors, as circles are defined by only three points in a plane, thus requiring just two lidar beams to intersect with each of the four circles.
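A minimal sketch of the two-dimensional stage (our own illustration of a sample-consensus circle fit with the radius constraint; helper names are hypothetical) is given below.
\begin{verbatim}
import numpy as np

def circumcenter(a, b, c):
    # Center of the circle through three 2D points, or None if the
    # points are nearly collinear.
    A = 2.0 * np.array([b - a, c - a])
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    y = np.array([b @ b - a @ a, c @ c - a @ a])
    return np.linalg.solve(A, y)

def ransac_circle(pts, radius, tol, iters=500, seed=0):
    # Sample-consensus circle fit in the plane with known radius.
    rng = np.random.default_rng(seed)
    best, best_n = None, 0
    for _ in range(iters):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        center = circumcenter(a, b, c)
        if center is None:
            continue
        if abs(np.linalg.norm(a - center) - radius) > tol:
            continue                   # known-radius constraint
        d = np.abs(np.linalg.norm(pts - center, axis=1) - radius)
        n_in = int((d < tol).sum())
        if n_in > best_n:
            best, best_n = center, n_in
    return best, best_n
\end{verbatim}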
Provided that the representation of the calibration target is accurate enough in the data from both sensors, registration might be performed with a single-shot representation of the four circle centers. Nevertheless, in order to improve the robustness of the method against the sources of noise present in the process, centers are accumulated over a window of $N$ frames. Later, a clustering algorithm is applied, and the cluster centroids are used at the registration stage. This implies the assumption that the calibration target, as well as the sensors, remains static throughout the whole process. We choose a Euclidean distance strategy, with a cluster tolerance of $\delta_{cluster}$, to perform the clustering, so that the presence of outliers is naturally handled. Strict restrictions can be imposed on the minimum and the maximum number of points allowed in each cluster considering the length of the window, i.e. the number of frames, used in the computation.
\subsection{Registration}
The final registration process is aimed to find the set of transformation parameters $\hat{\boldsymbol{\xi}}_{CL}$ that minimizes the distance between the reference points of both clouds, namely the centroids of the accumulated circle centers, once the transformation is applied.
The registration procedure involves two steps. First, we compute the optimal transformation assuming the absence of rotation. The result is hence a pure translation, expressed by the set of parameters $\mathbf{t}_{CL}' = (t_x', t_y', t_z')$, which can be obtained by finding the least-squares solution of the overdetermined system of 12 equations provided by the registration of the four reference points:
\begin{equation}
\mathbf{t}_{CL}' = \bar{\mathbf{p}}^i_l-\bar{\mathbf{p}}^i_c, \;\; \forall i \in \left\{tl, tr, bl, br \right\}
\end{equation}
Three equations are obtained for each pair of centroids from both the laser cloud, $\bar{\mathbf{p}}^i_l$, and the stereo camera cloud, $\bar{\mathbf{p}}^i_c$. Points are labeled according to their position in the sensors' coordinate systems as top-left ($tl$), top-right ($tr$), bottom-left ($bl$) and bottom-right ($br$) to allow the matching of the points in both clouds. Finally, we find the least-squares solution through column-pivoting QR decomposition.
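A minimal sketch of this step (our own; it uses numpy's SVD-based least-squares solver instead of a column-pivoting QR decomposition, which yields the same solution, namely the mean of the centroid differences) is:
\begin{verbatim}
import numpy as np

def translation_estimate(centroids_l, centroids_c):
    # Least-squares translation from the four matched centroids
    # (4 x 3 arrays ordered tl, tr, bl, br): stack the 12 scalar
    # equations t = p_l - p_c and solve them jointly.
    A = np.tile(np.eye(3), (len(centroids_l), 1))  # 12 x 3
    b = (centroids_l - centroids_c).reshape(-1)    # 12 equations
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
\end{verbatim}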
In the second stage of the registration process, we provide a final estimation of the full set of parameters of the transform: $\hat{\boldsymbol{\xi}}_{CL} = (t_x, t_y, t_z, \phi, \theta, \psi)$. Given that the data association problem is trivial to solve at this stage, we rely on the well-known Iterative Closest Points algorithm \cite{Besl1992} to minimize the sum of point-to-point distances between the clouds of cluster centroids. The final transformation is straightforwardly obtained as the composition of these two partial transformations.
\section{Experimental Results}
\label{sec:results}
The performance of the proposed calibration approach is evaluated in a comprehensive set of experiments. The analysis includes both tests performed in a synthetic test suite developed for the evaluation of calibration algorithms, and real tests with the sensing devices available in the IVVI 2.0 research platform \cite{Martin2014a} (shown in Fig. \ref{fig:ivvi_outside}). The former are used to provide numerical results against a perfect ground-truth; moreover, they allow comparison with other approaches in the literature. Meanwhile, the latter demonstrate the validity of the method in real-world conditions.
\subsection{Synthetic Test Suite}
Unlike previous works, where calibration performance is compared to manual annotations \cite{Geiger2012c} or scene discontinuities \cite{Levinson2013}, we propose a novel approach based on a simulation environment, which provides exact ground truth about the relative transformation between sensors. To this end, the open-source Gazebo simulator \cite{Koenig2004} is used, and nine different calibration setups are defined, as listed in Table \ref{tab:setups}. The first seven settings are simple setups designed so that the parameters of the transform could be independently evaluated, while the last two represent challenging situations that go beyond the difficulties that may arise in automotive applications, in order to analyze the generalization ability of the proposed approach.
\begin{table}[hbt]
\renewcommand{\arraystretch}{0.9}
\caption{Camera-lidar transformation parameters in the simulator settings used for the experiments}
\label{tab:setups}
\centering
\begin{tabular}{| c | c c c | c c c |}
\hline
Setting & $t_x$ (m) & $t_y$ (m) & $t_z$ (m) & $\psi$ (rad) & $\theta$ (rad) & $\phi$ (rad)\\ \hline
1 & -0.8 & -0.1 & 0.4 & 0 & 0 & 0 \\
2 & 0 & 0 & 0 & 0.5 & 0 & 0 \\
3 & 0 & 0 & 0 & 0.3 & 0.1 & 0.2 \\
4 & -0.3 & 0.2 & -0.2& 0.3 & -0.1& 0.2\\
5 & 0 & 0 & 0 & 0 & 0.1 & 0 \\
6 & 0 & 0 & 0 & 0 & 0 & 0.4 \\
7 & 0 & 0 & 0 & 0 & 0 & 0 \\
8 & -0.128 & 0.418 & -0.314 & -0.103 & -0.299 & 0.110 \\
9 & -0.433 & 0.845 & 1.108 & -0.672 & 0.258 & 0.075 \\
\hline
\end{tabular}
\end{table}
The stereo camera and different lidar devices were modeled taking into account their real specifications in terms of field of view, resolution, and accuracy. As a consequence, calibration results achieved in the calibration scenarios are comparable to those obtained in real settings. A model of the calibration target was created mimicking the appearance of the actual wooden embodiment shown in Fig. \ref{fig:patron_img}, so that realism was preserved.
\subsection{Performance Evaluation}
Algorithm parameters were empirically chosen as: $\delta_{plane}$ = \SI{1}{cm}, $\delta_{inliers,l}$ = \SI{5}{cm}, $\delta_{inliers,c}$ = \SI{10}{cm}, $\delta_{discont,l}$ = \SI{50}{cm}, $\delta_{cluster}$ = \SI{2}{cm}, $\tau_{sobel,c}$ = 128 and $\alpha_{plane}$ = 0.55 rad.
Following \cite{Geiger2012c}, performance was evaluated as the difference between the estimated transform and the ground truth, measured in its linear and angular components:
\begin{align}
e_t &= \| \mathbf{t}-\mathbf{t}_g \|, \\
e_r &= \angle(\mathbf{R}^{-1}\mathbf{R}_g),
\end{align}
where $\mathbf{t}$ is the translation vector given by the $\hat{\boldsymbol{\xi}}_{CL}$ parameters and $\mathbf{R}$ is the rotation matrix built from the roll, pitch, and yaw angles; $\mathbf{t}_g$ and $\mathbf{R}_g$ denote their ground-truth counterparts.
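Both metrics can be computed directly from two homogeneous transforms, as in the sketch below (a minimal illustration assuming NumPy; the rotation error is the angle of the residual rotation, recovered from the trace of $\mathbf{R}^{-1}\mathbf{R}_g$):
\begin{verbatim}
import numpy as np

def calib_errors(T_est, T_gt):
    # T_est, T_gt: 4x4 homogeneous camera-lidar transforms.
    R, t = T_est[:3, :3], T_est[:3, 3]
    R_g, t_g = T_gt[:3, :3], T_gt[:3, 3]
    e_t = np.linalg.norm(t - t_g)
    # angle(R^-1 R_g) from the trace of the relative rotation
    cos_a = (np.trace(R.T @ R_g) - 1.0) / 2.0
    e_r = np.arccos(np.clip(cos_a, -1.0, 1.0))
    return e_t, e_r
\end{verbatim}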
The baseline setup was based on a Point Grey Bumblebee XB3 trinocular system (resolution 1280$\times$960, baseline \SI{12}{cm}) and a 16-layer Velodyne VLP-16 lidar scanner. Unless stated otherwise, three different results are given for each setting. Gaussian noise $\mathcal{N}(0, \sigma_0^2)$ was included in the measurements of the sensors, with $\sigma_{0}^c=\SI{0.007}{m}$ for the pixel intensities and $\sigma_{0}^l=\SI{0.008}{m}$ for the range measurements from the lidar.
To test the sensitivity of the algorithm to the length of the window $N$, the error as a function of the number of data shots provided by the sensors is plotted in Fig. \ref{fig:iter}. Both the linear and the angular errors converge quickly, showing the accuracy of the proposed method. From now on, results are given for the 30\textsuperscript{th} iteration, where the error becomes steady. Note that not every frame is actually used for the calibration, since the segmentation process is not guaranteed to provide an outcome.
\begin{figure}[htb]
\centering
\subfloat[Translation error, $e_t$]{\includegraphics[height=1.12in]{figures/trans.pdf}%
\label{fig:tr_iter}}
\hfil
\subfloat[Rotation error, $e_r$]{\includegraphics[height=1.12in]{figures/rot.pdf}%
\label{fig:or_iter}}
\caption{Evolution of the calibration errors with the number of accumulated frames. Henceforth, boxes represent the interquartile range, and outliers are depicted individually. }
\label{fig:iter}
\end{figure}
On the other hand, we tested the robustness of the method against different levels of noise in the input data. Modeling the noise with a normal distribution $\mathcal{N}(0, (K \sigma_0)^2)$, we provide the calibration error for three different values of the \textit{noise factor} $K$ in Fig. \ref{fig:rob}.
\begin{figure}[htb]
\centering
\subfloat[Translation error, $e_t$]{\includegraphics[height=1.12in]{figures/noise_t.pdf}%
\label{fig:rob_t}}
\hfil
\subfloat[Rotation error, $e_r$]{\includegraphics[height=1.12in]{figures/noise_r.pdf}%
\label{fig:rob_a}}
\caption{Robustness of the calibration method against Gaussian noise in the data. The red line in the boxes is the median value.}
\label{fig:rob}
\end{figure}
We also compared our method with those proposed by Velas et al. \cite{Velas2014}, using their ROS implementation, and Geiger et al. \cite{Geiger2012c}, through their public web toolbox\footnote{http://www.cvlibs.net/software/calibration/}. For fairness, it is important to note that both methods are aimed at monocular cameras and, in addition, the latter is able to provide the camera intrinsic parameters as well. Experiments for Geiger et al. were conducted on a representative set of settings (1, 2, 4, 8, and 9), and the best result was chosen. Meanwhile, the method by Velas et al. was applied over the whole set of settings. Nevertheless, it was unable to provide valid results for most of the settings, since it is designed for smaller magnitudes of the transformation parameters.
The required scenarios for the different calibration methods were properly recreated in the simulator, as shown in Fig. \ref{fig:worlds}.
\begin{figure}[htb]
\centering
\subfloat[]{\includegraphics[height=1in]{figures/patronworld.png}%
\label{fig:patronworld}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/geigerworld.png}%
\label{fig:geigerworld}}
\caption{Simulator scenes: (a) custom calibration target; (b) Geiger et al. setup.}
\label{fig:worlds}
\end{figure}
Since the works used for comparison were not targeted at mid-resolution lidar devices, the performance evaluation was conducted over three lidar devices with different numbers of scan planes: 16, 32, and 64. Results, considering only valid outcomes, are presented in Fig. \ref{fig:comp}.
\begin{figure}[htb]
\centering
\subfloat[16-layer, trans. error]{\includegraphics[height=1.1in]{figures/c1_l.pdf}%
\label{fig:comp_t1}}
\hfill
\subfloat[32-layer, trans. error]{\includegraphics[height=1.1in]{figures/c2_l.pdf}%
\label{fig:comp_t2}}
\hfill
\subfloat[64-layer, trans. error]{\includegraphics[height=1.1in]{figures/c3_l.pdf}%
\label{fig:comp_t3}}
\\
\subfloat[16-layer, rot. error]{\includegraphics[height=1.1in]{figures/c1_a.pdf}%
\label{fig:comp_a1}}
\hfill
\subfloat[32-layer, rot. error]{\includegraphics[height=1.1in]{figures/c2_a.pdf}%
\label{fig:comp_a2}}
\hfill
\subfloat[64-layer, rot. error]{\includegraphics[height=1.1in]{figures/c3_a.pdf}%
\label{fig:comp_a3}}
\caption{Comparison of calibration errors with other methods.}
\label{fig:comp}
\end{figure}
\subsection{Tests in Real Scenarios}
Tests in real scenarios were performed using the IVVI 2.0 platform setup. The results proved the usability of the algorithm for automotive applications requiring data registration, as shown in Fig. \ref{fig:realtest}.
\begin{figure}[htb]
\centering
\subfloat[]{\includegraphics[height=1in]{figures/cloud_projection2.png}%
\label{fig:real_cloud}}
\hfil
\subfloat[]{\includegraphics[height=1in]{figures/manolo.png}%
\label{fig:real_img}}
\caption{Results in real scenes: (a) cloud registration; (b) range projections.}
\label{fig:realtest}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We have presented a methodology for the calibration of lidar-stereo camera systems aimed at removing nearly all the burden associated with the process. Contrary to previous approaches, we focus on obtaining accurate results in real setups using close-to-production sensor devices, without user intervention.
We have also addressed the open issue of assessing calibration performance, caused by the difficulty of obtaining precise ground-truth measurements of the sensor setups in practice. Advanced simulation software has been used to accurately recreate the sensor models and the calibration scenes, with the inherent advantage of having exact measurements available for the validation of the methods.
Experiments show that the proposed algorithm outperforms the existing approaches by a large margin, while real tests have corroborated the results provided by the simulation software.
For the sake of research reproducibility, we have released our open-source implementation of the test suite, as well as the calibration algorithm, both integrated into the ROS framework.
\section*{Acknowledgement}
Research supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R), and the Comunidad de Madrid through SEGVAUTO-TRIES (S2013/MIT-2713).
\bibliographystyle{IEEEtran}
\section{Heavy hadron lifetimes}
Lifetimes are fundamental properties of particles, which connect deeply with their dynamics. Improved lifetime determinations of heavy hadrons probe the interplay of the strong and weak interactions between constituent partons, stimulating further refinement of the phenomenological understanding. Most importantly, measurements of heavy hadron lifetimes enhance the reach of indirect searches for non-standard-model physics. Comparisons of similarly precise measurements and predictions of observables associated with quark-flavor dynamics probe the existence of non-standard-model particles of masses much larger than those directly accessible at particle colliders. The precision of the predictions is often limited by difficulties in calculating strong-interaction transition amplitudes at low energies. Predictability is often recovered by resorting to effective models such as the heavy-quark expansion~\cite{Lenz:2014jha}. Heavy-hadron lifetimes offer precious and constraining validation and tuning of such models. \par Precise {\ensuremath{\B^0_\squark}}\xspace\ lifetime measurements are particularly needed. In fact, the {\ensuremath{\B^0_\squark}}\xspace\ lifetime precision has a significant impact on the lifetime ratio between {\ensuremath{\B^0_\squark}}\xspace and {\ensuremath{\B^0}}\xspace mesons, which shows a 2.5 standard-deviation discrepancy from predictions that calls for further investigation. Especially relevant are measurements of the ``flavor-specific'' {\ensuremath{\B^0_\squark}}\xspace meson lifetime,
\begin{equation}
\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace\equiv\frac{1}{\Gamma_s} \left[ \frac{1+(\Delta\Gamma_s/2\Gamma_s)^2}{1-(\Delta\Gamma_s/2\Gamma_s)^2} \right],
\end{equation}
where $\Gamma_s = (\Gamma_{s,H} + \Gamma_{s,L})/2$ and $\Delta\Gamma_s = \Gamma_{s,L} - \Gamma_{s,H}$ are the average and the difference, respectively, of the natural widths $\Gamma_{s,H(L)}$ of the heavy (light) mass eigenstate. This empirical quantity allows an indirect determination of $\Delta\Gamma_s$ that, compared with direct determinations, may test the presence of non-standard-model physics~\cite{HFAG}. The lifetime $\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace$ is measured with a single-exponential fit to the distribution of decay time to a final state not accessible by both {\ensuremath{\B^0_\squark}}\xspace and {\ensuremath{\Bbar{}^0_\squark}}\xspace mesons~\cite{Hartkorn:1999ga}. The current best determination, $\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace = 1.535 \pm 0.015 ({\rm stat}) \pm 0.014({\rm syst}) $\,ps~\cite{LHCb-PAPER-2014-037}, obtained by the LHCb collaboration using hadronic ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^-_\squark}}\xspace \pi^+$ decays, has similarly sized statistical and systematic uncertainties. Throughout this document, the symbol $X$ identifies any decay product, other than neutrinos, not included in the candidate reconstruction, and the inclusion of charge-conjugate processes is implied. \par Semileptonic {\ensuremath{\B^0_\squark}}\xspace decays, owing to larger signal yields than from hadronic decays, offer richer potential for precise $\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace$ measurements. However, neutrinos and other low-momentum neutral final-state particles prevent the full reconstruction of such decays. This introduces serious limitations due to a degraded understanding of background contributions and difficulties in obtaining the decay time from the observed decay-length distribution. Measurements of bottom-meson lifetimes using semileptonic decays, which had been popular at LEP, the $B$-factories, and Tevatron Run I from the 1990s through approximately 2004--2006, became rarer afterwards, when large samples of fully reconstructed $B \ensuremath{\rightarrow}\xspace J/\psi X$ decays became available. Controlling systematic uncertainties proved challenging~\cite{daveclark,satoru}, and analyses rarely achieved competitive results, which were in any case limited by the size of the systematic uncertainty~\cite{Bc,Abazov:2014rua}.
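For completeness, the indirect determination mentioned above follows from simple algebra: writing $x \equiv \Delta\Gamma_s/(2\Gamma_s)$, the definition of \ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace inverts to
\begin{equation}
\Gamma_s\,\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace = \frac{1+x^2}{1-x^2}
\quad\Longrightarrow\quad
x^2 = \frac{\Gamma_s\,\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace - 1}{\Gamma_s\,\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace + 1},
\end{equation}
so that a measurement of \ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace, combined with an independent determination of $\Gamma_s$, yields $|\Delta\Gamma_s|$.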
\par The LHCb Collaboration has recently proposed a novel, data-driven approach that suppresses such limitations, thus achieving a world-class measurement of \ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace with a small systematic uncertainty~\cite{paper}. The analysis also yields a strongly improved determination of the {\ensuremath{\D^-_\squark}}\xspace lifetime over the current best result, $\tau_{{\ensuremath{\D^-_\squark}}\xspace}= 0.5074 \pm 0.0055\ensuremath{\mathrm{\,(stat)}}\xspace \pm 0.0051\ensuremath{\mathrm{\,(syst)}}\xspace$ ps, reported more than a decade ago by the FOCUS collaboration~\cite{Link:2005ew}. This analysis approach is not necessarily restricted to LHCb, nor to lifetime determinations alone.
\section{Overview}
The {\ensuremath{\B^0_\squark}}\xspace and {\ensuremath{\D^-_\squark}}\xspace lifetimes are determined from the variation of the {\ensuremath{\B^0_\squark}}\xspace signal yield as a function of decay time, relative to that of {\ensuremath{\B^0}}\xspace decays reconstructed in the same final state. The use of kinematically similar {\ensuremath{\B^0}}\xspace decays of precisely known lifetime as a reference suppresses the uncertainties from partial reconstruction and lifetime-biasing selection criteria. \par
We analyze proton-proton collisions at center-of-mass energies of 7 and 8 TeV collected by the LHCb experiment in 2011 and 2012 and corresponding to an integrated luminosity of 3.0\ensuremath{\mbox{\,fb}^{-1}}\xspace. We reconstruct approximately 407\,000 ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{*-}_\squark}}\xspace \mu^+\nu_\mu$ and ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^-_\squark}}\xspace \mu^+\nu_\mu$ ``signal'' decays, and approximately 108\,000 ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{*-}}}\xspace \mu^+\nu_\mu$ and ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^-}}\xspace \mu^+\nu_\mu$ ``reference'' decays. The $D$ candidates are reconstructed as combinations of $K^+$, $K^-$, and $\pi^-$ candidates originating from a common space-point (vertex), displaced from any proton-proton interaction vertex. The $B^0_{(s)}$ candidates, namely $K^+ K^- \pi^- \mu^+$ combinations, are formed by $D$ candidates associated with muon candidates originating from another common displaced vertex. We collectively refer to the signal and reference decays as ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace \mu^+\nu_\mu$ and ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace \mu^+\nu_\mu$, respectively. A fit to the ratio of event yields between the signal and reference decays as a function of $B^0_{({\ensuremath{\Ps}}\xspace)}$ decay time, $t$, determines $\Delta_\Gamma(B) \equiv 1/\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace - \Gamma_d$, where $\Gamma_d$ is the known natural width of the {\ensuremath{\B^0}}\xspace meson. A similar fit, performed as a function of the $D^-_{(s)}$ decay time, determines the decay-width difference between {\ensuremath{\D^-_\squark}}\xspace and {\ensuremath{\D^-}}\xspace mesons, $\Delta_\Gamma(D)$. Event yields are determined by fitting the candidates' ``corrected-mass'' distribution, $m_{\rm corr} = p_{\perp, D\mu} + \sqrt{m^2_{D\mu}+p^2_{\perp, D\mu}}$~\cite{Kodama:1991ij}. The corrected mass is determined from the invariant mass of the $D_{(s)}^-\mu^+$ pair, $m_{D\mu}$, and the component of its momentum perpendicular to the $B^0_{(s)}$ flight direction, $p_{\perp, D\mu}$, to compensate for the average transverse momentum of unreconstructed decay products. The flight direction is the directed line-segment connecting the $B^0_{(s)}$ production and decay vertices; the decay time $t = m_B L k/ p_{D\mu}$ is calculated from the known $B^0_{(s)}$ mass, $m_B$~\cite{PDG2016}, the observed $B^0_{(s)}$ decay length, $L$, and the $D^-_{(s)}\mu^+$-pair momentum, $p_{D\mu}$. The scale factor $k$ corrects $p_{D\mu}$ for the average momentum fraction carried by decay products excluded from the reconstruction~\cite{Abulencia:2006ze, Leonardo:2006fq}. Decay-time acceptances and resolutions, determined from simulation, are included in the fits.
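Both observables are simple functions of reconstructed quantities. The sketch below (assuming NumPy, with momenta in GeV/$c$, masses in GeV/$c^2$, decay lengths in mm, and $c \approx 0.2998$ mm/ps; the variable names are ours) illustrates the two definitions:
\begin{verbatim}
import numpy as np

C = 0.2998  # speed of light in mm/ps

def corrected_mass(m_dmu, p_perp):
    # m_dmu: D-mu invariant mass; p_perp: momentum component
    # perpendicular to the B flight direction.
    return p_perp + np.sqrt(m_dmu**2 + p_perp**2)

def decay_time(m_b, L, p_dmu, k):
    # t = m_B L k / p_Dmu; L in mm, t in ps.
    # k = <p_obs/p_true> compensates for unreconstructed
    # decay products.
    return m_b * L * k / (p_dmu * C)
\end{verbatim}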
\section{LHCb detector and simulation}
The LHCb detector~\cite{Alves:2008zz,Aaij:2014jba} is a single-arm forward spectrometer covering the pseudorapidity range $2 < \eta < 5$, designed for the study of particles containing bottom or charm quarks. The detector allows tracking using a silicon-strip vertex detector surrounding the interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4\,Tm, and three stations of silicon-strip detectors and straw drift tubes installed downstream of the magnet. The fractional resolution on the momentum $p$ of charged particles is 0.5\%--1.0\%. The minimum distance of a charged-particle trajectory (track) to a primary vertex, the impact parameter, is measured with $(15 + 29/p_T)$ $\ensuremath{{\,\upmu\mathrm{m}}}\xspace$ resolution, where $p_T$ is the $p$ component transverse to the beam, in GeV/$c$. Charged-hadron species are distinguished using two ring-imaging Cherenkov detectors. Photons, electrons, and hadrons are identified by a sampling calorimeter consisting of scintillating-pad electromagnetic and hadronic portions and preshower detectors. Muons are identified using alternating layers of iron and multiwire proportional chambers. The online event selection is performed by a hardware trigger, based on information from the calorimeter and muon systems, followed by a software trigger, which applies a full event reconstruction. Simulation of collisions is provided by a specially configured {\sc Pythia} software package. Hadron decays are described by {\sc EvtGen}, including final-state radiation simulated using {\sc Photos}. The interaction of particles with the detector and its response are simulated using the {\sc geant4} toolkit~\cite{LHCb-PROC-2011-006, LHCb-PROC-2010-056}. Simulation is used to identify all relevant sources of bottom-hadron decays, model the mass distributions, and correct for the effects of incomplete kinematic reconstructions, relative decay-time acceptances, and decay-time resolutions. The unknown details of the {\ensuremath{\B^0_\squark}}\xspace decay dynamics are modeled in the simulation through empirical form-factor parameters~\cite{Caprini:1997mu}, assuming values inspired by the known $B^0$ form factors~\cite{HFAG}. The impact of these assumptions is accounted for in the systematic uncertainties.
\section{Sample selection}
The trigger requires a muon candidate, with $\mbox{$p_{\mathrm{ T}}$}\xspace > 1.5$--$1.8$\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace, associated with 1--3 charged particles, all originating in a vertex displaced from the proton-proton vertex~\cite{Aaij:2012me} and pointing to the displaced vertex where the muon candidate originates. \par Offline, the muon is combined with charged particles consistent with the topology and kinematics of signal ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace \mu^+\nu_\mu$ and reference ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace \mu^+\nu_\mu$ decays. The accepted $K^+K^-\pi^-$ mass range is restricted around the known {\ensuremath{\D^{\scalebox{0.4}{}\scalebox{0.4}{}-}_{(\squark)}}}\xspace meson masses to suppress signal-reference cross-contamination to less than 0.1\%, as estimated from simulation. We also reconstruct ``same-sign'' $K^+K^-\pi^-\mu^-$ candidates, formed by charm and muon candidates with same-sign charge, to model the combinatorial background from accidental $D^{-}_{(s)} \mu^+$ associations. The event selection is designed to suppress the background under the charm signals and to make same-sign candidates a reliable model for the combinatorial background: track- and vertex-quality, vertex-displacement, \mbox{$p_{\mathrm{ T}}$}\xspace, and particle-identification criteria are chosen so as to minimize shape and yield differences between same-sign and signal candidates in the $m_{D\mu} > 5.5\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ region, where genuine bottom-hadron decays are kinematically excluded and combinatorial background dominates. Mass vetoes suppress background from misreconstructed decays such as ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace \psi^{(')}(\ensuremath{\rightarrow}\xspace \mu^+\mu^-)\phi (\ensuremath{\rightarrow}\xspace K^+K^-)$ decays where a muon is misidentified as a pion, ${\ensuremath{\Lz^0_\bquark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\Lz^+_\cquark}}\xspace (\ensuremath{\rightarrow}\xspace pK^-\pi^+) \mu^- \bar{\nu}_\mu X$ decays where the proton is misidentified as a kaon or a pion, and $B^0_{(s)} \ensuremath{\rightarrow}\xspace D^-_{(s)}\pi^+$ decays where the pion is misidentified as a muon. Significant contributions arise from decays of a bottom hadron into pairs of charm hadrons, one peaking at the $D^-_{(s)}$ mass and the other decaying semileptonically, or into single charm hadrons and other particles. Such decays include ${\ensuremath{\PB}}\xspace^0_{({\ensuremath{\Ps}}\xspace)} \ensuremath{\rightarrow}\xspace
{\cal D}^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{({\ensuremath{\Ps}}\xspace)}{\ensuremath{\D^{\scalebox{0.4}{}\scalebox{0.4}{}+}_{(\squark)}}}\xspace$, ${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{}^{\scalebox{0.4}{(}*\scalebox{0.4}{)}0} D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}+}$, ${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^-}}\xspace \mu^+ \nu_\mu X$, ${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace K^+ \mu^+ \nu_\mu X$, ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace {\ensuremath{\kaon^0}}\xspace \mu^+ \nu_\mu X$, ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^0}}\xspace{\ensuremath{\D^-_\squark}}\xspace K^+$, ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^-}}\xspace {\ensuremath{\D^+_\squark}}\xspace {\ensuremath{\kaon^0}}\xspace$, ${\ensuremath{\Lz^0_\bquark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\Lz^+_\cquark}}\xspace {\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace X$, and ${\ensuremath{\Lz^0_\bquark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^+_\squark}}\xspace {\ensuremath{\PLambda}}\xspace \mu^- \bar{\nu}_\mu X$ decays. We suppress these backgrounds with an upper threshold, linearly dependent on $m_{\rm corr}$, applied to the ${\cal D}^-_{(s)}$ momentum component perpendicular to the ${\ensuremath{\PB}}\xspace^0_{({\ensuremath{\Ps}}\xspace)}$ flight direction, shown in Fig.~\ref{fig:threshold}. Finally, a $t>0.1$\,ps requirement on the ${\cal D}^-_{({\ensuremath{\Ps}}\xspace)}$ proper decay time renders the signal- and reference-decay acceptances as functions of decay time more similar, with little signal loss.
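The threshold is a simple geometric selection; the sketch below (assuming NumPy, with placeholder coefficients a and b for the linear boundary, whose actual values are tuned in the analysis) illustrates it:
\begin{verbatim}
import numpy as np

def passes_pperp_cut(p_D, flight_dir, m_corr, a, b):
    # p_D: charm-meson momentum vector; flight_dir: vector from
    # the B production vertex to its decay vertex.
    u = flight_dir / np.linalg.norm(flight_dir)
    p_perp = np.linalg.norm(p_D - np.dot(p_D, u) * u)
    # keep the candidate below the m_corr-dependent boundary
    return p_perp < a + b * m_corr
\end{verbatim}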
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig0}\\
\caption{Two-dimensional distribution of the ${\cal D}^-_{(s)}$ momentum-component perpendicular to the ${\ensuremath{\PB}}\xspace^0_{({\ensuremath{\Ps}}\xspace)}$ flight direction as a function of $m_{\rm corr}$ for three classes of simulated events. The linear boundary used in the analysis is represented by the dashed line.\label{fig:threshold}}
\end{figure}
\section{Data analysis}
Approximately 468\,000 (141\,000) signal (reference) candidates, formed by combining $\mu^+$ candidates with the $K^+K^-\pi^-$ candidates consistent with {\ensuremath{\D^-_\squark}}\xspace ({\ensuremath{\D^-}}\xspace) decays, fulfill the selection. Figure~\ref{fig:visibleMass} shows the $D\mu$ mass distributions, with the corresponding $K^+K^-\pi^-$ mass distributions in the inset. \par In the $D\mu$ distribution, the enhancements of the signal and reference distributions over the corresponding same-sign distributions for $m_{{\cal D}\mu}< 5.5 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ are predominantly due to bottom-hadron decays. The gap of candidates at $m_{{\cal D}\mu}\approx 5.3 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ reflects the $B^0_{(s)} \ensuremath{\rightarrow}\xspace D^-_{(s)}\pi^+$ veto. The two peaks in the $K^+K^-\pi^-$ distributions of same-sign candidates are due to genuine charm decays accidentally combined with muon candidates. Along with ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace \mu^+\nu_\mu$ decays, many {\ensuremath{\B^0_\squark}}\xspace decays potentially useful for the lifetime measurement contribute signal candidates, including decays into $D_{(s)}^{**}(\ensuremath{\rightarrow}\xspace {\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace X) \mu^+ \nu_{\mu}$, ${\ensuremath{\D^-_\squark}}\xspace \tau^+ (\ensuremath{\rightarrow}\xspace \mu^+ \nu_{\mu} \bar{\nu}_\tau) \nu_\tau$, ${\ensuremath{\D^{*-}_\squark}}\xspace (\ensuremath{\rightarrow}\xspace {\ensuremath{\D^-_\squark}}\xspace X) \tau^+ (\ensuremath{\rightarrow}\xspace \mu^+ \nu_\mu \bar{\nu}_{\tau}) \nu_{\tau}$, and $D^{**}_{s} (\ensuremath{\rightarrow}\xspace {\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace X) \tau^+ (\ensuremath{\rightarrow}\xspace \mu^+\nu_\mu \bar{\nu}_{\tau}) \nu_\tau$ final states.\footnote{Throughout, the symbol $D^{**}_{(s)}$ collectively identifies higher orbital excitations of $D^{-}_{(s)}$ mesons.} Similarly, along with the ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace \mu^+\nu_\mu$ decays, potential reference candidates are contributed by {\ensuremath{\B^0}}\xspace decays into $D^{**}(\ensuremath{\rightarrow}\xspace D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-} X ) \mu^+ \nu_\mu$, $D^- \tau^+ (\ensuremath{\rightarrow}\xspace \mu^+ \nu_\mu \bar{\nu}_\tau) \nu_\tau$, $D^{*-} (\ensuremath{\rightarrow}\xspace {\ensuremath{\D^-}}\xspace X) \tau^+ (\ensuremath{\rightarrow}\xspace \mu^+ \nu_\mu \bar{\nu}_\tau) \nu_\tau$, and $D^{**}(\ensuremath{\rightarrow}\xspace D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-} X )\tau^+ (\ensuremath{\rightarrow}\xspace \mu^+ \nu_\mu \bar{\nu}_\tau) \nu_\tau$ final states.
However, to simplify the analysis we restrict the signal (reference) decays solely to the ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace \mu^+\nu_\mu$ (${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace \mu^+\nu_\mu$) channels, since they already contribute 95\% (91\%) of the inclusive $K^+K^-\pi^-\mu^+$ yield from semileptonic {\ensuremath{\B^0_\squark}}\xspace ({\ensuremath{\B^0}}\xspace) decays and require smaller and better-known $k$-factor corrections to relate the observed decay times to their true values.
\begin{figure}
\centering
\begin{overpic}[width=0.48\textwidth]{Fig1b}
\put(41,69){\includegraphics[width=0.17\textwidth]{Fig1a}}
\end{overpic}\\
\caption{\label{fig:visibleMass} Distributions of $D\mu$ mass for (top panel) reference candidates, formed by combining ${\ensuremath{\D^-}}\xspace \ensuremath{\rightarrow}\xspace K^+K^-\pi^-$ candidates with $\mu^+$ candidates, and (bottom panel) signal candidates formed by ${\ensuremath{\D^-_\squark}}\xspace \ensuremath{\rightarrow}\xspace K^+K^-\pi^-$ candidates combined with $\mu^+$ candidates. The inset shows the $K^+K^-\pi^-$-mass distribution with vertical lines enclosing the {\ensuremath{\D^-}}\xspace ({\ensuremath{\D^-_\squark}}\xspace) candidates used to form the reference (signal) candidates. The dark-filled histograms show same-sign candidate distributions.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Fig2}\\
\caption{Corrected-mass distributions for (top panel) reference ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+ K^- \pi^-]_{{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace} \mu^+ \nu_\mu$
and (bottom panel) signal ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace} \mu^+ \nu_\mu$
candidates satisfying the selection. Results of the global composition-fit are overlaid. In the {\ensuremath{\B^0_\squark}}\xspace fit projection, the lower- and higher-mass background components described in the text are displayed as a single, merged ``physics background" component. \label{fig:B_M_AfterSelection}}
\end{figure}
\par A reliable understanding of the sample composition is essential for correct lifetime results. An unbiased determination, from simulation, of the acceptances and mass distributions as functions of decay time requires that the composition of the simulated sample mirrors the data composition. We therefore weight the composition of the simulated samples according to the results of a global, least-squares composition fit to the $m_{\rm corr}$ distributions in data, shown in Fig.~\ref{fig:B_M_AfterSelection}. In the {\ensuremath{\B^0_\squark}}\xspace sample, this fit includes the two signal components, ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^-_\squark}}\xspace \mu^+ \nu_\mu$ and ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{*-}_\squark}}\xspace \mu^+ \nu_\mu$; a combinatorial component; and two physics backgrounds. Each physics-background component is formed by grouping together processes yielding sufficiently similar corrected-mass distributions, resulting in a contribution at lower values of corrected mass (${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}+}_s$, ${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{}^{\scalebox{0.4}{(}*\scalebox{0.4}{)}0}D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}+}_s$, and $D^{**}(\ensuremath{\rightarrow}\xspace {\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace X) \mu^+ \nu_\mu$) and another at higher corrected-mass values (${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace K^+ \mu^+ \nu_\mu X$, ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace {\ensuremath{\kaon^0}}\xspace \mu^+ \nu_\mu X$, and ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^-_\squark}}\xspace \tau^+ (\ensuremath{\rightarrow}\xspace \mu^+ \nu_\mu \bar{\nu}_\tau) \nu_\tau X$). All distributions are modeled empirically from simulation, except for the combinatorial distribution, which is modeled using same-sign data. Contributions expected to be smaller than 0.5\% are neglected. The impact of this approximation, and of possible variations of the relative proportions within each fit category, is accounted for in the systematic uncertainties. The fit has a 62.1\% $p$-value and determines the fractions of each component with 0.13\%--0.91\% absolute statistical uncertainty. \par A simpler composition fit is used for the {\ensuremath{\B^0}}\xspace sample. Signal and combinatorial components mirror those of the {\ensuremath{\B^0_\squark}}\xspace case; the contributions from ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace D^{**-}(\ensuremath{\rightarrow}\xspace D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-} X )\mu^+ \nu_\mu$ and ${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^-}}\xspace \mu^+ \nu_\mu X$ decays have sufficiently similar distributions to be merged into a single physics-background component.
The results of the corrected-mass composition fit of the reference sample, and of a sample of 2.1 million ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+\pi^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace \mu^+\nu_\mu$ decays where the {\ensuremath{\D^-}}\xspace meson is reconstructed in the $K^+\pi^-\pi^-$ final state, offer a stringent validation. Discrepancies in the individual fractional contributions with respect to precise results from other experiments do not exceed 1.3 statistical standard deviations.\par
The composition fit is sufficient for the determination of $\Delta_\Gamma(D)$, where no $k$-factor corrections are needed since the final state is fully reconstructed. We determine $\Delta_\Gamma(D)$ through a least-squares fit of the ratio of signal {\ensuremath{\B^0_\squark}}\xspace and reference {\ensuremath{\B^0}}\xspace yields as a function of the charm-meson decay time in the range 0.1--4.0\,ps. The yields of signal ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace \mu^+\nu_\mu$ and reference ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace \mu^+\nu_\mu$ decays are determined in each of 20 decay-time bins with an $m_{\rm corr}$ fit similar to the global composition fit. The two signal and the two physics-background contributions are each merged into a single component according to the proportions determined by the global fit and their decay-time evolution expected from simulation. The fit includes the decay-time resolution and the ratio between signal and reference decay-time acceptances, which are determined to be uniform within 1\% from simulation. The fit is shown in the top panel of Fig.~\ref{fig:timefit_Bdratio}; it has a 34\% $p$-value and determines $\Delta_\Gamma(D) = 1.0131 \pm 0.0117$\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace.\par
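Schematically, if signal and reference share the same acceptance, the yield ratio follows $R(t) \propto e^{-\Delta_\Gamma t}$, and $\Delta_\Gamma$ is the slope of its logarithm. The sketch below (assuming NumPy and SciPy; the normalization $r_0$ is a nuisance parameter and the names are ours) fits this single-exponential model to binned ratios:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(t, r0, dgamma):
    # signal-to-reference yield ratio vs decay time (ps)
    return r0 * np.exp(-dgamma * t)

def fit_width_difference(t_centers, ratio, ratio_err):
    # returns (dgamma, uncertainty) in 1/ps
    popt, pcov = curve_fit(ratio_model, t_centers, ratio,
                           sigma=ratio_err, absolute_sigma=True,
                           p0=(ratio[0], 0.0))
    return popt[1], np.sqrt(pcov[1, 1])
\end{verbatim}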
The measurement of $\Delta_\Gamma(B)$ requires, in addition, an acceptance correction for the differences between signal and reference decays, and the $k$-factor correction. The acceptance correction accounts for the difference in decay-time-dependent efficiency due to the combined effect of the difference between {\ensuremath{\D^-}}\xspace and {\ensuremath{\D^-_\squark}}\xspace lifetimes and the online requirements on the spatial separation between $D^-_{(s)}$ and $B^0_{(s)}$ decay vertices: we apply to the {\ensuremath{\B^0_\squark}}\xspace sample a per-candidate weight, $w_i \equiv \exp[\Delta_\Gamma(D) t({\ensuremath{\D^-_\squark}}\xspace)]$, based on the $\Delta_\Gamma(D)$ result and the {\ensuremath{\D^-_\squark}}\xspace decay time, such that the {\ensuremath{\D^-_\squark}}\xspace and {\ensuremath{\D^-}}\xspace decay-time distributions become consistent. Figure~\ref{fig:acceptance} shows the effect of the weighting on the acceptance.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig5}\\
\caption{Ratio between signal and reference decay acceptance as a function of decay time (open dots) prior to and (full dots) after the acceptance correction, with the result of a fit of the latter overlaid.\label{fig:acceptance}}
\end{figure}
The $k$-factor is the average ratio of the observed momentum to the true momentum, determined in a simulated sample. The dependence of the $k$-factor on the kinematic properties of each candidate is included through a dependence on $m_{D\mu}$, $k(m_{D\mu}) = \left\langle p_{D\mu}/p_{\rm true}\right\rangle$, where $p_{\rm true}$ indicates the true momentum of the $B^0_{(s)}$ meson (Fig.~\ref{fig:kfactor}).
Our candidate-specific correction consists of dividing the candidate's momentum reconstructed in data by the $k$-factor.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig6a}\\
\caption{Distribution of $k$-factor as a function of $m_{D\mu}$ in simulated signal data with an empirical fit of its $m_{D\mu}$-averaged value overlaid. \label{fig:kfactor}}
\end{figure}
Equalized compositions of simulated and experimental data samples ensure that the $k$-factor distribution specific to each of the four signal and reference decays is unbiased. \par We determine $\Delta_\Gamma(B)$ with the same fit of $m_{\rm corr}$ used to measure $\Delta_\Gamma(D)$ except that here the ratios of signal and reference yields are determined as functions of the $B^0_{(s)}$ decay time. The decay-time smearing due to the $k$-factor spread is included in the fit. After the {\ensuremath{\D^-_\squark}}\xspace lifetime weighting, the decay-time acceptances of simulated signal and reference modes are consistent, with a $p$-value of $83\%$, and are not included in the fit. The fit is shown in the middle panel of Fig.~\ref{fig:timefit_Bdratio}; the resulting width difference is $\Delta_\Gamma(B) = -0.0115 \pm 0.0053$\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace, with 91\% $p$-value.
We validate the analysis with a null test to check against biases due to differences in acceptances and kinematic properties. We repeat the width-difference determination by using the same reference ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace \mu^+\nu_\mu$ sample and replacing the signal decays with ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+\pi^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace \mu^+\nu_\mu$ decays, where the {\ensuremath{\D^-}}\xspace meson is reconstructed in the $K^+\pi^-\pi^-$ final state. Differing momentum and vertex-displacement selection criteria induce up to 10\% acceptance differences as a function of ${\ensuremath{\D^-}}\xspace$ decay time and up to 25\% variations as a function of {\ensuremath{\B^0}}\xspace decay time. Acceptance ratios are therefore included in the fit (Fig.~\ref{fig:timefit_Bdratio}, bottom panel). The $p$-values are 21\% for the {\ensuremath{\B^0}}\xspace fit and 33\% for the {\ensuremath{\D^-}}\xspace fit. The resulting width differences, $\Delta_\Gamma(D) = (-19 \pm 10)\times 10^{-3}$\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace and $\Delta_\Gamma(B) = (-4.1 \pm 5.4)\times 10^{-3}$\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace, are consistent with zero, hence supporting the overall validity of the approach.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig3}\\
\caption{Ratio between acceptance-corrected yields of signal ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}_{\squark}}}\xspace} \mu^+\nu_\mu$
and reference ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+K^-\pi^-]_{{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace} \mu^+\nu_\mu$ decay yields as a function of (top panel) charm-meson and (middle panel) bottom-meson decay time. The bottom panel shows the ratio between acceptance-corrected {\ensuremath{\B^0}}\xspace decay yields in the $[K^+\pi^-\pi^-]_{{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace} \mu^+\nu_\mu$ and $[K^+K^-\pi^-]_{{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace} \mu^+\nu_\mu$ channels as a function of {\ensuremath{\B^0}}\xspace decay time. Fit results are overlaid. Only the slope of the ratios as a function of decay time is relevant for the results; the absolute ratios, which depend on the decay yields, weighting, and efficiencies, are not. \label{fig:timefit_Bdratio}}
\end{figure}
We assess independent systematic uncertainties due to (i) potential fit biases; (ii) assumptions on the components contributing to the sample and their mass distributions; (iii) assumptions on the signal decay model, e.g., choice of ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{*-}_\squark}}\xspace$ form factors; (iv) uncertainties on the decay-time acceptances; (v) uncertainties on the decay-time resolution; (vi) contamination from {\ensuremath{\B^0_\squark}}\xspace candidates produced in $B_c^+$ decays; and (vii) mismodeling of the expected \mbox{$p_{\mathrm{ T}}$}\xspace differences between {\ensuremath{\B^0}}\xspace and {\ensuremath{\B^0_\squark}}\xspace mesons. We evaluate each contribution by including the relevant effect in the model and repeating the whole analysis on ensembles of simulated experiments that mirror the data. For the $\Delta_\Gamma(D)$ result, the systematic uncertainty is dominated by a 0.0049\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace contribution due to the decay-time acceptance, and a 0.0039\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace contribution due to the decay-time resolution. A smaller contribution of 0.0018\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace arises from possible mismodeling of \mbox{$p_{\mathrm{ T}}$}\xspace differences in {\ensuremath{\B^0}}\xspace and {\ensuremath{\B^0_\squark}}\xspace production. For the $\Delta_\Gamma(B)$ result, a 0.0028\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace uncertainty from mismodeling of \mbox{$p_{\mathrm{ T}}$}\xspace differences between {\ensuremath{\B^0}}\xspace and {\ensuremath{\B^0_\squark}}\xspace mesons and a 0.0025\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace contribution from the {\ensuremath{\B^0_\squark}}\xspace decay model dominate. Smaller contributions arise from {\ensuremath{\B_\cquark^+}}\xspace feed-down (0.0010\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace), residual fit biases (0.0009\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace), sample composition (0.0005\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace), and decay-time acceptance and resolution (0.0004\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace each). The uncertainties associated with the limited size of simulated samples are included in the fit $\chi^2$ and contribute up to 20\% of the statistical uncertainties. The uncertainty in the decay-length measurement has negligible impact. Consistency checks based on repeating the measurement independently on subsamples chosen according to data-taking time, online-selection criteria, charged-particle and vertex multiplicities, momentum of the $K^+K^-\pi^-\mu^+$ system, and whether only the ${\ensuremath{\D^-_\squark}}\xspace \mu^+ \nu_\mu$ or the ${\ensuremath{\D^{*-}_\squark}}\xspace \mu^+ \nu_\mu$ channel is considered as signal, all yield results compatible with statistical fluctuations.
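Generically, each contribution follows the same toy-ensemble pattern sketched below (assuming NumPy; generate_toy and run_analysis stand in for the actual generation and fit code, and the attribute name is hypothetical):
\begin{verbatim}
import numpy as np

def systematic_shift(generate_toy, run_analysis, n_toys=500):
    # Mean bias of the fitted width difference over toys that
    # include the effect under study.
    shifts = []
    for seed in range(n_toys):
        toy = generate_toy(seed)        # sample with effect
        fitted = run_analysis(toy)      # full fit chain
        shifts.append(fitted - toy.dgamma_true)
    return np.mean(shifts), np.std(shifts) / np.sqrt(n_toys)
\end{verbatim}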
\section{Summary of results and discussion}
We report world-leading measurements of {\ensuremath{\B^0_\squark}}\xspace and {\ensuremath{\D^-_\squark}}\xspace meson lifetimes using a novel method. We reconstruct ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{*-}_\squark}}\xspace \mu^+\nu_\mu$ and ${\ensuremath{\B^0_\squark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^-_\squark}}\xspace \mu^+\nu_\mu$ decays in proton-proton collisions collected by the LHCb experiment and corresponding to 3.0\ensuremath{\mbox{\,fb}^{-1}}\xspace of integrated luminosity. We use ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{*-}}}\xspace \mu^+\nu_\mu$ and ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^-}}\xspace \mu^+\nu_\mu$ decays reconstructed in the same final state as a reference to suppress systematic uncertainties. The resulting width differences are $\Delta_\Gamma(B) = \ensuremath{\rm -0.0115} \pm \ensuremath{0.0053}\ensuremath{\mathrm{\,(stat)}}\xspace \pm \ensuremath{0.0041}\ensuremath{\mathrm{\,(syst)}}\xspace$\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace and $\Delta_\Gamma(D) = \ensuremath{1.0131} \pm \ensuremath{0.0117} \ensuremath{\mathrm{\,(stat)}}\xspace \pm \ensuremath{0.0065} \ensuremath{\mathrm{\,(syst)}}\xspace$\ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace. They are uncorrelated. Using the known values of the {\ensuremath{\B^0}}\xspace~\cite{PDG2016, Aaij:2014owa} and {\ensuremath{\D^-}}\xspace lifetimes~\cite{PDG2016,Link:2002bx}, we determine the flavor-specific {\ensuremath{\B^0_\squark}}\xspace lifetime, $\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace = \ensuremath{\rm 1.547} \pm \ensuremath{0.013}\ensuremath{\mathrm{\,(stat)}}\xspace \pm \ensuremath{0.010}\ensuremath{\mathrm{\,(syst)}}\xspace \pm \ensuremath{0.004} \ensuremath{\mathrm{\,(}\tau_{{\ensuremath{\B^0}}\xspace}\mathrm{)}}$\,ps, and the {\ensuremath{\D^-_\squark}}\xspace lifetime, $\tau_{{\ensuremath{\D^-_\squark}}\xspace} = \ensuremath{0.5064} \pm \ensuremath{0.0030} \ensuremath{\mathrm{\,(stat)}}\xspace \pm \ensuremath{0.0017} \ensuremath{\mathrm{\,(syst)}}\xspace\pm \ensuremath{0.0017} \ensuremath{\mathrm{\,(}\tau_{{\cal D}}\mathrm{)}} $\,ps; the uncertainties are dominated by the size of the reference sample, and the last contributions are due to the uncertainties on the {\ensuremath{\B^0}}\xspace and {\ensuremath{\D^-}}\xspace lifetimes, respectively. \par The results improve the precision of the current \ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace\ value by 15\% and that of the current {\ensuremath{\D^-_\squark}}\xspace lifetime by a factor of two; the latter had not been improved in the past decade~\cite{Link:2005ew,Abazov:2014rua,LHCb-PAPER-2014-037}. They might offer improved insight into the interplay between strong and weak interactions in the dynamics of heavy mesons and sharpen the reach of indirect searches for non-standard-model physics. \par Promising opportunities for improvement are available. Extensions to events collected by additional triggers may offer an approximate 20\% increase in signal yield from the same data used in this work; addition of the 2015--2019 LHCb data set will further triple the signal yields; and usage of higher-yield reference decays, like ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace [K^+\pi^-\pi^-]_{\ensuremath{\D^{\scalebox{0.4}{(}*\scalebox{0.4}{)}-}}}\xspace \mu^+\nu_\mu$, will further reduce the statistical uncertainty.
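The conversion from width differences to lifetimes is a one-line computation; as an illustration (using the world-average value $\tau_{{\ensuremath{\B^0}}\xspace} \approx 1.520$\,ps, so that $\Gamma_d = 1/\tau_{{\ensuremath{\B^0}}\xspace}$),
\begin{equation}
\ensuremath{\tau^{\rm fs}_{{\ensuremath{\B^0_\squark}}\xspace}}\xspace = \frac{1}{\Gamma_d + \Delta_\Gamma(B)} = \frac{1}{1/1.520 - 0.0115}\,{\rm ps} \approx 1.547\,{\rm ps},
\end{equation}
with uncertainties propagated as $\delta\tau = \tau^2\,\delta\Delta_\Gamma$, e.g., $(1.547)^2 \times 0.0053 \approx 0.013$\,ps for the statistical component; the {\ensuremath{\D^-_\squark}}\xspace lifetime follows analogously from $\Delta_\Gamma(D)$ and the {\ensuremath{\D^-}}\xspace width.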
After a decade of declining interest, this work revives the opportunity of using semileptonic decays to achieve competitive measurements of lifetimes and other observables, such as semileptonic branching fractions or {\ensuremath{\B^0_\squark}}\xspace\ form factors, in LHCb and other experiments.
\bigskip
I thank Andreas Kronfeld, Alexander Lenz, Jonathan Rosner, and Giovanni Punzi for useful discussions.
\addcontentsline{toc}{section}{References}
\setboolean{inbibliography}{true}
\ifx\mcitethebibliography\mciteundefinedmacro
\PackageError{LHCb.bst}{mciteplus.sty has not been loaded}
{This bibstyle requires the use of the mciteplus package.}\fi
\providecommand{\href}[2]{#2}
\begin{mcitethebibliography}{10}
\mciteSetBstSublistMode{n}
\mciteSetBstMaxWidthForm{subitem}{\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd{\mcitemaxwidthsubitemform\space}
{\relax}{\relax}
\bibitem{Lenz:2014jha}
For a recent review, see A.~Lenz, \ifthenelse{\boolean{articletitles}}{\emph{{Lifetimes and HQE}},
}{}\href{http://arxiv.org/abs/1405.3601}{{\normalfont\ttfamily
arXiv:1405.3601}} and references therein\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{HFAG}
Heavy Flavor Averaging Group, Y.~Amhis {\em et~al.},
\ifthenelse{\boolean{articletitles}}{\emph{{Averages of $b$-hadron,
$c$-hadron, and $\tau$-lepton properties as of summer 2016}},
}{}\href{http://arxiv.org/abs/1612.07233}{{\normalfont\ttfamily
arXiv:1612.07233}}, {updated results and plots available at
\href{http://www.slac.stanford.edu/xorg/hflav/}{{\texttt{http://www.slac.stanford.edu/xorg/hflav/}}}}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{Hartkorn:1999ga}
K.~Hartkorn and H.G. Moser, \ifthenelse{\boolean{articletitles}}{\emph{{A new
method of measuring $\Delta(\Gamma)/\Gamma$ in the {\ensuremath{\B^0_\squark}}\xspace-{\ensuremath{\Bbar{}^0_\squark}}\xspace system}},
}{}\href{http://dx.doi.org/10.1007/s100520050472}{Eur.\ Phys.\ J.\
\textbf{C8} (1999) 381}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{LHCb-PAPER-2014-037}
LHCb collaboration, R.~Aaij {\em et~al.},
\ifthenelse{\boolean{articletitles}}{\emph{{Measurement of the {\ensuremath{\Bbar{}^0_\squark}}\xspace meson
lifetime in {\ensuremath{\D^+_\squark}}\xspace {\ensuremath{\pion^-}}\xspace decays}},
}{}\href{http://dx.doi.org/10.1103/PhysRevLett.113.172001}{Phys.\ Rev.\
Lett.\ \textbf{113} (2014) 172001},
\href{http://arxiv.org/abs/1407.5873}{{\normalfont\ttfamily
arXiv:1407.5873}}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{satoru}
S.~Uozumi,
\ifthenelse{\boolean{articletitles}}{\emph{{Measurement of the $B$ meson Lifetimes with the Collider Detector At Fermilab}},
}{}\href{http://inspirehep.net/record/741160/files/fermilab-thesis-2006-78.PDF}{{\normalfont\ttfamily
FERMILAB-THESIS-2006-78}}, {Ph.D.\ thesis, University of Tsukuba (2006)}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{daveclark}
D.K.~Clark,
\ifthenelse{\boolean{articletitles}}{\emph{{Measurement of the $B^0$ and $B^+$ Lifetimes using Semileptonic Decays at CDF}},
}{}\href{http://inspirehep.net/record/1508436/files/fermilab-thesis-2010-76.PDF}{{\normalfont\ttfamily
FERMILAB-THESIS-2010-76}}, {Ph.D.\ thesis, Brandeis University (2010)}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{Bc}
LHCb collaboration, R.~Aaij {\em et~al.},
\ifthenelse{\boolean{articletitles}}{\emph{{Measurement of the $B_c^+$
meson lifetime using $B_c^+ \ensuremath{\rightarrow}\xspace J/\psi \mu^+\nu_\mu X$ decays}},
}{}\href{http://dx.doi.org/10.1140/epjc/s10052-014-2839-x}{Eur.\ Phys.\ J.\ \textbf{C74} (2014) 2839},
\href{http://arxiv.org/abs/1401.6932}{{\normalfont\ttfamily
arXiv:1401.6932}}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{Abazov:2014rua}
D0 collaboration, V.~M. Abazov {\em et~al.},
\ifthenelse{\boolean{articletitles}}{\emph{{Measurement of the $B_s^0$
lifetime in the flavor-specific decay channel $B_s^0 \ensuremath{\rightarrow}\xspace D_s^- \mu^+\nu X$}},
}{}\href{http://dx.doi.org/10.1103/PhysRevLett.114.062001}{Phys.\ Rev.\
Lett.\ \textbf{114} (2015) 062001},
\href{http://arxiv.org/abs/1410.1568}{{\normalfont\ttfamily
arXiv:1410.1568}}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{paper}
LHCb collaboration, R.~Aaij {\em et~al.},
\ifthenelse{\boolean{articletitles}}{\emph{{Measurement of $B^0_s$ and $D^-_s$ Meson
Lifetimes}}, }{}\href{http://dx.doi.org/10.1103/PhysRevLett.119.101801}{Phys.\
Rev.\ Lett.\ \textbf{119} (2017) 101801},
\href{http://arxiv.org/abs/1705.03475}{{\normalfont\ttfamily
arXiv:1705.03475}}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{Link:2005ew}
FOCUS collaboration, J.M. Link {\em et~al.},
\ifthenelse{\boolean{articletitles}}{\emph{{A measurement of the {\ensuremath{\D^+_\squark}}\xspace
lifetime}}, }{}\href{http://dx.doi.org/10.1103/PhysRevLett.95.052003}{Phys.\
Rev.\ Lett.\ \textbf{95} (2005) 052003},
\href{http://arxiv.org/abs/hep-ex/0504056}{{\normalfont\ttfamily
arXiv:hep-ex/0504056}}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{Kodama:1991ij}
Fermilab E653 collaboration, K.~Kodama {\em et~al.},
\ifthenelse{\boolean{articletitles}}{\emph{{Measurement of the relative
branching fraction $\Gamma (D^0 \rightarrow K \mu \nu) / \Gamma (D^0
\rightarrow \mu X)$}},
}{}\href{http://dx.doi.org/10.1103/PhysRevLett.66.1819}{Phys.\ Rev.\ Lett.\
\textbf{66} (1991) 1819}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{PDG2016}
Particle Data Group, C.~Patrignani {\em et~al.},
\ifthenelse{\boolean{articletitles}}{\emph{{\href{http://pdg.lbl.gov/}{Review
of particle physics}}},
}{}\href{http://dx.doi.org/10.1088/1674-1137/40/10/100001}{Chin.\ Phys.\
\textbf{C40} (2016) 100001}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{Abulencia:2006ze}
CDF collaboration, A.~Abulencia {\em et~al.},
\ifthenelse{\boolean{articletitles}}{\emph{{Observation of {\ensuremath{\B^0_\squark}}\xspace-{\ensuremath{\Bbar{}^0_\squark}}\xspace
oscillations}},
}{}\href{http://dx.doi.org/10.1103/PhysRevLett.97.242003}{Phys.\ Rev.\ Lett.\
\textbf{97} (2006) 242003},
\href{http://arxiv.org/abs/hep-ex/0609040}{{\normalfont\ttfamily
arXiv:hep-ex/0609040}}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{Leonardo:2006fq}
N.T.~Leonardo,
\ifthenelse{\boolean{articletitles}}{\emph{{Analysis of {\ensuremath{\B^0_\squark}}\xspace flavor oscillations at CDF}},
}{}\href{http://inspirehep.net/record/725921/files/fermilab-thesis-2006-18.PDF}{{\normalfont\ttfamily
FERMILAB-THESIS-2006-18}}, {Ph.D.\ thesis, MIT (2006)}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\EndOfBibitem
\bibitem{Alves:2008zz}
LHCb collaboration, A.A. Alves~Jr.\ {\em et~al.},
\section{Introduction}
In 1975, T. Koornwinder (\cite{Koor75}) introduced a non--trivial method to generate orthogonal
polynomials in two variables using univariate classical Jacobi polynomials. In fact, he studied
some classes of {\it two--variable analogues of the classical orthogonal polynomials}, and he
proved that all of these classes are eigenfunctions of second order linear partial differential operators.
Recently, in \cite{FPP12}, the authors studied Koornwinder's construction in a more general
framework. They deduced some additional properties of the weight functions associated with these
polynomials and introduced some new examples of bivariate Koornwinder polynomials.
\bigskip
Univariate {\it semiclassical} orthogonal polynomials were introduced for the first
time by E. Hendriksen and H. van Rossum in \cite{HR85} as the natural generalization of the classical
orthogonal polynomials. A weight function $w(x)$ defined over a
bounded or unbounded interval $(a,b)$ is said to be semiclassical if and only if it satisfies the Pearson equation
\begin{equation}\label{Uni-Pearson}
\frac{d}{dx} (\phi(x)\, w(x) ) = \psi(x)\,w(x),
\end{equation}
where $\phi(x)$ and $\psi(x)$ are fixed polynomials with $\deg\phi = p\ge 0$ and $\deg\psi=q\ge 1$,
respectively, and the boundary conditions
\begin{equation}\label{Uni-boundary}
\lim_{x\to a} \phi(x)\,w(x)\,p(x) = \lim_{x\to b} \phi(x)\,w(x)\,p(x) = 0,
\end{equation}
for every polynomial $p(x)$. Of course, the polynomials $\phi(x)$ and $\psi(x)$ in \eqref{Uni-Pearson} are not unique. This fact motivates the definition of \textit{class} of a semiclassical weight function introduced by P. Maroni in \cite{maroni1987} (see also \cite{maroni1991}). The class $s$ of a weight function $w(x)$ is defined as
\begin{equation}
s=\min \max \{\deg(\phi)-2,\deg(\psi)-1\},
\end{equation}
where the minimum is taken over all the polynomials $\phi$ and $\psi$ such that $w(x)$ satisfies the Pearson equation \eqref{Uni-Pearson}.
In \cite{maroni1991}, the author also proved that
orthogonal polynomials associated with semiclassical weight functions satisfy the difference--differential
equation
\begin{equation}\label{diff-diff}
\mathcal{L}[p_n] \equiv \phi(x) \,p_n''(x) + \psi(x)\, p'_n(x) = \sum_{i=n-s}^{n+s} \lambda_{n,i}\,p_i(x),
\quad n\ge s,
\end{equation}
where $s$ denotes the class of $w(x)$.
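As a quick illustration, the following Python (sympy) sketch (ours, purely illustrative) verifies the Pearson equation \eqref{Uni-Pearson} for the Jacobi weight $w(x)=(1-x)^{\alpha}(1+x)^{\beta}$, with $\phi(x)=1-x^2$ and $\psi(x)=\beta-\alpha-(\alpha+\beta+2)x$; here $\max\{\deg\phi-2,\deg\psi-1\}=0$, so the weight is classical.
\begin{verbatim}
import sympy as sp

x = sp.Symbol('x')
a, b = sp.symbols('alpha beta', positive=True)

w = (1 - x)**a * (1 + x)**b           # Jacobi weight on (-1, 1)
phi = 1 - x**2
psi = b - a - (a + b + 2)*x

# (phi*w)' - psi*w must vanish identically
print(sp.simplify(sp.diff(phi*w, x) - psi*w))   # 0
\end{verbatim}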
\bigskip
Naturally, the case $s=0$ reduces to the classical weight functions in one variable. Notice that
in this case \eqref{diff-diff} reads
$$
\mathcal{L}[p_n] \equiv \phi(x) \,p''_n(x) + \psi(x)\, p'_n(x) = \lambda_n\,p_n,\quad n\ge 0,
$$
where $\phi(x)$ and $\psi(x)$ are fixed polynomials with $\deg\phi \le 2$ and $\deg\psi= 1$, and
$\lambda_n\neq 0, n \ge 1$. Thus, the associated orthogonal polynomials are
eigenfunctions of the second order linear differential operator
$$
\mathcal{L}[\cdot] \equiv \phi(x) \,\frac{d^2}{dx^2} + \psi(x)\, \frac{d}{dx}.
$$
Bochner (\cite{Bo29}) proved that Jacobi, Laguerre and Hermite orthogonal polynomials are the only
families of univariate orthogonal polynomials satisfying the above differential equation.
\bigskip
Classical and semiclassical weight functions in two variables can be defined by means of a bivariate
extension of the above definitions. In that case, the Pearson equation becomes
a matrix Pearson equation with matrix polynomial coefficients, and the derivative is replaced by
the usual divergence operator
$$\textnormal{div}\,(\Phi\, w(x,y)) = \Psi\,w(x,y),$$
where $\Phi$ is a $2\times 2$ symmetric polynomial matrix, and $\Psi$ is a $2\times 1$ polynomial vector,
as we will study in Section \ref{semi-section}.
The symmetric character of the matrix $\Phi$ is connected with the fact that orthogonal polynomials
associated with a semiclassical weight function $w(x,y)$ satisfy a difference--differential equation
whose coefficients are the entries of the matrices $\Phi$ and $\Psi$. In the classical case (when the
degrees of the entries of the matrices are less than or equal to $2$ and $1$, respectively), the
difference--differential equation becomes a partial differential equation for the orthogonal polynomials.
\bigskip
In this work, we study bivariate Koornwinder weight functions constructed from semiclassical
univariate weights. In this case, it was proved in \cite{FPP12} that $w(x,y)$ satisfies a matrix partial differential equation
$$\textnormal{div}(\varphi\,w(x,y)) = \delta\,w(x,y),$$
but the $2\times 2$ matrix $\varphi$ is not symmetric in general, and some of its entries
can be rational functions.
The main goal of this paper is to transform the above equation into a matrix Pearson equation
by symmetrizing the matrix $\varphi$ in order to obtain a symmetric matrix $\Phi$, in such a way
that all the entries are polynomials of the lowest possible degree.
\bigskip
The structure of this work is as follows. Section 2 presents some basic background about orthogonal
polynomials in two variables. Section 3 focuses on semiclassical and classical orthogonal polynomials in two variables. Koornwinder's method for
constructing systems of orthogonal polynomials in two variables as well as the construction of
semiclassical orthogonal polynomials in two variables with
Koornwinder's method are described in Section 4. In Section 5 we analyze two methods for finding
Pearson equations
for semiclassical and classical Koornwinder weights, and finally in Section 6 we provide examples
of these two methods and write second order linear partial differential operators associated with
semiclassical Koornwinder polynomials.
\bigskip
\section{Orthogonal polynomials in two variables}\label{section5}
Some background on orthogonal polynomials in two variables is introduced in this section for its use
throughout this work. We follow mainly \cite{DX2014}.
For $n\ge 0$, let $\Pi_n$ denote the linear space of real polynomials in two variables of
total degree not
greater than $n$, where the total degree of a polynomial is the highest combined degree of its
monomial terms. Let $\Pi=\bigcup_{n\ge 0} \Pi_n$ denote the linear space of all bivariate real polynomials. Observe that
$$\dim \Pi_n = \binom{n+2}{n},$$
and, for $n\ge 0$, there exist $n+1$ bivariate independent polynomials of exact degree $n$.
\medskip
Let $\mathcal{M}_{h \times k}(\mathbb{R})$ denote the linear space of real matrices of size $h\times k$, and let $\mathcal{M}_h(\mathbb{R})$ denote the space of real square matrices. Given a matrix $M\in \mathcal{M}_{h \times k}(\mathbb{R})$, we denote by $M^t$ its transpose; if $h=k$, $\det (M)$ denotes its determinant, and we say that $M$ is {\it non--singular}
if $\det(M)\neq 0$. The linear spaces of polynomial matrices and polynomial square matrices will be denoted by $\mathcal{M}_{h \times k}(\Pi)$ and $\mathcal{M}_h(\Pi)$, respectively. The degree of a polynomial matrix is defined as the maximum of the degrees of its polynomial entries. In addition, let $I_n$ denote the identity matrix of order
$n$.
\bigskip
Let $\Omega\subseteq \mathbb{R}^2$ be a domain having a non--empty interior. Suppose that
$w(x,y)$ is a non--negative
and integrable function defined on $\Omega$ such that
$$
\iint_{\Omega}w(x,y)dxdy>0,
$$
and the moments
$$
\mu_{h,k}=\iint_{\Omega} x^h\,y^k\,w(x,y)\,dxdy,
$$
are finite for all $h,k\ge0$, whether $\Omega$ is bounded or unbounded. Then $w(x,y)$ is said to be a
weight function over $\Omega$.
In this way, we can define the inner product
$$\langle p, q\rangle = \iint_{\Omega} p(x,y)\,q(x,y)\,w(x,y)\,dxdy,
$$
for all $p, q \in \Pi$. We say that $p\in \Pi_n$ is an {\it orthogonal polynomial with respect to $w(x,y)$} if
$$
\langle p, q\rangle \equiv 0, \qquad \forall q\in\Pi_{n-1}.
$$
Following \cite{DX2014}, let us denote by $\mathcal{V}_n$ the space of orthogonal polynomials of
exact degree $n$, that is,
$$\mathcal{V}_n = \{p\in \Pi_n: \langle p,q\rangle =0, \quad \forall q\in\Pi_{n-1}\}.
$$
Obviously, for $n\ge 1$, $\dim \mathcal{V}_n = n+1$, and we will denote an orthogonal basis of
$\mathcal{V}_n$ as $P_{n,k}(x,y), \,\, 0\le k\le n$. Observe that $\{P_{n,k}(x,y): 0\le k\le n,
\, n\ge 0\}$ is a sequence of independent bivariate polynomials
such that
\begin{itemize}
\item $\deg P_{n,k} = n,\quad n\ge 0, \quad 0\le k \le n$
\item $\langle P_{n,k}, P_{m,j}\rangle = K_{n,k}\,\delta_{n,m}\,\delta_{k,j}, \quad K_{n,k}> 0$.
\end{itemize}
Then, we will call it a {\it sequence of bivariate orthogonal polynomials} associated with the
weight function $w(x,y)$.
\bigskip
Suppose that $f:\mathbb{R}^2\rightarrow\mathbb{R}$ and $\mathbf{F}:\mathbb{R}^2\rightarrow\mathbb{R}^2$. In this work, the {\it gradient operator}
$$\nabla f(x,y)=(\partial_x f,\,\partial_y f)^t,$$
and the {\it divergence operator}
$$\textnormal{div}\,\mathbf{F}(x,y)=\nabla \cdot \mathbf{F},$$
will be used as the standard differential operators in two variables.
\bigskip
\section{Classical and semiclassical weight functions in two variables}\label{semi-section}
The contents of this section are dedicated to recall the definition of the classical and
semiclassical character for weight functions in two variables (\cite{AdMFPn07, AdMFPn08}).
\begin{defi}
Let $w(x,y)$ be a bivariate weight function defined over the domain $\Omega$. Then $w(x,y)$ is said
to be {\it semiclassical} if there exist a non--zero symmetric polynomial matrix and a non--zero
polynomial vector
\begin{equation}\label{phipsi}
\Phi=\begin{pmatrix}
\phi_{1,1} & \phi_{1,2} \\
\phi_{1,2} & \phi_{2,2}
\end{pmatrix}\in\mathcal{M}_{2\times2}(\Pi), \quad
\Psi=\begin{pmatrix}
\psi_1 \\
\psi_2
\end{pmatrix}\in\mathcal{M}_{2\times1}(\Pi),
\end{equation}
with $\textnormal{deg}\,\Phi\ge0$ and $\textnormal{deg}\,\Psi\ge1$, such that $\det\langle 1,\Phi
\rangle \ne 0$ and $w(x,y)$ satisfies the matrix Pearson equation
\begin{equation}\label{pearsontypeq1}
\textnormal{div}(\Phi w)= \Psi^t\, w,
\end{equation}
and the boundary conditions
\begin{equation}\label{boundaryconditions}
\begin{array}{l}
\displaystyle{\int_{\partial \Omega}} p(x,y)\,w(x,y)\,(\phi_{1,1}(x,y)dy-\phi_{1,2}(x,y)dx)=0\\
~~\\
\displaystyle{\int_{\partial \Omega}} p(x,y)\,w(x,y)\,(\phi_{1,2}(x,y)dy-\phi_{2,2}(x,y)dx)=0,
\end{array}
\end{equation}
must hold for every polynomial $p(x,y)$. Moreover, we define
\begin{equation}\label{ese}
s = \max \{\deg(\Phi)-2,\deg(\Psi)-1\}.
\end{equation}
\end{defi}
\begin{remark}
The matrix Pearson equation for a given weight function is not unique. In fact,
\eqref{pearsontypeq1} can be left--multiplied by another $2\times 2$ non--singular
polynomial matrix, and we obtain a new matrix Pearson equation for $w(x,y)$.
The problem of characterizing a minimal matrix Pearson equation for a semiclassical weight
function remains open.
\end{remark}
\medskip
From the definition of semiclassical weight function, we can consider a {\it classical} one as a particular case.
\begin{defi}
A weight function $w(x,y)$ defined on a domain $\Omega \subseteq \mathbb{R}^2$ is called {\it classical} if
it is semiclassical with $\deg \Phi\le 2$ and $\deg \Psi =1$. Using definition \eqref{ese}, we get that,
for classical weights, $s=0$.
\end{defi}
Classical and semiclassical weight functions are characterized by difference--differential properties
for the associated orthogonal polynomial sequences. Using the matrices defined in \eqref{phipsi},
we introduce the partial differential operator
$$\mathcal{L}[\cdot]:= \phi_{1,1}\,\partial_{xx} + 2\phi_{1,2}\,\partial_{xy} + \phi_{2,2}\,\partial_{yy}
+ \psi_{1}\,\partial_x + \psi_2\,\partial_y.
$$
We recall the next characterization for semiclassical weight functions.
\begin{teor}[\cite{AdMFPn08}]
Let $\{P_{n,k}(x,y): 0\le k\le n, \quad n\ge0\}$ be an orthogonal polynomial sequence associated with a
weight function $w(x,y)$. Then $w(x,y)$ is semiclassical, that is, it satisfies a Pearson equation
(\ref{pearsontypeq1}) if and only if for each $n\ge0, \, 0\le k\le n$, $P_{n,k}(x,y)$ satisfies the
second order difference--differential relation
\begin{equation}\label{diffrel2d}
\mathcal{L}[P_{n,k}(x,y)] = \sum_{m=n-s}^{n+s}\sum_{i=0}^m \lambda^{n,k}_{m,i} \,P_{m,i}(x,y),
\end{equation}
where $\lambda^{n,k}_{m,i}\in \mathbb{R}$.
\end{teor}
Observe that if $w(x,y)$ is classical, then $s=0$, and in the above difference--differential
relation the double sum in the right hand side reduces to a sum of orthogonal polynomials
in $\mathcal{V}_n$. This characterization for bivariate classical orthogonal polynomials was shown in \cite{FPP05a} and it can be reformulated in the following way.
\begin{teor}[\cite{FPP05a}]\label{teor-clas} In the above conditions, $w(x,y)$ is classical if and only if
there exist constants $\lambda^{n,k}_{i}\in \mathbb{R}$ such that
\begin{equation*}
\mathcal{L}[P_{n,k}(x,y)] = \sum_{i=0}^n \lambda^{n,k}_{i} \,P_{n,i}(x,y).
\end{equation*}
\end{teor}
\begin{remark} We must remark that in the classical case, the whole linear space of orthogonal
polynomials of exact degree $n$ is preserved by $\mathcal{L}$.
In other words,
\begin{equation*}
\mathcal{L}[\mathcal{V}_n] \subset \mathcal{V}_n.
\end{equation*}
\end{remark}
\noindent
In the particular case when $\lambda^{n,k}_{i}=\delta_{k,i} \lambda_{n,k}$, every polynomial of the above sequence
is an eigenfunction of $\mathcal{L}$
$$
\mathcal{L}[P_{n,k}] = \lambda_{n,k}\,P_{n,k},
$$
that is, every orthogonal polynomial satisfies a second order linear partial differential equation.
Theorem \ref{teor-clas} is an extension of the Krall and Sheffer's definition of classical
orthogonal polynomials in two variables (\cite{KS67}). In that case,
$\lambda_{n,k} \equiv \lambda_n$ is independent of $k$, and then every orthogonal polynomial of
total degree $n$ satisfies the same second order linear partial differential equation.
\bigskip
\section{Two variable semiclassical Koornwinder weights}
First, we describe the method introduced by T. H. Koornwinder in 1975 (see \cite{DX2014,Koor75}),
to construct weight functions in two variables from two
weight functions in one variable.
\medskip
Let $w_1(x)$ and $w_2(y)$ be univariate weight functions defined on the intervals $(a,b)$ and $(c,d)$,
respectively. Let $\rho(x)$ be a positive function on $(a,b)$ satisfying one of the following two conditions
\medskip
\leftline{
\begin{tabular}{lp{0.85\textwidth}}
\emph{Case I}: & $\rho(x)$ is a polynomial of degree $\le 1$, that is, $\rho(x) = r_1\,x+r_0$,
with $|r_1|+|r_0|>0$,\\
& \\
\emph{Case II}: & $\rho(x)$ is the square root of a non--negative polynomial of degree at most 2,
$c=-d < 0$, and $w_2(y)$ is an even function on $(-d, d)$.
\end{tabular}}
\bigskip
In either case, $\rho(x)^2$
is a polynomial of degree less than or equal to 2, and from now on,
we will denote
\begin{equation*}
\rho(x)^2 = a_2\, x^2 + a_1\, x + a_0,
\end{equation*}
where $a_2, a_1, a_0 \in \mathbb{R}$ and $|a_2|+|a_1| + |a_0|>0$. Observe that, in
the first case $a_2 = r_1^2\ge 0$, $a_1 = 2\,r_1\,r_0$, and $a_0 = r_0^2\ge 0$.
\bigskip
For $m\ge 0$, let $\{p_{n}(x;m)\}_{n\geqslant0}$ be the monic orthogonal polynomial sequence
with respect to the weight function $\rho(x)^{2m+1}\,w_1(x)$ and let $\{q_n(y)\}_{n\geqslant0}$
be the monic orthogonal polynomial sequence with respect to the weight function $w_2(y)$.
Then, we define the monic bivariate {\it Koornwinder polynomials}
\begin{equation}\label{kops}
P_{n,m}(x,y) = p_{n-m}(x;m)\,\rho(x)^m\, q_m\left(\frac{y}{\rho(x)}\right), \quad 0 \leqslant m\leqslant n.
\end{equation}
Notice that we get a polynomial of total degree $n$ and degree $m$ in $y$. Moreover, they are orthogonal with respect to the {\it Koornwinder weight function}
\begin{equation}\label{Koorw}
w(x,y)=w_1(x)\,w_2\left(\frac{y}{\rho(x)}\right),
\end{equation}
over the domain
\begin{equation}\label{domain}
\Omega = \{(x,y)\in \mathbb{R}^2,\quad a<x<b, \quad c\,\rho(x) < y < d\,\rho(x)\}.
\end{equation}
\medskip
Observe that the \emph{tensor product} of two monic orthogonal polynomials in one
variable
$$P_{n,m}(x,y) = p_{n-m}(x)\,q_m(y), \quad 0 \leqslant m\leqslant n,
$$
corresponds to monic Koornwinder orthogonal polynomials with respect to the weight function $w(x,y) = w_1(x)\,w_2(y)$ where $\rho(x)=1$.
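The construction \eqref{kops} is easy to test symbolically. The following sympy sketch (ours; the helper names are illustrative) builds the monic Koornwinder polynomials for the ball case treated in the examples below ($w_1=w_2=(1-t^2)^{\alpha}$, $\rho(x)=\sqrt{1-x^2}$, here with $\alpha=1$) and checks a few orthogonality relations over the domain \eqref{domain}.
\begin{verbatim}
import sympy as sp

x, y, t = sp.symbols('x y t')
alpha = 1
rho2 = 1 - x**2                        # rho(x)^2 for rho = sqrt(1-x^2)

def monic_jacobi(n, a, b):
    # monic Jacobi polynomial of degree n in the symbol t
    if n == 0:
        return sp.Integer(1)
    p = sp.expand(sp.jacobi(n, a, b, t))
    return sp.expand(p / sp.LC(p, t))

def P(n, m):
    # the Koornwinder construction: p_{n-m}(x;m) rho(x)^m q_m(y/rho(x))
    pm = monic_jacobi(n - m, alpha + m + sp.Rational(1, 2),
                      alpha + m + sp.Rational(1, 2)).subs(t, x)
    qm = monic_jacobi(m, alpha, alpha).subs(t, y / sp.sqrt(rho2))
    return sp.expand(pm * rho2**sp.Rational(m, 2) * qm)

w = (1 - x**2 - y**2)**alpha

def inner(p, q):                       # integral over the unit disk
    iy = sp.integrate(p * q * w, (y, -sp.sqrt(rho2), sp.sqrt(rho2)))
    return sp.simplify(sp.integrate(iy, (x, -1, 1)))

print(inner(P(2, 1), P(1, 0)))         # 0 (different degrees)
print(inner(P(2, 1), P(2, 0)))         # 0 (same degree, different index)
print(inner(P(2, 1), P(2, 1)))         # a positive constant
\end{verbatim}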
\bigskip
Now, we recall a theorem from \cite{FPP12}, where it was proved that the semiclassical character is
inherited by bivariate Koornwinder polynomials.
\begin{teor}[\cite{FPP12}]
Let $w_1$ and $w_2$ be two semiclassical weight functions in one variable. Then, the bivariate
Koornwinder weight \eqref{Koorw} is semiclassical.
\end{teor}
The proof of the above theorem does not provide a standard method to find a minimal (in the sense
of the degrees of the coefficients) matrix Pearson equation for Koornwinder weights. This is our
main objective from now on.
Let $w_1(x)$ and $w_2(x)$ be two semiclassical weight functions in one variable defined on $(a,b)$ and
$(c,d)$, respectively, and let
$$
\frac{d}{dx}(\phi_i(x)\,w_i(x)) = \psi_i(x)\,w_i(x), \quad i=1,2,
$$
their respective Pearson equations, where $\deg\,\phi_i = p_i \ge 0$ and $\deg\,\psi_i = q_i \ge1$.
The Pearson equations are equivalent to
$$
\phi_i(x) w'_i(x) = \widetilde{\psi}_i(x) w_i(x),
$$
where $\widetilde{\psi}_i(x) = \psi_i(x) - \phi'_i(x), \, i=1,2$.
Taking partial derivatives on \eqref{Koorw}, we get
\begin{align*}
&\frac{\partial}{\partial x}w(x,y) = w'_1(x)\,w_2\left(\frac{y}{\rho(x)}\right) - w_1(x)\,w'_2\left(\frac{y}{\rho(x)}\right)\frac{y}{\rho(x)^2}\,\rho'(x),\\
\\
&\frac{\partial}{\partial y}w(x,y) = w_1(x)\,w'_2\left(\frac{y}{\rho(x)}\right)\frac{1}{\rho(x)}.
\end{align*}
Then, substituting the second equation into the first one and using the Pearson equations for
$w_1$ and $w_2$, we deduce
\begin{align}
&\phi_1(x)\frac{\partial}{\partial x} w(x,y) + \phi_1(x) \frac{\rho'(x)}{\rho(x)}\,y\,
\frac{\partial}{\partial y} w(x,y) = \widetilde{\psi}_1(x)\,w(x,y),\label{first}\\
\nonumber \\
&\phi_2\left(\frac{y}{\rho(x)} \right) \frac{\partial}{\partial y} w(x,y) = \frac{1}{\rho(x)}
\widetilde{\psi}_2\left(\frac{y}{\rho(x)}\right)\,w(x,y).\label{second}
\end{align}
If we define the following matrices
\begin{equation*}
\varphi =\begin{pmatrix}
\varphi_1 & \varphi_2\\
\\
0 & \varphi_3
\end{pmatrix}= \begin{pmatrix}
\phi_1(x) & \eta(x)\,y\\
\\
0 & \rho(x)\phi_2\left(\dfrac{y}{\rho(x)} \right)
\end{pmatrix}, \quad \delta = \begin{pmatrix}
\delta_1 \\ \\ \delta_2
\end{pmatrix}= \begin{pmatrix}
\widetilde{\psi}_1(x) \\ \\ \widetilde{\psi}_2\left(\dfrac{y}{\rho(x)}\right)
\end{pmatrix},
\end{equation*}
equations \eqref{first}--\eqref{second} can be written as
\begin{equation}\label{eq30}
\varphi \,\nabla w = \delta \, w,
\end{equation}
where
\begin{equation*}
\eta(x)=\phi_1(x)\frac{\rho'(x)}{\rho(x)}.
\end{equation*}
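For instance, the following sympy sketch (ours) checks \eqref{eq30} for the data $w_1(x)=(1-x)^{\alpha}x^{\beta}$, $w_2(t)=(1-t^2)^{\beta}$, $\rho(x)=\sqrt{x}$ of the parabolic biangle example below. Note that here $\varphi_3$ and $\delta_2$ involve $\sqrt{x}$, illustrating that the entries of $\varphi$ and $\delta$ need not be polynomials (in the examples of the final section, the second row of each such system is rescaled to clear radicals and denominators).
\begin{verbatim}
import sympy as sp

x, y, t = sp.symbols('x y t', positive=True)
al, be = sp.symbols('alpha beta', positive=True)

rho = sp.sqrt(x)
phi1, w1 = (1 - x)*x, (1 - x)**al * x**be
phi2, w2 = 1 - t**2, (1 - t**2)**be

tpsi1 = sp.simplify(phi1*sp.diff(w1, x)/w1)    # beta - (alpha+beta)*x
tpsi2 = sp.simplify(phi2*sp.diff(w2, t)/w2)    # -2*beta*t
eta = sp.simplify(phi1*sp.diff(rho, x)/rho)    # (1-x)/2

w = w1 * w2.subs(t, y/rho)            # equals (1-x)^alpha (x-y^2)^beta
varphi = sp.Matrix([[phi1, eta*y],
                    [0, rho*phi2.subs(t, y/rho)]])
delta = sp.Matrix([tpsi1, tpsi2.subs(t, y/rho)])
grad_w = sp.Matrix([sp.diff(w, x), sp.diff(w, y)])

# should print the zero vector
print((varphi*grad_w - delta*w).applyfunc(sp.simplify))
\end{verbatim}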
\begin{remark} If both univariate weight functions are semiclassical and $\rho(x) =1$, the Koornwinder construction
yields a semiclassical weight function in two variables, since the matrix $\varphi$ is diagonal. In particular,
the tensor product of univariate classical weight functions provides a bivariate classical weight.
\end{remark}
We must remark that equation \eqref{eq30} is not a matrix Pearson equation for the Koornwinder
weight since the coefficient matrix $\varphi$ is not a symmetric matrix in general and it is not guaranteed that its entries are polynomials.
\bigskip
Observe that the determinant of $\varphi$ does not vanish on the interior of $\Omega$,
the domain of orthogonality for Koornwinder polynomials given in (\ref{domain}). In fact,
$$\det\,\varphi=\phi_1(x)\,\rho(x)\phi_2\left(\frac{y}{\rho(x)}\right)\neq 0,
$$
for all $(x,y)$ belonging to the interior of $\Omega$. Therefore, $\varphi$ is a non--singular
matrix on the interior of $\Omega$ and we can solve (\ref{eq30}) obtaining
\begin{equation}\label{eq35}
\nabla\,w(x,y) = \varphi^{-1}\,\delta\,w(x,y),
\end{equation}
where
$$
\varphi^{-1}\, \delta =\begin{pmatrix}
\dfrac{\widetilde{\psi}_1(x)}{\phi_1(x)}-\dfrac{\eta(x)y\widetilde{\psi}_2\left(\dfrac{y}{\rho(x)}
\right)}{\phi_1(x)\rho(x)\phi_2\left(\dfrac{y}{\rho(x)}\right)} \\
\\
\dfrac{\widetilde{\psi}_2\left(\dfrac{y}{\rho(x)}\right)}{\rho(x)\phi_2\left(\dfrac{y}{\rho(x)}\right)}
\end{pmatrix}.
$$
The entries of the column vector $\varphi^{-1}\delta$ are rational functions whose denominator can only
vanish on the set
\begin{equation}\label{set}
\left\{(x,y)\in\mathbb{R}^2:\,\det\,\varphi=\phi_1(x)\,\rho(x)\phi_2\left(\frac{y}{\rho(x)}\right)= 0\right\},
\end{equation}
and therefore, $w$ may only vanish or become infinite on \eqref{set}. Moreover, since $\nabla \ln w=\nabla w/w$, then the partial derivatives of all orders of $\ln w$ are rational functions whose denominators do not vanish outside \eqref{set}. Hence, $\ln w$ is analytic, and consequently, so is $w$.
\bigskip
Now, we study when the rational function
$$\eta(x)=\phi_1(x)\,\frac{\rho'(x)}{\rho(x)},$$
is a polynomial. Notice that
$$
\frac{d}{dx}[\phi_1(x)\,\rho(x)^{2m+1}\,w_1(x)] =
\left[\psi_1(x)+(2m+1)\phi_1(x)\frac{\rho'(x)}{\rho(x)}\right]\rho(x)^{2m+1}\,w_1(x),
$$
that is, the weight function $u_m(x)=\rho(x)^{2m+1}\,w_1(x)$ satisfies
\begin{equation}\label{Pearson-m}
\frac{d}{dx}[\phi_m(x)\,u_m(x)] =
\psi_m(x)\,u_m(x),
\end{equation}
where
$$\phi_m(x) = \phi_1(x), \qquad \psi_m(x) = \psi_1(x)+(2m+1)\phi_1(x)\frac{\rho'(x)}{\rho(x)}.
$$
In order to have a Pearson equation for the weight function $u_m(x)$, we need the coefficients
of \eqref{Pearson-m} to be polynomials.
\begin{prop}
If the weight function $w_1(x)$ is semiclassical of class $s_1$, then $u_m(x)$ is semiclassical of class
$c_m$, where
\begin{itemize}
\item \emph{Case I}: if $\rho(x)$ divides $\phi_1(x)$, then $c_m=s_1$; otherwise, if $\rho(x)$ does not
divide $\phi_1(x)$, then $c_m=s_1+1$,
\item \emph{Case II}: if $\rho(x)^2$ divides $\phi_1(x)$, we get $c_m=s_1$, while
if $\rho(x)^2$ does not divide $\phi_1(x)$, then $s_1 + 1 \le c_m \le s_1+2$.
\end{itemize}
As a consequence, if $w_1(x)$ is classical and
\begin{itemize}
\item \emph{Case I}: $\rho(x)$ divides $\phi_1(x)$,
\item \emph{Case II}: $\rho(x)^2$ divides $\phi_1(x)$,
\end{itemize}
then $u_m(x)$ is classical, and the rational function $\eta(x)$ is a polynomial of degree $\le 1$.
\end{prop}
\begin{proof}
In {\it Case I}, $\rho(x) = r_1\,x+r_0$, with $|r_1|+|r_0|>0$.
If $\rho(x)$ divides $\phi_1(x)$, the
weight $u_m(x)$ is semiclassical of the same class as
$w_1(x)$. On the other hand, if $\rho(x)$ does not divide $\phi_1(x)$, then multiplying \eqref{Pearson-m}
times $\rho(x)$, we deduce that $u_m(x)$ is again semiclassical, but its class increases.
In {\it Case II}, we know that $\rho(x) = \sqrt{a_2\, x^2 + a_1\, x + a_0}$ and
$$\frac{\rho'(x)}{\rho(x)} = \frac{2\,a_2 \,x+ a_1}{2\,\rho(x)^2}.$$
Again, $u_m(x)$ is semiclassical, and if $\rho(x)^2$ divides $\phi_1(x)$, the weight function $u_m(x)$
is semiclassical of the same class as $w_1(x)$; otherwise the class increases.
\end{proof}
\bigskip
\section{Two symmetrization methods}
In this section we will explain two methods for symmetrizing (\ref{eq30}). The first method is based
on finding a matrix $S$ with rational entries such that the matrix $S\,\varphi$ is symmetric, that is,
such that the condition $S\varphi-\varphi^tS^t=0$ holds. Additionally, the entries of the matrix $
S\varphi$ and the vector $S\,\delta$ must be polynomials of the lowest possible degree. We choose to
call this first method {\it the matrix symmetrization method}. The second method consists in factorizing
elements of $\varphi$ and $\delta$, such as their entries and $\det\varphi$, and then using these
factorizations to construct auxiliary functions that turn (\ref{eq30}) into a matrix Pearson
equation for the weight $w$. This second method will be called {\it the decomposition method}. We
describe both methods more precisely in the sequel.
\bigskip
\subsection{The symmetrization method}
We want to symmetrize the matrix $\varphi$ by finding a matrix
\begin{equation}
S\equiv\begin{pmatrix}
A & B\\
C & D
\end{pmatrix},
\end{equation}
where $A=A(x,y),\,B=B(x,y),\,C=C(x,y),$ and $D=D(x,y)$ are rational functions such that $AD-BC\neq 0$
on the interior of $\Omega$. Then, it is required that the matrix $S$ left--multiplies $\varphi$ and
transforms it into a symmetric matrix, that is,
\begin{equation}\label{sym}
\begin{pmatrix}
A & B \\
C & D \end{pmatrix}\begin{pmatrix}
\varphi_1 & \varphi_2 \\
0 & \varphi_3 \end{pmatrix}-\begin{pmatrix}
\varphi_1 & \varphi_2 \\
0 & \varphi_3 \end{pmatrix}^t\begin{pmatrix}
A & B \\
C & D \end{pmatrix}^t=\begin{pmatrix}
0 & 0 \\
0 & 0 \end{pmatrix}.
\end{equation}
Notice that (\ref{sym}) yields the constraint that $A,\, B$, and $C$ must satisfy, namely,
\begin{equation}
A\varphi_2+B\varphi_3-C\varphi_1=0.
\end{equation}
This restriction admits polynomials and rational functions as solutions. Nevertheless, we must point out that polynomial solutions will always increase the degree of $\varphi$.
Furthermore, in order to have a matrix Pearson equation for the Koornwinder weight $w$ satisfying
$S\,\varphi\, \nabla w = S\,\delta \,w$, the vector
\begin{equation*}
\widetilde{\Psi}_{\delta}\equiv S\,\delta=\begin{pmatrix}\delta_1\,A+\delta_2\,B\\ \delta_1\,C+
\delta_2\,D \end{pmatrix},
\end{equation*}
must have polynomial entries. Define the matrix $\Phi_{\varphi}$ and the column vector $\Psi_{\delta}$ as
\begin{equation*}
\Phi_{\varphi}=S\,\varphi =\begin{pmatrix}
A\,\varphi_1 & A\,\varphi_2+B\,\varphi_3 \\
C\,\varphi_1 & C\,\varphi_2+D\,\varphi_3
\end{pmatrix}, \quad \Psi_{\delta}=\widetilde{\Psi}_{\delta}+(\textnormal{div}\,\Phi_{\varphi})^t.
\end{equation*}
Then, $w(x,y)$ satisfies $\textnormal{div}(\Phi_{\varphi} \,w)=\Psi_{\delta}^t\, w$. We must choose $S$ such that $\Phi_{\varphi}$ has minimal degree and $\textnormal{deg}\,\Psi_{\delta}\ge 1$.
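To make this concrete, here is a small sympy sketch (ours) that checks condition \eqref{sym} for the matrix $S$ used in the ball example of the next section: $S\,\varphi$ is symmetric with polynomial entries.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
# varphi and S for the ball example below
varphi = sp.Matrix([[1 - x**2, -x*y],
                    [0, 1 - x**2 - y**2]])
S = sp.Matrix([[1, 0],
               [-x*y/(1 - x**2), 1/(1 - x**2)]])

Phi = (S*varphi).applyfunc(sp.cancel)
print(Phi)                        # [[1-x**2, -x*y], [-x*y, 1-y**2]]
print(sp.simplify(Phi - Phi.T))   # zero matrix, i.e. S*varphi is symmetric
\end{verbatim}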
\bigskip
\subsection{The decomposition method}
We can always find a common polynomial denominator $E=E(x,y)$ for the rational entries of the
column vector multiplying $w(x,y)$ in the right side of (\ref{eq35}). Note that $E(x,y)\ne0$ for
all $(x,y)$ in the interior of $\Omega$, because if $E(x_0,y_0)=0$ for some $(x_0,y_0)$ in the interior of $\Omega$, then $w(x,y)$ would not be defined at $(x_0,y_0)$.
From now on, we will write (\ref{eq35}) using this common polynomial denominator as
\begin{equation}\label{eq43}
\begin{pmatrix}
E & 0 \\
0 & E
\end{pmatrix}\nabla w= \begin{pmatrix}
F \\
H
\end{pmatrix}w,
\end{equation}\label{eq36}
\begin{equation}
E = a_0\,a_1\,c_1,\quad F = F_0\,c_1, \quad H = H_0\,a_1,
\end{equation}
where $F=F(x,y)$ and $H=H(x,y)$ are polynomials. Observe that (\ref{eq43}) is not necessarily the
desired Pearson equation for $w(x,y)$. We have allowed the possibility of $E$ having common
factors $c_1=c_1(x,y)$ and $a_1=a_1(x,y)$ with $F$ and $H$, respectively. There is no loss of
generality here since we can always get either $c_1$ or $a_1$, or both be equal to 1.
We seek polynomials $a_2=a_2(x,y),\,b_1=b_1(x,y)$, and $c_2=c_2(x,y)$ such that
\begin{equation*}
a_0=a_2c_2-a_1b_1^2c_1.
\end{equation*}
Let us introduce the auxiliary functions
\begin{equation}\label{eq39}
a(x,y)=\frac{a_2(x,y)}{a_0(x,y)\,c_1(x,y)},\quad b(x,y)=\frac{b_1(x,y)}{a_0(x,y)},\quad c(x,y)
=\frac{c_2(x,y)}{a_0(x,y)\,a_1(x,y)},
\end{equation}
and the matrix
\begin{equation}\label{eq40}
\begin{pmatrix}
a & b\\
b & c
\end{pmatrix}, \quad ac-b^2=\frac{1}{E}.
\end{equation}
After left--multiplying (\ref{eq35}) by (\ref{eq40}), we get
\begin{equation}\label{eq41}
\begin{pmatrix}
a\,E & b\,E\\
b\,E & c\,E
\end{pmatrix}\nabla w=\begin{pmatrix}
a\,F+b\,H\\
b\,F+c\,H
\end{pmatrix}w.
\end{equation}
From the decomposition method, to find a Pearson equation for $w$ we must obtain three polynomials $a_2,\,b_1$, and $c_2$ such that the matrix coefficient in (\ref{eq41}) has polynomial entries of the lowest possible total degree, and the column vector on the right side of the same equation has polynomial entries.
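Anticipating the ball example of the next section, the following sympy sketch (ours) verifies the identity $ac-b^2=1/E$ from \eqref{eq40} and recovers the symmetric system \eqref{eq41} with polynomial entries.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
al = sp.Symbol('alpha')

E = 1 - x**2 - y**2      # common denominator of varphi^(-1)*delta here
F, H = -2*al*x, -2*al*y  # numerators of E*(grad w)/w for the ball weight
a = (1 - x**2)/E
b = -x*y/E
c = (1 - y**2)/E

print(sp.cancel(a*c - b**2 - 1/E))    # 0
M = sp.Matrix([[a*E, b*E], [b*E, c*E]]).applyfunc(sp.cancel)
rhs = sp.Matrix([a*F + b*H, b*F + c*H]).applyfunc(sp.cancel)
print(M)      # [[1-x**2, -x*y], [-x*y, 1-y**2]]
print(rhs)    # [-2*alpha*x, -2*alpha*y]
\end{verbatim}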
\bigskip
\section{Examples}
In this section we will denote by $\{P_n^{(\alpha,\beta)}\}_{n\ge 0}$ the sequence of classical Jacobi polynomials associated with the weight function
\begin{equation*}
w^{(\alpha,\beta)}(x)=(1-x)^{\alpha}(1+x)^{\beta}, \quad -1\le x\le 1, \quad \alpha,\beta>-1
\end{equation*}
(see \cite{Ch78,Sz78}). The Pearson equation for Jacobi polynomials is
\begin{equation*}
\frac{d}{dx}\left[(1-x^2)\,w^{(\alpha,\beta)}\right] =
[\beta-\alpha -(\alpha+\beta+2)x]w^{(\alpha,\beta)},
\end{equation*}
that is, $\phi(x) = 1-x^2$ and $\psi(x) = \beta-\alpha -(\alpha+\beta+2)x$.
\medskip
Classical Jacobi polynomials can be defined on the interval $[0,1]$. In this case, the weight function is given by
\begin{equation*}
u^{(\alpha,\beta)}(x)=(1-x)^{\alpha}x^{\beta}, \quad \alpha,\beta>-1,
\end{equation*}
and the Pearson equation for $u^{(\alpha, \beta)}$ is
\begin{equation*}
\frac{d}{dx}\left[(1-x)x\,u^{(\alpha,\beta)}\right]=[\beta+1 - (\alpha+\beta+2)x]u^{(\alpha,\beta)}.
\end{equation*}
In this case, $\phi(x) = (1-x)x$ and $\psi(x) = \beta+1 -(\alpha+\beta+2)x$.
\medskip
On the other hand, we will denote by $\{L^{(\alpha)}_n\}_{n\ge0}$ the sequence of classical Laguerre polynomials associated with the weight function
\begin{equation*}
w^{(\alpha)}(x)=x^{\alpha}e^{-x}, \quad 0\le x<\infty,\quad \alpha>-1,
\end{equation*}
whose Pearson equation is
\begin{equation*}
\frac{d}{dx}\left[x\,w^{(\alpha)}\right] = [\alpha + 1 - x]w^{(\alpha)},
\end{equation*}
where $\phi(x) = x$ and $\psi(x) = \alpha + 1 - x$.
\subsection{Ball polynomials}
Let
\begin{equation*}
\mathbf{B}=\{(x,y)\in\mathbb{R}^2 :\, x^2+y^2\le 1\},
\end{equation*}
be the unit disk in $\mathbb{R}^2$, and let
\begin{equation*}
w(x,y)=(1-x^2-y^2)^{\alpha}, \quad \alpha>-1,
\end{equation*}
be the weight function. Ball polynomials can be constructed by using Koornwinder's
method taking
\begin{align*}
&w_1(x)=(1-x^2)^{\alpha}, \quad -1\le x\le 1,\\
&w_2(y)=(1-y^2)^{\alpha}, \quad -1\le y\le 1,\\
&\rho(x)=\sqrt{1-x^2}.
\end{align*}
Then, ball polynomials can be defined as
\begin{equation*}
P_{n,m}(x,y)=P_{n-m}^{(\alpha+m+1/2,\alpha+m+1/2)}(x)\,(1-x^2)^{m/2}\,P_m^{(\alpha,\alpha)}
\left(\frac{y}{\sqrt{1-x^2}}\right), \quad 0\le m\le n.
\end{equation*}
Observe that, in this case, $\phi_1(x) = \phi_2(x) = \rho(x)^2$, and the weight function
\begin{equation*}
w(x,y)=w_1(x)w_2\left(\frac{y}{\rho(x)}\right)=(1-x^2-y^2)^{\alpha},\quad \alpha>-1,
\end{equation*}
satisfies (\ref{eq30}) where
\begin{equation}\label{eq42}
\varphi=\begin{pmatrix}
1-x^2 & -xy \\
0 & 1-x^2-y^2
\end{pmatrix}, \qquad \delta=\begin{pmatrix}
-2\alpha x \\ -2\alpha y\end{pmatrix}.
\end{equation}
A suitable choice for the symmetrization matrix of (\ref{eq42}) is
\begin{equation*}
S =
\begin{pmatrix}
1 & 0\\
\dfrac{-xy}{1-x^2} & \dfrac{1}{1-x^2}
\end{pmatrix},
\end{equation*}
and after a symmetrization using $S$, we recover the well known matrix Pearson equation for
the ball weight
\begin{equation}\label{eq44}
\begin{pmatrix}
1-x^2 & -xy \\
-xy & 1-y^2
\end{pmatrix}\,\nabla w=\begin{pmatrix}
-2\alpha x \\ -2\alpha y
\end{pmatrix}\,w.
\end{equation}
The second order linear partial differential operator for ball polynomials is
$$
\mathcal{L}[\cdot]=
(1-x^2)\partial_{xx}-2xy\partial_{xy}+(1-y^2)\partial_{yy}-(2\alpha+3)x\partial_x
-(2\alpha+3)y\partial_y,
$$
and, therefore, ball polynomials satisfy the Krall and Sheffer second order
linear partial differential equation
$$\mathcal{L}[P_{n,m}] = -n(n+2\alpha+2)P_{n,m}.
$$
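As a sanity check (ours), the eigenvalue relation can be verified symbolically for a low-degree case; from \eqref{kops} one gets $P_{2,1}(x,y)=xy$, and indeed $\mathcal{L}[xy]=-2(2\alpha+4)\,xy$.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
al = sp.Symbol('alpha')

def L(u):   # the ball operator above
    return ((1 - x**2)*sp.diff(u, x, 2) - 2*x*y*sp.diff(u, x, y)
            + (1 - y**2)*sp.diff(u, y, 2)
            - (2*al + 3)*x*sp.diff(u, x) - (2*al + 3)*y*sp.diff(u, y))

P21 = x*y
# eigenvalue -n(n + 2*alpha + 2) with n = 2
print(sp.expand(L(P21) + 2*(2*al + 4)*P21))   # 0
\end{verbatim}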
If the decomposition method is used, then a suitable choice of the auxiliary functions (\ref{eq39}) is
\begin{equation*}
a(x,y)=\frac{1-x^2}{1-x^2-y^2},\quad b(x,y)=\frac{-xy}{1-x^2-y^2}, \quad c(x,y)=\frac{1-y^2}{1-x^2-y^2},
\end{equation*}
and we obtain again (\ref{eq44}). Another suitable choice of auxiliary functions is
\begin{equation*}
a(x,y)=1,\quad b(x,y)=0,\quad c(x,y)=1,
\end{equation*}
and we obtain another Pearson equation (see \cite{Le00}),
\begin{equation}
\begin{pmatrix}
1-x^2-y^2 & 0\\
0 & 1-x^2-y^2
\end{pmatrix}\nabla w=\begin{pmatrix}
-2\alpha x \\ -2\alpha y
\end{pmatrix}w.
\end{equation}
\bigskip
\subsection{Koornwinder polynomials over the parabolic biangle}
For $\alpha,\beta>-1$, the polynomials
\begin{equation*}
P_{n,m}(x,y) = P_{n-m}^{(\alpha,\beta+m+1/2)}(2x-1)\,x^{m/2}\,P_m^{(\beta,\beta)}\left(\frac{y}{\sqrt{x}} \right),\quad 0\le m\le n,
\end{equation*}
are orthogonal polynomials associated with the Koornwinder weight function
\begin{equation*}
w(x,y)=(1-x)^{\alpha}(x-y^2)^{\beta},
\end{equation*}
on the parabolic biangle
\begin{equation*}
\Omega=\{(x,y)\in\mathbb{R}^2:\,y^2<x<1\},
\end{equation*}
with boundary
$$
\partial \Omega=\{x-y^2=0, 0\le x\le 1\}\cup\{1-x=0, -1\le y\le 1 \}.
$$
These polynomials are obtained from the Koornwinder construction with
\begin{align*}
&w_1(x)=(1-x)^{\alpha}x^{\beta},\quad 0\le x\le1,\\
&w_2(y)=(1-y^2)^{\beta}, \quad -1\le y\le 1,\\
&\rho(x)=\sqrt{x}.
\end{align*}
Since $\phi_1(x)=(1-x)x,\,\phi_2(y)=1-y^2$, equation (\ref{eq30}) reads
\begin{equation*}
\begin{pmatrix}
(1-x)x & \frac{1}{2}(1-x)y \\
0 & x-y^2
\end{pmatrix}\,\nabla w = \begin{pmatrix}
\beta-(\alpha+\beta)x \\
-2\beta y
\end{pmatrix}w.
\end{equation*}
A suitable choice for the symmetrization matrix is
\begin{equation*}
S=\begin{pmatrix}
1 & 0 \\
\dfrac{y}{2x} & \dfrac{1}{4x}
\end{pmatrix},
\end{equation*}
and the resulting Pearson equation is
\begin{equation*}
\begin{pmatrix}
(1-x)x & \frac{1}{2}(1-x)y\\
\frac{1}{2}(1-x)y & \frac{1}{4}(1-y^2)
\end{pmatrix} \nabla w=\begin{pmatrix}
\beta-(\alpha+\beta)x\\
-\frac{1}{2}(\alpha+\beta)y
\end{pmatrix}w.
\end{equation*}
In this case, the associated second order linear partial differential operator is
$$
\mathcal{L}[\cdot]=2(1-x)x\partial_{xx}+2(1-x)y\partial_{xy}+\frac{1}{2}(1-y^2)\partial_{yy}
+[2\beta+3-(2\alpha+2\beta+5)x]\partial_x-(\alpha+\beta+2)y\partial_y,
$$
and the corresponding second order linear partial differential equation satisfied by the sequence of bivariate polynomials
is
$$\mathcal{L}[P_{n,m}]= -\left[(n-m)(2n+2\alpha+2\beta+5)+\frac{1}{2}m(m+2\alpha+2\beta+3)\right]P_{n,m}.
$$
If the decomposition method is used, then a suitable choice of auxiliary functions is
\begin{equation*}
a(x,y)=\frac{2x}{x-y^2}, \quad b(x,y)=\frac{y}{x-y^2}, \quad c(x,y)=\frac{1-y^2}{2(1-x)(x-y^2)},
\end{equation*}
and we obtain again the same matrix Pearson equation. Notice that the Koornwinder polynomials over
the parabolic biangle are classical.
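The resulting equation can be checked by direct differentiation; the sympy sketch below (ours) verifies the symmetrized system above for $w=(1-x)^{\alpha}(x-y^2)^{\beta}$ and should print the zero vector.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', positive=True)
al, be = sp.symbols('alpha beta')

w = (1 - x)**al * (x - y**2)**be
Phi = sp.Matrix([[(1 - x)*x, (1 - x)*y/2],
                 [(1 - x)*y/2, (1 - y**2)/4]])
Psi = sp.Matrix([be - (al + be)*x, -(al + be)*y/2])
grad_w = sp.Matrix([sp.diff(w, x), sp.diff(w, y)])

print((Phi*grad_w - Psi*w).applyfunc(sp.simplify))
\end{verbatim}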
\bigskip
\subsection{Koornwinder polynomials over the triangle}
Following \cite{Koor75}, for $\alpha, \beta, \gamma>-1$ these polynomials correspond to
\begin{align*}
&w_1(x)=(1-x)^{\alpha}x^{\beta+\gamma}, \quad 0\le x\le1,\\
&w_2(y)=(1-y)^{\beta}y^{\gamma}, \quad 0\le y\le 1,\\
&\rho(x)=x,
\end{align*}
on the triangle
\begin{equation*}
\mathbf{T}=\{(x,y)\in\mathbb{R}^2:\,0<y<x<1\}.
\end{equation*}
The polynomials
\begin{equation*}
P_{n,m}(x,y)=P_{n-m}^{(\alpha,\beta+\gamma+2m+1)}(2x-1)\,x^m\,P_m^{(\beta,\gamma)}\left(\frac{2y}{x}-1\right), \quad 0\le m\le n,
\end{equation*}
are orthogonal with respect to the weight function
\begin{equation*}
w(x,y)=(1-x)^{\alpha}(x-y)^{\beta}y^{\gamma}.
\end{equation*}
Notice that $\phi_1(x)=\phi_2(x)=(1-x)x$, and (\ref{eq30}) reads
\begin{equation*}
\begin{pmatrix}
(1-x)x & (1-x)y \\
0 & (x-y)y
\end{pmatrix}\,\nabla w = \begin{pmatrix}
\beta+\gamma-(\alpha+\beta+\gamma)x \\
\gamma x-(\beta+\gamma)y
\end{pmatrix}\,w.
\end{equation*}
This matrix equation is symmetrized by left--multiplication by
\begin{equation*}
S =
\begin{pmatrix}
1 & 0\\
\dfrac{y}{x} & \dfrac{1}{x}
\end{pmatrix},
\end{equation*}
and the resulting Pearson equation is
\begin{equation*}
\begin{pmatrix}
(1-x)x & (1-x)y \\
(1-x)y & (1-y)y
\end{pmatrix}\nabla w=\begin{pmatrix}
\beta+\gamma-(\alpha+\beta+\gamma)x\\
\gamma-(\alpha+\beta+\gamma)y
\end{pmatrix}w.
\end{equation*}
The second order linear partial differential equation satisfied by the Koornwinder polynomials over the triangle is
$$
\mathcal{L}[P_{n,m}]= -n(n+\alpha+\beta+\gamma+2) P_{n,m},$$
where
\begin{eqnarray*}
\mathcal{L}[\cdot] &=& (1-x)x\partial_{xx}+2(1-x)y\partial_{xy}+(1-y)y\partial_{yy}\\
&~&
+[\beta+\gamma+2-(\alpha+\beta+\gamma+3)x]\partial_x+[\gamma+1-(\alpha+\beta+\gamma+3)y]\partial_y.
\end{eqnarray*}
If the decomposition method is used, we get the same equation by choosing the auxiliary functions as
\begin{equation*}
a(x,y)=\frac{x}{(x-y)y}, \quad b(x,y)=\frac{1}{x-y}, \quad c(x,y)=\frac{1-y}{(1-x)(x-y)}.
\end{equation*}
Observe that the Koornwinder polynomials over the triangle are classical.
\subsection{Laguerre--Jacobi Koornwinder polynomials}
In \cite{FPP12}, some new examples of bivariate Koornwinder weight functions
were introduced. Two of these examples are studied here.
Consider the Laguerre and Jacobi weight functions in one variable
\begin{align*}
&w_1(x)=x^{\alpha}e^{-x}, \quad 0\le x<\infty, \quad \alpha>-1,\\
&w_2(y)=(1-y)^{\beta}, \quad -1\le y \le 1,\quad \beta>-1.
\end{align*}
The polynomials
\begin{equation*}
P_{n,m}(x,y) = L_{n-m}^{(\alpha+2m+1)}(x)\,x^m\,P_m^{(0,\beta)}\left( \frac{y}{x}\right), \quad 0\le m\le n,
\end{equation*}
are orthogonal with respect to
\begin{equation*}
w(x,y)=x^{\alpha-\beta}e^{-x}(x-y)^{\beta},
\end{equation*}
defined on the unbounded region
\begin{equation*}
\Omega=\{(x,y)\in\mathbb{R}^2:\,-x<y<x,\,x>0\}.
\end{equation*}
Here $\phi_1(x)=x,\,\phi_2(y)=1-y^2$, and thus (\ref{eq30}) reads
\begin{equation*}
\begin{pmatrix}
x & y\\
0 & x^2-y^2
\end{pmatrix}\,\nabla w= \begin{pmatrix}
\alpha-x\\ -\beta(x+y)
\end{pmatrix}\,w.
\end{equation*}
Multiplying this equation by the symmetrization matrix
\begin{equation*}
S =
\begin{pmatrix}
1 & \dfrac{1}{x+y}\\
\\
1 & 1+\dfrac{1}{x+y}
\end{pmatrix},
\end{equation*}
we get the following Pearson equation for $w$
\begin{equation*}
\begin{pmatrix}
x & x\\
x & x^2-y^2+x
\end{pmatrix}\nabla w=\begin{pmatrix}
\alpha-\beta-x\\ -\beta(x+y)+(\alpha-\beta-x)
\end{pmatrix}w.
\end{equation*}
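This equation can again be checked symbolically; the following sketch (ours) should print the zero vector.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', positive=True)
al, be = sp.symbols('alpha beta')

w = x**(al - be) * sp.exp(-x) * (x - y)**be
Phi = sp.Matrix([[x, x],
                 [x, x**2 - y**2 + x]])
Psi = sp.Matrix([al - be - x, -be*(x + y) + (al - be - x)])
grad_w = sp.Matrix([sp.diff(w, x), sp.diff(w, y)])

print((Phi*grad_w - Psi*w).applyfunc(sp.simplify))
\end{verbatim}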
A suitable choice of auxiliary functions for the decomposition method is
\begin{equation*}
a(x,y)=\frac{1}{x^2-y^2}, \quad b(x,y)=\frac{1}{x^2-y^2},\quad c(x,y)=\frac{x^2-y^2+y}{x(x^2-y^2)},
\end{equation*}
and the resulting Pearson equation is
\begin{equation*}
\begin{pmatrix}
x & x\\
x & x^2-y^2+y
\end{pmatrix}\nabla w=\begin{pmatrix}
\alpha-\beta-x\\ -\beta(x+y)+\alpha-x
\end{pmatrix}w.
\end{equation*}
The Laguerre--Jacobi Koornwinder polynomials satisfy the difference--differential equation
$$\mathcal{L}[P_{n,m}] = \lambda_{n,m}\,P_{n,m}+\lambda_{n,m-1}\,P_{n,m-1}+\lambda_{n,m-2}\,P_{n,m-2},
$$
where
$$
\mathcal{L}[\cdot] =
x\partial_{xx}+2x\partial_{xy}+(x^2-y^2+x)\partial_{yy}
+(1+\alpha-\beta-x)\partial_x+[\alpha-\beta+1-(1+\beta)x-(2+\beta)y]\partial_y,
$$
and
\begin{align*}
&\lambda_{n,m}=-n-m(m+\beta),\\
&\lambda_{n,m-1}=-(m-1)(\beta+1),\\
&\lambda_{n,m-2}=m(m-1).
\end{align*}
Notice that Laguerre--Jacobi Koornwinder polynomials are classical according to Theorem \ref{teor-clas}.
\bigskip
\subsection{Laguerre--Laguerre Koornwinder polynomials}
In \cite{FPP12}, the Laguerre weight functions in one variable were considered:
\begin{align*}
&w_1(x)=x^{\alpha}e^{-x}, \quad 0\le x<\infty, \quad \alpha>-1,\\
&w_2(y)=y^{\beta}e^{-y}, \quad 0\le y<\infty, \quad \beta>-1,\\
&\rho(x)=x, \quad \alpha-\beta >-1,
\end{align*}
Then the Laguerre--Laguerre Koornwinder polynomials defined by
\begin{equation*}
P_{n,m}(x,y)=L^{(\alpha+2m+1)}_{n-m}(x)\,x^m\,L^{(\beta)}_m\left(\frac{y}{x}\right), \quad 0\le m\le n,
\end{equation*}
are orthogonal with respect to the weight function
\begin{equation*}
w(x,y)=x^{\alpha-\beta}y^{\beta}e^{-(x+y/x)},
\end{equation*}
on the unbounded region $\Omega=[0,\infty)\times[0,\infty)$.
Here, equation (\ref{eq30}) reads
\begin{equation*}
\begin{pmatrix}
x & y \\
0 & xy
\end{pmatrix}\,\nabla w=\begin{pmatrix}
\alpha-x \\ \beta x-y
\end{pmatrix}\, w,
\end{equation*}
and a suitable symmetrization matrix is
\begin{equation*}
S =
\begin{pmatrix}
x & 0\\
y & 1
\end{pmatrix}.
\end{equation*}
A convenient choice of auxiliary functions for the decomposition method is
\begin{equation*}
a(x,y)=\frac{1}{xy}, \quad b(x,y)=\frac{1}{x^2}, \quad c(x,y)=\frac{x+y}{x^3}.
\end{equation*}
The resulting Pearson equation for $w$ is
\begin{equation*}
\begin{pmatrix}
x^2 & xy \\
xy & (x+y)y
\end{pmatrix}\nabla w=\begin{pmatrix}
(\alpha-x)x \\ (\alpha-1)y+\beta x-xy
\end{pmatrix}w.
\end{equation*}
From equation (\ref{eq35}) for this case, we conclude that $w$ also satisfies the Pearson equation
\begin{equation*}
\begin{pmatrix}
x^2 & 0\\
0 & xy
\end{pmatrix}\nabla w=\begin{pmatrix}
(\alpha-\beta-x)x+y \\ \beta x-y
\end{pmatrix} w,
\end{equation*}
and the Laguerre--Laguerre Koornwinder polynomials satisfy the difference--differential relation
$$
\mathcal{L}[P_{n,m}] = \lambda_{n+1,m}P_{n+1,m} + \lambda_{n,m+1} P_{n,m+1} + \lambda_{n,m} P_{n,m}+\lambda_{n,m-1}P_{n,m-1},
$$
where
$$\mathcal{L}[\cdot] =
x^2\partial_{xx} + x y\partial_{yy} + [(\alpha-\beta+2-x)x+y]\partial_x+[(\beta+1)x-y]\partial_y,
$$
and \begin{align*}
&\lambda_{n+1,m}=-(n-m),\\
&\lambda_{n,m+1}=n-m+m(m-1),\\
&\lambda_{n,m}=(n-m)(n-m+\alpha+\beta)-m,\\
&\lambda_{n,m-1}=(m-1)(\beta+2).
\end{align*}
Observe that the Laguerre--Laguerre Koornwinder weight satisfies a matrix Pearson equation with
$\deg\Phi =\deg\Psi =2$, so the corresponding orthogonal polynomials are semiclassical.
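Both Pearson equations above can be checked by direct differentiation; for instance, the following sympy sketch (ours) verifies the symmetric one obtained from the decomposition method and should print the zero vector.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', positive=True)
al, be = sp.symbols('alpha beta')

w = x**(al - be) * y**be * sp.exp(-(x + y/x))
Phi = sp.Matrix([[x**2, x*y],
                 [x*y, (x + y)*y]])
Psi = sp.Matrix([(al - x)*x, (al - 1)*y + be*x - x*y])
grad_w = sp.Matrix([sp.diff(w, x), sp.diff(w, y)])

print((Phi*grad_w - Psi*w).applyfunc(sp.simplify))
\end{verbatim}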
\bigskip
\section{Introduction}
\label{Introduction}
As our world becomes better connected and more open ended, and autonomous agents are no longer science fiction, a need arises for enabling groups of agents to cooperate in generating a plan for diverse tasks that none of them can perform alone, in a cost-effective manner. Indeed, much like ad-hoc networks, one would expect various contexts to naturally lead to the emergence of ad-hoc teams of agents that can benefit from cooperation. Such teams could range from groups of manufacturers teaming up to build a product that none
can build on their own, to groups of robots sent by different agencies or countries to help in disaster settings. To perform complex tasks, these agents need to combine their diverse skills effectively.
Planning algorithms can help achieve this goal.
Most planning algorithms require full information about the set of actions and state variables in the domain. However, often, various aspects of this information are private to an agent, and it is not eager to share them. For example, the manufacturer is eager to let everyone know that it can supply motherboards, but it will not want to disclose the local process used to construct them, its suppliers, its inventory level, and the identity of its employees.
Similarly, rescue forces of country A may be eager to help citizens of country B suffering from a tsunami, but without having
to provide detailed information about the technology behind their autonomous bobcat to country B, or to country C's humanoid evacuation robots. In both cases, agents have public capabilities they are happy to share, and private processes and information that support these capabilities, which they prefer (or possibly require) to be kept private.
With this motivation in mind, a number of algorithms have recently been devised for distributed privacy-preserving planning~\cite{Bonisoli14,FMAP14,LB14,NBJAIR}. In these algorithms, agents supply a public interface only, and through a distributed planning process, come up with a plan that achieves the desired goal without being required to share a complete model of their actions and local state with other agents. But there is a major caveat: it is well known from the literature on secure multi-party computation~\cite{Yao82b} that the fact that a distributed algorithm does not require an agent to {\em explicitly\/} reveal private information does not imply that other agents cannot deduce such private information from other information communicated during the run of the algorithm. Consequently, given that privacy is the raison-d'etre for these algorithms, it is important to strive to improve the level of privacy provided, and to provide formal guarantees of such privacy properties.
To the best of our knowledge, to date, there have been two attempts to address this issue. In~\cite{TozickaSK17}, the authors describe
a secure planner for multi-agent systems. However, as they themselves admit, this planner is impractical, as it requires computing all possible solutions. \cite{Brafman15} describes {\sc{secure mafs}}\, a modification of the {\sc multi-agent forward search} algorithm~\cite{nissim2014distributed} in which an agent never sends similar states. {\sc{secure mafs}}\ is an efficient algorithm. In fact, an implementation of it based on an equivalent macro sending technique~\cite{MaliahSB16} shows state of the art performance. But it is not clear what security guarantees
it offers. While~\cite{Brafman15} provides some privacy guarantees, they are restricted to very special cases, and it seems
most plausible that {\sc{secure mafs}}\ is not secure in general.
The goal of this paper is to place the development of {\sc{secure mafs}}\ on firm footing by developing appropriate notions of privacy
that are useful and realizable in the context of search algorithms, to characterize the privacy preserving properties
of {\sc{secure mafs}}\ and to provide rigorous proofs for its correctness and completeness.
We define a notion of $\beta$-indistinguishable secure computation, and more specifically, we suggest a notion of PST-secure computation which is not as strong as that of strong privacy,
but is meaningful and more stringent than weak privacy. Roughly speaking, given a function $\beta$ on planning instances,
we say that an algorithm is $\beta$-indistinguishable if it will send the same messages during computation for any two
instances whose $\beta$ value is identical. PST-secure computation refers to the special case in which $\beta$ returns
a projected version of the search space -- one in which only the value of public variables is available.
The paper is structured as follows: First, we describe the basic model of privacy-preserving classical multi-agent planning.
Then, we discuss some basic notions of privacy. Next, we gradually develop more practical versions of PST-secure planning
algorithms, eventually describing an algorithm that is, essentially, {\sc{secure mafs}}, and prove that the latter is sound, complete, and PST-secure.
\section{The Model}
\label{model}
\textsc{ma-strips}~\cite{Brafman200828} is a minimal extension of \textsc{strips}\ to multi-agent domains.
A \textsc{strips}\ problem is a 4-tuple $\Pi = \langle P,A,I,G \rangle$, where
\begin{itemize}
\item
$P$ is a finite set of primitive propositions, which are essentially the state variables; a {\em state\/} is a truth assignment to $P$.
\item
$I$ is the initial state.
\item
$G$ is the set of goal states.
\item
$A$ is a set of actions.
Each action $a$ has the form $a=\langle \mathrm{pre}(a),\mathrm{eff}(a) \rangle$, where $ \mathrm{pre}(a)\subset P$ is the set of preconditions of $a$
and $ \mathrm{eff}(a)$ is a set of literals, denoting the effects of action $a$. We use $a(s)$ to denote the state attained by applying $a$ in $s$. The state $a(s)$ is well defined iff $s\models \mathrm{pre}(a)$. In that case, $a(s)\models p$ (for $p\in P$) iff $p\in \mathrm{eff}(a)$ or $s\models p$ and $\neg p\not\in \mathrm{eff}(a)$.
\end{itemize}
A {\em plan\/} $\pi = a_1,\ldots, a_m$ is a solution to $\Pi$ iff $a_m(\cdots a_1(I)\cdots)\models G$.
An \textsc{ma-strips}\ problem is a \textsc{strips}\ problem in which the action set $A$ is partitioned among a set $\Phi=\{\varphi_i\}_{i=1}^{k}$ of
agents. Formally, $\Pi = \langle P,\{A_i\}_{i=1}^{k},I,G \rangle$, where $P,I,G$ are as above, and $A_i$ is the set of actions of $\varphi_i$.
Work on privacy-preserving multi-agent planning seeks algorithms that generate good, or possibly optimal plans while not disclosing private information about their actions and the variables that they manipulate. For this to be meaningful, one has to first define what information is private and what information is not. Here we focus on the standard notion of private actions and private propositions. Thus, each action $a_i\in A_i$ is either {\em private} to agent $\varphi_i$ or {\em public}. Similarly, each proposition $p$ is either private to some agent $\varphi_i$ or public. To make sense, however, $p$ can be private to agent $\varphi_i$ {\em only\/} if $p$ does not appear in the description of an action $a_j\in A_j$ for $j\neq i$. Similarly, $a_i$ can be
private to $\varphi_i$ only if all propositions in $a_i$'s preconditions are either public or private to $\varphi_i$
and all propositions in $a_i$'s
effects are private to $\varphi_i$.
Hence, a {\em privacy preserving \textsc{ma-strips}\ problem} (\textsc{pp-mas}) is defined as a set of
local planning problems:
$\Pi=\{\Pi_i : i=1,\ldots,k\}$
where
$\Pi_i = \langle P_i^{\mathrm{prv}},P^{\mathrm{pub}},A_i^{\mathrm{prv}},A_i^{\mathrm{pub}},I_i,I^{\mathrm{pub}},G \rangle$. Here, $I^{\mathrm{pub}}$ is the value of $P^{\mathrm{pub}}$ in the initial state, and
the
goal is shared among all agents and
involves public propositions only. Furthermore, any action $a\in A_i^{\mathrm{prv}}$ involves private propositions only.
We use $A_i$ to denote $A_i^{\mathrm{prv}}\cup A_i^{\mathrm{pub}}$.
A solution for a \textsc{pp-mas}\ problem is the sequence of all the public actions in a solution for the
\textsc{ma-strips}\ problem.
We note that a more refined notion of privacy was suggested in~\cite{,}. While we believe that the ideas discussed in this paper
can be extended to this setting, we leave this for future work.
Recall that in classical planning, we assume that the world state is fully observable to the acting agent and actions are deterministic. The multi-agent setting shares these assumptions, except that full observability is w.r.t.~the primitive propositions in
$P_i^{\mathrm{prv}}\cup P^{\mathrm{pub}}$.
An issue that often arises is whether private goals should be allowed, or should all goals be public. Public goals make it easier for all agents to detect goal achievement, and have been assumed in most past work. As there is a simple reduction from private to public goals, albeit one that makes public the fact that all private goals of an agent have been achieved, we will maintain the assumption that all goal propositions are public.
Next, we define the notion of a \emph{public projection}. The \textit{public projection} $\pi_{\mathrm{proj}}(a)$ of an action
$a\in A_i$, $a=\langle \mathrm{pre}(a),\mathrm{eff}(a) \rangle$, is defined
as $\pi_{\mathrm{proj}}(a)=\langle \{ p\in P^{\mathrm{pub}} \mid p\in\mathrm{pre}(a) \},\{\ell\in\mathrm{eff}(a) \mid \ell \mbox{ is a literal over } P^{\mathrm{pub}}\} \rangle$. That is, the same action, but with its private propositions removed.
Accordingly, $\pi_{\mathrm{proj}}(a)$ for $a\in A_i^{\mathrm{prv}}$ is empty.
The \textit{public projection} $\pi_{\mathrm{proj}}(s)$ of a state is the partial assignment obtained by projecting $s$ to $P^{\mathrm{pub}}$.
Now, we define
$\pi_{\mathrm{proj}}(\Pi)$, the
public projection of
$\Pi=\{\Pi_i : i=1,\ldots,k\}$ to be
the \textsc{strips}\ planning problem:
$\langle P^{\mathrm{pub}},\{\pi_{\mathrm{proj}}(a):a\in A^{\mathrm{pub}}_i,1\leq i \leq k\},I^{\mathrm{pub}},G\rangle$.
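For concreteness, the following Python sketch (ours; the names and representation are invented for illustration and are not part of any planner's API) encodes actions of a \textsc{pp-mas}\ problem and the public projection $\pi_{\mathrm{proj}}$ of an action.
\begin{verbatim}
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Action:
    name: str
    pre: FrozenSet[str]      # precondition propositions
    add: FrozenSet[str]      # positive effects
    dele: FrozenSet[str]     # negative effects
    public: bool = False

def public_projection(a: Action, public_props: FrozenSet[str]) -> Action:
    # pi_proj(a): keep only public propositions; a private action
    # projects to an empty public interface
    if not a.public:
        return Action(a.name, frozenset(), frozenset(), frozenset(), False)
    return Action(a.name, a.pre & public_props, a.add & public_props,
                  a.dele & public_props, True)

# Example: a public action with one private precondition.
pub = frozenset({'at-depot', 'delivered'})
ship = Action('ship',
              pre=frozenset({'at-depot', 'inventory-ok'}),
              add=frozenset({'delivered'}),
              dele=frozenset({'at-depot'}),
              public=True)
print(public_projection(ship, pub).pre)   # frozenset({'at-depot'})
\end{verbatim}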
The \emph{search-tree} induced by a planning problem plays a key role in our definition of privacy in distributed forward search planning.
\begin{definition}
The search tree associated with an MA planning problem $\Pi=\langle P,\{A_i\}_{i=1}^{k},I,G \rangle$, denoted by $\mathrm{ST}(\Pi)$, is a tree inductively defined below, where
every node is labeled by a state and is either private to some agent or public, and every edge is labeled by an action.
The root is labeled by $I$, and is public. The children of a node $v$ labeled by a state $s$ are defined as follows:
\begin{itemize}
\item
If $v$ is public, then for every $a$ applicable in $s$ there is a child labeled by $a(s)$.
\item
If $v$ is private to $\varphi_i$, then for every $a\in A_i$ applicable in $s$ there is a child labeled by $a(s)$.
\item In both cases, the node $a(s)$ is public if $a$ is public, and $a(s)$ is private to $\varphi_i$
if $a$ is private to $\varphi_i$.
\item The edge from $s$ to $a(s)$ is labeled by $a$.
\end{itemize}
We will also assume the existence of some lexicographic ordering over states which defines
the order of the children of a node. We assume that public variables appear before private variables in this order.
\end{definition}
Next we define a concept of the public projection of a search tree. First, we project all states into their public parts.
Then, we connect every public node to its closest public descendants, remove all private nodes, and remove duplicate
children in the resulting tree. Formally:
\begin{definition}
The \emph{public-projection of the search
tree of $\Pi$} (denoted $\mathrm{PST}(\Pi)$) is a tree, defined below, whose nodes are labeled by assignments to the public variables of $\Pi$
and edges are labeled by public actions.
Each node in $\mathrm{PST}(\Pi)$ corresponds to a list of public nodes in the search-tree $ST(\Pi)$, where the public states of all the nodes in the list are the public state of the node in $\mathrm{PST}(\Pi)$ (this list is used only to construct $\mathrm{PST}(\Pi)$ from $\mathrm{ST}(\Pi)$ and is not part of $\mathrm{PST}(\Pi)$).
The tree is inductively defined.
\begin{itemize}
\item
The root of $\mathrm{PST}(\Pi)$
corresponds to the root of $\mathrm{ST}(\Pi)$ and is labeled by $I^{\mathrm{pub}}$.
\item
Let $w$ be a node in $\mathrm{PST}(\Pi)$, with public state $s$, that corresponds to public nodes $v_1,\dots,v_k$ in the search tree $\mathrm{ST}(\Pi)$. Denote the (public and private) states of $v_1,\dots,v_k$ by $s_1,\dots,s_k$ respectively. We define the children of $w$
in two stages:
\begin{itemize}
\item
First, for every $i\in\set{1,\dots,k}$ and every public descendant $v'$ of $v_i$ such that
all internal nodes in the path from $v_i$ to $v'$ are private, i.e.,
the labels of the edges on the path from $v_i$ to $v'$ are actions $a_1,\ldots,a_\ell$ such that $a_1,\ldots,a_{\ell-1}$ are private actions and $a_\ell$ is a public action,
we construct a child $w'$.
We label the edge from $w$ to $w'$ by the last action on this path, namely, by $a_\ell$.
The public state of $w'$ is the public state in $a_\ell(\cdots a_2(a_1(s_i)))$ and we associate
$v'$ with $w'$.
\item
We remove duplicated children. That is, if $w_1$ and $w_2$ are children of $w$ such that the actions labeling the edges $(w,w_1)$ and $(w,w_2)$ are the same and the public states of $w_1$ and $w_2$ are the same, then we merge $w_1$ and $w_2$ and associate all the nodes associated to them to the merged node. We repeat this process until there are no children that can be merged.
\end{itemize}
\end{itemize}
\end{definition}
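The two-stage construction above can be phrased operationally: from the public search-tree nodes associated with a $\mathrm{PST}(\Pi)$ node, walk down through private descendants until the first public action is applied, and then merge children with equal edge labels and public states. A rough Python sketch (ours; the node representation is invented for illustration) follows.
\begin{verbatim}
from collections import defaultdict

# An ST node is a dict: {'pub_state': <hashable public state>,
#                        'children': [(action, is_public, child), ...]}

def pst_children(associated_st_nodes):
    # Given the ST nodes associated with one PST node, return a mapping
    # (last public action, public state) -> associated ST nodes of the
    # corresponding PST child; equal keys are merged, as in stage two.
    merged = defaultdict(list)
    for v in associated_st_nodes:
        stack = [v]
        while stack:
            u = stack.pop()
            for action, is_public, child in u['children']:
                if is_public:
                    # a public edge ends a private path: emit a PST child
                    merged[(action, child['pub_state'])].append(child)
                else:
                    # keep descending through private nodes
                    stack.append(child)
    return merged

# Tiny example: a private action followed by a public one.
leaf = {'pub_state': 's2', 'children': []}
priv = {'pub_state': 's1', 'children': [('a_pub', True, leaf)]}
root = {'pub_state': 's0', 'children': [('a_prv', False, priv)]}
print(dict(pst_children([root])))   # {('a_pub', 's2'): [leaf]}
\end{verbatim}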
\section{Privacy Guarantees}
The main property of interest from a solution algorithm to
a \textsc{pp-mas}\ planning problem, aside from soundness and completeness, is the level of privacy it preserves.
The main privacy-related question one asks regarding a \textsc{pp-mas}\ algorithm is whether coalitions of agents participating in the planning algorithm will be able to gain information about the private propositions and actions of other agents.
In what follows we work under the following assumptions:
\begin{itemize}
\item Agents are {\em honest, but curious\/}. This is a well known assumption in secure multi-party computation (see, e.g.,~\cite{LindellP10}).
According to this assumption, which we believe applies to many real-world interactions among business partners and ad-hoc teams,
the agents perform the algorithm as specified, but are curious enough to collude and try to learn what they can about the other agents without acting maliciously. (Alternatively, consider malicious agents that eavesdrop on the communication among agents, but are not part of the team, so they cannot intervene.)
\item The algorithm is synchronous. That is, agents operate with a common clock and send messages in rounds, and
these messages are immediately delivered without corruption or delay.
\item We require perfect security; that is, even an unbounded adversary cannot learn any additional information beyond the leakage function (defined below).
\end{itemize}
To date, most work has been satisfied with algorithms that never explicitly expose private information, typically by encrypting this information prior to
communicating it to other agents. Consequently, we say that an algorithm is {\em weakly private\/} if the names of private actions and
private state variables and their values are never communicated explicitly.
However, the fact that information is not explicitly communicated is not sufficient. Consider, for example, an algorithm in which agents share with each other their complete domains, except that the names of private actions and state variables are obfuscated by (consistently) replacing each with some
arbitrary random string. This satisfies the requirement of weak privacy, but provides the other agents with a complete model that is isomorphic to the real model. For example, imagine a producer who expects exclusivity from its suppliers. With this scheme, the producer will not know the real names of other customers of its suppliers,
but it will certainly learn of their existence. Similarly, a shipping company may not want to have others learn about the size of its fleet, or the number of workers it employs.
At the other extreme we have {\em strong privacy\/}. We say that an algorithm is {\em strongly private\/} if no coalition of agents can deduce from the information
obtained during a run of this algorithm any information that it cannot deduce from the public projection of the planning problem,
the private information the coalition has (i.e., the initial states and the actions of the agents in the coalition), and
the public projection of ``its solution''. As we are considering search problems, where many
solutions can exist, the traditional privacy definition for functions does not apply. The problem is that the solution chosen by the algorithm can leak information (e.g.,
an algorithm that returns the lexicographically first solution leaks that no lexicographically smaller solution exists). See \cite{BeimelCNW08} for a discussion of this problem and a suggested definition of privacy for search problems.
Furthermore, strong privacy is likely to be very difficult to achieve and to prove unless stronger cryptographic methods are introduced. With
the latter, it will be
possible to develop algorithms that are strongly private, but, at least
with our current knowledge, this is likely to come at a substantial computational cost that will render them impractical for the size of inputs we would like to consider.
Weak privacy, on the other hand, seems too weak in most cases, and provides no real guarantee, as it is not clear what information is deducible from the algorithm.
Given this state of affairs, where in the existing algorithms strong privacy is not as practical as desired, whereas weak privacy tells us little, if anything, about the information that might be leaked, it is
important to provide tools that will specify the privacy guarantees of existing and new algorithms.
Here we would like to suggest a type of privacy ``lower bound'' in the form of an indistinguishability guarantee. More specifically, given a function $\beta$ defined on planning domains, we say that an algorithm is \emph{$\beta$-indistinguishable}
if a coalition of agents participating in the planning algorithm solving a problem $\Pi$ cannot distinguish between the current domain and any other domain $\Pi'$ such that $\beta(\Pi) = \beta(\Pi')$. We provide two equivalent definitions of privacy.
We define the view of a set of agents $T$, denoted $\operatorname{\rm view}_T(x)$,
in an execution of a deterministic algorithm with inputs $x=(x_1,\dots,x_n)$
as all the information it sees during the execution,
namely,
the inputs of the agents in $T$ (namely, $(x_i)_{i\in T}$)
and the messages exchanged during the execution of the algorithm.
\begin{definition}
\label{def:ind-dist}
Let $\beta:\set{0,1}^*\rightarrow \set{0,1}^*$ be a (leakage) function. We say that a deterministic algorithm is \emph{$\beta$-indistinguishable} if for every set $T$ of agents and for every two inputs
$x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$ such that $x_i=y_i$ for every $i \in T$ and $\beta(x)=\beta(y)$, the view of $T$ is the same,
i.e., $\operatorname{\rm view}_T(x)=\operatorname{\rm view}_T(y)$.
\end{definition}
\begin{definition}
\label{def:ind-sim}
Let $\beta:\set{0,1}^*\rightarrow \set{0,1}^*$ be a (leakage) function. We say that a deterministic algorithm is \emph{$\beta$-indistinguishable} if there exists a simulator $\algname{Sim}$ such that for every set $T$ of agents and for every input
$x=(x_1,\dots,x_n)$ the view of $T$ is the same as the output of the simulator that is given $(x_i)_{i\in T}$ and $\beta(x)$, i.e., $\algname{Sim}(T,(x_i)_{i\in T},\beta(x))=\operatorname{\rm view}_T(x)$.
\end{definition}
In \cref{def:ind-sim}, the simulator is given the inputs of the agents in $T$ and $\beta(x)$ -- the output of the leakage function applied to the inputs of all agents. The simulator is required to produce all the messages that were exchanged during the algorithm.
If such a simulator exists, then all the information that the adversary can learn from the execution of the algorithm is implied by the inputs of the parties in $T$ and $\beta(x)$.
\begin{claim}
The two definitions are equivalent.
\end{claim}
\begin{proof}
Assume that an algorithm is $\beta$-indistinguishable according to \cref{def:ind-sim}.
Let
$x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$ be two inputs
such that $x_i=y_i$ for every $i \in T$ and $\beta(x)=\beta(y)$.
Thus,
$\algname{Sim}(T,(x_i)_{i\in T},\beta(x))=\algname{Sim}(T,(y_i)_{i\in T},\beta(y)).$
Therefore, by \cref{def:ind-sim},
$\operatorname{\rm view}_T(x)=\algname{Sim}(T,(x_i)_{i\in T},\beta(x))=\algname{Sim}(T,(y_i)_{i\in T},\beta(y))=\operatorname{\rm view}_T(y)$.
For the other direction, assume that an algorithm is $\beta$-indistinguishable according to \cref{def:ind-dist}. Let
$x=(x_1,\dots,x_n)$ be any input. We construct a simulator $\algname{Sim}$ that, given $T,(x_i)_{i\in T},\beta(x)$, proceeds as follows:
\begin{itemize}
\item
Finds inputs $(y_i)_{i \notin T}$
such that $\beta(y)=\beta(x)$, where $y_i=x_i$ if $i \in T$.
\item
Outputs $\operatorname{\rm view}_T(y)$.
\end{itemize}
By \cref{def:ind-dist}, $\operatorname{\rm view}_T(x)=\operatorname{\rm view}_T(y)$, thus,
$\algname{Sim}(T,(x_i)_{i\in T},\beta(x))=\operatorname{\rm view}_T(x)$, as required in \cref{def:ind-sim}.
\end{proof}
Note that the simulator is not given the output of the function computed by the algorithm; this information is implied by the messages exchanged in the algorithm.
The simulator can compute the view of $T$, hence the output, from the information it gets.
This implies that the leakage $\beta(x)$ (together with $(x_i)_{i\in T}$) determines the output of the algorithm. This is an important feature of our definition, as we consider search problems where there can be many possible outputs. The output that an algorithm returns might leak information on the inputs (see~\cite{BeimelCNW08}), and it is not clear how to compare the privacy provided by two algorithms returning different solutions. Our definition bypasses this problem as it explicitly specifies the leakage.
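The generic simulator from the equivalence proof can be sketched in Python as follows; here \texttt{beta}, \texttt{view}, and \texttt{input\_domain} are hypothetical stand-ins for the leakage function, the coalition's view of a run, and a finite per-agent input space.
\begin{verbatim}
from itertools import product

def generic_simulator(T, x_T, leakage, beta, view, input_domain, n):
    """Brute-force simulator from the proof: find any full input y that
    agrees with x on T and satisfies beta(y) == beta(x), then output
    view_T(y), which equals view_T(x) by Definition ind-dist."""
    free = [i for i in range(n) if i not in T]
    for rest in product(input_domain, repeat=len(free)):
        y = [x_T.get(i) for i in range(n)]
        for i, v in zip(free, rest):
            y[i] = v
        y = tuple(y)
        if beta(y) == leakage:
            return view(T, y)
    raise ValueError("no input consistent with the given leakage")
\end{verbatim}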
In this paper, we will focus on a particular function $\beta$ that returns the public projection of the
problem's search tree. That is, the algorithms we will consider will have the property that a set of agents
cannot distinguish between two problem instances whose public projections and PSTs are identical.
We will refer to this as {\em PST-indistinguishable security}.
A recently proposed example of privacy w.r.t.\ a class of domains is {\em cardinality-preserving privacy}~\cite{MaliahSS17}, where the idea is that agents cannot
learn the number of values of some variable, such as the number of locations served by a truck.
(Defining this formally requires using multi-valued variable domains.)
Another notion of privacy recently introduced is \emph{agent privacy}~\cite{FaltingsLP08} in which agents are not aware
of other agents with whom they do not have direct interactions -- i.e., agents that require or affect
some of the variables that appear in their own actions. This notion is more natural when such interactions
are explicitly modelled using the notion of subset-private variables~\cite{Bonisoli14}.
These notions seem more ad-hoc and weaker than our definition of privacy.
We will not discuss these notions in this paper.
\section{A PST-Indistinguishable Algorithm}
The goal of this section is to show that {\sc{secure mafs}}\ is PST-indistinguishable. We will do it by gradually refining a very simple (and inefficient) algorithm to obtain an algorithm that is essentially identical to {\sc{secure mafs}}, which, as shown by~\cite{MaliahSB16}, is quite efficient in practice, and is thus the first algorithm that is both practical and has clear theoretical guarantees.
This gradual progression will make the proofs and ideas simpler.
\subsection{A Simple Algorithm}
We start with a very simple algorithm, which we shall call PST-Forward Search.
The algorithm simply constructs $\mathrm{PST}(\Pi)$ -- the public-projection of the search
tree of $\Pi$.
The search progresses level by level in the public-projection of the search tree. In a given level of the tree, each agent $\varphi_i$: (1) computes the children of all the nodes in $\mathrm{PST}(\Pi)$, where a child of a node results from a sequence of private actions followed by a single public action by the agent,
and (2) sends the public state of each child (as well as a description of the path to the child) to all other agents (removing duplicates).
The PST-Forward Search algorithm is described in \cref{alg:simple-search}.
In this algorithm, the agents maintain a set $Q_{d}$ for every level $d$ in the tree,
which will contain all nodes in level $d$.
Every element in the set is a node represented as a pair $(\vec{s},\vec{a})$, where $\vec{s}=(s_0,\dots,s_m)$ is a sequence of public states such that $s_0=I^{\mathrm pub}$ and $\vec{a}=(a_1,\dots,a_m)$ is a sequence of public actions.
Such a pair describes a path in the PST from the root to the node in level $d$.
To find the actions that an agent can apply from a node, it needs to compute the possible private states of that node,
as this information is not contained in the message it received. To do this, the agent reconstructs its private
state, as described in Algorithm {\bf compute-private-states}. This is, of course, highly inefficient, but has the
desired privacy property.
\begin{algorithm}
\caption{PST Forward Search}
\label{alg:simple-search}
\begin{algorithmic}[1]
\STATE {\bf initialization:} $d \gets 0$; for $i \in \set{1,\dots,n}$ set $Q_{0}=\set{(I^{\mathrm{pub}},\epsilon)}$.\\
// $Q_d$ will contain the states at level $d$ of the PST. Each agent maintains a copy of it.
\WHILE{goal has not been achieved}
\STATE $d\gets d+1$; for every $i \in \set{1,\dots,n}$ agent $\varphi_i$ sets $Q_{d}\gets\emptyset$ and $C_i\gets \emptyset$.
\FOR{$i=1$ \TO $n$ }
\STATE Agent $\varphi_i$ does the following:
\FORALL{$(\vec{s},\vec{a})\in Q_{d-1}$}
\STATE let $s$ be the last state in $\vec{s}$.
\STATE executes $PS\gets $\textbf{ compute-private-states}$(i,\vec{s},\vec{a})$.
\FORALL{private state $ps \in PS$}
\FORALL{sequence $a_1,\dots,a_\ell$ of actions of $\varphi_i$ applicable from $s,ps$, where $a_1,\dots,a_{\ell-1}$ are private and $a_\ell$ is public }
\STATE computes $(s',ps') \gets a_\ell(a_{\ell-1}(\cdots a_1((s,ps))))$ and $C_i \gets C_i \cup \set{((\vec{s},s'),(\vec{a},a_\ell))}.$
\ENDFOR
\ENDFOR
\ENDFOR
\STATE sends $C_i$ to all agents (where the elements of $C_i$ are sent according to some canonical order).
\STATE each agent $\varphi_j$ updates its copy: $Q_{d}\gets Q_{d} \cup C_i$.
\IF{the last state $s'$ in some $((\vec{s},s'),(\vec{a},a_\ell)) \in C_i$ satisfies the goal}
\STATE all agents output $(\vec{a},a_\ell)$ and halt.
\ENDIF
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{compute-private-states$(i,\vec{s}=(s_0,\dots,s_m),\vec{a}=(a_1,\dots,a_m))$}
\label{alg:CompPrivate}
\begin{algorithmic}[1]
\STATE\COMMENT{The algorithm reconstructs the possible private states of agent $\varphi_i$
starting from $I^{\rm pub},I_i$ and updating it according to the states in $\vec{s}$ and the actions of $\varphi_i$ in $\vec{a}$.}
\STATE let $PS_0\gets\set{I_i}$.
\FOR{$j=1$ \TO $m$}
\IF{$a_j$ is not an action of $\varphi_i$}
\STATE{$PS_j\gets PS_{j-1}$.}
\ELSE
\STATE $PS_j\gets \emptyset$.
\FORALL{$ps \in PS_{j-1}$ and sequence of private actions
$a'_1,\dots,a'_{\ell}$ in $A_i$ such that $a'_1,\dots,a'_{\ell},a_j$ is applicable from $s_{j-1},ps$}
\STATE let $(s',ps') \gets a_j(a'_{\ell}(\cdots a'_1((s_{j-1},ps))))$.
\IF{$s'=s_j$}
\STATE{$PS_j\gets PS_j\cup \set{ps'}$.}
\ENDIF
\ENDFOR
\ENDIF
\ENDFOR
\RETURN{$PS_m$.}
\end{algorithmic}
\end{algorithm}
In \cref{alg:simple-search}, the messages sent correspond exactly to the PST nodes,
and therefore, two domains with an identical PST will yield identical messages. To enable an exact simulation, we need to specify the order in which each agent sends the possible sequences of children in a given level; we assume that this is done in some canonical order. We supply the formal proof of privacy in the next claim.
\begin{claim}
\cref{alg:simple-search}, the PST Forward Search algorithm, is a PST-indistinguishable secure algorithm.
\end{claim}
\begin{proof}
The simulator, given the PST $T$, traverses the tree level by level.
In each level $d$, it goes over all agents $\varphi_i$, starting from $\varphi_1$ and ending at
$\varphi_n$; for each agent $\varphi_i$ it sends the nodes of level $d$ resulting from an action of $\varphi_i$, where for each node it sends the public states and the actions on the path from the root to the node. The order of sending the nodes is as in the algorithm, according to the fixed canonical order.
\end{proof}
\subsection{Using IDs}
Next, we present an optimization of \cref{alg:simple-search}, which eliminates the need to compute private states, and merges some nodes in the tree, reducing the communication complexity of the algorithm.
We call this version PST-ID Forward Search.
Notice that only actions of $\varphi_i$ change the local state of $\varphi_i$.
There are two approaches to exploiting this observation.
In one approach, for each node that is sent, the agent sending the node can locally keep a list containing its possible local states in that node. When an agent wants to compute the children of some node, it looks for its last action in the path to the node and retrieves its possible local states after that action.
In the second approach, which we use, each agent associates the possible local states with a unique id and keeps the possible local states associated with this id. Each time an agent sends a node in the tree, it sends the public state of the node as well as the $n$ ids, encoding the local states of each agent. Notice that each id is not a function of these local states, but only of the particular PST node with which it is associated. When an agent wants to compute the children of a node resulting from its actions, it does the following:
\begin{itemize}
\item
It retrieves all private states associated with its id in this state.
\item
It expands the public state and each possible private state using all possible action sequences consisting of private actions followed by a single public action.
\item
For every node reached, it generates a new id and associates
with it its local states in the generated states that share this public state,
keeping the ids of all other agents associated with the original node.
\item
It orders the nodes based on some lexicographic order.
\item
It sends these nodes, with their public states and their associated ids, in this order to all agents.
\end{itemize}
Note that the above algorithm sends at each stage a vector consisting of a public state and an id for
each agent. As this id encodes the private state(s) of the agent, we can think of the message as representing the state, with its private components encoded. The agent need not send the actions leading to the new node, nor the father of the new node. Furthermore,
if two (or more) children of a node have the same public state, the agent does not need to send them twice;
it can send one public state, together with the ids of the other agents taken from the original node, and one new id for the agent associated with all its possible private states associated with any one of these children. We go one step further, merging all nodes generated by an agent in level $d$ (possibly with different fathers) if they have the same public state and the same ids for all other agents.
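The id bookkeeping can be sketched as follows (a minimal illustration; the class and method names are ours): an agent allocates a fresh id per distinct key consisting of the public state and the other agents' ids, and merges private states under an existing id.
\begin{verbatim}
class IdTable:
    """Per-agent id bookkeeping for PST-ID Forward Search (sketch)."""
    def __init__(self, initial_private):
        self.next_id = 0
        self.states = {0: {initial_private}}  # id -> set of private states
        self._key_to_id = {}                  # (pub, other_ids) -> id

    def new_level(self):
        self._key_to_id.clear()               # merging is done per level

    def assign(self, pub, other_ids, ps):
        """Return (id, is_new) for a child node; merge ps under an
        existing id when (pub, other_ids) was already seen."""
        key = (pub, other_ids)
        if key in self._key_to_id:
            j = self._key_to_id[key]
            self.states[j].add(ps)
            return j, False                   # merged; nothing new to send
        self.next_id += 1
        self._key_to_id[key] = self.next_id
        self.states[self.next_id] = {ps}
        return self.next_id, True             # fresh id; send this node
\end{verbatim}
A node sent to the other agents is then the pair of the public state and the $n$ ids, where the sending agent's own id is the one returned by \texttt{assign}.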
The formal description of the algorithm appears in \cref{alg:less-simple-search}.
The algorithm that recovers a solution after the goal has been reached is described in \cref{alg:recover-solution}.
\begin{algorithm}
\caption{PST-ID Forward Search}
\label{alg:less-simple-search}
\begin{algorithmic}[1]
\STATE {\bf initialization:} $d \gets 0$; for every $i \in \set{1,\dots,n}$ agent $\varphi_i$ sets $id_i\gets 0$, $Q_{0}\gets \set{(I^{\mathrm{pub}},0,\dots,0)}$, and $PS_{i}[0]\gets \set{I_i}$. \\
//$PS_{i}[j]$ denotes the local states of $\varphi_i$ associated with the id $j$.
\WHILE{goal has not been achieved}
\STATE $d\gets d+1$; for every $i \in \set{1,\dots,n}$ agent $\varphi_i$ sets $Q_{d}\gets \emptyset$, $C_i\gets \emptyset$,
and $E_i\gets \emptyset$.
\FOR{$i=1$ \TO $n$ }
\STATE agent $\varphi_i$ does the following:
\FORALL{$(s,j_1,\dots,j_n)\in Q_{d-1}$}
\FORALL{private state $ps\in PS_{i}[j_i]$ }
\FORALL{sequence $a_1,\dots,a_\ell$ of actions of $\varphi_i$ applicable from $s,ps$, where $a_1,\dots,a_{\ell-1}$ are private and $a_\ell$ is public }
\STATE $(s',ps') \gets a_\ell(a_{\ell-1}(\cdots a_1((s,ps))))$
\STATE $E_i\gets E_i \cup \set{(s',j_1,\dots,j_{i-1},j_{i+1},\dots,j_n,ps')}$.
\ENDFOR
\ENDFOR
\ENDFOR
\STATE agent $\varphi_i$ sorts the elements of $E_i$, first by the public state, then by the $n-1$ ids, and then by the private state. Let $((s^1,j_1^1,\dots,j^1_{i-1},j^1_{i+1},\dots,j_n^1,ps^1)$
$\dots,(s^t,j_1^t,\dots,j^t_{i-1},j^t_{i+1},\dots,j_n^t,ps^t))$ be the sorted elements of $E_i$.
\FOR{$u=1$ \TO $t$}
\IF{$u > 1$ \AND $s^{u-1} =s^u $ \AND $(j_1^{u-1},\dots,j^{u-1}_{i-1},j^{u-1}_{i+1},\dots,j^{u-1}_n)=
(j_1^u,\dots,j^u_{i-1},j^u_{i+1},\dots,j_n^u)$}
\STATE $PS_{i}[id_i]\gets PS_{i}[id_i] \cup \set{ps^u}$.
\ELSE
\STATE $id_i\gets id_i+1$.
\STATE $C_i\gets C_i \cup \set{(s^u,j_1^u,\dots,j^u_{i-1},id_i,j^u_{i+1},\dots,j_n^u)}$ and $PS_{i}[id_i]\gets \set{ps^u}$.
\ENDIF
\ENDFOR
\STATE $\varphi_i$ sends $C_i$ to all agents (where the elements of $C_i$ are sent according to some canonical order).
\STATE each agent $\varphi_j$ updates: $Q_{d}\gets Q_{d} \cup C_i$.
\ENDFOR
\IF{the state $s$ in some element in $Q_{d}$ satisfies the goal}
\STATE the agents execute $sol \gets $ {\bf recover-solution}, output $sol$, and halt.
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\cref{alg:recover-solution} described below returns a solution to the planning problem, i.e., a sequence of public actions on a path from the root of the PST to a node in level $d$ that satisfies the goal.
Clearly, this sequence of actions should be computed from the information computed by the algorithm so far.
Furthermore, to guarantee privacy, this sequence of actions should be determined by the PST (that is, a simulator can generate it from the PST). In \cref{alg:recover-solution} we choose it in a specific way that is fairly efficient (especially if the agents keep additional information during \cref{alg:less-simple-search}).
In \cref{alg:recover-solution}, we say that
$s_{d'-1},j_{1,{d'-1}},\dots,j_{n,{d'-1}} \in Q_{d'-1}$
leads to $s_{d'},j_{1,{d'}},\dots,j_{n,{d'}} \in Q_{d'}$ by agent $\varphi_i$ if
there exist private states $ps_{d'-1}\in PS_{i}[j_{i,d'-1}]$ and $ps_{d'}\in PS_{i}[j_{i,d'}]$, and a sequence of actions $a_1,\dots,a_\ell$ of agent $\varphi_i$ such that $a_1,\dots,a_{\ell-1}$ are private and $a_\ell$ is public and
$a_1,\dots,a_{\ell}$ are applicable from $s_{d'-1},ps_{d'-1}$ and lead to $s_{d'},ps_{d'}$.
\begin{algorithm*}
\caption{recover-solution}
\label{alg:recover-solution}
\begin{algorithmic}[1]
\STATE let $s_d,j_{1,d},\dots,j_{n,d}$ be the first element in $Q_{d}$ that satisfies the goal.
\STATE \COMMENT{recall that all agents have a copy of $Q_d$.}
\FOR{$d'=d$ {\bf downto } 1 }
\STATE let $\varphi_i$ be the agent performing the last action leading to $s_{d'},j_{1,{d'}},\dots,j_{n,{d'}}$.
\STATE agent $\varphi_i$ finds the first element $s_{d'-1},j_{1,{d'-1}},\dots,j_{n,{d'-1}} \in Q_{d'-1}$
leading to $s_{d'},j_{1,{d'}},\dots,j_{n,{d'}}$.
\STATE let $a_{d'}$ be the last action in a sequence of actions
leading from $s_{d'-1},j_{1,{d'-1}},\dots,j_{n,{d'-1}}$ to $s_{d'},j_{1,{d'}},\dots,j_{n,{d'}}$
(if there is more than one such action, choose the lexicographically first action).
\STATE agent $\varphi_i$ sends $s_{d'-1},j_{1,{d'-1}},\dots,j_{n,{d'-1}}$ and $a_{d'}$ to all other agents.
\ENDFOR
\RETURN $a_1,\dots,a_d$.
\end{algorithmic}
\end{algorithm*}
\begin{claim}
\label{c:less-simple}
\cref{alg:less-simple-search}, the PST-ID Forward Search algorithm, is a PST-indistinguishable secure algorithm.
\end{claim}
\begin{proof}
We construct a simulator proving that \cref{alg:less-simple-search} is a PST-indistinguishable secure algorithm. We first supply a high level description of the simulator.
The simulator, given the PST $T$, traverses the tree level by level and simulates the algorithm. For some level $d$, it goes over the agents from agent $\varphi_1$ to $\varphi_n$ and for each agent it produces a list $C_i$ as the agent would have sent, using the nodes in
level $d$ resulting from an action of $\varphi_i$. Recall that each element in $C_i$ is a public state and a list of $n$ ids. To produce these ids (and to know which nodes should be merged), for every vertex $w$ in level $d$ the simulator computes a label, denoted by $L(w)$, that contains $n$ ids; this label is computed using the label of the father of a node $w$, denoted by $f(w)$. The labels of $w$ and $f(w)$ are the same except for the $i$th id, which is carefully computed to simulate \cref{alg:less-simple-search}.
After reaching the first level in which there is a node satisfying the goal, the simulator, using the PST tree, reconstructs the solution that
\Cref{alg:recover-solution} returns.
The simulator is formally described in \cref{sim:less-simple-search}.
The input in \cref{sim:less-simple-search} is a PST $T$; we denote its root by $root$.
It can be easily proved by induction that the simulator computes the same messages as \cref{alg:less-simple-search}.
\end{proof}
\begin{algorithm}
\caption{Simulator for \cref{alg:less-simple-search} -- The PST-ID Forward Search Algorithm}
\label{sim:less-simple-search}
\begin{algorithmic}[1]
\REQUIRE A PST tree $T$
\STATE {\bf initialization:} $d \gets 0$; for every $i \in \set{1,\dots,n}$ set $id_i\gets 0$, $Q_{0}\gets ((I^{\mathrm{pub}},0,\dots,0))$.
\WHILE{goal has not been achieved}
\STATE $d\gets d+1$; for every $i \in \set{1,\dots,n}$ set $Q_{d}\gets \emptyset$,
$C_i\gets \emptyset$, and $\tilde{E}_i\gets \emptyset$.
\STATE $L(root)\gets(0,\dots,0)$.
\FOR{$i=1$ \TO $n$ }
\FORALL{node $w$ in level $d$ s.t.~the edge $(f(w),w)$ is labeled by an action of $\varphi_i$}
\STATE let $L(f(w))=(j_1,\dots,j_n)$ and $s'$ be the state of node $w$.
\STATE $\tilde{E}_i\gets \tilde{E}_i \cup \set{(s',j_1,\dots,j_{i-1},j_{i+1},\dots,j_n,w)}$.
\ENDFOR
\STATE sort the elements of $\tilde{E}_i$, first by the public state, then by the $n-1$ ids, and then by $w$.
\STATE let $((s^1,j_1^1,\dots,j^1_{i-1},j^1_{i+1},\dots,j_n^1,w^1),$
$\dots,(s^t,j_1^t,\dots,j^t_{i-1},j^t_{i+1},\dots,j_n^t,w^t))$ be the sorted elements of $\tilde{E}_i$.
\FOR{$u=1$ \TO $t$}
\IF{ $u=1$ \OR $s^{u-1} \neq s^u $ \OR $(j_1^{u-1},\dots,j^{u-1}_{i-1},j^{u-1}_{i+1},\dots,j^{u-1}_n) \neq
(j_1^u,\dots,j^u_{i-1},j^u_{i+1},\dots,j_n^u)$}
\STATE $id_i\gets id_i+1$.
\STATE $C_i\gets C_i \cup \set{(s^u,j_1^u,\dots,j^u_{i-1},id_i,j^u_{i+1},\dots,j_n^u)}$.
\ENDIF
\STATE $L(w^u)\gets (j_1^u,\dots,j^u_{i-1},id_i,j^u_{i+1},\dots,j_n^u)$.
\ENDFOR
\STATE send $C_i$ on behalf of $\varphi_i$ to all agents (where the elements of $C_i$ are sent according to some canonical order).
\STATE for every $j \in \set{1,\dots,n}$ set $Q_{d}\gets Q_{d} \cup C_i$.
\ENDFOR
\IF{ the state $s$ in some element in $Q_d$ satisfies the goal}
\STATE execute $sol \gets $ {\bf sim-recover-solution}, output $sol$, and halt.
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{algorithm*}
\caption{sim-recover-solution}
\label{sim:recover-solution}
\begin{algorithmic}[1]
\STATE let $s_d,j_{1,d},\dots,j_{n,d}$ be the first element in $Q_{d}$ that satisfies the goal
and $W_d$ be all nodes $w$ in level $d$ whose public state is $s_d$ and whose label $L(w)$ is
$j_{1,d},\dots,j_{n,d}$.
\FOR{$d'=d$ {\bf downto } 1 }
\STATE let $F_{d'-1}\gets \set{f(w)|w \in W_{d'}}$.
\STATE let $s_{d'-1},j_{1,d'-1},\dots,j_{n,d'-1}$ be the first element in
$\set{(s(v),L(v)) \mid v \in F_{d'-1}}$ (where $s(v)$ is the public state in the node $v$).
\STATE let $a_{d'}$ be the lexicographically first action labeling an edge from a node
$v \in F_{d'-1}$ such that $s(v)=s_{d'-1}$ and $L(v)=j_{1,d'-1},\dots,j_{n,d'-1}$
to a node in $W_{d'}$.
\STATE let $W_{d'-1}$ be all nodes in $F_{d'-1}$ such that $s(v)=s_{d'-1}$, $L(v)=j_{1,d'-1},\dots,j_{n,d'-1}$ and there exists an edge from them to a node in $W_{d'}$ labeled by the action $a_{d'}$.
\STATE Send the message $s_{d'-1},j_{1,d'-1},\dots,j_{n,d'-1}$ and $a_{d'}$.
\ENDFOR
\RETURN $a_1,\dots,a_d$.
\end{algorithmic}
\end{algorithm*}
\subsection{Merging More Nodes}
In PST-ID Forward Search, an agent merges two nodes if they are in the same level, they have the same public state, and the ids of the other agents are the same. The simple case in which two nodes are merged is when they have the same parent and there are two sequences of actions ending with the same public state (if the last action in these sequences is the same, then the nodes are already merged in the PST). There are somewhat more complicated scenarios in which nodes are merged. For example, suppose that in some public state $s$ and private state
$ps$ in level $d$, agent $\varphi_i$ can apply two sequences of public actions $a_1,a_2$ and $a_3,a_4$, and both sequences result in the same public state $s'$. Then, the resulting two nodes are in the same level $d+2$ and they will be merged. However, suppose that action $a_5$ is also applicable in the state $s,ps$ and results in state $s'$ (in level $d+1$). The resulting node is not in the same level, so the previous nodes are not merged with the new node.
As a result, the current algorithm will send two nodes that are identical in every respect except for the agent's id.
One key motivation for the original {\sc{secure mafs}}\ algorithm was to prevent this situation and never send two nodes
that differ only in the private state of the sending agent.
There is a simple (though probably inefficient) way of overcoming this. For this, observe that, under the
assumption that an agent will never send two states that differ only in its own id, the only way two
states $s',s''$ generated by an agent $\varphi_i$ can be identical is if they have a common ancestor $s$,
and $s'$ and $s''$ were generated by applying actions of $\varphi_i$ only. As in the above example, these
could be sequences containing different numbers of public actions, and hence at different levels of the PST.
However, once a public action is applied by some other agent $\varphi_j$, its id will change, and hence $s'$ and $s''$ will
differ on $\varphi_j$'s id. Given this observation, it is easy to modify PST-ID FS to have the property that
an agent $\varphi_i$ never sends two nodes that are identical in all but (possibly) its id, which we call
PST-ID-E Forward Search. Whereas in PST-ID FS an agent will send each state obtained by applying
exactly one public action, in PST-ID-E, the agent expands the entire local sub-tree below a node in its open list.
That is, it will consider states reachable by applying more than one (of its) public actions. This could be a large
sub-tree, of course, but under the assumption that all variables have finite domains, it is finite, and with appropriate
bookkeeping (maintaining a closed list) it can be constructed in finite time. Thus, the only change is in line 8 of~\cref{alg:less-simple-search},
where the new line is
\begin{quote}
{\bf for each} sequence $a_1,\dots,a_\ell$ of actions of $\varphi_i$ applicable from $s,ps$, where $a_1\dots,a_{\ell-1}$ are {\em public or private} and $a_\ell$ is public {\bf do}.
\end{quote}
\begin{claim}
\label{c:psd-id-e}
The PST-ID-E Forward Search algorithm is a PST-indistinguishable secure algorithm.
\end{claim}
\begin{proof}
This follows immediately from the proof of~\cref{c:less-simple} using the following observation:
Take the PST, and add to it additional edges between every node and all its descendants that
are reachable using public actions of the same agent only. Now, use the simulator for PST-ID FS
on this modified tree.
\end{proof}
Note that given the modified tree in the proof above, it is possible to recover the original ordering by
simply taking into account the number of public actions that were applied in the path from the initial
state to the current state.
\begin{claim}
\label{c:psd-id-e-differ}
In the PST-ID-E Forward Search algorithm, an agent never sends two states $s,s'$ that differ only in its own id.
\end{claim}
\begin{proof}
Consider two states $s,s'$ sent by an agent $\varphi_i$ during the run of the algorithm. Define the level of a state as
one plus the number of times along the path to this state that a public action was applied by an agent who did not apply
the previous public action on the path.
First, assume that $s,s'$ have a common ancestor such that all actions on the paths from this ancestor to
$s$ and $s'$ are of the same agent $\varphi_i$. In this case, if they are identical in all other respects,
an id that contains both their private states is formed, and only one state is sent.
Suppose that $s,s'$ do not have such an ancestor. Consider the sequences of states sent by agents on the paths from the root to $s$ and $s'$.
At some point, these states differ, and hence the id of the agent that sent them will differ too. But from this point on,
the ids of all sending agents must change.
\end{proof}
\subsection{Heuristic Search}
So far, the algorithms we described expanded nodes in a breadth-first manner, and followed some canonical
ordering within each level. PST-ID-E also fits this view, when levels are defined such that the level increases
only when a public action is applied by an agent who did not apply the last public action.
However, the privacy guarantees do not rest on this property.
In principle, the PST can be traversed in any order, and all the above results are correct provided the
traversal ordering is a function of the PST only. Thus, for example, any heuristic search algorithm can be used, provided the heuristic
depends on the history of the public part of the state only, or on the current public state.
This follows trivially from the fact that a simulator that has access to the PST can simulate any such ordering.
\subsection{{\sc{secure mafs}}}
We are now ready to describe a PST-indistinguishable secure algorithm that is essentially
a synchronous, breadth-first version of {\sc{secure mafs}}~\cite{Brafman15}.
{\sc{secure mafs}}\ is similar to PST-ID Forward Search (i.e., a message is sent after the application of a public action),
except that an agent never sends two states that differ only in its own private state -- in our case, its own id.
The PST-ID-E algorithm has this property, but requires that an agent first explore its entire sub-tree.
To prevent resending identical states (modulo its own id), in {\sc{secure mafs}}\ the agent must maintain a list of states sent so far. Whenever it wishes to send a state $s$ with local state $ps$, it first checks if the state $s$ was sent before. If it was, it simply updates the id associated with $s$ to include $ps$.
\begin{figure}[ht]
\centerline{
\includegraphics[width=0.5\textwidth,height=7cm]{FigExample1.pdf}}
\caption{\label{fig:example1}An example for {\sc{secure mafs}}.}
\end{figure}
However, this change alone is insufficient to maintain completeness. See Figure~\ref{fig:example1} for an illustration of the following example. Consider some state $s$ that is
being expanded by $\varphi_i$. Suppose that the non-private part of the state is identical in $s'=a_2(a_1(s))$ and $s$, but the local state is different. Here $a_1,a_2$ are public actions of $\varphi_i$ that only change the private state of $\varphi_i$.
Let $a_3$ be a public action of another agent $\varphi_j$ and $a_4$ an action of $\varphi_i$.
We claim that $\varphi_i$ may never generate $a_4(a_3(s'))$, although, as we shall see, it should.
To see this, note that $\varphi_i$ will receive $a_3(s)$ from $\varphi_j$ and will expand it
before it generates $s'=a_2(a_1(s))$.
Now, suppose that $a_4$ cannot be applied in $a_3(s)$ because of $\varphi_i$'s local state, but it can be applied in $a_3(s')$. Eventually, $\varphi_i$ will generate
$s'=a_2(a_1(s))$. However, it will not send it to $\varphi_j$. It will simply update the id associated with
$s$ to include the local state of $s'$. Since $s$ was already expanded, it will not attempt to re-expand it, and will miss the state $a_4(a_3(s'))$.
To address this issue, {\sc{secure mafs}}\ must re-expand states previously expanded when their id is modified.
Specifically, in the above example, when we modify the id of $a_2(a_1(s))$, {\sc{secure mafs}}\ will
add $a_3(s')$ (with the appropriate ids) to a local queue and later see that $a_4$ is applicable from this state.
\begin{algorithm}
\caption{{\sc{secure mafs}}}
\label{alg:smafs}
\begin{algorithmic}[1]
\STATE {\bf initialization:} $d \gets 0$; $Q_{0}\gets \set{(I^{\mathrm{pub}},0,\dots,0)}$; for every $i \in \set{1,\dots,n}$ agent $\varphi_i$ sets $id_i\gets 0$, $PS_{i}[0]\gets \set{I_i}$,
and $LQ_{i,d'}\gets\emptyset$ for every $d'$.
\WHILE{goal has not been achieved}
\STATE $d\gets d+1$; for every $i \in \set{1,\dots,n}$ agent $\varphi_i$ sets $Q_{d}\gets \emptyset$, $C_{i,d}\gets\emptyset$,
and $E_i\gets \emptyset$.
\FOR{$i=1$ \TO $n$ }
\STATE agent $\varphi_i$ does the following:
\FORALL{$(s,j_1,\dots,j_n)\in Q_{d-1} \cup LQ_{i,d-1}$}
\FORALL{private state $ps\in PS_{i}[j_i]$ }
\IF{$(s,j_1,\dots,j_n)$ and $ps$ were not evaluated previously by $\varphi_i$}
\FORALL{sequence $a_1,\dots,a_\ell$ of actions of $\varphi_i$ applicable from $s,ps$, where $a_1,\dots,a_{\ell-1}$ are private and $a_\ell$ is public }
\STATE $(s',ps') \gets a_\ell(a_{\ell-1}(\cdots a_1((s,ps))))$.
\IF {$(s',j_1,\dots,j_{i-1},j_{i+1},\dots,j_n,ps')$ was not generated before}
\STATE $E_i\gets E_i \cup \set{(s',j_1,\dots,j_{i-1},j_{i+1},\dots,j_n,ps')}$.
\ENDIF
\ENDFOR
\ENDIF
\ENDFOR
\ENDFOR
\STATE agent $\varphi_i$ sorts the elements of $E_i$, first by the public state, then by the $n-1$ ids, and then by the private state. Let $((s^1,j_1^1,\dots,j^1_{i-1},j^1_{i+1},\dots,j_n^1,ps^1)$
$\dots,(s^t,j_1^t,\dots,j^t_{i-1},j^t_{i+1},\dots,j_n^t,ps^t))$ be the sorted elements of $E_i$.
\FOR{$u=1$ \TO $t$}
\IF{there exist $d' < d$ and $id$ such that $(s^u,j_1^u,\dots,j^u_{i-1},id,j^u_{i+1},\dots,j_n^u) \in C_{i,d'}$
}
\STATE update $PS_{i}[id]\gets PS_{i}[id] \cup \set{ps^u}$.
\FORALL{$(s,j_1,\dots,j_{i-1},j_{i+1},\dots,j_n)$ s.t. $(s,j_1,\dots,j_{i-1},id,j_{i+1},\dots,j_n) \in Q_{d''}$ for some $d'' <d$}
\STATE \label{line:LQ} update $LQ_{i,d+(d''-d')} \gets LQ_{i,d+(d''-d')} \cup \set{(s,j_1,\dots,j_{i-1},id,j_{i+1},\dots,j_n)}$.
\ENDFOR
\ELSIF{$u >1$ \AND $s^{u-1} =s^u $ \AND $(j_1^{u-1},\dots,j^{u-1}_{i-1},j^{u-1}_{i+1},\dots,j^{u-1}_n)=
(j_1^u,\dots,j^u_{i-1},j^u_{i+1},\dots,j_n^u)$}
\STATE $PS_{i}[id_i]\gets PS_{i}[id_i] \cup \set{ps^u}$. //Collects ids of similar states in a level
\label{step:PSi}
\ELSE
\STATE update $id_i\gets id_i+1$
\STATE update $C_{i,d}\gets C_{i,d} \cup \set{(s^u,j_1^u,\dots,j^u_{i-1},id_i,j^u_{i+1},\dots,j_n^u)}$,
and $PS_{i}[id_i]\gets \set{ps^u}$.
\ENDIF
\ENDFOR
\STATE agent $\varphi_i$ sends $C_{i,d}$ to all agents (where the elements of $C_{i,d}$ are sent according to some canonical order).
\STATE each agent $\varphi_j$ updates: $Q_{d}\gets Q_{d} \cup C_{i,d}$.
\ENDFOR
\IF{the state $s$ in some element in $Q_{d}$ satisfies the goal \label{line:finds}}
\STATE the agents execute $sol \gets $ {\bf recover-solution}, output $sol$, and halt.
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
The pseudo-code for {\sc{secure mafs}}\ appears in~\cref{alg:smafs}. At each level we generate a number of lists of states:
The set $C_{i,d}$ contains the new states that agent $\varphi_i$ created in level $d$; these states are sent to all agents. The set $Q_d$ contains the new states created in level $d$ by some agent,
that is $Q_d=\cup_{1\leq i \leq n} C_{i,d}$.
Furthermore, the set $LQ_{i,d}$ will contain states that were generated
by $\varphi_i$, but are not being sent because a similar state was sent earlier. These lists are initially empty.
In round $d$, each agent $\varphi_i$ expands all states in $Q_{d-1}$ and in $LQ_{i,d-1}$ using any sequence
of private actions followed by a single public action. It collects all these states into $E_i$.
This list is sorted and all its elements are processed in order.
For each element, agent $\varphi_i$ checks whether this state appeared before among the states it created (namely, in $C_{i,d'}$ for some $d' <d$), i.e., whether
a similar state $s'$ that differs only in $\varphi_i$'s private state was sent earlier.
If this is the case, let $id$ denote the id of agent $\varphi_i$ in $s'$.
The agent then goes over all states in previous $Q_{d''}$'s that contain the id $id$ and adds them to an appropriate $LQ$ list;
the list selected reflects the number of public actions that were applied to reach them from $s'$.
Observe that {\sc{secure mafs}}\ enjoys the property that an agent $\varphi_i$ will never send two states that differ only in its
own id.
\begin{algorithm}
\caption{Simulator for \cref{alg:smafs} -- {\sc{secure mafs}}}
\label{sim:smafs}
\begin{algorithmic}[1]
\REQUIRE A PST tree $T$
\STATE {\bf initialization:} $d \gets 0$; for every $i \in \set{1,\dots,n}$ set $id_i\gets 0$, $Q_{0}\gets ((I^{\mathrm{pub}},0,\dots,0))$.
\WHILE{goal has not been achieved}
\STATE $d\gets d+1$; for every $i \in \set{1,\dots,n}$ set $Q_{d}\gets \emptyset$,
$C_{i,d}\gets \emptyset$, and $\tilde{E}_i\gets \emptyset$.
\STATE $L(root)\gets(0,\dots,0)$.
\FOR{$i=1$ \TO $n$ }
\FORALL{node $w$ in level $d$ s.t.~the edge $(f(w),w)$ is labeled by an action of $\varphi_i$}
\STATE let $L(f(w))=(j_1,\dots,j_n)$ and $s'$ be the state of node $w$.
\STATE $\tilde{E}_i\gets \tilde{E}_i \cup \set{(s',j_1,\dots,j_{i-1},j_{i+1},\dots,j_n,w)}$.
\ENDFOR
\STATE sort the elements of $\tilde{E}_i$, first by the public state, then by the $n-1$ ids, and then by $w$.
\STATE let $((s^1,j_1^1,\dots,j^1_{i-1},j^1_{i+1},\dots,j_n^1,w^1),$
$\dots,(s^t,j_1^t,\dots,j^t_{i-1},j^t_{i+1},\dots,j_n^t,w^t))$ be the sorted elements of $\tilde{E}_i$.
\FOR{$u=1$ \TO $t$}
\IF{there exist $d'< d$ and $id$ such that $(s^u,j_1^u,\dots,j^u_{i-1},id,j^u_{i+1},\dots,j_n^u) \in C_{i,d'}$}
\STATE $L(w^u)\gets (j_1^u,\dots,j^u_{i-1},id,j^u_{i+1},\dots,j_n^u)$.
\ELSE
\IF{ $u=1$ \OR $s^{u-1} \neq s^u $ \OR $(j_1^{u-1},\dots,j^{u-1}_{i-1},j^{u-1}_{i+1},\dots,j^{u-1}_n) \neq
(j_1^u,\dots,j^u_{i-1},j^u_{i+1},\dots,j_n^u)$}
\STATE $id_i\gets id_i+1$.
\STATE $C_{i,d}\gets C_{i,d} \cup \set{(s^u,j_1^u,\dots,j^u_{i-1},id_i,j^u_{i+1},\dots,j_n^u)}$.
\ENDIF
\STATE $L(w^u)\gets (j_1^u,\dots,j^u_{i-1},id_i,j^u_{i+1},\dots,j_n^u)$.
\ENDIF
\ENDFOR
\STATE send $C_{i,d}$ on behalf of $\varphi_i$ to all agents (where the elements of $C_{i,d}$ are sent according to some canonical order).
\STATE for every $j \in \set{1,\dots,n}$ set $Q_{d}\gets Q_{d} \cup C_{i,d}$.
\ENDFOR
\IF{ the state $s$ in some element in $Q_{d}$ satisfies the goal}
\STATE execute $sol \gets $ {\bf sim-recover-solution}, output $sol$, and halt.
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\section{Quantum channels}
Quantum channels, or completely-positive trace-preserving maps, are
the most general maps between quantum systems. They enjoy a diverse
range of applications, primarily in the quantum information community
\cite{caruso2014}, but also in studies of matrix product states \cite{Fannes1992,Perez-Garcia2006},
entanglement renormalization \cite{Giovannetti2008,Pfeifer2009},
computability theory \cite{aaronson2016}, and even biological inference
processes \cite{Lee2016}. The canonical form of a quantum channel
$\A$ and its adjoint $\A^{\dgt}$ (a generalization of the Heisenberg
picture defined under the Frobenius norm) is \cite{Sudarshan1961,Kraus1971,Choi1975}
\begin{equation}
\A\left(\r\right)=\sum_{\ell}A^{\ell}\r A^{\ell\dg}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{and}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\A^{\dgt}\left(O\right)=\sum_{\ell}A^{\ell\dg}OA^{\ell}\,,\label{eq:kraus}
\end{equation}
where $\A$ acts on states $\r$ and $\A^{\dgt}$ on operators $O$.
The matrices $A^{\ell}$ are called the Kraus operators of $\A\equiv\left\{ A^{\ell}\right\} $,
eq. (\ref{eq:kraus}) is the Kraus form of $\A$, and the only requirement
for the channel to be trace preserving is (for $I$ identity)
\begin{equation}
\sum_{\ell}A^{\ell\dg}A^{\ell}=I\,.\label{eq:cptp}
\end{equation}
Quantum channels can be represented as matrices acting on a vectorized
density matrix, i.e., the $D\times D$ matrix $\r$ written as a $D^{2}$-dimensional
vector. Vectorization essentially ``flips'' the bra part in each
of the outer products making up $\r$, and $\A$ is written as a $D^{2}\times D^{2}$
matrix of the form $\hat{\A}=\sum_{\ell}A^{\ell}\ot A^{\ell\star}$
acting on the vectorized $\r$ strictly from the left. This \textit{matrix
or Liouville representation} of $\A$ \cite{Caves1999} is equivalent
to the Kraus representation (\ref{eq:kraus}), and I slightly abuse
notation by ignoring hats and not distinguishing the two.
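For concreteness, the matrix representation is easy to build numerically; the following is a minimal NumPy sketch, assuming row-stacked vectorization so that the superoperator matrix is $\sum_{\ell}A^{\ell}\ot A^{\ell\star}$.
\begin{verbatim}
import numpy as np

def liouville(kraus):
    """Matrix (Liouville) representation sum_l A_l (x) A_l^* acting
    on the row-stacked vectorization of rho."""
    return sum(np.kron(A, A.conj()) for A in kraus)

def apply_channel(kraus, rho):
    return sum(A @ rho @ A.conj().T for A in kraus)

# sanity check: both representations agree on a random state
rng = np.random.default_rng(0)
A0 = np.diag([1.0, np.exp(1j * 0.3)])        # a single Kraus operator
rho = rng.random((2, 2)) + 1j * rng.random((2, 2))
rho = rho @ rho.conj().T
rho /= np.trace(rho)                         # normalize to a state
lhs = (liouville([A0]) @ rho.reshape(-1)).reshape(2, 2)
assert np.allclose(lhs, apply_channel([A0], rho))
\end{verbatim}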
In the matrix representation, channels can be studied in terms of
their eigenvalues and eigenmatrices. The eigenvalues of all channels
are contained in the unit disk, and this work focuses on the eigenvalues/matrices
$\st$ on the periphery of that disk, i.e.,
\begin{equation}
\A\left(\st\right)=e^{i\la}\st\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{for some real }\la\,.
\end{equation}
Such eigenmatrices are called the channel's (right) \textit{rotating
points}, and those with $\la=0$ are called \textit{fixed points}.
The $\St$'s do not have to be physical states themselves, but they
are a matrix basis for such states. Since $\A$ may not be diagonalizable,
the eigenmatrices $J$ of its adjoint \textemdash{} left rotating
points \textemdash{} may be different from $\St$:
\begin{equation}
\A^{\dgt}\left(J\right)=e^{-i\la}J\,.
\end{equation}
Left rotating points will be called \textit{conserved quantities}
because their expectation value is either constant or oscillates with
successive powers of $\A$, but does not decay:
\begin{equation}
\tr\{J\A^{n}(\r)\}=\tr\{\A^{\dgt n}(J)\r\}=e^{-in\la}\tr\{J\r\}\,.
\end{equation}
The general block structure of the $\st$'s is already well-known
\cite{robin,Lindblad1999,BlumeKohout2008,baumr,Carbone2015}, and
here the focus is on the structure of the $J$'s. It is important
to note that there are as many conserved quantities as there are rotating
points (more technically, the Jordan normal form of $\A$ contains
only trivial Jordan blocks for all eigenvalues on the periphery of
the unit disk; see, e.g., Prop 6.2 in Ref. \cite{wolf2010}).
In the limit of many applications of $\A$, all eigenmatrices with
eigenvalues not on the periphery of the unit disk will become irrelevant
and all that will be left of the channel is the projection onto the
subspace spanned by the rotating points. The collective effect of
many applications of $\A$ is quantified by the channel's \textit{asymptotic
projection} $\ppp$,
\begin{equation}
\ppp(\r)\equiv\lim_{n\rightarrow\infty}\A^{\a n}(\r)\,,\label{eq:asproj}
\end{equation}
which projects onto the eigenspace of the peripheral spectrum of the
channel. The extra parameter $\a$ allows one to take the limit in
such a way as to remove the eigenvalues $e^{i\la}$ arising from application
of $\A$ on $\r$. For any $\la=\frac{2\pi}{N}n$ (for some positive
integers $n,N$), rotating points of $\A$ are fixed points of $\A^{N}$,
so one simply takes $\a=N$ to get rid of the extra phases. Other
$\la$ which are not rational multiples of $2\pi$ can similarly be
removed to arbitrary accuracy \cite{robin,Wolf2010b,wolf2010} by
remembering that irrational numbers are limits of sequences of rationals.
The above limit is a direct generalization of the large time limit
of Markovian/Lindbladian channels $\A_{t}=e^{t\L}$ for some Lindbladian
$\L$. However, in that case, $\lim_{t\rightarrow\infty}e^{t\L}$
can produce residual unitary evolution which cannot be removed by
clever manipulation of the limit.
The asymptotic projection is expressible in terms of (superoperator)
projections onto the eigenspaces of the rotating points,
\begin{equation}
\ppp(\r)=\sum_{\la,\m}\st_{\la\m}\tr\left\{ J^{\la\m\dg}\r\right\} \,,\label{eq:ap}
\end{equation}
where the rotating points are indexed by their eigenvalue $e^{i\la}$
and $\m$ counts any degeneracies for each $\la$. In that sense,
conserved quantities are as important as fixed points despite being
less well-understood. Conveniently, the rotating points and their
corresponding conserved quantities can be made biorthogonal, $\tr\{J^{\la\m\dg}\st_{\varTheta\n}\}=\d_{\la\varTheta}\d_{\m\n}$.
The $\st$'s thus determine the basis elements of a generalized Bloch
vector \cite{alicki_book,schirmer} of the asymptotic state $\ppp(\r)$
while the $J$'s determine the coefficients of said Bloch vector.
The biorthogonality condition easily implies that $\ppp$ is really
a projection \textemdash{} $\ppp^{2}=\ppp$.
If a channel has a unique fixed point $\st$ and no rotating points,
then the unique conserved quantity is the identity (due to the necessity
of trace preservation) and $\ppp(\r)=\st\tr\{\r\}=\st$. Channels
with more non-trivial $\ppp$ are therefore those with multiple fixed
or rotating points. As a simple example of such a channel, consider
$\A=\{A\}$ acting on $2\times2$ matrices with one Kraus operator
$A=\text{diag}\{1,e^{i\t}\}$. Such a channel sports two fixed points,
the identity and the Pauli matrix $Z$, and two rotating points $\s_{\pm}$
with eigenvalues $\la=\pm\t$. In fact, since there is only one Kraus
operator, such a channel is actually unitary. For a non-unitary example,
set $\t=\pi$ (so $A=Z$) and add the Pauli matrix $X$ as another
Kraus operator {[}normalizing both $A$'s by $\frac{1}{\sqrt{2}}$
to satisfy trace preservation (\ref{eq:cptp}){]}. This channel has
the identity as the unique fixed point and $Y$ as the only rotating
point with $\la=\pi$. Since both Kraus operators are Hermitian, the
left and right fixed points are the same; we will later see examples
where they differ. Other examples of $\ppp$ come from recovery maps
in quantum error correction, which take a state that has undergone
an error and project it back into the protected subspace of the quantum
code \cite{Ippoliti2014}.
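These examples are easy to check numerically; here is a sketch using the matrix representation, in which eigenvectors with unimodular eigenvalues are reshaped back into eigenmatrices (up to normalization and phase).
\begin{verbatim}
import numpy as np

def peripheral_points(kraus, tol=1e-9):
    """Rotating points: eigenmatrices of the Liouville matrix whose
    eigenvalues lie on the unit circle (row-stacked vectorization)."""
    M = sum(np.kron(A, A.conj()) for A in kraus)
    vals, vecs = np.linalg.eig(M)
    d = kraus[0].shape[0]
    return [(lam, vecs[:, k].reshape(d, d))
            for k, lam in enumerate(vals) if abs(abs(lam) - 1) < tol]

# channel {Z/sqrt(2), X/sqrt(2)}: unique fixed point I (eigenvalue 1)
# and a single rotating point Y with eigenvalue e^{i pi} = -1
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
for lam, S in peripheral_points([Z / np.sqrt(2), X / np.sqrt(2)]):
    print(np.round(lam, 6), np.round(S, 3))
\end{verbatim}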
\section{Structure of conserved quantities\label{sec:Structure-of-conserved}}
\subsection{Faithful channels}
The first part focuses on channels that do not contain a decaying
subspace. This means that no populations $|\psi\ket\bra\psi|$ decay
completely to zero under many applications of the channel: $\bra\psi|\R_{\E}(|\psi\ket\bra\psi|)|\psi\ket\neq0$
for all states $|\psi\ket$, a channel $\E$, and its asymptotic projection
$\R_{\E}$. Equivalently, the channel has to have one fixed point
$\r$ which is of full rank ($\bra\psi|\r|\psi\ket>0$ for all $|\psi\ket$).
The structural differences between such channels and channels which
do admit decay warrant a special definition:
\begin{defn*}
A channel $\E\equiv\{E_{\ell}\}$ is \textit{faithful }if it admits
a full-rank (i.e., faithful) fixed point $\r$. In other words,
\begin{equation}
\exists\,\r>0\,\,\text{ such that }\,\,\E(\r)=\r\,.
\end{equation}
\end{defn*}
Here, I always use $\E$ to denote faithful channels and later show
how $\E$ can be extended to channels $\A$ which act on a larger
Hilbert space and admit a decaying subspace. In this sense, $\E$
is the faithful channel of $\A$. Note that the number of fixed points
is independent of this condition, and Table \ref{tab:Some-types-of-channels}
relates this definition to others.
\begin{table}
\begin{tabular}{cccc}
\toprule
& FP unique? & $\exists$ full-rank FP? & $\exists$ rot. point?\tabularnewline
\midrule
\midrule
ergodic \cite{Raginsky2002,Raginsky2002a,Burgarth2007} & Yes & & \tabularnewline
faithful {[}here{]} & & Yes & \tabularnewline
irreducible \cite{Davies1970,wolf2010} & Yes & Yes & \tabularnewline
mixing \cite{Burgarth2007} & Yes & & No\tabularnewline
primitive \cite{Sanz2010,wolf2010} & Yes & Yes & No\tabularnewline
\bottomrule
\end{tabular}\caption{\label{tab:Some-types-of-channels}Some types of channels; FP$=$fixed
point. A blank entry means there is no requirement for that definition.
For semigroups, mixing is also known as relaxing \cite{Burgarth2013a}
and faithful is also known as minimal \cite{ABFJ}. Primitive is equivalent
to strongly irreducible \cite{Sanz2010}.}
\end{table}
The first result is regarding the relationship between the conserved
quantities $J$ and the Kraus operators of $\E$. It is a generalization
of a theorem for fixed points of faithful channels \cite{robin,Kribs2003,Choi2006,Gheondea2016},
which states that a conserved quantity $J$ with eigenvalue $\la=0$
commutes with all of the Kraus operators. It is shown that conserved
quantities with $\la\neq0$ commute up to a phase. For the aforementioned
example $\E=\{E\}$ with $E=\text{diag}\{1,e^{i\t}\}$, the conserved
quantity $\s_{+}$ satisfies $\s_{+}E=e^{-i\t}E\s_{+}$. This turns
out to be true for all faithful channels and reduces to known results
for ergodic channels (\cite{Burgarth2013a}, Thm. 9). It can be proven
using Thms. 4.1-4.2 and Corollary 4.3 in Ref. \cite{Novotny2012};
a more direct proof is in the appendix.
\begin{numtheorem}{\hyperref[prop:1]{Proposition 1.}}
Let $\E=\left\{ E_{\ell}\right\} $ be a faithful channel. Let $J$
be a conserved quantity of $\E$, i.e., $\E^{\dgt}\left(J\right)=e^{-i\la}J$
for some real $\la$. Then, for all $\ell$,
\begin{equation}
JE_{\ell}=e^{-i\la}E_{\ell}J\,.\label{eq:com}
\end{equation}
\end{numtheorem}
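For the non-unitary example above, i.e., $\E=\{Z/\sqrt{2},X/\sqrt{2}\}$ with conserved quantity $J=Y$ and $\la=\pi$, eq. (\ref{eq:com}) can be verified directly (a minimal sketch):
\begin{verbatim}
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1j], [1j, 0.0]])
Z = np.diag([1.0, -1.0])
for E in (Z / np.sqrt(2), X / np.sqrt(2)):
    # J E_l = e^{-i pi} E_l J = -E_l J for every Kraus operator
    assert np.allclose(Y @ E, np.exp(-1j * np.pi) * E @ Y)
\end{verbatim}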
Assuming $\E^{\dgt}(J_{1})=e^{-i\la_{1}}J_{1}$ and $\E^{\dgt}(J_{2})=e^{-i\la_{2}}J_{2}$,
Eq. (\ref{eq:com}) easily implies that $\E^{\dgt}(J_{1}J_{2})=e^{-i(\la_{1}+\la_{2})}J_{1}J_{2}$.
Combined with the fact that there can be at most $D^{2}$ conserved
quantities, this implies that the $\la$'s are constrained
so that only finitely many eigenvalues remain. This brings us to
the second result about the eigenvalues of a specific subset of conserved
quantities.
Each conserved quantity $J=\jd+\jn$ can be decomposed into a diagonalizable
part $\jd$ and a nilpotent part $\jn$ \cite{vaughn_book} ($\jn^{N}=0$
for $N\leq D$, the dimension of the Hilbert space). While $\la$
can be an irrational multiple of $2\pi$ for strictly nilpotent $J$,
it turns out that $e^{i\la}$ are $N$th roots of unity for all diagonalizable
$J$ with $N\leq D$. In other words, given any conserved quantity
$J$, $J^{D}$ is either zero, the identity, or a projection. This
extends similar results (\cite{wolf2010}, Thm. 6.6; \cite{Fannes1992},
Prop. 3.3; \cite{Bialonczyk2017}, Corr. 3) to faithful channels.
It is not, however, as thorough a characterization of the peripheral
spectrum as Ref. \cite{Wolf2010b}, Thm. 9.
\begin{numtheorem}{\hyperref[prop:2]{Proposition 2.}}
Let $\E=\left\{ E_{\ell}\right\} $ be a faithful channel. Let $\jd$
be such that $\E^{\dgt}(\jd)=e^{-i\la}\jd$ for some real $\la$ and
assume $\jd$ is diagonalizable. Then, there exists an integer $n$
such that
\begin{equation}
\la=\frac{2\pi}{N}n\,\,\,\,\,\,\,\text{for some}\,\,\,\,\,\,\,N\leq D\,.
\end{equation}
\end{numtheorem}
Let us assume a unitary conserved quantity, $J^{\dg}J=JJ^{\dg}=I$,
and show that the above two propositions extend known results (\cite{wolf2010},
Prop. 6.7) from irreducible to faithful channels. Proposition \ref{prop:1}
readily implies that $\E$ is covariant (more specifically, invariant
or symmetric) under $J$,
\begin{equation}
J\E(\r)J^{\dg}=\E(J\r J^{\dg})\,\,\,\,\,\,\forall\r\,,
\end{equation}
so conserved quantities are symmetries of the channel. Proposition
\ref{prop:2} implies that $J^{N\leq D}=I$, so the set $\{J^{n}\}_{n=0}^{N-1}$
forms the symmetry group $\Z_{N}$. Note that the symmetry group is
never infinite for finite dimension $D$. Generalizing this, the set
of unitary conserved quantities thus forms a finite group under which
$\E$ is covariant. This is a one-way Noether-type theorem linking
conserved quantities to symmetries (see Ref. \cite{pub011} or Ref.
\cite{thesis}, Ch. 2.6, for the semigroup analogue). This cannot
be extended to a two-way theorem because symmetries of a channel are
not always conserved quantities. A simple counterexample is the channel
$\E=\{X/\sqrt{2},Z/\sqrt{2}\}$, for which the Hadamard operation
$H$ taking $X\leftrightarrow Z$ is a symmetry, but is not conserved
{[}$\E^{\dgt}(H)=0${]}.
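This counterexample is also easy to verify numerically (a sketch): $H$ is a symmetry of $\E$, yet $\E^{\dgt}(H)=0$, so $H$ is not conserved.
\begin{verbatim}
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
kraus = [X / np.sqrt(2), Z / np.sqrt(2)]
E = lambda r: sum(A @ r @ A.conj().T for A in kraus)   # channel
Ed = lambda O: sum(A.conj().T @ O @ A for A in kraus)  # adjoint

rho = np.array([[0.7, 0.2], [0.2, 0.3]])               # an arbitrary state
assert np.allclose(H @ E(rho) @ H.conj().T, E(H @ rho @ H.conj().T))
assert np.allclose(Ed(H), 0)                           # H is not conserved
\end{verbatim}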
\subsection{General channels}
Now let us extend faithful channels to channels which do not contain
a full-rank fixed point. While Props. \ref{prop:1}-\ref{prop:2}
break down for general channels, the extension below implies that,
for every general channel, there is a corresponding faithful channel
for which they hold.
Any faithful channel $\E=\left\{ E_{\ell}\right\} $ can be extended
to a channel $\A=\left\{ A^{\ell}\right\} $ which contains a decaying
subspace (also called a transient subspace \cite{ying2013}). Specifically,
the Kraus operators of $\A$ are
\begin{equation}
A^{\ell}=\begin{pmatrix}E_{\ell}\vspace{4pt} & A_{\ur}^{\ell}\\
0 & A_{\lr}^{\ell}
\end{pmatrix}\equiv\begin{pmatrix}A_{\ul}^{\ell}\vspace{4pt} & A_{\ur}^{\ell}\\
0 & A_{\lr}^{\ell}
\end{pmatrix}\,.\label{eq:bl}
\end{equation}
The dimensions of the square matrices $E_{\ell}$ and $A_{\lr}^{\ell}$
can differ, and the range of $\ell$ can change, since the same
$E$ can be padded with two different pairs of matrices in $\urbig$ (``upper right'')
and $\lrbig$ (``lower right'') to make two different $A$'s. The
zero matrix in $\llbig$ is necessary to make sure that $\ulbig$
is the largest invariant subspace; thus, all rotating points of $\A$
are the same as those of $\E$. In addition, $\A$ needs to be a legitimate
channel, i.e., satisfy eq. (\ref{eq:cptp}). Writing out the $A^{\ell}$'s
in blocks {[}as in eq. (\ref{eq:bl}){]} yields the conditions\begin{subequations}\label{eq:conds}
\begin{align}
\sum_{\ell}A_{\ul}^{\ell\dg}A_{\ul}^{\ell} & =\pp\label{eq:conds1}\\
\sum_{\ell}A_{\ul}^{\ell\dg}A_{\ur}^{\ell} & =0\label{eq:conds2}\\
\sum_{\ell}\left[(A_{\ur}^{\ell})^{\dg}A_{\ur}^{\ell}+A_{\lr}^{\ell\dg}A_{\lr}^{\ell}\right] & =\qq\,,
\end{align}
\end{subequations}where $\qq$ is the projection onto $\lrbig$ and
$\pp=I-\qq$ is the projection onto $\ulbig$ (with $\tr\left\{ \pp\right\} \equiv D$).
For each faithful channel $\E$, there are infinitely many possible
extensions $\A$. Conversely, an arbitrary channel $\A$ either is
a faithful channel or contains one. The remaining two completely positive
maps associated with this decomposition of $\A$, $\{A_{\ur}^{\ell}\}$
and $\{A_{\lr}^{\ell}\}$, are both trace-decreasing.
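To make the block form (\ref{eq:bl}) concrete, consider the amplitude-damping
channel as a hypothetical example: it already has this structure, with a
one-dimensional faithful part on $|0\ket$ and a one-dimensional decaying
subspace on $|1\ket$. A minimal sketch, assuming standard \texttt{numpy}
conventions:
\begin{verbatim}
# Amplitude damping as an extension A of the trivial faithful channel E = {1}
# on |0>: A0_ul = 1, A0_lr = sqrt(1-p), A1_ur = sqrt(p), all other blocks 0.
import numpy as np

p = 0.3
A0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])
A1 = np.array([[0, np.sqrt(p)], [0, 0]])

# trace preservation: sum_l A^l_dg A^l = I, which encodes the conditions above
print(np.allclose(A0.T @ A0 + A1.T @ A1, np.eye(2)))   # True

# repeated application drains the decaying subspace into ul
rho = np.array([[0.2, 0.1], [0.1, 0.8]], dtype=complex)
for _ in range(200):
    rho = A0 @ rho @ A0.conj().T + A1 @ rho @ A1.conj().T
print(np.round(rho.real, 6))   # -> |0><0|: all population ends up in ul
\end{verbatim}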
Now let us develop the required notation. Just like $\pp$ and $\qq$
split the Hilbert space into two parts, they can be used to split
the space of operators on a Hilbert space into four ``corners''
$\{\ulbig,\urbig,\llbig,\lrbig\}$ \cite{ABFJ}. Each of the four
corners corresponds to its own superoperator projection. For example,
\begin{equation}
\R_{\ur}(O)\equiv\pp O\qq\equiv O_{\ur}
\end{equation}
for any operator $O$. The other three projections are defined accordingly.
One can graphically determine which corner a product of operators
belongs to by multiplying their blocks as matrices (e.g., $A_{\ll}B_{\ur}\in\lrbig$).
Moreover, the four-corners projections add graphically ($\R_{\ul}+\R_{\lr}\equiv\R_{\di}$)
and are Hermitian ($\R_{\emp}^{\dgt}=\R_{\emp}$). Analogous to studying
operators in terms of their matrix elements, one can study superoperators
in terms of their four-corners decomposition. For example,
\begin{equation}
\R_{\ul}\A\R_{\lr}(\r)=\pp\A\left(\qq\r\qq\right)\pp=\sum_{\ell}A_{\ur}^{\ell}\r_{\lr}(A_{\ur}^{\ell})^{\dg}\label{eq:tr}
\end{equation}
is the map $\{A_{\ur}^{\ell}\}$ which transfers $\r_{\lr}$ from
$\lrbig$ to $\ulbig$. ``Diagonal'' elements are denoted as $\A_{\emp}\equiv\R_{\emp}\A\R_{\emp}$
for convenience, so the faithful channel $\E\equiv\R_{\ul}\A\R_{\ul}$
and similarly $\{A_{\lr}^{\ell}\}\equiv\R_{\lr}\A\R_{\lr}$.
With conditions (\ref{eq:bl}) and (\ref{eq:conds}), $\A$ contains
a decaying subspace of dimension $\tr\left\{ \qq\right\} $ and the
same rotating points as $\E$. But what about the conserved quantities?
Those are not the same because, by trace preservation, they need to
make sure that all state populations (and sometimes some coherences)
in $\lrbig$ are transferred to $\ulbig$. For example, the identity
is (always) a conserved quantity of $\A$, but the analogous conserved
quantity of $\E$ is $\pp$. Denoting the conserved quantities of
$\E$ as $J_{\ul}$, it will now be shown how to extend them to form
$J$, the conserved quantities of $\A$. Having defined this notation,
it is easy to write out the conserved quantities of the extended channel
$\A$.
\begin{numtheorem}{\hyperref[prop:3]{Proposition 3.}}
The conserved quantities of $\A$ corresponding to eigenvalues $e^{i\la}$
are
\begin{equation}
J=J_{\ul}+J_{\lr}=J_{\ul}-(\A_{\lr}^{\dgt}-e^{-i\la})^{-1}\A^{\dgt}(J_{\ul})\,,\label{eq:main}
\end{equation}
where $J_{\ul}$ are conserved quantities of $\A_{\ul}=\E$.
\end{numtheorem}
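As a hedged numerical illustration of Eq. (\ref{eq:main}) (my own check on the
amplitude-damping example introduced above, with $\la=0$ and $J_{\ul}=\pp$;
the one-dimensional $\lrbig$ corner makes the inverse a scalar division):
\begin{verbatim}
# Verify Prop. 3 on amplitude damping: J_ul = |0><0|, la = 0, expect J = I.
import numpy as np

p = 0.3
A = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),
     np.array([[0, np.sqrt(p)], [0, 0]])]
P, Q = np.diag([1., 0.]), np.diag([0., 1.])        # projectors onto ul, lr

adj = lambda O: sum(K.conj().T @ O @ K for K in A)  # adjoint channel A^dg

J_ul = P                                            # conserved quantity of E
a_lr = (Q @ adj(Q) @ Q)[1, 1]                       # lr corner of A^dg (1-dim)
J_lr = -(Q @ adj(J_ul) @ Q) / (a_lr - 1.0)          # -(A_lr^dg - 1)^{-1} A^dg(J_ul)
J = J_ul + J_lr
print(np.allclose(adj(J), J), np.round(J.real, 6))  # True, identity matrix
\end{verbatim}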
An important corollary of the above proposition is that $J_{\of}=0$.
After plugging this formula for $J$ into $\ppp$ (\ref{eq:ap}),
this means that the asymptotic projection has only two pieces:
\begin{equation}
\ppp=\R_{\ul}\ppp\R_{\di}\equiv\ps+\ppp\R_{\lr}\,,\label{eq:ppp}
\end{equation}
where the \textit{faithful projection} (for semigroups, minimal projection
\cite{ABFJ})
\begin{equation}
\ps(\cdot)\equiv\ppp\R_{\ul}(\cdot)=\sum_{\la,\m}\St_{\la\m}\tr\{J_{\ul}^{\la\m\dg}\cdot\}
\end{equation}
is the asymptotic projection of the faithful channel $\E$. The piece
$\ps$ is responsible for preserving parts of an initial state $\r$
which is in $\ulbig$ while the piece $\ppp\R_{\lr}$ is a channel
mapping states from $\lrbig$ onto the subspaces spanned by the rotating
points of $\A$, all located in $\ulbig$. The key result here is
that the rotation induced by $\la$, besides inducing phases on the
rotating points, also contributes to the decay of information from
$\lrbig$ into $\ulbig$. Namely, the inverse of the piece $(\A^{\dgt}-e^{-i\la})_{\lr}$
modulates the decoherence induced during the decay in a way that depends
on how close the eigenvalues of $\A_{\lr}$ are to the phases $e^{i\la}$:
\begin{equation}
\ppp\R_{\lr}(\r)=-\sum_{\la,\m}\St_{\la\m}\tr\left\{ J_{\ul}^{\la\m\dg}\left[\A(\A-e^{i\la})_{\lr}^{-1}\right](\r_{\lr})\right\} \,,\label{eq:main-asymptotic-projection}
\end{equation}
where the superoperator in square brackets acts on $\r_{\lr}$. The
$\la=0$ case reduces to known results (\cite{robin}, Lemma 5.8;
\cite{Cirillo2015}, Prop. 7),
\begin{equation}
\ppp\R_{\lr}=\ps\A\left(\id-\A\right)_{\lr}^{-1}\,,
\end{equation}
where $(\id-\A)_{\lr}^{-1}$ (with $\id$ the superoperator identity)
can be thought of as the quantum version of the fundamental matrix
from classical Markov chains \cite{markov_book}. These formulas also
reduce to the Lindbladian result (\cite{ABFJ}, Prop. 3) if we let
$\A=e^{\L}\rightarrow\id+\L$ for some Lindbladian $\L$ and $e^{-i\la}\rightarrow1-i\la$.
In the Lindblad case, some dependence on $\la$ can be canceled by
properly tuning $\L_{\lr}$ (\cite{thesis}, Sec. 3.2.3).
\section{Application: information preserving structures\label{sec:Application:-information-preserv}}
This section lists some uses of the above result and includes an algorithm
that outputs a properly organized $\ppp$ given a channel $\A$.
\subsection{Asymptotic probabilities}
Expounding on the above, eq. (\ref{eq:main-asymptotic-projection})
allows us to find the asymptotic \cite{Cirillo2015} (also, reachability
\cite{ying2013}) probabilities of a given initial state $\r$ to
reach a particular subspace of $\ulbig$. The new result here is determination
of the \textit{coherences} reached by $\r$, assuming knowledge of
the left ($J_{\ul}^{\la\m}$) and right ($\St_{\la\m}$) rotating
points of $\E$. To show this, recall that the $\St_{\la\m}$'s can
be made orthonormal, $\tr\{\St_{\la\m}^{\dg}\St_{\varTheta\n}\}=\d_{\la\varTheta}\d_{\m\n}$.
(Loosely speaking, this is because the $\St$'s are a matrix basis
used to write all asymptotic density matrices and so must be well-behaved;
for more rigor, see Sec. \ref{subsec:Algorithm-for-finding}.) To
determine the coefficient in front of the basis element $\St_{\la\m}$
in the asymptotic state $\rout=\ppp(\r)$, instead of applying $\A$
a sufficiently large number of times to determine $\ppp$, simply
calculate
\begin{equation}
\tr\{\St_{\la\m}^{\dg}\rout\}=\tr\left\{ J_{\ul}^{\la\m\dg}\left[\id-\A(\A-e^{i\la})_{\lr}^{-1}\right](\r)\right\} \,.
\end{equation}
\subsection{Error-correction of a decoherence-free subspace}
Let us assume that now all of $\ulbig$ consists of rotating or fixed
points, so $\A_{\ul}=\E$ is a unitary channel. An example of this
case is $\A_{\ul}=\left\{ E\right\} $, where $E=\text{diag}\{1,e^{i\t}\}$
is the Kraus operator mentioned before. The necessary and sufficient
condition on the $A$'s for this to hold is
\begin{equation}
A_{\ul}^{\ell}=a_{\ell}U\label{eq:dfs}
\end{equation}
for some unitary $U$ and real $a_{\ell}$ with $\sum_{\ell}|a_{\ell}|^{2}=1$,
as required by condition (\ref{eq:conds1}). Since there is no decay
in $\ulbig$, that portion forms a \textit{decoherence-free subspace}
(DFS) \cite{Lidar1998} and $\ps=\R_{\ul}$. The form of $A_{\ul}$
also implies that $\R_{\ul}\A\R_{\of}=0$ and the statement of Prop.
\ref{prop:1} implies that the rotating points reduce to being outer
products of eigenstates of $U$.
The form (\ref{eq:bl}) of $A$ with the above restriction on $A_{\ul}$
generalizes the previous DFS condition from eq. (11) of Ref. \cite{lidar2003}
(see also Refs.~\cite{Karasik2008,Kamizawa2018} for different formulations).
The difference is that now $A_{\ur}$ does not have to be zero, so
information from $\lrbig$ flows into the DFS $\ulbig$. For example,
in quantum error-correction, $\ulbig$ is the logical subspace, $\lrbig$
is the orthogonal error subspace, and the piece $\ppp\R_{\lr}$ plays
the role of a ``recovery channel'' which attempts to recover the
leaked information after an error \cite{Ippoliti2014}. It turns out
one can remove the inverse term from $\ppp\R_{\lr}$, putting the
piece in Kraus form. Setting $A_{\lr}=0$ and $A_{\ul}=\pp$ (unitary
evolution within DFS is trivial) eliminates $\A_{\lr}$ and reduces
$\ppp\R_{\lr}$ to the transfer map (\ref{eq:tr}),
\begin{equation}
\ppp\R_{\lr}=\R_{\ul}\A\R_{\lr}\,,\label{eq:arb}
\end{equation}
with Kraus operators $A_{\ur}$. Condition (\ref{eq:conds2}) on $A_{\ur}$
reduces to $\sum_{\ell}A_{\ur}^{\ell}=0$, which is automatically
satisfied by the set of operators $\{\pm A_{\ur}^{\ell}/\sqrt{2}\}$.
However, the channel created by those operators is the same as $\{A_{\ur}^{\ell}\}$,
so $\ppp$ embeds an arbitrary recovery channel from the error subspace
$\lrbig$ to code subspace $\ulbig$.
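A quick numerical check of this sign trick (a sketch with random real matrices
standing in for the $A_{\ur}^{\ell}$'s; the specific matrices are immaterial):
\begin{verbatim}
# Doubling Kraus operators with opposite signs leaves the channel invariant
# while making the operators sum to zero.
import numpy as np

rng = np.random.default_rng(0)
A = [rng.standard_normal((2, 2)) for _ in range(3)]
B = [s * K / np.sqrt(2) for K in A for s in (+1, -1)]

rho = rng.standard_normal((2, 2)); rho = rho @ rho.T
same = np.allclose(sum(K @ rho @ K.T for K in A),
                   sum(K @ rho @ K.T for K in B))
print(same, np.allclose(sum(B), 0))   # True True
\end{verbatim}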
\subsection{How to find $\protect\ppp$\label{subsec:Algorithm-for-finding}}
In a more complicated case than a DFS, $\ulbig$ is factorized into
a DFS and an auxiliary subspace, forming a \textit{noiseless subsystem
(NS)} \cite{Knill2000}. Evolution on the DFS is still unitary while
the auxiliary subspace contains one fixed and no rotating points.
The Kraus operators for $\E=\A_{\ul}$ are then $A_{\ul}^{\ell}=U\ot B^{\ell}$,
where $U$ acts on the DFS and $B^{\ell}$ are Kraus operators on
the auxiliary space. This reduces to the DFS case (\ref{eq:dfs})
if the dimension of the auxiliary space is one. In the most general
case, the rotating and fixed points of $\E$ can be block-diagonalized
into a direct sum of blocks, with each block being an NS \cite{robin,Lindblad1999,BlumeKohout2008,baumr,Carbone2015}.
In that case, the Kraus operators can be written as
\begin{equation}
A_{\ul}^{\ell}=\bigoplus_{\varkappa}U_{\varkappa}\otimes B^{\ell,\varkappa}\,,\label{eq:decomp}
\end{equation}
where $U_{\varkappa}$ is unitary and the Kraus map $\{B^{\ell,\varkappa}\}_{\ell}$
for each $\varkappa$ is primitive (see Table \ref{tab:Some-types-of-channels}).
This blocks-of-factors structure or \textit{shape} of $A_{\ul}^{\ell}$
is the most general form of an information-preserving structure \cite{robin}
and has deep connections to the theory of matrix algebras \cite{wolf2010}.
The key to organizing the rotating points and conserved quantities
is converting to a \textit{canonical basis} \textemdash{} a basis
which respects the above block structure. In such a basis (utilizing
the block index $\varkappa$), rotating points are of the form $\St_{\la\m}^{\varkappa}=e_{\m}^{\varkappa}\ot\varrho^{\varkappa}$
(where $\m$ is now used to label the matrix units $e_{\m}^{\varkappa}$
of the space of $U_{\varkappa}$ and $\varrho^{\varkappa}$ is the
unique fixed point of $\{B^{\ell,\varkappa}\}_{\ell}$) while their
dual conserved quantities are $J_{\ul}^{\varkappa\la\m}=e_{\m}^{\varkappa}\ot P^{\varkappa}$
(where $P^{\varkappa}$ is the identity on the auxiliary subspace).
Thus, conserved quantities in each block are related to rotating points
via a division by (i.e., inversion of all nonzero eigenvalues of)
the auxiliary fixed point, $J_{\ul}^{\varkappa\la\m}=\St_{\la\m}^{\varkappa}(\varrho^{\varkappa})^{-1}$
\cite{Novotny2012,Novotny2017}. It is well-known among experts that
$\{J_{\ul}^{\varkappa\la\m}\}$ form a \textit{matrix algebra} \textemdash{}
a vector space (where the vectors are matrices) that is closed under
multiplication and the conjugate transpose operation. It is important
to keep in mind that all of this extra structure in $\ulbig$ does
not put any constraints on the remaining parts $\{A_{\ur},A_{\lr}\}$
of $\A$, the extension of $\E$; this is why it was avoided until
now. Moreover, $\{J^{\varkappa\la\m}\}$ do \textit{not} have to form
a matrix algebra.
There exist several algorithms to determine the shape (\ref{eq:decomp})
of $\A$ \cite{robin,Holbrook2003,Choi2006,Knill2006,Maehara2010,Wang2013,Guan2018}.
A straightforward way \cite{robin} to find the form (\ref{eq:decomp})
for a general channel $\A$ is to diagonalize $\A$ and apply standard
matrix algebra techniques \cite{Holbrook2003,Maehara2010} to find
a canonical basis for the algebra of conserved quantities in $\ulbig$.
Using Prop. \ref{prop:3}, I slightly extend the algorithm from Ref.
\cite{robin} to one that finds and organizes not just the conserved
quantities restricted to $\ulbig$, but the full conserved quantities
as well. Once again, the main new inclusion is the determination of
conserved quantities whose eigenvalue has modulus one (as opposed to
being exactly one).
\begin{lyxalgorithm*}
Finding and organizing $\ppp$:
\begin{enumerate}
\item Find the rotating points $\St$ and conserved quantities $J$ by
diagonalizing $\A$.
\item Construct $\ppp$ and $\pp$, the projection onto
$\textnormal{range}\{\ppp(I)\}$.
\item Find the projected conserved quantities $J_{\ul}\equiv\pp J\pp$.
\item Decompose the algebra spanned by $J_{\ul}$ into canonical form using,
e.g., Refs.~\cite{Holbrook2003,Maehara2010}.
\item Determine a canonical basis $\St_{\la\m}^{\varkappa}$ for the rotating
points and $J_{\ul}^{\varkappa\la\m}$ for the conserved quantities.
\item Extend $J_{\ul}^{\varkappa\la\m}$ to $J^{\varkappa\la\m}$ via
Prop.~\ref{prop:3}.
\end{enumerate}
\end{lyxalgorithm*}
Note that $\ulbig$ is the range of $\ppp(I)$, i.e., $\ppp(I)\propto\pp$,
because $I$ is dual to the maximally mixed fixed point $\frac{1}{\tr\{\pp\}}\pp$
and is the only conserved quantity with nonzero trace.
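The first two steps of the algorithm admit a compact numerical sketch (my own
illustration, assuming a channel whose peripheral eigenvalues are
nondegenerate, and using \texttt{scipy}'s convention for left eigenvectors):
\begin{verbatim}
# Build the asymptotic projection P_infty from the peripheral eigenpairs
# of the channel superoperator (row-major vectorization).
import numpy as np
from scipy.linalg import eig

def superop(kraus):
    # vec(K rho K^dg) = (K kron conj(K)) vec(rho) for row-major vec
    return sum(np.kron(K, K.conj()) for K in kraus)

def asymptotic_projection(kraus, tol=1e-9):
    w, vl, vr = eig(superop(kraus), left=True, right=True)
    P = np.zeros((len(w), len(w)), dtype=complex)
    for i, lam in enumerate(w):
        if abs(abs(lam) - 1) < tol:                  # peripheral eigenvalue
            r, l = vr[:, i], vl[:, i]
            P += np.outer(r, l.conj()) / (l.conj() @ r)
    return P

p = 0.3                                              # amplitude damping again
kraus = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),
         np.array([[0, np.sqrt(p)], [0, 0]])]
P = asymptotic_projection(kraus)
rho = np.array([[0.2, 0.1], [0.1, 0.8]], dtype=complex)
print(np.round((P @ rho.reshape(-1)).reshape(2, 2).real, 6))  # -> |0><0|
\end{verbatim}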
\section{Application: matrix product states}
For those who skimmed Secs. \ref{sec:Structure-of-conserved}-\ref{sec:Application:-information-preserv},
those parts focused on the distinction between a channel $\A$ and
its corresponding faithful channel $\E\equiv\R_{\ul}\A\R_{\ul}$ \textemdash{}
$\A$ restricted to the largest invariant subspace $\ulbig$ (equivalently,
the range of $\A$'s maximal-rank fixed point). The block $\lrbig$
thus forms a decaying subspace, but the asymptotic projection $\ppp$
(\ref{eq:ap}) of $\A=\{A^{\ell}\}$ nevertheless retains information
from states in $\lrbig$ by transferring it into $\ulbig$ through
the operators $A_{\ur}^{\ell}$. Here, this decomposition is applied
to matrix product states (MPS) in order to obtain an unambiguous thermodynamic
limit for any MPS that is translationally invariant in the bulk, but
has non-trivial boundary effects. Then, I show how one can absorb
any dependence of said limit on the decaying parts $\lrbig$ of the
bond degrees of freedom into the boundary conditions. This allows
one to shorten the bond dimension and use the transfer matrix $\A_{\ul}=\E$
instead of the full $\A$.
\subsection{What are MPS?}
Our playground is now a one-dimensional lattice consisting of $2M+1$
spins. Each spin is $d$-dimensional and indexed by the physical index
$\ell$. An MPS $|\P\ket$ that is translationally-invariant in the
bulk of the lattice can be written as
\begin{equation}
|\P_{\A}^{\{B\}}\ket\propto\sum_{\ell_{-M},\cdots,\ell_{M}=0}^{L-1}\tr\{BA^{\ell_{-M}}\cdots A^{\ell_{M}}\}|\ell_{-M}\cdots\ell_{M}\ket\,,\label{eq:mps}
\end{equation}
where $A^{\ell}$ is a collection of $L=d$ matrices, each of size $N\times N$
(for some \textit{bond dimension} $N$), and $B$ is an $N\times N$
matrix quantifying the boundary conditions. The bond dimension determines
the degree of entanglement of the spins, with $N=1$ corresponding
to a separable state. Physically meaningful boundaries are either
$B=I$ (the identity) for translationally invariant MPS's or $B=|r\ket\bra l|$
for some states $|r\ket,|l\ket$ quantifying the effect of the boundary
on the right and left ends of the chain.
By performing similarity transformations on the $A$'s, all MPSs can
be put into a canonical form \cite{Perez-Garcia2006,Cirac}, in which
the $A$'s satisfy eq. (\ref{eq:cptp}) and therefore form a Kraus
map $\A\equiv\{A^{\ell}\}_{\ell=0}^{L-1}$. This map is usually called
a \textit{transfer channel} (also, double tensor \cite{Zeng2015}),
and it appears when one of the lattice sites from eq. (\ref{eq:mps})
is traced out.
Continuing to trace out more sites while also taking the thermodynamic
limit of the MPS ($M\rightarrow\infty$), one can obtain the normalization
of the state:
\begin{equation}
\lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|\P_{\A}^{\{B\}}\ket=\lim_{M\rightarrow\infty}\Tr\{\A^{\a(2M+1)}\B\}=\Tr\{\ppp\B\}\,,
\end{equation}
where $\B\equiv B\ot B^{\star}$, the trace is over superoperator
space, and $\a$ is the parameter that eliminates phases stemming
from rotating points. The addition of $\a$, which physically is equivalent
to blocking sites of the MPS and taking the limit of blocks, allows
one to define an unambiguous and non-pathological thermodynamic limit
for general boundary conditions.
As an example, for periodic boundary conditions $B=I$ and faithful
channels $\E$ containing rotating points $\varPsi_{\la=\frac{2\pi}{N}n}$
and conserved quantities $J^{\la=\frac{2\pi}{N}n}$ satisfying $\tr\{J^{\la\dg}\varPsi_{\la^{\prime}}\}=\d_{\la,\la^{\prime}}$,
the normalization is
\begin{equation}
\lim_{M\rightarrow\infty}\bra\P_{\E}^{\{I\}}|\P_{\E}^{\{I\}}\ket=\lim_{M\rightarrow\infty}\sum_{n=0}^{N-1}e^{i\frac{2\pi}{N}\a(2M+1)n}\,.
\end{equation}
Picking $\a=1$ yields zero whenever $2M+1$ and $N$ are noncommensurate
\{e.g., \cite{Haegeman2014}, Eq. (130)\}. In contrast, setting $\a=N$
gives $N$ (as was also noticed recently in Ref. \cite{Cirac}).
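A toy check of this normalization (my example: the period-$2$, N\'eel-like MPS
with Kraus operators $A^{0}=|0\ket\bra1|$ and $A^{1}=|1\ket\bra0|$, whose
transfer channel has one fixed point and one rotating point, so $N=2$):
\begin{verbatim}
# Normalization Tr{A^{alpha(2M+1)} B} for B = I: alpha = 1 oscillates to 0,
# alpha = N = 2 gives N, as in the text.
import numpy as np

A0 = np.array([[0., 1.], [0., 0.]])
A1 = np.array([[0., 0.], [1., 0.]])
M = np.kron(A0, A0.conj()) + np.kron(A1, A1.conj())   # transfer matrix

for sites in (5, 7, 9):                               # odd chain lengths
    print(sites,
          round(np.trace(np.linalg.matrix_power(M, 1 * sites)), 6),  # 0.0
          round(np.trace(np.linalg.matrix_power(M, 2 * sites)), 6))  # 2.0
\end{verbatim}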
A similar equation occurs if one wants to evaluate observables in
the thermodynamic limit (see below). In this way, the transfer channel
and boundaries determine the properties of the MPS in the thermodynamic
limit. One can also use $\B$ to get rid of any undesired components
of $\R_{\A}$ \cite{Ueda2011}. Note that $|\Psi_{\R_{\A}}^{\{B\}}\ket$
is also the fixed-point MPS that $|\P_{\A}^{\{B\}}\ket$ flows to
under RG transformations \cite{Verstraete2005,Wei2010,Cirac2017},
and
\begin{equation}
\lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|\P_{\A}^{\{B\}}\ket=\bra\Psi_{\R_{\A}}^{\{B\}}|\Psi_{\R_{\A}}^{\{B\}}\ket\,,
\end{equation}
so simplifying $\R_{\A}$ also yields insight into the structure of
RG fixed points.
\subsection{Boundary effects in the thermodynamic limit}
The exact connection between quantum channels and MPSs has been well-studied
for the case when the MPS is injective \textemdash{} when its corresponding
transfer channel has only one fixed point and no rotating points.
Since the results from the previous section are exactly about cases
where there are arbitrary numbers of fixed and rotating points, here
we will quantify the connection between asymptotics of quantum channels
and the thermodynamic limit of non-injective MPS. The approach is
somewhat reverse of what has been done before (see Sec. 3.2.2 of \cite{Perez-Garcia2006}):
instead of first considering a general MPS, I consider a general channel
$\A$ and simplify its corresponding MPS in the thermodynamic limit
by applying the above results about $\A$'s structure. Since applying
identical transformations $U$ to each site is the same as changing
basis for the Kraus operators of $\A$,
\begin{equation}
A^{\ell}\rightarrow\sum_{\ell^{\prime}}U_{\ell\ell^{\prime}}A^{\ell^{\prime}}\,,
\end{equation}
more technically this is a study of sets of MPS related by local
unitaries.
Let us apply the four-corners decomposition onto the MPS in order
to determine which blocks are relevant in the thermodynamic limit.
Assume that each $A^{\ell}=A_{\thd}^{\ell}$ now has a decaying subspace
$\lrbig$ and that powers of $\A$ eventually transfer the state completely
into $\ulbig$. Recall that $A_{\ul}^{\ell}\equiv E_{\ell}$ and so
the channel which determines the right fixed points is $\A_{\ul}\equiv\{E_{\ell}\}=\E$.
After some algebra, the coefficient $\tr\{B(A^{\ell_{-M}}\cdots A^{\ell_{M}})\}$
in the MPS (\ref{eq:mps}) becomes equal to
\begin{align}
& \,\,\,\phantom{+}\tr\left\{ B_{\ul}(E_{\ell_{-M}}\cdots E_{\ell_{M}})\right\} +\tr\left\{ B_{\lr}(A_{\lr}^{\ell_{-M}}\cdots A_{\lr}^{\ell_{M}})\right\} \nonumber \\
& +{\displaystyle \sum_{m=-M}^{M}}\tr\left\{ B_{\ll}(E_{\ell_{-M}}\cdots E_{\ell_{m-1}})A_{\ur}^{\ell_{m}}(A_{\lr}^{\ell_{m+1}}\cdots A_{\lr}^{\ell_{M}})\right\} \,.\label{eq:mps2}
\end{align}
The first term corresponds to the usual MPS $|\P_{\E}^{\{B\}}\ket$
whose transfer matrix $\E$ is faithful. The second term vanishes
in the thermodynamic limit because its corresponding transfer matrix
does not have any fixed points. When $B_{\ll}\neq0$, the third term
is present and has the form of a translationally-invariant domain
wall excitation. Therefore, the decaying subspace $\lrbig$ corresponds
to extra degrees of freedom on each site which house such an excitation.
This excitation is never present for periodic boundary conditions
($B=I$), allowing one to straightforwardly derive a standard irreducible
form for MPS with such boundary conditions in which the first and
second terms are decomposed into smaller irreducible blocks \cite{Perez-Garcia2006,Cirac}.
Let us continue to focus on ``twisted'' boundaries $B_{\ll}\neq0$.
The main result is that, in the thermodynamic limit, contributions
from extra degrees of freedom corresponding to $\lrbig$ can equivalently
be described by considering only $\A_{\ul}=\E$, but given a \textit{mixture}
of MPS having different boundary conditions. Culminating with Eq.
(\ref{eq:result-boundary-conditions-mps}), it will be shown that,
in the thermodynamic limit, expectation values of local observables
with an MPS $|\P_{\A}^{\{B\}}\ket$ can be equivalently calculated
from expectation values with the MPS $\{|\P_{\E}^{\{B_{k}\}}\ket\}_{k=0}^{K}$,
where $K>1$ and $B_{k}$ are distinct boundary conditions dependent
on $\R_{\A}\R_{\lr}$ and $B$.
Let us evaluate the expectation value of an observable $O$ on a site
in the thermodynamic limit. The number of lattice sites between the
site which supports $O$ and both boundaries is infinite and $\a$
is used to remove any phases occurring due to rotating points {[}see
eq. (\ref{eq:asproj}){]}. This allows one to simplify a previous
form of such a limit, eq. (133) of Ref. \cite{Haegeman2014}, and
remove any convergence issues arising from such phases. After some
algebra,
\begin{equation}
\lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O|\P_{\A}^{\{B\}}\ket=\Tr\{\R_{\A}\O\R_{\A}\B\}\equiv\Tr\{\O_{\A}\B\}\,,
\end{equation}
where the corresponding superoperator is
\begin{equation}
\O\equiv\sum_{k,\ell=0}^{d-1}\bra\ell|O|k\ket A^{k}\ot A^{\ell\star}\,.
\end{equation}
To finish the calculation, decompose $\ppp$ using eq. (\ref{eq:ppp})
and $\O$ using the block form of $A^{\ell}$ (\ref{eq:bl}), yielding
$\O\R_{\ul}=\R_{\ul}\O\R_{\ul}$ and correspondingly
\begin{equation}
\lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O|\P_{\A}^{\{B\}}\ket=\Tr\{\O_{\E}(\B_{\ul}+\ppp\R_{\lr}\B\R_{\ul})\}\,,\label{eq:mps-observable}
\end{equation}
where $\O_{\E}\equiv\ps\O\ps$ and $\B_{\ul}\equiv\R_{\ul}\B\R_{\ul}$.
The $\B_{\ul}$ term is the standard contribution of boundary effects
located in $\ulbig$ and corresponds to the first term in the form
of the MPS (\ref{eq:mps2}). By contrast, the only piece of $\B$
contributing to the second term in Eq. (\ref{eq:mps-observable})
is $\R_{\lr}\B\R_{\ul}=B_{\ll}\ot B_{\ll}^{\star}$, corresponding
to the \textit{third} term in the form of the MPS (\ref{eq:mps2}).
As a sanity check, taking periodic boundary conditions ($B=I=I_{\di}$)
yields $\R_{\lr}\B\R_{\ul}=0$ and so only the first term in Eq. (\ref{eq:mps-observable})
remains. In general, the domain-wall-like excitations from the third
term in Eq. (\ref{eq:mps2}) combined with ``twisted'' boundary
conditions $B_{\ll}\neq0$ \textit{can} contribute to the thermodynamic
limit of the MPS.
\subsection{Absorbing boundary effects}
One can interpret the contribution of $\lrbig$ in a different way
by thinking of both terms from Eq. (\ref{eq:mps-observable}) as coming
from the effective boundary on $\ulbig$,
\begin{equation}
\overline{\B}\equiv\B_{\ul}+\ppp\R_{\lr}\B\R_{\ul}=\overline{\B}_{\ul}\,.
\end{equation}
Since $\R_{\A}\R_{\lr}$ is a channel from $\lrbig$ to a subspace
of $\ulbig$, one can decompose it in terms of some Kraus operators
$F^{k}=F_{\ur}^{k}$: $\R_{\A}\R_{\lr}=\sum_{k=1}^{K}F^{k}\otimes F^{k\star}$.
(These Kraus operators are of course related to the rotating points
$\St_{\la\m}$ and the $\lrbig$ pieces of conserved quantities $J_{\lr}^{\la\m}$
from the previous section.) The rank $K$ is bounded by $\min\{\dim\ulbig,\dim\lrbig\}$,
so it is independent of the system size $M$. This shows that the
effects of $\lrbig$ can just as well be simulated by a \textit{superposition}
of effective boundary conditions $B_{\ul}$ with those from the set
$\{F_{\ur}^{k}B_{\ll}\}_{k=1}^{K}$,
\begin{equation}
\overline{\B}=\sum_{k=0}^{K}\B_{k}\equiv\sum_{k=0}^{K}B_{k}\otimes B_{k}^{\star}\,,
\end{equation}
where $B_{0}=B_{\ul}$ and $B_{k>0}=F_{\ur}^{k}B_{\ll}$. Plugging
in the above form for $\overline{\B}$ into Eq. (\ref{eq:mps-observable}),
\begin{equation}
\lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O|\P_{\A}^{\{B\}}\ket=\Tr\{\O_{\E}\overline{\B}\}=\sum_{k=0}^{K}\Tr\{\O_{\E}\B_{k}\}\,.
\end{equation}
Working backwards, each term in the sum over $k$ corresponds to the
thermodynamic limit of the MPS $|\Psi_{\E}^{\{B_{k}\}}\ket$:
\begin{equation}
\lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O|\P_{\A}^{\{B\}}\ket=\sum_{k=0}^{K}\lim_{M\rightarrow\infty}\bra\P_{\E}^{\{B_{k}\}}|O|\P_{\E}^{\{B_{k}\}}\ket\,.\label{eq:result-boundary-conditions-mps}
\end{equation}
Therefore, when calculating expectation values of local observables,
one can drop $\lrbig$ as long as one includes a \textit{mixture}
of MPS with different boundary conditions.
The same occurs with two observables $O^{(1)}$ and $O^{(2)}$ (with
corresponding superoperators $\O^{(1)}$ and $\O^{(2)}$) separated
by some number of sites $W$,
\begin{equation}
\lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O^{(1)}O^{(2)}|\P_{\A}^{\{B\}}\ket=\tr\left\{ \R_{\A}\O^{(1)}\A^{W}\O^{(2)}\overline{\B}\right\} \,,
\end{equation}
and take the $W\rightarrow\infty$ limit by blocking sites in order
to get rid of any phases from rotating points. This yields
\begin{equation}
\lim_{M,W\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O^{(1)}O^{(2)}|\P_{\A}^{\{B\}}\ket=\tr\left\{ \O_{\E}^{(1)}\O_{\E}^{(2)}\overline{\B}\right\} \,,
\end{equation}
where $\O_{\E}^{(i)}=\ps\O^{(i)}\ps$. Similarly, consider
an observable touching the left boundary:
\begin{align}
\lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O^{(L)}|\P_{\A}^{\{B\}}\ket & =\Tr\{\O^{(L)}\R_{\A}\B\}=\Tr\{\O_{\E}^{(L)}\overline{\B}\}\,.\label{eq:leftside}
\end{align}
Somewhat surprisingly, considering an observable touching the right
boundary produces something completely different:
\begin{equation}
\lim_{M\rightarrow\infty}\bra\P_{\A}^{\{B\}}|O^{(R)}|\P_{\A}^{\{B\}}\ket=\Tr\{\R_{\A}\O^{(R)}\B\}\,.
\end{equation}
Notice how $\R_{\A}$ now comes before the observable {[}cf. the first
equality of Eq. (\ref{eq:leftside}){]}, which results in a series
of new terms stemming from combinations of $A_{\ur}^{\ell}$ and $A_{\lr}^{\ell}$
with $B$. Why is there an asymmetry between the two boundaries? This
has to do with the fact that we had initially assumed an asymmetric
form for our MPS, $A^{\ell}=A_{\thd}^{\ell}$. The domain wall-type
excitations represented by the third term in Eq. (\ref{eq:mps2})
are such that there is always an $A_{\lr}$ at the right-most site
$M$.
Since one has to block sites in order to have a valid thermodynamic
limit, one might imagine that effects of periodicities in the MPS
(i.e., effects of the rotating points) are eliminated. This is not
the case due to the presence of the eigenvalues $e^{i\la}$ in the
piece $\R_{\A}\R_{\lr}$ (\ref{eq:main-asymptotic-projection}). This
piece in turn affects the boundary conditions $\{B_{k}\}_{k}$ required
to make sure Eq. (\ref{eq:result-boundary-conditions-mps}) is satisfied.
Thus, MPS with rotating points retain some of their properties even
in a thermodynamic limit which blocks sites.
\section{Conclusion}
An important property of quantum channels $\A$ is their asymptotics,
i.e., their behavior in the limit of infinite applications, akin to
the infinite-time limit of Lindbladians \cite{ABFJ}. An infinite
product of $\A$ produces the channel's asymptotic projection $\ppp$
\textemdash{} a projection on all of the non-decaying eigenspaces
of the channel (i.e., whose eigenvalues have unit modulus). The superoperator
$\ppp$ can be constructed out of the channel's left and right rotating
points, or as they are called here, conserved quantities $J$ and
steady-state basis elements $\St$. There has been a lot of overlapping
work quantifying such asymptotics, but there have remained a few gaps
in the literature when it came to considering conserved quantities
with eigenvalue other than one. The aim of the first half of this
work is to close those gaps in a simple and standalone fashion.
I start off with two results about channels admitting a full-rank
fixed point, which I call faithful. The first is that any $J$ commutes
with a faithful channel's Kraus operators up to a phase. The second
is that the eigenvalue of any diagonalizable $J$ of a faithful channel
is an $N$th root of unity, where $N$ is bounded by the dimension
of the channel's Hilbert space. A third result deals with determining
the dependence of the asymptotic state on initial state and on properties
of $\A$. An analytical formula is derived that quantifies the dependence
of the final state on initial states located in $\A$'s decaying eigenspaces
(i.e., whose eigenvalues are less than one in modulus).
The aim of the second half of this work is to apply the third result
above to matrix product states (MPS), where asymptotics come into
play in the thermodynamic limit or in the limit of infinite renormalization
transformations. In the same way that asymptotic states depend on
initial states, the thermodynamic limit of MPS (whose transfer matrices
admit more than one fixed point) depends on the boundary conditions.
In such situations, the effects of any decaying bond degrees of freedom
can be absorbed in the boundary conditions. Quantitatively, it is
shown that the thermodynamic expectation value of a local operator
$O$ with an MPS having transfer matrix $\A$ and boundary condition
$B$ is equivalent to a sum of expectation values with MPS having
$\A$ restricted to its largest invariant subspace and several different
boundary conditions $\{B_{k}\}$ (\ref{eq:result-boundary-conditions-mps}).
Since similar two-dimensional MPS (often called ``PEPS'' \cite{Cirac2011})
and multiscale entanglement renormalization ansatz (MERA \cite{Giovannetti2008,Pfeifer2009})
states also correspond to a transfer channel, such techniques may be
further generalized to study PEPS dependence on boundaries and dependence
of MERA hierarchies on their ``caps''.
\begin{acknowledgments}
Insightful discussions with L. Jiang, M. Fraas, N. Schuch, M. M. Wolf,
D. Perez-Garcia, M. B. Sahinoglu, A. M. Turner, B. Bradlyn, F. Ticozzi,
and X. Chen are acknowledged. This research was supported in part
by the National Science Foundation (PHY-1748958) and the Walter Burke
Institute for Theoretical Physics at Caltech. I thank KITP Santa Barbara
for their hospitality as part of the Quantum Physics of Information
workshop.
\end{acknowledgments}
\section{Introduction}
One of the most outstanding problems in gravitation theory is the
study of the relation that exists between the critical phenomena
and the process of black hole formation.
The studies of non-linearity of the Einstein field equations near
the threshold
of
black hole formation reveal very rich phenomena \cite{Chop93},
which are quite similar to critical phenomena in Statistical
Mechanics and Quantum Field Theory \cite{Golden}. In particular,
by numerically studying the gravitational collapse of a massless
scalar field in $3+1$-dimensional spherically symmetric
spacetimes, Choptuik found that the mass of such formed black
holes takes a scaling form,
\begin{equation}
\label{I.1}
M_{BH} = C(p)\left(p -p^{*}\right)^{\gamma},
\end{equation}
where $C(p)$ is a constant and depends on the initial data, and
$p$ parameterises a family of initial data in such a way that when
$p > p^{*}$ black holes are formed, and when $p < p^{*}$ no black
holes are formed. It was shown that, in contrast to $C(p)$, the
exponent $\gamma$ is universal to all the families of initial
data studied. Numerically it was determined as $\gamma \sim 0.37$.
The solution with $p = p^{*}$, usually called the critical
solution, is found also universal. Moreover, for the massless
scalar field it is periodic, too. Universality of the critical
solution and exponent, as well as the power-law scaling of the
black hole mass all have given rise to the name {\em Critical
Phenomena in Gravitational Collapse}. Choptuik's studies were soon
generalised to other matter fields \cite{Gun00,Wang01}, and now
the following seems clear: (a) There are two types of critical
collapse, depending on whether the black hole mass takes the
scaling form (\ref{I.1}) or not. When it takes the scaling form,
the corresponding collapse is called Type $II$ collapse, and when
it does not it is called Type $I$ collapse. In the type $II$
collapse, all the critical solutions found so far have either
discrete self-similarity (DSS) or homothetic self-similarity
(HSS), depending on the matter fields. In the type $I$ collapse,
the critical solutions have neither DSS nor HSS. For certain
matter fields, these two types of collapse can co-exist. (b) For
Type $II$ collapse, the corresponding exponent is universal only
with respect to certain matter fields. Usually, different matter
fields have different critical solutions and, in the sequel,
different exponents. But for a given matter field the critical
solution and the exponent are universal. So far, the
studies have been mainly restricted to spherically symmetric case
and their non-spherical linear perturbations. Therefore, it is not
really clear whether or not the critical solution and exponent are
universal with respect to different symmetries of the
spacetimes \cite{Cho03,Wang03}. (c) A critical solution for both of the two
types has one and only one unstable mode. This now is considered as one
of the main criteria for a solution to be critical. (d) The
universality of the exponent is closely related to the last
property. In fact, using dimensional analysis \cite{Even} one can
show that
\begin{equation}
\label{I.2}
\gamma = \frac{1}{\left|k\right|},
\end{equation}
where $k$ denotes the unstable mode.
From the above, one can see that to study (Type $II$)
critical collapse, one
may first find some particular solutions by imposing certain
symmetries, such as, DSS or HSS. Usually this considerably
simplifies the problem. For example, in the spherically symmetric
case, by imposing HSS symmetry the Einstein field equations can be
reduced from PDE's to ODE's. Once the particular solutions are
known, one can study their linear perturbations and find out the
spectrum of the corresponding eigenmodes. If a solution has one
and only one unstable mode, by definition we may consider it as a
critical solution (See also the discussions given in
\cite{Brady02}). The studies of critical collapse have been
mainly numerical so far, and analytical ones are still highly
hindered by the complexity of the problem, even after imposing
some symmetries.
Lately, Pretorius and Choptuik (PC) \cite{PC00} studied
gravitational collapse of a massless scalar field in an anti-de
Sitter background in $2+1$-dimensional spacetimes with circular
symmetry, and found that the collapse exhibits critical
phenomena and the mass of such formed black holes takes the
scaling form of Eq.(\ref{I.1}) with $\gamma = 1.2 \pm 0.02$, which
is different from that of the corresponding $3+1$-dimensional
case. In addition, the critical solution is also different, and,
instead of having DSS, now has HSS. The above results were
confirmed by independent numerical studies \cite{HO01}. However,
the exponent obtained by Husain and Olivier (HO), $\gamma \sim
0.81$, is quite different from the one obtained by PC. It is not
clear whether the difference is due to numerical errors or to some
unknown physics.
After the above numerical work, analytical studies of the same
problem soon followed up \cite{Gar01,CF01,GG02,HWW04}. In
particular, Garfinkle found a class, say, $S[n]$, of exact
solutions to the Einstein-massless-scalar field equations and
showed that in the strong field regime the $n = 4$ solution fits
very well with the numerical critical solution found by PC.
Lately, Garfinkle and Gundlach (GG) studied their linear
perturbations and found that only the solution with $n = 2$ has
one unstable mode, while the one with $n = 4$ has three
\cite{GG02}. According to Eq.(\ref{I.2}), the corresponding
exponent is given by $\gamma = 1/|k| = 4/3$. Independently,
Hirschmann, Wu and one of the present authors (HWW) systematically
studied the problem, and found that the $n = 4$ solution indeed
has only one unstable mode \cite{HWW04}. This difference actually
comes from the use of different boundary conditions. As a matter
of fact, in addition to the ones imposed by GG \cite{GG02}, HWW
further required that no matter field should come out of the
already formed black holes. This additional condition seems
physically quite reasonable and has been widely used in the
studies of black hole perturbations \cite{Chandra83}. However, now
the corresponding exponent is given by $\gamma = 1/|k| = 4$,
which is significantly different from the numerical ones. So far,
no explanations about these differences have been worked out,
yet.
Self-similarity is usually
divided into two classes, one is the discrete self-similarity
mentioned above, and the other is the so-called kinematic
self-similarity (KSS) \cite{CH89}, and sometimes it is also called
continuous self-similarity (CSS). KSS or CSS is further classified
into three different kinds, the zeroth, first and second. The
kinematic self-similarity of the first kind is also called
homothetic self-similarity, first introduced to General Relativity
by Cahill and Taub in 1971 \cite{CT71}. In Statistical Mechanics,
critical solutions with KSS of the second kind seem more generic
than those of the first kind \cite{Golden}. However, critical
solutions with KSS of the second kind have not been found so far
in gravitational collapse, and it would be very interesting to
look for such solutions.
We shall present in this work the study of the linear perturbations
of the $2+1$-dimensional circularly symmetric solution, obtained
in a previous work \cite{CFJW04}, with kinematic self-similarity of
the second kind and show that the background solution is not critical.
In Section II we present the field equations with kinematic self-similarity
of the second kind. In Section III we perturb linearly the field equations.
In Section IV we present the solution of the linear perturbation equations.
In Section V we apply the boundary conditions to the perturbed solutions and
in Section VI we conclude our work.
\section{The Field Equations with Kinematic Self-similarity of the
Second Kind}
The general metric for such spacetimes can be written in the form
\begin{equation}
\label{1.1}
ds^{2} = e^{2\Phi(r,t)}dt^2-e^{2\Psi(r,t)}dr^2-r^{2}S(r,t)^2 d\theta^2.
\end{equation}
Then, the corresponding non-vanishing components of the Ricci tensor
are
\begin{eqnarray}
\label{1.2a}
R_{tt} &=& e^{2(\Phi-\Psi)}\left[\Phi_r\left(\Phi_r-\Psi_r+\frac{S_r}{S}
+\frac{1}{r}\right)+\Phi_{rr}\right]-\frac{S_{tt}}{S}+\Phi_t
\frac{S_t}{S}+\Phi_t\Psi_t-{\Psi_t}^2-\Psi_{tt}\nonumber\\
R_{tr} &=&
\frac{\Psi_t}{r}+\Psi_t\frac{S_r}{S}+\Phi_r\frac{S_t}{S}-\frac{S_t}{rS}-
\frac{S_{rt}}{S}\nonumber\\
R_{rr} &=&
e^{2(\Psi-\Phi)}
\left[\Psi_{tt}+\Psi_t\left({\Psi_t}+\frac{S_t}{S}-\Phi_t\right)\right]
-\Phi_{rr}+\Phi_r\Psi_r-{\Phi_r}^2-\frac{S_{rr}}{S}-2\frac{S_r}{rS}
+\Psi_r\frac{S_r}{S}+\frac{\Psi_r}{r}\nonumber\\
R_{\theta\theta} &=&
r^2 S^2\left\{e^{-2\Phi}\left[\frac{S_t}{S}
\left(\Psi_t-\Phi_t\right)+\frac{S_{tt}}{S}\right]-e^{-2\Psi}\left[\frac{S_r}{S}\left(\frac{2}{r}
+\Phi_r-\Psi_r\right)+\frac{1}{r}(\Phi_r-\Psi_r)+\frac{S_{rr}}{S}\right]\right\},
\end{eqnarray}
where the indices $t$ and $r$ denote partial differentiation with respect
to the time and radial coordinates, respectively.
Then, we introduce two self-similar variables $\tau$ and $x$ via the
relations \begin{equation} \label{1.3a} x= \ln \left(\frac r{\left(-t\right)^{\frac 1\alpha
}}\right),\;\;\;\;\; \tau =-\ln \left(-t\right),
\end{equation}
or inversely,
\begin{equation}
\label{1.3b}
r = e^{(\alpha x - \tau)/\alpha},\;\;\;
t = - e^{-\tau},
\end{equation}
where $\alpha$ is a {\it dimensionless} constant.
For any given function $f\left(t,r\right) $, we have
\begin{eqnarray}
\label{1.4}
f_{,t} &=&-\frac 1{\alpha t}\left(\alpha
f_{,\tau }+f_{,x}\right),\;\;\;\;
f_{,r}=\frac 1rf_{,x}, \nonumber \\
f_{,tr} &=&-\frac 1{\alpha tr}\left( \alpha f_{,\tau
x}+f_{,xx}\right),\;\;\;\;\;
f_{,rr}=\frac 1{r^2}\left( f_{,xx}-f_{,x}\right) , \nonumber \\
f_{,tt} &=&\frac 1{\alpha ^2t^2}\left( \alpha ^2f_{,\tau \tau }
+2\alpha f_{,\tau x}+f_{,xx}+\alpha ^2f_{,\tau }+\alpha
f_{,x}\right),
\end{eqnarray}
where a comma denotes partial differentiation.
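These relations can be checked mechanically; the following \texttt{sympy}
sketch (my own check, using a concrete test function rather than a general
$f$) verifies the expressions for $f_{,t}$ and $f_{,tt}$:
\begin{verbatim}
# Verify f_{,t} and f_{,tt} from Eq. (1.4) on a concrete test function.
import sympy as sp

t = sp.symbols('t', negative=True)
r, a = sp.symbols('r alpha', positive=True)
ts, xs = sp.symbols('tau x')

f = sp.exp(2*ts) * sp.sin(xs) + ts * xs**2      # nontrivial test f(tau, x)
tau = -sp.log(-t)
x = sp.log(r) - sp.log(-t) / a
F = f.subs({ts: tau, xs: x})                    # f as a function of (t, r)

lhs1 = sp.diff(F, t)
rhs1 = (-(a*sp.diff(f, ts) + sp.diff(f, xs)) / (a*t)).subs({ts: tau, xs: x})
print(sp.simplify(lhs1 - rhs1) == 0)            # True

lhs2 = sp.diff(F, t, 2)
rhs2 = ((a**2*sp.diff(f, ts, 2) + 2*a*sp.diff(f, ts, xs) + sp.diff(f, xs, 2)
         + a**2*sp.diff(f, ts) + a*sp.diff(f, xs))
        / (a*t)**2).subs({ts: tau, xs: x})
print(sp.simplify(lhs2 - rhs2) == 0)            # True
\end{verbatim}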
Substituting these equations into Eq.(\ref{1.2a}), we find that
in terms of the self-similar variables, the Ricci tensor is given by
\begin{eqnarray}
\label{1.2b}
R_{tt} &=&
\frac{e^{2(\Phi-\Psi)}}{r^2}\left\{\Phi_x(\Phi_x-\Psi_x+\frac{S_x}{S})+
\Phi_{xx}\right\}-\frac{1}{\alpha^2
t^2}\left\{\alpha^2\left[\Psi_{\tau\tau}+
\Psi_\tau(1+\Psi_\tau-\Phi_\tau)+\frac{S_{\tau\tau}}{S}+\frac{S_\tau}{S}
(1-\Phi_\tau)\right]\right. \nonumber\\
&+&\alpha\left[2\Psi_{\tau x}+\Psi_\tau(\Psi_x-\Phi_x)+
\Psi_x(\Psi_\tau-\Phi_\tau+1)+2\frac{S_{\tau
x}}{S}+\frac{S_x}{S}(1-\Phi_\tau)-\frac{S_\tau}{S}\Phi_x\right] \nonumber \\
&+&\left. \left[\Psi_{xx}+
\Psi_x(\Psi_x-\Phi_x)+\frac{S_{xx}}{S}-\frac{S_x}{S}\Phi_x\right]\right\}
\nonumber\\
R_{tr} &=& -\frac{1}{\alpha
tr}\left\{\alpha\left[\Psi_\tau\left(1+\frac{S_x}{S}\right)+\frac{S_\tau}{S}
\left(\Phi_x-1\right)-\frac{S_{\tau x}}{S}\right]+\Psi_x
\left(1+\frac{S_x}{S}\right)
+\frac{S_x}{S}\left(\Phi_x-1\right)-\frac{S_{xx}}{S}\right\}\nonumber\\
R_{rr} &=&\frac{e^{2(\Psi-\Phi)}}{\alpha^2
t^2}\left\{\alpha^2\left[\Psi_{\tau\tau}+\Psi_\tau\left(1+\Psi_\tau+
\frac{S_\tau}{S}-\Phi_\tau\right)\right]\right.\\ \nonumber
& & \left.+\alpha\left[2\Psi_{\tau
x}+\Psi_x\left(1+\Psi_\tau+\frac{S_\tau}{S}-\Phi_\tau\right)+
\Psi_\tau\left(\Psi_x+\frac{S_x}{S}-\Phi_x\right)\right]\right.\nonumber\\
&+&\left.\Psi_{xx}+\Psi_x\left(\Psi_x+\frac{S_x}{S}-\Phi_x\right)\right\}+
\frac{1}{r^2}\left[\Phi_x\left(\Psi_x-\Phi_x+1\right)-\Phi_{xx}+
\Psi_x\left(1+\frac{S_x}{S}\right)-\frac{1}{S}\left(S_{xx}+S_x\right)\right]
\nonumber\\
R_{\theta\theta} &=& r^2S^2\left\{\frac{e^{-2\Phi}}{\alpha^2 t^2
S}\left[\alpha^2\left(S_\tau\left(1+\Psi_\tau-\Phi_\tau\right)+S_{\tau\tau}\right)+
\alpha\left(S_\tau\left(\Psi_x-\Phi_x\right)+S_x\left(1+\Psi_\tau-\Phi_\tau\right)+
2S_{\tau x}\right)\right.\right.\nonumber\\
&+&\left.\left.S_x\left(\Psi_x-\Phi_x\right)+S_{xx}\right]
-\frac{e^{-2\Psi}}{r^2}\left[\frac{1}{S}\left(S_x\left(1+\Phi_x-\Psi_x\right)+
S_{xx}\right)+\Phi_x-\Psi_x\right]\right\},
\end{eqnarray}
where the indices $\tau$ and $x$ denote differentiation with respect to
these variables.
\section{The Linear Perturbation of the Field Equations}
Once we have the general expressions of $R_{\mu\nu}$ in terms of $\tau$ and
$x$, let us consider the following perturbations,
\begin{eqnarray}
\label{1.5a}
\Phi(\tau, x) &=& \Phi_{0}(x) + \epsilon \Phi_{1}(x)e^{k\tau},\nonumber\\
\Psi(\tau, x) &=& \Psi_{0}(x) + \epsilon \Psi_{1}(x)e^{k\tau},\nonumber\\
S(\tau, x) &=& S_{0}(x) + \epsilon S_{1}(x)e^{k\tau},\nonumber\\
\label{1.5}
\phi(\tau, x) &=& \phi_{0}(t) + \epsilon \phi_{1}(x)e^{k\tau},
\end{eqnarray}
where $\epsilon$ is a very small real constant; the quantities with
subscript ``1" denote perturbations, while those with subscript ``0" denote
the background self-similar solution.
The background solution is given by
\begin{eqnarray}
\label{backsol}
\Phi_0(x) &=& 0, \nonumber\\
\Psi_0(x) &=& -\frac{1}{2}\alpha x, \nonumber\\
S_0(x) &=& \frac{2}{2-\alpha}e^{-\frac{1}{2}\alpha x},\nonumber\\
\phi_{0}(t) &=& 2q\ln(-t),
\end{eqnarray}
and the apparent horizon is given by
\begin{equation}
\label{rAHt}
r_{AH}(t) = \left[\left(2-\alpha\right)(-t)^{1/2}\right]^{2/(2-\alpha)},
\end{equation}
where $q = \pm \frac{1}{\sqrt{8}}$ and $\alpha < 2$; the integration
constants $\varphi_0$, $\psi_{0}$ and $s_{0}$ of Ref. \cite{CFJW04} have
been absorbed into the form of Eqs.(\ref{backsol}).
It is understood that there may be many perturbation modes for different
values (possibly complex) of the constant $k$. The general perturbation
will be the sum of these individual modes. Those modes with $Re(k) > 0$
grow as $\tau \rightarrow
\infty$ and are referred to as unstable modes, while the ones with
$Re(k) < 0$ decay and are referred to as stable modes. By definition,
critical solutions will have one and only one unstable mode.
Substituting Eq.(\ref{1.5}) into Eq.(\ref{1.2b}), we have
\begin{equation}
\label{1.7}
R_{\mu\nu} = R_{\mu\nu}(\tau, x, \epsilon).
\end{equation}
Now, regarding $R_{\mu\nu}$ as a function of $\epsilon$, we expand it
in powers of $\epsilon$,
\begin{equation}
\label{1.8}
R_{\mu\nu}(\tau, x, \epsilon) =
\frac{1}{(-t)^{2}}\left\{R_{\mu\nu}^{(0)}(x)
+ \epsilon R_{\mu\nu}^{(1)}(x)e^{k\tau} +
O\left(\epsilon^{2}\right) \right\},
\end{equation}
where $R_{\mu\nu}^{(0)}(x)$ is the part of the Ricci tensor corresponding
to the background, and $R_{\mu\nu}^{(1)}(x)$ is the perturbation part, which
is a function of the background quantities $\Phi_{0}(x), \Psi_{0}(x), S_{0}(x)$
and of the linear perturbations $\Phi_{1}(x), \Psi_{1}(x), S_{1}(x)$.
In the paper of Hirschmann, Wang \& Wu \cite{HWW04}, these quantities were
calculated for the self-similar solutions of the first kind, but in double
null coordinates [cf. Eq.(65) given there].
To first order in $\epsilon$, it can be shown that the
non-vanishing components of the Ricci tensor are given by
\begin{eqnarray}
\label{eqsa}
R_{tt}^{(1)}(x)&=&
e^{2\left(\Phi_0 - \Psi_0 -x +\frac{\tau}{\alpha} \right)}
\left\{ {\Phi_0}^\prime \left( 2{\Phi_1}^\prime - {\Psi_1}^\prime \right) - {\Phi_1}^\prime {\Psi_0}^\prime +{\Phi_1}^{\prime \prime}
+ 2 \left( \Phi_1 - \Psi_1 \right)\left[ {\Phi_0}^\prime \left({\Phi_0}^\prime -{\Psi_0}^\prime\right)
+ {\Phi_0}^{\prime \prime} \right] {\phantom{\frac{1}{1}}} \right.
\nonumber \\ & & \left.
+ \frac{1}{S_0}\left[
- \frac{{S_0}^\prime S_1}{S_0} {\Phi_0}^\prime
+ 2 \left( \Phi_1 - \Psi_1 \right) {\Phi_0}^\prime {S_0}^\prime + {\Phi_0}^\prime {S_1}^\prime +{\Phi_1}^\prime {S_0}^\prime \right]
\right\} +
\\ \nonumber & & +
\frac{e^{2 \tau}}{\alpha^2}
\left\{ {\Phi_0}^\prime \left(
\alpha k \Psi_1 +{\Psi_1}^\prime \right) +
{\Psi_0}^\prime \left(\alpha k \Phi_1 +{\Phi_1}^\prime \right) -
2 {\Psi_0}^\prime\left(\alpha k \Psi_1 +{\Psi_1}^\prime \right) -
\alpha^2 k^2 \Psi_1 - 2 \alpha k {\Psi_1}^\prime - {\Psi_1}^{\prime \prime}
\right. \\ \nonumber & & {\phantom{
\frac{e^{2 \tau}}{\alpha^2}}}
- \alpha^2 k \Psi_1
- \alpha {\Psi_1}^\prime +\\ \nonumber & &
+ \frac{1}{S_0} \left[
{\Phi_0}^\prime \left( \alpha k S_1 + {S_1}^\prime \right)
+ {S_0}^\prime \left( \alpha k \Phi_1 + {\Phi_1}^\prime \right)
- \alpha^2 k^2 S_1 - 2 \alpha k {S_1}^\prime - {S_1}^{\prime \prime} -
\alpha^2 k S_1 - \alpha {S_1}^\prime \right. \\ \nonumber &&
{\phantom{
\frac{e^{2 \tau}}{\alpha^2}}} \left. \left.
- \frac{1}{S_0} \left(
{S_0}^\prime S_1 {\Phi_0}^\prime - {S_0}^{\prime \prime} S_1 - \alpha {S_0}^\prime S_1
\right) \right] \right\} \\
R_{tr}^{(1)}(x)&=&
-\frac{e^{\frac{\alpha +1}{\alpha}\tau - x}}{\alpha S_0}
\left[ - \frac{S_1}{S_0} \left( {S_0}^\prime - {\Phi_0}^\prime {S_0}^\prime -{\Psi_0}^\prime {S_0}^\prime + {S_0}^{\prime \prime} -
S_0 {\Psi_0}^\prime
\right) - {\Phi_0}^\prime \left( \alpha k S_1 + {S_1}^\prime \right) - {\Psi_0}^\prime {S_1}^\prime
\right. +
\nonumber\\
& & \phantom{{-\frac{e^{\frac{\alpha +1}{\alpha}\tau - x}}{\alpha}}}
\left. -
{S_0}^\prime \left( \alpha k \Psi_1 + {\Psi_1}^\prime + {\Phi_1}^\prime \right)
+ \alpha k {S_1}^\prime + {S_1}^{\prime \prime} - {\Psi_0}^\prime S_1
- S_0 \left( \alpha k \Psi_1 + {\Psi_1}^\prime \right) +
\alpha k S_1 + {S_1}^\prime
\right]
\\ \nonumber
R_{rr}^{(1)}(x) &=&
\frac{e^{2 \left(\Psi_0 -\Phi_0 + \tau \right)}}{\alpha^2}
\left[ 2\left(\Psi_1 - \Phi_1 \right)
\left( {{\Psi_0}^\prime}^2+{\Psi_0}^{\prime \prime} + \alpha {\Psi_0}^\prime - {\Phi_0}^\prime {\Psi_0}^\prime + {\Psi_0}^\prime {S_0}^\prime \right)
+ 2 \alpha k \Psi_1 {\Psi_0}^\prime + 2 {\Psi_0}^\prime {\Psi_1}^\prime + \alpha^2 k^2 \Psi_1
\right. +
\nonumber \\ & &
{\phantom{\frac{e^{2\left(\Psi_0 - \Phi_0 + \tau \right)}}{\alpha^2}}}
+ 2 \alpha k {\Psi_1}^\prime + {\Psi_1}^{\prime \prime} + \alpha^2 k \Psi_1 + \alpha {\Psi_1}^\prime
- \alpha k \Psi_1 {\Phi_0}^\prime - {\Phi_0}^\prime {\Psi_1}^\prime - \alpha k {\Psi_0}^\prime \Phi_1
- {\Psi_0}^\prime {\Phi_1}^\prime + \nonumber
\\ & & \phantom{{-\frac{e^{\frac{\alpha +1}{\alpha}\tau - x}}{\alpha}}}
\left.
\frac{1}{S_0^2}
\left( \alpha k S_0 {\Psi_0}^\prime S_1 + S_0 {\Psi_0}^\prime {S_1}^\prime + \alpha k
S_0 {S_0}^\prime \Psi_1 + S_0 {S_0}^\prime {\Psi_1}^\prime - S_1 {\Psi_0}^\prime {S_0}^\prime \right) \right] +
\nonumber\\ & & -
e^{2\left( \frac{ \tau}{\alpha} -x \right)}
\left[
- {\Phi_1}^\prime {\Psi_0}^\prime + {\Phi_1}^{\prime \prime} - {\Phi_1}^\prime + 2 {\Phi_0}^\prime {\Phi_1}^\prime - {\Phi_0}^\prime
{\Psi_1}^\prime - \frac{1}{S_0} \left(
{\Psi_0}^\prime {S_1}^\prime + {\Psi_1}^\prime {S_0}^\prime + S_0 {\Psi_1}^\prime - {S_1}^{\prime \prime} - {S_1}^\prime \right) \right. +
\nonumber \\ && {\phantom{e^{2\left( \frac{ \tau}{\alpha} -x \right)}}}
+ \left. \frac{S_1}{S_0^2} \left(
{\Psi_0}^\prime {S_0}^\prime - {S_0}^{\prime \prime} -{S_0}^\prime \right) \right]
\nonumber \\
R_{\theta\theta}^{(1)}(x) &=&
-e^{-2\Psi_0} \left[ \left( S_1 - 2S_0 \Psi_1\right)
\left( {\Phi_0}^\prime {S_0}^\prime + {\Phi_0}^\prime S_0 - {\Psi_0}^\prime {S_0}^\prime - {\Psi_0}^\prime S_0 + {S_0}^{\prime \prime} +{S_0}^\prime \right)
\right. +
\nonumber
\\ && \left.
+ S_0 \left( {\Phi_0}^\prime {S_1}^\prime + {\Phi_0}^\prime S_1 + {\Phi_1}^\prime {S_0}^\prime + {\Phi_1}^\prime S_0
- {\Psi_0}^\prime {S_1}^\prime - {\Psi_0}^\prime S_1 - {\Psi_1}^\prime {S_0}^\prime - {\Psi_1}^\prime S_0 + {S_1}^{\prime \prime} + {S_1}^\prime \right)
\right] -
\\ && -
\frac{e^{2 \left(\frac{\alpha-1}{\alpha} \tau +x -\Phi_0
\right)}}{\alpha^2}
\left[
\left( S_1 - 2S_0 \Phi_1\right)
\left( {\Phi_0}^\prime {S_0}^\prime - {\Psi_0}^\prime {S_0}^\prime - {S_0}^{\prime \prime} -\alpha {S_0}^\prime\right)
\phantom{\frac{}{1}} \right. +
\nonumber \\ && +
S_0 \left( \alpha k {\Phi_0}^\prime S_1 + {\Phi_0}^\prime {S_1}^\prime
+ \alpha k \Phi_1 {S_0}^\prime + {\Phi_1}^\prime {S_0}^\prime - \alpha k S_1 {\Psi_0}^\prime
-{\Psi_0}^\prime {S_1}^\prime - \alpha k \Psi_1 {S_0}^\prime +
\right.
\nonumber \\ &&
\left. \left.
- {\Psi_1}^\prime {S_0}^\prime - \alpha^2 k^2 S_1 - 2\alpha k
{S_1}^\prime - {S_1}^{\prime \prime} - \alpha^2 k S_1 - \alpha {S_1}^\prime \right) \right],
\nonumber
\end{eqnarray}
where the prime denotes differentiation with respect to $x$.
Once we have $R_{\mu\nu}^{(1)}(x)$, we have to calculate the quantities
\begin{equation}
\label{1.9}
A_{\mu\nu} \equiv \phi_{,\mu}\phi_{,\nu}.
\end{equation}
Substituting Eqs.(\ref{1.5}) into the above equations, we have
\begin{equation}
\label{1.10}
A_{\mu\nu}(\tau, x, \epsilon) =
\frac{1}{(-t)^{2}}\left\{A_{\mu\nu}^{(0)}(x)
+ \epsilon A_{\mu\nu}^{(1)}(x)e^{k\tau} +
O\left(\epsilon^{2}\right) \right\},
\end{equation}
where $A_{\mu\nu}^{(0)}(x)$ is the background part,
and $A_{\mu\nu}^{(1)}(x)$ is the perturbation part, which is a function of
the background field $\phi_{0}(t)$ and of the linear perturbation
$\phi_{1}(x)$, and is given by
\begin{eqnarray}
\label{eqsb}
A_{tt}^{(1)}(x)&=& -\frac{e^{2\tau}}{\alpha} \left[4 q \left(
\alpha k \phi_1 + \phi_1^\prime \right) \right]\nonumber\\
A_{tr}^{(1)}(x)&=&
- \frac{e^{\frac{\alpha +1}{\alpha}\tau-x}}{\alpha}
\left[ 2 q \phi_1^\prime \right]
\nonumber\\
A_{rr}^{(1)}(x) &=& 0
\nonumber\\
A_{\theta\theta}^{(1)}(x) &=& 0,
\end{eqnarray}
where $q$ enters through the time derivative of the background field
$\phi_{0}$.
Once we have $A_{\mu\nu}^{(1)}(x)$ and $R_{\mu\nu}^{(1)}(x)$, the linear
perturbation equations are given by
\begin{equation}
\label{eqs}
R_{\mu\nu}^{(1)}(x) = A_{\mu\nu}^{(1)}(x),
\end{equation}
which in general are complicated.
Having obtained the general linear perturbation
equations (\ref{eqs}), we now turn to the background solutions
given by Eqs.(\ref{backsol}). By virtue of the simple form of
these solutions and the fact that $\Phi_{0}(x) = 0$, Eqs.(\ref{eqs}) can be
solved in our case, and they read
\begin{eqnarray}
\label{sisteq1}
\alpha^2 k \Phi_1 + \alpha {\Phi_1}^\prime +\alpha^2 k^2 \Psi_1 + 2\alpha k {\Psi_1}^\prime + {\Psi_1}^{\prime \prime}
+\frac{(2-\alpha)e^{\frac{\alpha}{2}x}}{4} \left[ \left( 2 k^2 +2 k +\frac{1}{2} \right) \alpha^2 S_1
+ 2 \alpha \left( 2 k +1 \right) {S_1}^\prime + 2 {S_1}^{\prime \prime} \right]
= & & \\ \nonumber
-4 q \alpha \left(\alpha k \phi_1 + \phi_1^\prime \right), & &
\end{eqnarray}
\begin{equation}
\label{sisteq2}
{\Phi_1}^{\prime \prime}= 0,
\end{equation}
\begin{equation}
\label{sisteq3}
(\alpha -2)\left[\alpha k \Psi_1 + {\Psi_1}^\prime \right] +
\alpha {\Phi_1}^\prime + \frac{(2-\alpha)e^{\frac{\alpha}{2}x}}{2} \left[ \alpha \left( 2 k + 1 \right) S_1 + \left( 2 \alpha k +
\alpha +2 \right) {S_1}^\prime + 2 {S_1}^{\prime \prime} \right]
= - 2 q \phi_1^\prime,
\end{equation}
\begin{equation}
\label{sisteq4}
\alpha^2 k \left( 2 k -1 \right) \Psi_1 +
\alpha \left( 4 k -1\right) {\Psi_1}^\prime
+ 2 {\Psi_1}^{\prime \prime} + \alpha \left( \alpha k \Phi_1 + {\Phi_1}^\prime \right)
- \alpha \frac{(2-\alpha)e^{\frac{\alpha}{2}x}}{4} \left[ \left( 2 \alpha k + \alpha \right) S_1 + 2 {S_1}^\prime \right]
= 0,
\end{equation}
\begin{equation}
\label{sisteq5}
\left( 2 - \alpha \right) {\Psi_1}^\prime + \left( 2 - \alpha \right) {\Phi_1}^\prime - 2 {\Phi_1}^{\prime \prime}
- \frac{(2-\alpha)e^{\frac{\alpha}{2}x}}{2} \left[ \alpha S_1 + \left( 2 + \alpha \right) {S_1}^\prime + 2 {S_1}^{\prime \prime} \right]
= 0,
\end{equation}
\begin{equation}
\label{sisteq6}
\alpha \left( \alpha k \Psi_1 + {\Psi_1}^\prime \right)
- \alpha \left( \alpha k \Phi_1 + {\Phi_1}^\prime \right) -
\frac{(2-\alpha)e^{\frac{\alpha}{2}x}}{2} \left[ \alpha^2 k \left( 1 + 2 k \right) S_1 + \alpha \left( 1 + 4 k \right)
{S_1}^\prime + 2 {S_1}^{\prime \prime} \right]
= 0,
\end{equation}
\begin{equation}
\label{sisteq7}
\left( 2 - \alpha \right) {\Psi_1}^\prime - \left( 2 - \alpha \right) {\Phi_1}^\prime
- \frac{(2-\alpha)e^{\frac{\alpha}{2}x}}{2} \left[ \alpha S_1 + \left( 2 + \alpha \right) {S_1}^\prime + 2 {S_1}^{\prime \prime} \right]
= 0.
\end{equation}
\section{The Solutions of the Linear Perturbation Equations}
We will now solve the system of perturbation equations (\ref{sisteq1})-(\ref{sisteq7}).
From Eq.(\ref{sisteq2}) we have
\begin{equation}
\label{phi1}
\Phi_1=ax+b.
\end{equation}
Subtracting Eq.(\ref{sisteq7}) from Eq.(\ref{sisteq5}) and using Eq.(\ref{sisteq2}), we have
\begin{equation}
\label{sisteq5-7}
(2-\alpha)\Phi_1'=0,
\end{equation}
whose solutions are $\alpha=2$ (which is out of the range of our solution)
or $\Phi_1'=0$. Thus, from Eq.(\ref{phi1}) we have that
\begin{equation}
\label{phi1xb}
\Phi_1=b=constant.
\end{equation}
Using Eq.(\ref{phi1xb}) and summing Eqs.(\ref{sisteq5})
and (\ref{sisteq7}), we get
\begin{equation}
\label{sisteq5+7}
\Psi_1'={{e^{{1 \over 2} \alpha x}} \over {2}}[\alpha S_1 +
(2+\alpha)S_1'+ 2S_1''].
\end{equation}
Using Eq.(\ref{phi1xb}) and substituting Eq.(\ref{sisteq5+7}) into
Eq.(\ref{sisteq6}), we get
\begin{equation}
\label{psi1x}
\Psi_1=b-{\Psi_1' \over {\alpha k}}+{{(2-\alpha)e^{{1 \over 2} \alpha x}} \over
{2\alpha^2k}}[\alpha^2k(1+2k)
S_1 + \alpha(1+4k)S_1'+ 2S_1''].
\end{equation}
Substituting Eq.(\ref{sisteq5+7}) into Eq.(\ref{psi1x}) and
differentiating it, we have
\begin{equation}
\label{dfeqS1x}
A S_1 + B S_1' + C S_1'' + 4 S_1''' + E e^{-{1 \over 2}\alpha x} = 0,
\end{equation}
where
\begin{equation}
\label{A}
A = {\alpha^2 \over 2} (- 8\alpha k^3 + 4 \alpha k + \alpha + 16 k^3 - 4 k),
\end{equation}
\begin{equation}
\label{B}
B = \alpha ( - 8\alpha k^2 + 4\alpha k + 3\alpha + 16 k^2 ),
\end{equation}
\begin{equation}
\label{C}
C = 2 (8\alpha k -\alpha - 4 k + 4),
\end{equation}
\begin{equation}
\label{E}
E = 4\alpha^2 b k^2.
\end{equation}
Since our background solution with second kind self-similarity is identical
to the solution with self-similarity of the first kind \cite{CFJW04}, we will
study hereinafter only the case $\alpha=1$. Thus,
\begin{equation}
A = 4k^3+\frac{1}{2},
\end{equation}
\begin{equation}
B = 8 k^2 + 4 k + 3,
\end{equation}
\begin{equation}
C = 8 k + 6,
\end{equation}
\begin{equation}
E = 4bk^2,
\end{equation}
and the solution of equation (\ref{dfeqS1x}) is given by
\begin{equation}
S_1(x) = -\frac{b e^{-\frac{1}{2} x}}{k-1}+c_1 e^{-\frac{1}{2} (1+2 k) x}+c_2 e^{-\frac{1}{2} (k+1+\sqrt{\Delta}) x}+c_3 e^{-\frac{1}{2} (k+1-\sqrt{\Delta}) x},
\end{equation}
where
\begin{equation}
\Delta= -k (3 k-4).
\end{equation}
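As a consistency check (ours, not part of the original derivation), the exponents $-\frac{1}{2}(1+2k)$ and $-\frac{1}{2}(k+1\pm\sqrt{\Delta})$ are the roots of the characteristic polynomial $4\lambda^{3}+C\lambda^{2}+B\lambda+A=0$, and the first term is the particular solution balancing the source term $E e^{-x/2}$. This can be verified symbolically with the following Python sketch (using sympy; the symbol names are ours):
\begin{verbatim}
# Sketch: verify the general solution of Eq. (dfeqS1x) for alpha = 1.
# The symbol names (k, lam, x, b) are ours.
import sympy as sp

k, lam, x, b = sp.symbols('k lam x b')

A = 4*k**3 + sp.Rational(1, 2)
B = 8*k**2 + 4*k + 3
C = 8*k + 6
E = 4*b*k**2

# Characteristic polynomial of the homogeneous part
charpoly = 4*lam**3 + C*lam**2 + B*lam + A
Delta = -k*(3*k - 4)
roots = [-(1 + 2*k)/2,
         -(k + 1 + sp.sqrt(Delta))/2,
         -(k + 1 - sp.sqrt(Delta))/2]
for r in roots:
    assert sp.simplify(sp.expand(charpoly.subs(lam, r))) == 0

# Particular solution balancing the source term E e^{-x/2}
S1p = -b*sp.exp(-x/2)/(k - 1)
lhs = (A*S1p + B*sp.diff(S1p, x) + C*sp.diff(S1p, x, 2)
       + 4*sp.diff(S1p, x, 3) + E*sp.exp(-x/2))
assert sp.simplify(lhs) == 0
print("general solution of (dfeqS1x) verified")
\end{verbatim}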
In the next section we will apply the boundary conditions for two special
cases: $\Delta > 0$ and $\Delta < 0$.
\section{The Boundary Conditions for the Perturbed Solutions}
We will apply the boundary conditions only in two regions of the spacetime:
at the centre of the spacetime, $r = 0$, and
at the apparent horizon $r_{AH}$ given by equation (\ref{rAHt}),
which furnishes
\begin{equation}
\label{xAH}
r_{AH}=-t.
\end{equation}
Thus, the metric at the apparent horizon is given by
\begin{equation}
\label{ds2AH}
ds^2_{AH}=dt^2 - dr^2 - 4 (-t)^2 d \theta^2.
\end{equation}
It can easily be seen from this metric that the apparent horizon is singular,
in this case, only at $t=0$. Thus the final state of the collapse is a marginally naked singularity.
We would like to note that for the perturbed part of the metric (\ref{1.1})
to represent circular symmetry, some physical and geometrical
conditions need to be imposed \cite{Yasuda}. For gravitational
collapse, we impose the following conditions at $r=0$:
\begin{description}
\item ($i$) There must exist a symmetry axis, which can be expressed as
\begin{equation}
\label{cd1}
X \equiv \left|\xi^{\mu}_{(\theta)}\xi^{\nu}_{(\theta)}
g_{\mu\nu} \right| \rightarrow 0,
\end{equation}
as $r \rightarrow 0$, where we have chosen the radial coordinate
such that the axis is located at $r = 0$, and
$\xi^{\mu}_{(\theta)}$ is the Killing vector with closed orbits,
given by $\xi^{\alpha}_{(\theta)}\partial_{\alpha} =
\partial_{\theta}$.
\item ($ii$) The spacetime near the symmetry axis is locally flat, which
can be written as \cite{Kramer80}
\begin{equation}
\label{cd2}
\frac{X_{,\alpha}X_{,\beta} g^{\alpha\beta}}{4X}
\rightarrow - 1,
\end{equation}
as $r \rightarrow 0$.
Note that solutions failing to satisfy this
condition are sometimes also acceptable. For example, when the
left-hand side of the above equation approaches a finite constant,
the singularity at $r = 0$
may be related to a point-like particle \cite{VS}.
\item ($iii$) No closed timelike curves (CTC's). In spacetimes with
circular symmetry, CTC's can be easily introduced. To ensure
their absence, we assume that the condition
\begin{equation}
\label{cd3}
\xi^{\mu}_{(\theta)}\xi^{\nu}_{(\theta)}g_{\mu\nu} < 0,
\end{equation}
holds in the whole spacetime.
\end{description}
\subsection{Case $\Delta > 0$}
In this case we have from equation (\ref{psi1x}) that
\begin{eqnarray}
\Psi_1(x)&=& \frac{1}{k-1} \left[ k^2 c_3 e^{\frac{1}{2} x (k+\sqrt{-k (3 k-4)})}+k^2 c_2 e^{\frac{1}{2} x (k-\sqrt{-k (3 k-4)})}+3 e^{x k} k b-2 c_2 k e^{\frac{1}{2} x (k-\sqrt{-k (3 k-4)})} \right. \nonumber \\
&+& \left. c_3 \sqrt{-k (3 k-4)} k e^{\frac{1}{2} x (k+\sqrt{-k (3 k-4)})}-2 c_3 k e^{\frac{1}{2} x (k+\sqrt{-k (3 k-4)})}-c_2 \sqrt{-k (3 k-4)} k e^{\frac{1}{2} x (k-\sqrt{-k (3 k-4)})} \right. \nonumber \\
&-&\left. c_3 \sqrt{-k (3 k-4)} e^{\frac{1}{2} x (k+\sqrt{-k (3 k-4)})}-2 b e^{x k}+c_2 e^{\frac{1}{2} x (k-\sqrt{-k (3 k-4)})}+c_2 \sqrt{-k (3 k-4)} e^{\frac{1}{2} x (k-\sqrt{-k (3 k-4)})} \right. \nonumber \\
&+& \left. c_3 e^{\frac{1}{2} x (k+\sqrt{-k (3 k-4)})} \right] e^{-x k}+c_4 e^{-x k}.
\end{eqnarray}
In order to apply the first boundary condition (\ref{cd1}), we have to calculate the quantity
$\sqrt{X}=r S_1$, which can be written as
\begin{equation}
rS_1=\frac{b}{k-1}(-r t)^{\frac{1}{2}}+c_1 r^{\frac{1}{2}-k}(-t)^{\frac{1}{2}+k} +
c_2 r^{-\frac{1}{2}(k-1+\sqrt{\Delta})}(-t)^{\frac{1}{2}(k+1+\sqrt{\Delta})}+
c_3 r^{-\frac{1}{2}(k-1-\sqrt{\Delta})}(-t)^{\frac{1}{2}(k+1-\sqrt{\Delta})}.
\end{equation}
Since the limit of $rS_1$ must vanish when $r \rightarrow 0$, all
the exponents
of $r$ must be greater than zero. It is easily shown that some of the exponents
cannot satisfy this condition, so the first condition is not fulfilled.
Thus, these perturbations are ruled out by the boundary conditions.
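For a given $k$ (whose admissible range is fixed by the background solution), the four exponents of $r$ in $rS_1$ are easily tabulated. The following Python sketch (our own illustration, not part of the original argument) evaluates them over $0 < k < 4/3$, the range in which $\Delta > 0$:
\begin{verbatim}
# Sketch: tabulate the exponents of r in r*S_1 for the Delta > 0 case,
# i.e. 0 < k < 4/3. Which exponents are positive depends on k; the
# admissible values of k are fixed by the background solution.
import numpy as np

for k in (0.25, 0.5, 0.75, 1.0, 1.25):
    delta = -k*(3*k - 4)
    exps = (0.5,
            0.5 - k,
            -0.5*(k - 1 + np.sqrt(delta)),
            -0.5*(k - 1 - np.sqrt(delta)))
    print("k = %4.2f: exponents = %+.3f %+.3f %+.3f %+.3f" % ((k,) + exps))
\end{verbatim}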
\subsection{Case $\Delta < 0$}
In this case we have from equation (\ref{psi1x}) that
\begin{eqnarray}
\Psi_1(x)&=&-\frac{e^{-\frac{1}{2} x k}}{k-1} \left[ -2 c_2 k^2 \cos \left(\frac{1}{2} \sqrt{k (3 k-4)} x \right)+4 c_2 k \cos \left( \frac{1}{2} \sqrt{k (3 k-4)} x \right)-3 k b e^{\frac{1}{2} x k} \right. \nonumber \\
&+& \left. 2 c_2 k \sqrt{k (3 k-4)} \sin \left( \frac{1}{2} \sqrt{k (3 k-4)} x \right)-2 c_2 \sqrt{k (3 k-4)} \sin \left( \frac{1}{2} \sqrt{k (3 k-4)} x \right)+2 b e^{\frac{1}{2} x k} \right. \nonumber \\
&-& \left. 2 c_2 \cos \left( \frac{1}{2} \sqrt{k (3 k-4)} x \right) \right]+c_4 e^{-x k}.
\end{eqnarray}
We now have two possibilities for the arbitrary constants $c_2$ and
$c_3$ that yield a real function $S_1(x)$: $c_2 = c_3$ and $c_2 = -c_3$.
For $c_2 = c_3$ we get
\begin{equation}
S_1(x) = -\frac{b e^{-\frac{1}{2} x}}{k-1}+c_1 e^{-\frac{1}{2} (1+2 k) x}+2c_2 e^{-\frac{1}{2} (k+1) x} \cos \left( \frac{1}{2}\sqrt{-\Delta} x \right),
\end{equation}
and $rS_1$ is given by
\begin{equation}
rS_1=\frac{b}{k-1}(-r t)^{\frac{1}{2}}+c_1 r^{\frac{1}{2}-k}(-t)^{\frac{1}{2}+k} +
2c_2 r^{-\frac{1}{2}(k-1)}\cos\left[ \frac{1}{2}\sqrt{-\Delta} \ln \left( \frac{r}{-t} \right) \right](-t)^{\frac{1}{2}(k+1)}.
\end{equation}
Applying again
the condition (\ref{cd1}),
we can see that all the exponents of $r$ are
greater than zero only when $k < 0$, which admits only stable modes for the
perturbation.
For the second boundary
condition (\ref{cd2}),
we have surveyed several sets of values of $b$, $c_1$,
$c_2$, $c_4$ and $k$, and we have found at least one set that satisfies this
condition: $b=0$, $c_1=0$, $c_2=1$, $c_4=1$, $k=-1$. In this case we get
\begin{equation}
\lim_{r \rightarrow 0} \; rS_1 = -7-4\sqrt{7},
\end{equation}
or
\begin{equation}
\lim_{r \rightarrow 0} \; rS_1 = -4+4\sqrt{7}.
\end{equation}
For $c_2 = -c_3$ we get
\begin{equation}
S_1(x) = -\frac{b e^{-\frac{1}{2} x}}{k-1}+c_1 e^{-\frac{1}{2} (1+2 k) x}-2 i c_2 e^{-\frac{1}{2} (k+1) x} \sin \left( \frac{1}{2}\sqrt{-\Delta} x \right).
\end{equation}
Since
this case is analogous to the case where $c_2=c_3$, we will not present it.
\section{Conclusions}
We have presented in this work the study of the linear perturbations
of the $2+1$-dimensional circularly symmetric solution, obtained
by Chan, da Silva, Villas da Rocha \& Wang
\cite{CFJW04}, with kinematic
self-similarity of the second kind. We have shown there that the
solution is Kantowski-Sachs like \cite{Kramer80} and it may be considered
as representing a Friedmann-like cosmological model in a 2+1 dimensional
spacetime.
We have obtained in this paper an exact solution for the perturbed
equations, which admits only stable modes,
showing that our Friedmann-like background solution in 2+1 dimensions is stable.
This result is in agreement with the conclusions of the work of
Hirschmann, Wang \& Wu \cite{HWW04}.
\section*{Acknowledgments}
The authors would like to thank Dr. Anzhong Wang for the helpful discussions
and suggestions. The financial assistance from UERJ (JFVdaR) and FAPERJ/UERJ
(MFAdaS) is gratefully acknowledged. The author (R.C.) acknowledges the
financial support from FAPERJ (no. E-26/171.754/2000 and E-26/171.533/2002).
\section{Introduction}
Code-based cryptography relies crucially on the hardness of decoding generic linear codes.
This problem has been studied for a long time and despite many efforts on this issue \cite{P62,S88,D91,B97b,MMT11,BJMM12,MO15}
the best algorithms for solving this problem \cite{BJMM12,MO15} are exponential in the number of errors that have to be corrected:
correcting $t$ errors in a binary linear code of length $n$ has with the aforementioned algorithms a cost of
$2^{ct(1+ o(1))}$, where $c$ is a constant depending on the code rate $R$ and on the algorithm.
All the efforts that have been spent on this problem have only managed to decrease slightly this exponent $c$.
Let us emphasize that this exponent is the key for estimating the security level of any code-based cryptosystem.
All the aforementioned algorithms can be viewed as a refinement of the original Prange algorithm \cite{P62} and are actually all
referred to as ISD algorithms. There is however an algorithm that does not rely at all on Prange's idea and does not belong to the ISD family: statistical decoding, first proposed by Al Jabri in \cite{J01} and slightly improved by Overbeck in \cite{O06}. Later on,
\cite{FKI07} proposed an iterative version of this algorithm. It is essentially a two-stage algorithm: the first step
consists in computing an exponentially large number of parity-check equations of the
smallest possible weight $w$; the error is then recovered by a kind of
majority vote based on these parity-check equations.
However, even if the study made by R. Overbeck in \cite{O06} led to the
conclusion that this algorithm did not allow better attacks on the cryptosystems he considered, he did not propose
an asymptotic formula for its complexity that would have allowed one to conduct a systematic study of the performance
of this algorithm. Such an asymptotic formula has been proposed in \cite{FKI07} through a simplified analysis of
statistical decoding, but as we will see this analysis does not capture accurately
the complexity of statistical decoding. Moreover, neither paper assessed in general the complexity of the first step of the algorithm, which consists in computing a large set of parity-check equations of moderate weight.
The primary purpose of this paper is to clarify this matter by giving three results.
First, we give a rigorous asymptotic study of the exponent $c$ of statistical decoding
by relying on asymptotic formulas for Krawtchouk polynomials \cite{IS98}. The number of equations which are needed for this method turns out to be
remarkably simple for a large set of parameters. In Theorem \ref{biasSDecoding} we prove that the number of parity check equations
of weight $\omega n$ that are needed in a code of length $n$ to decode $\tau n$ errors is of order $O(2^{n(H(\omega)+H(\tau)-1)})$ (when we ignore polynomial factors) and this as soon as $\omega \geq \frac{1}{2} - \sqrt{\tau-\tau^2}$.
For instance, when we consider the hardest instances of the decoding problem which correspond to the
case where the number of errors is equal to the Gilbert-Varshamov bound, then essentially our results indicate that we have to take
{\em all} possible parity-checks of a given weight (when the code is assumed to be random) to perform statistical decoding.
This asymptotic study also allows us to
conclude that the modeling of iterative
statistical decoding made in \cite{FKI07} is too optimistic. Second, inspired by ISD techniques, we propose a rather efficient method for
computing a huge set of parity-check equations of rather low weight.
Finally, we give a lower bound on the complexity of this algorithm that shows that it cannot improve upon Prange's algorithm
for the hardest instances of decoding.
This lower bound follows by observing that the number $P_w$ of parity-check equations of weight $w$ that are needed
for the second step of the algorithm
is clearly a lower bound on the complexity of statistical decoding. What we actually prove in the last part
of the paper is that, irrespective of the way these parity-check equations are obtained in the first step, the lower bound
on the complexity of statistical decoding coming from the infimum of these $P_w$'s is always larger than the complexity of the Prange
algorithm for the hardest instances of decoding.
\section{Notation}
As our study will be asymptotic, we neglect polynomial factors and use the following notation:
\begin{nota}
Let $f,g : \mathbb{N} \rightarrow \mathbb{R}$, we write $f = \tilde{O}(g)$ iff there exists
a polynomial $P$ such that $f=O(Pg)$.
\end{nota}
Moreover, we will often use the classical result $\binom{n}{w} = \tilde{O}\left( 2^{nH\left( \frac{w}{n} \right)} \right)$
where $H$ denotes the binary entropy function. We will also have to deal with complex numbers and follow the conventions of \cite{IS98}:
${\text{\bf i}}$ is the imaginary unit satisfying the equation ${\text{\bf i}}^2=-1$, $\Re(z)$ is the real part of the complex number $z$ and we choose the branch of the
complex logarithm with
$$
\ln(z) = \ln|z| + {\text{\bf i}} \arg(z), \;\;z \in {\mathbb{C}} \setminus [-\infty,0],
$$
and $\arg(z) \in[-\pi,\pi)$.
\section{Statistical Decoding}
\label{sdeco}
In the whole paper we consider the computational decoding problem which we define as follows:
\begin{problem}
Given a binary linear code of length $n$ of rate $R$, a word $y \in \mathbb{F}_{2}^{n}$
at distance $t$ from the code, find a codeword $x$ such that $d_{H}(x,y)=t$ where $d_{H}$ denotes the Hamming distance.
\end{problem}
Generally we will specify the code by an arbitrary generator matrix $G$ and we will denote by CSD$(G,t,y)$ a specific instance of this problem.
We will be interested as is standard in cryptography in the case where
$G \in \mathbb{F}_{2}^{Rn\times n}$ \textit{is supposed to be random}.
The idea behind statistical decoding may be described as follows.
We first compute a very large set ${\mathscr S}$ of parity-check equations of some weight $w$ and compute all
scalar products $\langle y,h \rangle$ (scalar product is modulo $2$)
for $h \in {\mathscr S}$. It turns out that if we consider only the parity-checks involving a given code position $i$ the
scalar products have a probability of being equal to $1$ which depends whether there is an error in this position or not.
Therefore counting the number of times when $\langle y,h \rangle=1$ allows to recover the error in this position.
Let us analyze now this algorithm more precisely.
To make this analysis tractable we will need to make a few simplifying assumptions.
The first one we make is the same as the one made by R. Overbeck in \cite{O06}, namely that
\begin{ass}\label{ass:one}
The distribution of the $\langle y,h \rangle$'s when $h$ is drawn uniformly at random from the dual codewords of weight $w$ is approximated by the distribution of $\langle y,h \rangle$ when
$h$ is drawn uniformly at random among the words of weight $w$.
\end{ass}
A much simpler model is given in \cite{FKI07} and is based on modeling the distribution of the $\scp{y}{h}$'s
as the distribution of $\scp{y}{h}$ where the coordinates of $h$ are i.i.d. and distributed as a Bernoulli variable of parameter $w/n$.
This presents the advantage of making the analysis of statistical decoding much simpler and allows to analyze more refined versions of statistical decoding.
However as we will show, this is an oversimplification and leads to an over-optimistic estimation of the complexity of
statistical decoding.
The following notation will be useful.
\begin{nota}{ }\mbox{} \\
$\cdot$ $S_w \mathop{=}\limits^{\triangle} \{x \in \mathbb{F}_2^n : w_H(x)=w\}$ denotes the set of binary words of length $n$ and weight $w$; \\
$\cdot$ $S_{w,i} \mathop{=}\limits^{\triangle} \{x \in S_w: x_i = 1\}$;\\
$\cdot$ ${\mathscr H}_w \mathop{=}\limits^{\triangle} {\mathcal C}^\perp \cap S_w$;\\
$\cdot$ ${\mathscr H}_{w,i} \mathop{=}\limits^{\triangle} {\mathcal C}^\perp \cap S_{w,i}$;\\
$\cdot$ $X \sim \mathcal{B}(p)$ means that $X$ follows a Bernoulli law of parameter $p$ ;\\
$\cdot$ $h \sim S_{w,i}$ means we pick $h$ uniformly at random in $S_{w,i}$.
\end{nota}
\subsection{Bias in the parity-check sum distribution}
\label{bias}
We start the analysis of statistical decoding by computing the following probabilities which approximate the
true probabilities we are interested in (which correspond to choosing $h$ uniformly at random
in ${\mathscr H}_{w,i}$ and not in $S_{w,i}$) under Assumption \ref{ass:one}
\[ q_{1}(e,w,i) = \mathbb{P}_{h \sim S_{w,i}} \left( \langle e,h \rangle = 1 \right) \mbox{ when } e_{i} = 1 \]
\[ q_{0}(e,w,i) = \mathbb{P}_{h \sim S_{w,i}} \left( \langle e,h \rangle = 1 \right) \mbox{ when } e_{i} = 0 \]
These probabilities are readily seen to be equal to
\[ q_{1}(e,w,i) = \frac{\mathop{\sum}\limits_{j \mbox{ \tiny{even}}}^{w-1} \binom{t-1}{j} \binom{n-t}{w-1-j}} {\binom{n-1}{w-1}} \]
\[ q_{0}(e,w,i) = \frac{\mathop{\sum}\limits_{j \mbox{ \tiny{odd}}}^{w-1} \binom{t}{j} \binom{n-t-1}{w-1-j}} {\binom{n-1}{w-1}} \]
They are independent of the error and the position $i$. So, in the following we will use the notation $q_{1}$ and $q_{0}$.
We will define the biases $\varepsilon_{0}$ and $\varepsilon_{1}$ of statistical decoding by
\[ q_{0} = \frac{1}{2} + \varepsilon_{0} \mbox{ } ; \mbox{ } q_{1} = \frac{1}{2} + \varepsilon_{1} \]
It will turn out, and this is essential, that $\varepsilon_0 \neq \varepsilon_1$.
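These biases are easy to evaluate exactly. The following Python sketch (ours; the parameters $n$, $t$, $w$ are purely illustrative) computes $\varepsilon_0$ and $\varepsilon_1$ from the two hypergeometric sums above:
\begin{verbatim}
# Sketch: exact biases eps_0, eps_1 in the constant-weight model,
# computed from the hypergeometric sums above.
from math import comb
from fractions import Fraction

def biases(n, t, w):
    d = comb(n - 1, w - 1)
    q1 = Fraction(sum(comb(t - 1, j) * comb(n - t, w - 1 - j)
                      for j in range(0, w, 2)), d)
    q0 = Fraction(sum(comb(t, j) * comb(n - t - 1, w - 1 - j)
                      for j in range(1, w, 2)), d)
    return q0 - Fraction(1, 2), q1 - Fraction(1, 2)

eps0, eps1 = biases(n=60, t=10, w=12)
print(float(eps0), float(eps1))   # the two biases indeed differ
\end{verbatim}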
We can use these biases ``as a distinguisher''. They are at the heart of statistical decoding.
Statistical decoding is nothing but a statistical hypothesis testing algorithm distinguishing between two hypotheses :
\[ {\mathcal H}_0 \mbox{ } : \mbox{ } e_{i} = 0 \quad ; \quad {\mathcal H}_{1} \mbox{ } : \mbox{ } e_{i} = 1 \] based on computing
the random variable $V_{m}$ for $m$ uniform and independent draws of vectors in ${\mathscr H}_{w,i}$:
\[ V_{m} = \sum_{k=1}^{m} \sgn(\varepsilon_{1} - \varepsilon_{0}) \cdot \langle y,h^{k} \rangle \in \mathbb{Z} \]
We have $\langle y,h^{k}\rangle \sim \mathcal{B}(1/2 + \varepsilon_{l})$ according to $\mathcal{H}_{l}$. So the expectation of $V_{m}$ is given under $\mathcal{H}_{l}$ by:
\[ E_{l} = m \sgn(\varepsilon_{1} - \varepsilon_{0}) (1/2 + \varepsilon_{l}) \]
We point out that we have $E_{1} > E_{0}$ regardless of the term $\sgn(\varepsilon_{1} - \varepsilon_{0})$. In order to apply the following proposition, we make the following assumption:
\begin{ass}
\label{ass:two}
The $\langle y,h^{k} \rangle$'s are independent variables.
\end{ass}
\begin{proposition}
[Chernoff's Bound]$ $ Let $0 < p < 1$, $Y_{1},\cdots,Y_{m}$ i.i.d $\sim \mathcal{B}(p)$ and we set $Z_{m} = \sum_{k=1}^{m} Y_{k}$. Then,
\[ \forall t \geq 0, \quad \mathbb{P}\left( | Z_{m} - mp | \geq m \delta \right) \leq 2e^{-2m\delta^{2}} \]
\end{proposition}
\textbf{Consequences:}
Under $\mathcal{H}_{l}$, we have
\[ \mathbb{P}\left( | V_{m} - m\sgn(\varepsilon_{1}- \varepsilon_{0})\cdot (1/2 + \varepsilon_{l}) | \geq m \cdot \frac{|\varepsilon_{1} - \varepsilon_{0}|}{2} \right) \leq 2\cdot 2^{-m \cdot \frac{(\varepsilon_{1} - \varepsilon_{0})^{2}}{2\ln(2)}} \]
To take our decision we proceed as follows: if $V_{m} < \frac{E_{0} + E_{1}}{2}$ where
\[ \frac{E_{1} + E_{0}}{2} = \frac{m}{2} \sgn(\varepsilon_{1} - \varepsilon_{0}) (1+\varepsilon_{1} + \varepsilon_{0}) \]
we choose $\mathcal{H}_{0}$, and $\mathcal{H}_{1}$ otherwise.
For the cases of interest to us (namely $w$ and $t$ linear in $n$) the bias $\varepsilon_1 - \varepsilon_0$ is an exponentially small function of the
codelength $n$ and it is obviously enough to choose $m$ to be of order $O\left( \frac{\log n}{(\varepsilon_{1}-\varepsilon_{0})^{2}}\right)$ to be able
to make the right decision on all $n$ positions simultaneously.
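The behaviour of this test is easy to check empirically. The sketch below (our own toy experiment; all parameters are illustrative) draws parity-checks uniformly from $S_{w,i}$, as in Assumption \ref{ass:one}, and estimates the probabilities $q_0$ and $q_1$ that drive the vote:
\begin{verbatim}
# Sketch: empirical estimate of q_0 and q_1 under Assumption 1
# (h drawn uniformly from S_{w,i}). Illustrative parameters only.
import random

def draw_h(n, w, i):
    # uniform word of weight w with a 1 in position i
    h = [0]*n
    h[i] = 1
    for p in random.sample([j for j in range(n) if j != i], w - 1):
        h[p] = 1
    return h

n, t, w, i, m = 60, 10, 12, 0, 20000
random.seed(1)
for bit in (0, 1):
    e = [0]*n
    e[i] = bit
    for p in random.sample([j for j in range(n) if j != i], t - bit):
        e[p] = 1
    freq = sum(sum(a*b for a, b in zip(e, draw_h(n, w, i))) % 2
               for _ in range(m)) / m
    print("e_i = %d: empirical P(<e,h> = 1) = %.4f" % (bit, freq))
\end{verbatim}
The two empirical frequencies should approach the exact values $q_0$ and $q_1$, and their gap is precisely what the majority vote exploits.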
\par{\em On the optimality of the decision.}
All the arguments used for distinguishing both hypotheses are very crude and this raises the question of whether a better test exists. It turns out that in the regime of interest to us, namely $t$ and $w$ linear in $n$, the term $\tilde{O}\left( \frac{1}{(\varepsilon_{1}-\varepsilon_{0})^{2}}\right)$ is of the right order. Indeed our statistical test actually amounts to the Neyman-Pearson test (with a threshold which in this case is not necessarily in the middle, i.e. equal to $m\frac{1+\varepsilon_0+\varepsilon_1}{2}$). In the case of interest to us, the bias between both distributions $\varepsilon_1 - \varepsilon_0$ is exponentially small in $n$ and Chernoff's bound captures accurately the large deviations of the random variable $V_m$. Now we
could wonder whether using some finer knowledge about the hypotheses ${\mathcal H}_0$ and ${\mathcal H}_1$ could do better. For instance we know the a priori probabilities
of these hypotheses since $\mathbb{P}(e_i=1)=\frac{t}{n}$. It can be readily verified that using Bayesian hypothesis testing based on this knowledge of the a priori probabilities of both hypotheses does not allow one to change
the order of the number of tests, which is still $\tilde{O}\left( \frac{1}{(\varepsilon_{1}-\varepsilon_{0})^{2}}\right)$ when $t$ and $w$ are linear in $n$.
\subsection{The statistical decoding algorithm}
\label{sdecobias}
Statistical decoding is a randomized algorithm which uses the previous distinguisher. As we just noted, this distinguisher needs $\tilde{O}\left( \frac{1}{(\varepsilon_{1}-\varepsilon_{0})^{2}}\right)$ parity-check equations of weight $w$ to work. This number obviously depends on $w,R$ and $t$ and we use the notation:
\begin{nota}
$P_{w} \mathop{=}\limits^{\triangle} \frac{1}{(\varepsilon_{1} - \varepsilon_{0})^{2}}$.
\end{nota}
Now we have two ways to present statistical decoding: the computation of the $\tilde{O}(P_{w})$ parity-check equations can be considered either as a precomputation or as a part of the algorithm. For the precomputation case, simply remove Line $4$ of Algorithm 1
and consider the ${\mathscr S}_{i}$'s as an additional input to the algorithm. \texttt{ParityCheckComputation}$_{w}$ will denote an algorithm which for an input $G,i$ outputs $\tilde{O}(P_{w})$ vectors of ${\mathscr H}_{w,i}$.
\begin{algorithm}
\caption{\texttt{DecoStat} : \textbf{Statistical Decoding}}
\begin{algorithmic}[1]
\State $Input : G \in \mathbb{F}_{2}^{Rn \times n}, y = xG + e \in \mathbb{F}_{2}^{n}, w \in \mathbb{N}$
\State $Output : e$ /*\textit{Error Vector}*/
\For{$i = 1 \cdots n$}
\State ${\mathscr S}_{i} \leftarrow \texttt{ParityCheckComputation}_{w}(G,i)$ /*\textit{Auxiliary Algorithm}*/
\State $V_{i} \leftarrow 0$
\ForAll{$h \in {\mathscr S}_{i}$}
\State $V_{i} \leftarrow V_{i} + \sgn(\varepsilon_{1} - \varepsilon_{0})\cdot \langle y,h \rangle $
\EndFor
\If{$V_{i} < \sgn(\varepsilon_{1} - \varepsilon_{0}) P_{w} \frac{1+\varepsilon_{1}+\varepsilon_{0}}{2}$}
\State $e_{i} \leftarrow 0$
\Else
\State $e_{i} \leftarrow 1$
\EndIf
\EndFor
\State \Return $e$
\end{algorithmic}
\end{algorithm}
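For very small parameters, Algorithm 1 can also be run end to end. In the following Python sketch (ours), a brute-force enumeration of the weight-$w$ dual codewords stands in for \texttt{ParityCheckComputation}; this is of course only feasible for toy lengths, and with such small parameters the vote is noisy, so a few positions may be misdecoded.
\begin{verbatim}
# Sketch: toy run of Algorithm 1 (DecoStat). Brute force stands in
# for ParityCheckComputation; feasible only for tiny n.
import itertools
import numpy as np
from math import comb

def biases(n, t, w):
    d = comb(n - 1, w - 1)
    q1 = sum(comb(t-1, j)*comb(n-t, w-1-j) for j in range(0, w, 2))/d
    q0 = sum(comb(t, j)*comb(n-t-1, w-1-j) for j in range(1, w, 2))/d
    return q0 - 0.5, q1 - 0.5

def deco_stat(G, y, t, w):
    k, n = G.shape
    e0, e1 = biases(n, t, w)
    s = 1 if e1 > e0 else -1
    # all dual codewords of weight w (brute force)
    checks = [h for h in
              (np.array([1 if j in c else 0 for j in range(n)])
               for c in itertools.combinations(range(n), w))
              if not (G @ h % 2).any()]
    e_hat = np.zeros(n, dtype=int)
    for i in range(n):
        S_i = [h for h in checks if h[i] == 1]
        V = sum(s * ((h @ y) % 2) for h in S_i)
        e_hat[i] = 0 if V < s*len(S_i)*(1 + e0 + e1)/2 else 1
    return e_hat

rng = np.random.default_rng(0)
n, k, t, w = 16, 8, 2, 6
G = rng.integers(0, 2, size=(k, n))
e = np.zeros(n, dtype=int)
e[rng.choice(n, t, replace=False)] = 1
y = (rng.integers(0, 2, size=k) @ G + e) % 2
print("estimated error:", deco_stat(G, y, t, w))
print("true error     :", e)
\end{verbatim}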
Clearly statistical decoding complexity is given by
\begin{itemize}
\item When the ${\mathscr S}_{i}$'s are already stored and computed: $\tilde{O} \left( P_{w} \right)$;
\item When the ${\mathscr S}_{i}$'s have to be computed: $\tilde{O}\Big( P_{w} + |\emph{\texttt{PC}$_{w}$}| \Big) $
where $|\emph{\texttt{PC}$_{w}$}|$ stands for the complexity of the call \texttt{ParityCheckComputation}$_{w}$.
\end{itemize}
As explained in the introduction, our goal is to give the asymptotic complexity of statistical decoding. For this purpose we introduce the following notation:
\begin{nota}$ $
$\cdot$ $\omega \mathop{=}\limits^{\triangle} \frac{w}{n}$;
$\cdot$ $\tau \mathop{=}\limits^{\triangle} \frac{t}{n}$.
\end{nota}
The following two quantities will be the central objects of our study.
\begin{definition}[Asymptotic complexity of statistical decoding]
We define the asymptotic complexity of statistical decoding when the ${\mathscr S}_i$'s are already computed by
$$
\pi(\omega,\tau) \mathop{=}\limits^{\triangle} \varliminf_{n \to + \infty} \frac{1}{n} \log_{2} P_{w}
$$
whereas the asymptotic complexity of the complete algorithm of statistical decoding (including the computation of
the parity-check equations) is defined by
$$
\pi^{complete}(\omega,\tau) \mathop{=}\limits^{\triangle} \varliminf_{n \to + \infty} \frac{1}{n} \max
\Big( \log_{2} P_{w}, \log_{2}|\text{\upshape{\texttt{ParityCheckComputation}}}_{w}| \Big).
$$
\end{definition}
\begin{rem}
One could wonder why these quantities are defined as infimum limits and not directly as limits. This is due to the fact that in certain regions
of error weights and parity-check weights the asymptotic bias may from time to time become much smaller than it typically is. This bias is indeed proportional
to values taken by a Krawtchouk polynomial, and for certain error weights and parity-check weights we may be close to a zero of the
relevant Krawtchouk polynomial (this corresponds to the second case of Theorem \ref{th:expansion}).
\end{rem}
We are looking for explicit formulas for $\pi(\omega,\tau)$ and $\pi^{complete}(\omega,\tau)$. The second quantity depends
on the algorithm which is used. We will come back to this issue in
Subsection \ref{fram}.
For our purpose we will use Krawtchouk polynomials and asymptotic expansions for them coming from \cite{IS98}.
Let $m$ be a positive integer, we recall that the Krawtchouk polynomial of degree $v$ and order $m$, $p_v^m(X)$ is defined for $v \in \{0,\cdots,m\}$ by:
\[ p_{v}^{m}(X) = \frac{(-1)^{v}}{2^{v}} \sum_{j=0}^{v} (-1)^{j} \binom{X}{j} \binom{m-X}{v-j} \quad \mbox{where, } \binom{X}{j} = \frac{1}{j!}\left( X (X-1) \cdots (X-j+1) \right) \]
These Krawtchouk polynomials are readily related to our biases.
Namely, observing that $\sum_{j=0}^{w-1} \binom{t-1}{j} \binom{n-t}{w-1-j} = \binom{n-1}{w-1}$ (a Vandermonde identity), we can recast the following
evaluation of a Krawtchouk polynomial as
\begin{eqnarray}
- \frac{(-2)^{w-2}}{\binom{n-1}{w-1}} p_{w-1}^{n-1}(t-1) &= & \frac{\sum_{j=0}^{w-1} (-1)^{j} \binom{t-1}{j} \binom{n-t}{w-1-j}}{2 \binom{n-1}{w-1}}
\nonumber\\
& = & \frac{\sum_{\substack{j=0 \\ j \text{ even}}}^{w-1} \binom{t-1}{j} \binom{n-t}{w-1-j}
- \sum_{\substack{j=1 \\ j \text{ odd}}}^{w-1} \binom{t-1}{j} \binom{n-t}{w-1-j} }{2 \binom{n-1}{w-1}} \nonumber\\
& = & \frac{2 \sum_{ \substack{j=0 \\ j \text{ even}}}^{w-1} \binom{t-1}{j} \binom{n-t}{w-1-j}
- \binom{n-1}{w-1} }{2\binom{n-1}{w-1}}\nonumber\\
& = & \varepsilon_1 \label{eq:epsilon1}
\end{eqnarray}
We have a similar computation for $\varepsilon_0$
\begin{eqnarray}
\frac{(-2)^{w-2}}{\binom{n-1}{w-1}} p_{w-1}^{n-1}(t) &= & - \frac{\sum_{j=0}^{w-1} (-1)^{j} \binom{t}{j} \binom{n-1-t}{w-1-j}}{2 \binom{n-1}{w-1}}
\nonumber\\
& = &- \frac{\sum_{\substack{j=0 \\ j \text{ even}}}^{w-1} \binom{t}{j} \binom{n-1-t}{w-1-j}
- \sum_{\substack{j=1 \\ j \text{ odd}}}^{w-1} \binom{t}{j} \binom{n-1-t}{w-1-j} }{2 \binom{n-1}{w-1}} \nonumber \\
& = &- \frac{\binom{n-1}{w-1}- 2 \sum_{ \substack{j=0 \\ j \text{ odd}}}^{w-1} \binom{t}{j} \binom{n-1-t}{w-1-j}
}{2\binom{n-1}{w-1}} \nonumber\\
& = & \varepsilon_0 \label{eq:epsilon0}
\end{eqnarray}
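Both identities are exact and can be checked mechanically. The following Python sketch (ours; the parameters are arbitrary) verifies \eqref{eq:epsilon1} and \eqref{eq:epsilon0} against the direct computation of the biases:
\begin{verbatim}
# Sketch: check (eq:epsilon1) and (eq:epsilon0) numerically.
from math import comb
from fractions import Fraction

def kraw(v, m, s):
    # p_v^m(s) for an integer s, following the definition above
    return Fraction((-1)**v, 2**v) * sum(
        (-1)**j * comb(s, j) * comb(m - s, v - j) for j in range(v + 1))

n, t, w = 40, 8, 10
d = comb(n - 1, w - 1)
eps1 = Fraction(sum(comb(t-1, j)*comb(n-t, w-1-j)
                    for j in range(0, w, 2)), d) - Fraction(1, 2)
eps0 = Fraction(sum(comb(t, j)*comb(n-t-1, w-1-j)
                    for j in range(1, w, 2)), d) - Fraction(1, 2)
assert eps1 == -Fraction((-2)**(w-2), d) * kraw(w-1, n-1, t-1)
assert eps0 == Fraction((-2)**(w-2), d) * kraw(w-1, n-1, t)
print("Krawtchouk identities verified")
\end{verbatim}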
Let us recall Theorem 3.1 in \cite{IS98}.
\begin{theorem}[{\cite[Th. 3.1]{IS98}}]\label{th:expansion}
Let $m,v$ and $s$ be three positive integers.
We set $\nu \mathop{=}\limits^{\triangle} \frac{v}{m}, \alpha \mathop{=}\limits^{\triangle} \frac{1}{\nu}$ and $\sigma = \frac{s}{m }$.
We assume $\alpha \geq 2$.
Let $$p(z) = \log_2 z - \frac{\sigma}{\nu} \log_2(1+z) - \left(\alpha - \frac{\sigma}{\nu} \right) \log_2(1-z).$$
$p'(z)=0$ has two solutions $x_1$ and $x_2$ which are the two roots of the equation
$
(\alpha - 1) X^{2} + (\alpha - 2\frac{\sigma}{\nu}) X + 1 =0
$.
Let $D \mathop{=}\limits^{\triangle} \left(\alpha- 2 \frac{\sigma}{\nu} \right)^2-4(\alpha-1)$ and
$\Delta \mathop{=}\limits^{\triangle} \alpha - \frac{2\sigma}{\nu}$. The two roots are equal to $\frac{- \Delta \pm \sqrt{D}}{2 (\alpha-1)}$ and $x_1$ is defined to be
root $ \frac{- \Delta + \sqrt{D}}{2 (\alpha-1)}$.
There are two cases to consider
\begin{itemize}
\item
In the case $\frac{\sigma}{\nu} \in ( 0,\alpha/2 - \sqrt{\alpha- 1})$, $D$ is positive, $x_1$ is a real negative number and we can write
\begin{equation}
\label{eq:real}
p_{v}^{m}(s) = Q_{\sigma,\nu}(v) 2^{-(p(x_1)+1) v}
\end{equation}
where $Q_{\sigma,\nu}(v) \mathop{=}\limits^{\triangle} -\sqrt{\frac{1-r^2}{2 \pi r D v}}( 1 + O(v^{-1/2}))$
and $r\mathop{=}\limits^{\triangle} -x_1$.
\item In the case $\frac{\sigma}{\nu} \in ( \alpha/2 - \sqrt{\alpha- 1},\alpha/2)$, $D$ is negative, $x_1$ is a complex number
and we have
\begin{equation}
\label{eq:complex}
p_{v}^{m}(s) = R_{\sigma,\nu}(v) \Im\left( \frac{2^{-(p(x_1)+1) v}}{x_1 \sqrt{2p"(x_1)}} (1+ \delta(v)) \right)
\end{equation}
where $\Im(z)$ denotes the imaginary part of the complex number $z$,
$\delta(v)$ denotes a function which is $o(1)$ uniformly in $v$,
and
$
R_{\sigma,\nu}(v) \mathop{=}\limits^{\triangle} \frac{1+O(v^{-1/2})}{\sqrt{\pi v}}
$.
\end{itemize}
The asymptotic formulas hold uniformly on the compact subsets of the corresponding open intervals.
\end{theorem}
\begin{remark}
Note that strictly speaking \eqref{eq:real} is incorrectly stated in \cite[Th. 3.1]{IS98}. The problem is that (3.20) is incorrect
in \cite{IS98}, since both $p"(-r_1)$ and $p^{(3)}(-r_1)$ are negative and taking a square root of these expressions leads to a purely
imaginary number in (3.20). This can be easily fixed since the expression which is just above (3.20) is correct and it just remains
to take the imaginary part correctly to derive \eqref{eq:real}.
\end{remark}
It will be helpful to use the following notation from now on.
\begin{nota}
\begin{eqnarray*}
m & \mathop{=}\limits^{\triangle} & n-1 \\
v & \mathop{=}\limits^{\triangle} & w-1\\
\nu & \mathop{=}\limits^{\triangle} & \frac{v}{m}\\
\alpha & \mathop{=}\limits^{\triangle} & \frac{1}{\nu}\\
\sigma_0 & \mathop{=}\limits^{\triangle} & \frac{t}{m}\\
\sigma_1 & \mathop{=}\limits^{\triangle} & \frac{t-1}{m}
\end{eqnarray*}
and for $i \in \{0,1\}$ we define the following quantities
\begin{eqnarray*}
p_i(z) & \mathop{=}\limits^{\triangle} & \log_2 z - \frac{\sigma_i}{\nu} \log(1+z) - \left(\alpha - \frac{\sigma_i}{\nu} \right) \log_2(1-z) \\
\Delta_i &\mathop{=}\limits^{\triangle} &\alpha - \frac{2\sigma_i}{\nu} \\
D_i &\mathop{=}\limits^{\triangle} &\left(\alpha- 2 \frac{\sigma_i}{\nu} \right)^2-4(\alpha-1) \\
z_i &\mathop{=}\limits^{\triangle} &\frac{- \Delta_i + \sqrt{D_i}}{2 (\alpha-1)}
\end{eqnarray*}
\end{nota}
We are now going to use these asymptotic expansions to derive explicit formulas for $\pi(\omega,\tau)$.
We start with the following lemma.
\begin{lemma}\label{lem:real}
Assume $\alpha \geq 2$ and $\frac{\sigma_i}{\nu} \in ( 0, \alpha/2 - \sqrt{\alpha -1} )$ for $i \in \{0,1\}$. We have
\[ \frac{\varepsilon_{0}}{\varepsilon_{1}} = - \frac{1+z_1}{1-z_1} \left(1 + O(w^{-1/2})\right). \]
\end{lemma}
\begin{proof}
From \eqref{eq:epsilon1} and \eqref{eq:epsilon0} we have
\begin{equation}\label{eq:ratio}
\frac{\varepsilon_0}{\varepsilon_1} = - \frac{p_{w-1}^{n-1}(t)}{p_{w-1}^{n-1}(t-1)}
\end{equation}
By using Theorem \ref{th:expansion} we obtain when plugging the asymptotic expansions of the Krawtchouk polynomials into
\eqref{eq:ratio}
\begin{eqnarray}
\frac{\varepsilon_0}{\varepsilon_1}& =& - \frac{Q_{\sigma_0,\nu}(v) 2^{-p_0(z_0) v}}{Q_{\sigma_1,\nu}(v) 2^{-p_1(z_1) v}} \nonumber\\
& =& - \frac{Q_{\sigma_0,\nu}(v)}{Q_{\sigma_1,\nu}(v) }2^{(p_1(z_1)-p_0(z_0))v} \label{eq:ratio0}
\end{eqnarray}
We clearly have $\sigma_1 = \sigma_0 - \frac{1}{m}$ and $z_1 = z_0 + O\left( \frac{1}{m} \right)$ and
therefore from the particular form of $Q_{\sigma_i,\nu}(v)$ we deduce that
\begin{equation}
\frac{Q_{\sigma_0,\nu}(v)}{Q_{\sigma_1,\nu}(v) } = 1+ O(v^{-1/2}).
\end{equation}
We observe now that
\begin{eqnarray}
\frac{\sigma_1}{\nu} - \frac{\sigma_0}{\nu}& = & \frac{t-1}{v} - \frac{t}{v}\\
& = & - \frac{1}{v}
\end{eqnarray}
and therefore
\begin{eqnarray}
(p_1(z_1) - p_0(z_0))v\!\!\! & = & \!\!\!\left( \log_{2}(z_1) - \frac{\sigma_1}{\nu} \log_{2}(1+z_1) - (\alpha - \frac{\sigma_1}{\nu}) \log_{2}(1-z_1) \right. \nonumber\\
& & \left. - \log_{2}(z_0) + \frac{\sigma_0}{\nu} \log_{2}(1+z_0) + (\alpha - \frac{\sigma_0}{\nu}) \log_{2}(1-z_0) \right) v \nonumber \\
& = &\!\!\! \left( \log_2 \frac{z_1}{z_0} - \frac{\sigma_0}{\nu} \log_{2}\frac{1+z_1}{1+z_0} - (\alpha - \frac{\sigma_0}{\nu}) \log_{2}\frac{1-z_1}{1-z_0}
+ \frac{1}{v} \log_2(1+z_1) - \frac{1}{v} \log_2(1-z_1) \right) v \nonumber\\
& = & \!\!\!\left( \log_2 \frac{z_1}{z_0} - \frac{\sigma_0}{\nu} \log_{2}\frac{1+z_1}{1+z_0} - (\alpha - \frac{\sigma_0}{\nu}) \log_{2}\frac{1-z_1}{1-z_0}
\right) v + \log_2 \frac{1+z_1}{1-z_1} \label{eq:end}
\end{eqnarray}
It is insightful to express the term
$ \log_2 \frac{z_1}{z_0} - \frac{\sigma_0}{\nu} \log_{2}\frac{1+z_1}{1+z_0} - (\alpha - \frac{\sigma_0}{\nu}) \log_{2}\frac{1-z_1}{1-z_0}
$
as
\begin{eqnarray*}
\log_2 \frac{z_1}{z_0} - \frac{\sigma_0}{\nu} \log_{2}\frac{1+z_1}{1+z_0} - (\alpha - \frac{\sigma_0}{\nu}) \log_{2}\frac{1-z_1}{1-z_0}
& =& p_0(z_1) - p_0(z_0)
\end{eqnarray*}
The point is that $p_0'(z_0)=0$ and $z_1 = z_0+ \delta$ where $\delta = O(1/m)$. Therefore
$$
p_0(z_1)-p_0(z_0)=p_0(z_0 + \delta)-p_0(z_0) = O(\delta^2) = O(1/m^2).
$$
Using this in \eqref{eq:end} and then in \eqref{eq:ratio0} implies the lemma.
\end{proof}
From this lemma we can deduce that
\begin{lemma}\label{lem:realcomplete}
Assume $\alpha \geq 2$ and $\frac{\sigma_i}{\nu} \in ( 0, \alpha/2 - \sqrt{\alpha -1} )$ for $i \in \{0,1\}$. We have
\[ \varepsilon_{0} -\varepsilon_{1} = (-1)^v\sqrt{\frac{(1+z_1)(1-\nu)}{(z_1-z_1^2)D_1}}2^{-\left(p_1(z_1)+\frac{H(\nu)}{\nu}\right)v}(1 + O(w^{-1/2})). \]
\end{lemma}
\begin{proof}
We have
\begin{eqnarray}
\varepsilon_{0} -\varepsilon_{1} &=& - \varepsilon_1 \frac{1+z_1}{1-z_1}(1 + O(w^{-1/2})) - \varepsilon_1 \nonumber\\
&= & - \varepsilon_1 \left( \frac{1+z_1}{1-z_1}(1 + O(w^{-1/2})) + 1\right) \nonumber\\
&= & - 2 \varepsilon_1 \left( \frac{1}{1-z_1} + O\left(w^{-1/2}\frac{1+z_1}{1-z_1}\right) \right) \nonumber\\
& = & - \frac{2 \varepsilon_1}{1-z_1}(1 + O(w^{-1/2})) \;\;\text{ (since $-1 \leq z_i \leq 0$)} \nonumber\\
& = & - \frac{(-2)^{v-1}\sqrt{2 \pi v (1-\nu)}}{2^{\frac{vH(\nu)}{\nu}}}Q_{\sigma_1,\nu}(v) 2^{-(p_1(z_1)+1)v} \frac{2}{1-z_1}(1 + O(w^{-1/2}))\label{eq:milieu}\\
& = & \frac{(-2)^{v}\sqrt{2 \pi v (1-\nu)}}{2^{\frac{vH(\nu)}{\nu}}}\sqrt{\frac{1-z_1^2}{-2 \pi z_1 D_1 v }} 2^{-(p_1(z_1)+1)v} \frac{2}{1-z_1}(1 + O(w^{-1/2})) \nonumber \\
& = & (-1)^v\sqrt{\frac{(1+z_1)(1-\nu)}{(z_1-z_1^2)D_1}}2^{-\left(p_1(z_1)+\frac{H(\nu)}{\nu}\right)v}(1 + O(w^{-1/2}))
\end{eqnarray}
where we used in \eqref{eq:milieu}
\begin{eqnarray*}
\binom{m}{v} & = & \frac{2^{mH(\nu)}}{\sqrt{2 \pi m \nu (1-\nu)}} \\
& = & \frac{2^{\frac{vH(\nu)}{\nu}}}{\sqrt{2 \pi v (1-\nu)}}
\end{eqnarray*}
\end{proof}
The second case, corresponding to $\frac{\sigma_i}{\nu} \in ( \alpha/2 - \sqrt{\alpha -1},\alpha/2 )$, is handled by the following lemma
(note that it is precisely the ``sin'' term that appears in it that led us to define $\pi(\omega,\tau)$ as an infimum limit and not as a limit).
\begin{lemma}\label{lem:complex}
When $\frac{\sigma_i}{\nu} \in ( \alpha/2 - \sqrt{\alpha -1},\alpha/2 )$ for $i \in \{0,1\}$ we have
\[ \varepsilon_1 - \varepsilon_0 =
\frac{(-1)^v \sqrt{1-\nu}}{\left|(z_0-z_0^2)\sqrt{p_0"(z_0)}\right|} 2^{-v \left( \Re(p_0(z_0))+ \frac{H(\nu)}{\nu} \right)}
\sin \left( v \theta - \theta_0 +o(1) \right) (1+o(1))\]
where
$\theta \mathop{=}\limits^{\triangle} \arg \left(2^{-p_0(z_0)}\right)$ and $\theta_0 \mathop{=}\limits^{\triangle} \arg \left( (z_0-z_0^2)\sqrt{p_0"(z_0)} \right)$.
\end{lemma}
\begin{proof}
The proof of this lemma is very similar to the proof of Lemma \ref{lem:real}.
From \eqref{eq:epsilon1} and \eqref{eq:epsilon0} we have
\begin{equation}\label{eq:ratiodiff}
\varepsilon_1 - \varepsilon_0 = - \frac{(-2)^{w-2}}{\binom{n-1}{w-1}} \left(p_{w-1}^{n-1}(t) + p_{w-1}^{n-1}(t-1) \right)
\end{equation}
By plugging the asymptotic expansion of Krawtchouk polynomials
given in Theorem \ref{th:expansion} into
\eqref{eq:ratiodiff} we obtain
\begin{eqnarray*}
\varepsilon_1 - \varepsilon_0 & =& - \frac{(-2)^{w-2}}{\binom{n-1}{w-1}} \left(
R_{\sigma_1,\nu}(v) \Im\left( \frac{2^{-(p_1(z_1)+1) v}}{z_1 \sqrt{2p_1"(z_1)}} (1+ \delta_1(v)) \right)
+ R_{\sigma_0,\nu}(v) \Im\left( \frac{2^{-(p_0(z_0)+1) v}}{z_0 \sqrt{2p_0"(z_0)}} (1+ \delta_0(v)) \right)\right)
\end{eqnarray*}
where the $\delta_i$'s are functions which are of order $o(1)$ uniformly in $v$.
We clearly have $\sigma_1 = \sigma_0 - \frac{1}{m}$ and $z_1 = z_0 + O\left( \frac{1}{m} \right)$ and
therefore from the particular form of $R_{\sigma_i,\nu}(v)$ we deduce that
\begin{eqnarray*}
R_{\sigma_1,\nu}(v) &= &R_{\sigma_0,\nu}(v) \left(1+ O(v^{-1/2})\right)\\
\frac{1}{z_1 \sqrt{2p_1"(z_1)}} & = & \frac{1}{z_0 \sqrt{2p_0"(z_0)}} \left(1 + O\left( \frac{1}{m} \right) \right)
\end{eqnarray*}
From this we deduce that
\begin{eqnarray}
\varepsilon_1 - \varepsilon_0 & =& \frac{(-1)^{v}}{2 \binom{n-1}{w-1}} R_{\sigma_0,\nu}(v) \left(
\Im\left( \frac{2^{-p_1(z_1) v}}{z_0 \sqrt{2p_0"(z_0)}} (1+ o(1)) \right)
+ \Im\left( \frac{2^{-p_0(z_0) v}}{z_0 \sqrt{2p_0"(z_0)}} (1+ \delta_0(v)) \right)\right) \nonumber\\
&= & \frac{(-1)^{v}}{2 \binom{n-1}{w-1}} R_{\sigma_0,\nu}(v) \Im \left(
\left( \frac{2^{-p_0(z_0) v}}{z_0 \sqrt{2p_0"(z_0)}} \left(1+ \delta_0(v)+ 2^{(p_0(z_0)-p_1(z_1)) v}(1+o(1))\right) \right)
\right) \label{eq:e0_e1}
\end{eqnarray}
We now observe that
\begin{eqnarray}
p_0(z_0)-p_1(z_1) & = & \log_{2}(z_0) - \frac{\sigma_0}{\nu} \log_{2}(1+z_0) - (\alpha - \frac{\sigma_0}{\nu}) \log_{2}(1-z_0) \nonumber \\
& & - \log_{2}(z_1) + \frac{\sigma_1}{\nu} \log_{2}(1+z_1) + (\alpha - \frac{\sigma_1}{\nu}) \log_{2}(1-z_1) \nonumber \\
&= & \log_2 \frac{z_0}{z_1} - \frac{\sigma_0}{\nu} \log_{2}\frac{1+z_0}{1+z_1} - (\alpha - \frac{\sigma_0}{\nu}) \log_{2}\frac{1-z_0}{1-z_1}
+ \left( \frac{\sigma_1}{\nu} -\frac{\sigma_0}{\nu}\right)\log_2\frac{1+z_1}{1-z_1} \nonumber \\
& = & \log_2 \frac{z_0}{z_1} - \frac{\sigma_0}{\nu} \log_{2}\frac{1+z_0}{1+z_1} - (\alpha - \frac{\sigma_0}{\nu}) \log_{2}\frac{1-z_0}{1-z_1}
- \frac{1}{v} \log_2\frac{1+z_1}{1-z_1} \label{eq:moyen}
\end{eqnarray}
where \eqref{eq:moyen} follows from the observation
\begin{eqnarray*}
\frac{\sigma_1}{\nu} - \frac{\sigma_0}{\nu}& = & \frac{t-1}{v} - \frac{t}{v}\\
& = & - \frac{1}{v}
\end{eqnarray*}
Recall that $z_1 = z_0 + \delta$ where $\delta = O(1/m)$ and that
\begin{eqnarray*}
\log_2 \frac{z_0}{z_1} - \frac{\sigma_0}{\nu} \log_{2}\frac{1+z_0}{1+z_1} - (\alpha - \frac{\sigma_0}{\nu}) \log_{2}\frac{1-z_0}{1-z_1}& = &
p_0(z_0)-p_0(z_1) \\
& = & p_0(z_0)-p_0(z_0+\delta)
\end{eqnarray*}
The point is that $p_0'(z_0)=0$ and therefore
$$
p_0(z_0)-p_0(z_0+\delta) = O(\delta^2) = O(1/m^2).
$$
Using this in \eqref{eq:moyen} and then multiplying by $v$
implies
\begin{equation}
(p_0(z_0)-p_1(z_1))v = - \log_2 \frac{1-z_1}{1+z_1}+O(1/v)= - \log_2 \frac{1-z_0}{1+z_0}+O(1/v)
\end{equation}
We can substitute for this expression in \eqref{eq:e0_e1} and obtain
\begin{eqnarray}
\varepsilon_1 - \varepsilon_0
&= & \frac{(-1)^{v}}{2 \binom{n-1}{w-1}} R_{\sigma_0,\nu}(v) \Im \left(
\left( \frac{2^{-p_0(z_0) v}}{z_0 \sqrt{2p_0"(z_0)}} \left(1 + \frac{1+z_0}{1-z_0} + o(1) \right) \right)
\right) \nonumber \\
& = &
\frac{(-1)^{v}}{\binom{m}{v}} R_{\sigma_0,\nu}(v) \Im \left(
\left( \frac{2^{-p_0(z_0) v}}{(z_0-z_0^2) \sqrt{2p_0"(z_0)}} (1+ o(1)) \right)
\right) \label{eq:ouf}
\end{eqnarray}
Recall that
\begin{eqnarray*}
R_{\sigma,\nu}(v) & = & \frac{1+o(1)}{\sqrt{\pi v}}\\
\binom{m}{v} & = & \frac{2^{mH(\nu)}}{\sqrt{2 \pi m \nu (1-\nu)}} \\
& = & \frac{2^{\frac{vH(\nu)}{\nu}}}{\sqrt{2 \pi v (1-\nu)}}
\end{eqnarray*}
By using this in \eqref{eq:ouf} we obtain
\begin{eqnarray}
\varepsilon_1 - \varepsilon_0
& = &
\frac{(-1)^v \sqrt{2 \pi v (1-\nu)}}{\sqrt{ \pi v}\left|(z_0-z_0^2)\sqrt{2p_0"(z_0)}\right|} 2^{-v \left( \Re(p_0(z_0))+ \frac{H(\nu)}{\nu} \right)}
\sin \left( v \theta - \theta_0 \right) (1+o(1)) \\
& = & \frac{(-1)^v \sqrt{1-\nu}}{\left|(z_0-z_0^2)\sqrt{p_0"(z_0)}\right|} 2^{-v \left( \Re(p_0(z_0))+ \frac{H(\nu)}{\nu} \right)}
\sin \left( v \theta - \theta_0 +o(1)\right) (1+o(1))
\end{eqnarray}
\end{proof}
From Lemmas \ref{lem:realcomplete} and \ref{lem:complex} we deduce immediately that
\begin{corollary}
\label{cor:biasSDecoding}
We set $\gamma = \frac{1}{\omega}$,
\begin{itemize}
\item If $\frac{\tau}{\omega} \in ( 0,\gamma/2 - \sqrt{\gamma - 1})$:
\[ \pi(\omega,\tau) = 2 \left( \omega \left( \log_{2}(r) - \frac{\tau}{\omega} \log_{2}(1-r) - (\gamma - \frac{\tau}{\omega}) \log_{2}(1+r) \right) +H(\omega) \right) \]
\[ \mbox{where } r \mbox{ is the smallest root of } (\gamma - 1) X^{2} - (\gamma - 2\frac{\tau}{\omega}) X + 1 = 0 \]
\item If $\frac{\tau}{\omega} \in ( \gamma/2-\sqrt{\gamma-1},\gamma/2 )$:
\[ \pi(\omega,\tau) = 2 \left( \omega \Re \left( \log_{2}(z) -\frac{\tau}{\omega} \log_{2}(1+z) - (\gamma - \frac{\tau}{\omega}) \log_{2}(1-z) \right) + H\left( \omega \right)\right) \]
\[ \mbox{ where } z = r e^{{\text{\bf i}} \varphi} \mbox{ with } r = \frac{1}{\sqrt{\gamma-1}} \mbox{ and } \cos(\varphi) = \frac{ 2\frac{\tau}{\omega} - \gamma}{2\sqrt{\gamma - 1}} \]
\end{itemize}
\end{corollary}
\begin{rem}
These asymptotic formulas turn out to be accurate already in the ``cryptographic range'', as shown in Figure \ref{fig:numBias}.
\end{rem}
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.6]{biasNum.png}
\caption{Comparison of the asymptotic and numeric exponents for $\tau = H^{-1}(1-R)$. \label{fig:numBias}}
\end{figure}
Amazingly enough these formulas can be simplified a lot in the second case of the corollary as shown by the following theorem.
\begin{theorem}[Asymptotic complexity of statistical decoding]
\label{biasSDecoding}
\mbox{ }\\
\begin{itemize}
\item If $\tau \in \left( 0,\frac{1}{2} - \sqrt{\omega - \omega^2}\right)$: $\pi(\omega,\tau) = 2 \omega \log_{2}(r) - 2 \tau \log_{2}(1-r) - 2(1- \tau) \log_{2}(1+r) + 2H(\omega)$ where $r$ is the smallest root of $(1- \omega) X^{2} - (1 - 2\tau) X + \omega =0$.
\item If $\tau \in \left( \frac{1}{2} - \sqrt{\omega - \omega^2},\frac{1}{2} \right)$: $\pi(\omega,\tau) = H(\omega)+H(\tau) -1.$
\end{itemize}
\end{theorem}
\begin{proof}
The first case is just a slight rewriting. To prove the formula corresponding to the second case let us recall that
the $z$ that appears in the second case of Corollary \ref{cor:biasSDecoding} satisfies $p'(z)=0$ where
$$
p(z) \mathop{=}\limits^{\triangle} \omega \log_2 z - \tau \log_2 (1+z) - (1 - \tau) \log_2(1-z).
$$
Let
\begin{eqnarray*}
f(\omega,\tau) &\mathop{=}\limits^{\triangle} &2 \left( \omega \Re \left( \log_{2}(z) - \frac{\tau}{\omega} \log_{2}(1+z) - (\gamma - \frac{\tau}{\omega}) \log_{2}(1-z) \right) +H(\omega)
\right)\\
& = & 2 \Re(p(z)) + 2H(\omega).
\end{eqnarray*}
Let us first differentiate this expression with respect to
$\omega$:
\begin{eqnarray}
\frac{\partial f(\omega,\tau)}{\partial \omega} & = & 2 \Re(p'(z))\frac{\partial z}{\partial \omega} + 2 \Re(\log_2(z)) + 2\log_2 \frac{1-\omega}{\omega} \nonumber \\
& = & 2 \Re(\log_2(z)) + 2 \log_2 \left(\frac{1-\omega}{\omega}\right) \label{eq:derivative1}
\end{eqnarray}
Since $z = r e^{{\text{\bf i}} \varphi}$ with $r = \frac{1}{\sqrt{\gamma-1}}$, we deduce that
$$
2 \Re(\log_2(z)) = 2\log_2 r = 2 \log_2\left(\frac{1}{\sqrt{\gamma-1}}\right) = \log_2\left( \frac{1}{1/\omega-1}\right) = \log_2\left(\frac{\omega}{1-\omega}\right).
$$
Substituting this expression for $2 \Re(\log_2(z))$ in \eqref{eq:derivative1} yields
\begin{equation}\label{eq:partial_omega}
\frac{\partial f(\omega,\tau)}{\partial \omega} = \log_2\left(\frac{\omega}{1-\omega}\right) + 2 \log_2 \left(\frac{1-\omega}{\omega} \right)= \log_2 \left(\frac{1-\omega}{\omega}\right) = H'(\omega).
\end{equation}
We continue the proof by differentiating now $f(\omega,\tau)$ with respect to $\tau$:
\begin{eqnarray}
\frac{\partial f(\omega,\tau)}{\partial \tau} & = & 2 \Re(p'(z))\frac{\partial z}{\partial \tau} - 2 \Re\left(\log_2(1+z) - \log_2(1-z) \right) \nonumber \\
& = & - 2 \Re\left(\log_2\left(\frac{1+z}{1-z}\right)\right) \nonumber
\end{eqnarray}
Recall that $z$ is also given by one of the two roots of $(1-\omega)X^2 + (1-2\tau) X + \omega = 0$ (see Theorem \ref{th:expansion} for the root which is actually chosen)
and therefore
$$
z = \frac{2\tau -1 + {\text{\bf i}} \sqrt{4 \omega(1-\omega)-(1-2\tau)^2}}{2(1-\omega)}
$$
From this we deduce that
\begin{eqnarray*}
1 + z & = & \frac{1 -2\omega + 2\tau + {\text{\bf i}} \sqrt{4 \omega(1-\omega)-(1-2\tau)^2}}{2(1-\omega)}\\
1-z & = & \frac{3 -2\omega - 2\tau - {\text{\bf i}} \sqrt{4 \omega(1-\omega)-(1-2\tau)^2}}{2(1-\omega)}\\
\end{eqnarray*}
\begin{eqnarray*}
- 2 \Re \left(\log_2\left(\frac{1+z}{1-z}\right)\right) & = & - 2 \Re \left(\log_2\left(\frac{1 -2\omega + 2\tau + {\text{\bf i}} \sqrt{4 \omega(1-\omega)-(1-2\tau)^2}}{3 -2\omega - 2\tau - {\text{\bf i}} \sqrt{4 \omega(1-\omega)-(1-2\tau)^2}}\right)\right)\\
& = &- 2 \log_2 \left| \frac{1 -2\omega + 2\tau + {\text{\bf i}} \sqrt{4 \omega(1-\omega)-(1-2\tau)^2}}{3 -2\omega - 2\tau - {\text{\bf i}} \sqrt{4 \omega(1-\omega)-(1-2\tau)^2}}\right|\\
& = &- \log_2 \frac{(1 -2\omega + 2\tau)^2+4 \omega(1-\omega)-(1-2\tau)^2 }{(3 -2\omega - 2\tau)^2+4 \omega(1-\omega)-(1-2\tau)^2 }\\
& = & -\log_2 \frac{1+4\omega^2+4\tau^2-4\omega+4\tau-8\omega \tau+4\omega-4\omega^2-1-4\tau^2+4\tau }{9+4\omega^2+4\tau^2-12\omega-12\tau+ 8\omega \tau+4\omega-4\omega^2-1-4\tau^2+4\tau}\\
& = & -\log_2 \frac{8\tau -8 \omega \tau}{8-8\omega-8\tau+8 \omega \tau}\\
& = & -\log_2 \frac{8\tau(1-\omega)}{8(1-\omega)(1-\tau)}\\
& = & - \log_2 \frac{\tau}{1-\tau}\\
& = & H'(\tau)
\end{eqnarray*}
These two results on the derivative imply that
$$
f(\omega,\tau) = H(\omega) + H(\tau) + C
$$
for some constant $C$ which is easily seen to be equal to $-1$ by letting $\omega$ go to $0$ and $\tau$ go to $\frac{1}{2}$ in $f(\omega,\tau)$.
\end{proof}
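The collapse of the complex-root formula of Corollary \ref{cor:biasSDecoding} onto $H(\omega)+H(\tau)-1$ can also be observed numerically; the following Python sketch (ours; the sample points are arbitrary points of the second region) evaluates both sides:
\begin{verbatim}
# Sketch: check that the complex-root formula of the corollary
# equals H(omega) + H(tau) - 1 in the second region.
import cmath, math

def H(x):
    return -x*math.log2(x) - (1 - x)*math.log2(1 - x)

def pi_complex(omega, tau):
    g = 1/omega
    r = 1/math.sqrt(g - 1)
    phi = math.acos((2*tau/omega - g) / (2*math.sqrt(g - 1)))
    z = cmath.rect(r, phi)
    p = (cmath.log(z) - (tau/omega)*cmath.log(1 + z)
         - (g - tau/omega)*cmath.log(1 - z)) / math.log(2)
    return 2*(omega*p.real + H(omega))

for omega, tau in ((0.25, 0.3), (0.1, 0.45), (0.4, 0.35)):
    print(pi_complex(omega, tau), H(omega) + H(tau) - 1)
\end{verbatim}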
\section{The binomial model}
\cite{FKI07} introduced another model for the parity-check equations used in statistical decoding.
Instead of assuming that they are chosen randomly of a given weight $w$, the authors of \cite{FKI07} assume that they are random binary words of length $n$
where the entries are chosen independently of each other according to a Bernoulli distribution of parameter $w/n$. In other words, the expected weight
is still $w$ but the weight of the parity-check equation is not fixed anymore and may vary. We will call it the {\em binomial model} of
weight $w$ and length $n$ and refer to our model as the constant weight model of weight $w$.
The binomial model presents the advantage of simplifying significantly
the analysis of statistical decoding. It is easy to analyze the simple statistical decoding algorithm that we consider here and to compute asymptotically the
number of parity-check equations that ensure successful decoding. We will do this in what follows.
But the authors of \cite{FKI07} went further since they were even able to analyze asymptotically
an iterative version of statistical decoding by following some of the ideas of \cite{SV04}.
They showed that
\begin{proposition}[{\cite[Proposition 2.1 p.405]{FKI07}}]
In the binomial model of weight $w$ and length $n$, the number of check sums that are necessary to correct with large enough probability
$t$ errors by using the iterative decoding algorithm of \cite{FKI07} is well estimated by $O(J_{\text{min}})$ with
\[ J_{\text{min}} = \left( \frac{n}{n-2w} \right)^{2(t-1)} = \left( 1- \frac{2w}{n} \right)^{-2(t-1)} \]
where the constant in the ``big O'' depends on the ratio $t/n$.
\end{proposition}
Let us first show that naive statistical decoding performs almost as well when we forget about polynomial factors. In order to compare both models, it makes sense
to introduce some additional notation.
\begin{eqnarray*}
q^{\text{bin}}_0 & = & \mathbb{P}^{\text{bin}} \left( \langle e,h \rangle = 1 |h_i=1\right) \mbox{ when } e_{i} = 0\\
q^{\text{bin}}_1 & = & \mathbb{P}^{\text{bin}} \left( \langle e,h \rangle = 1 |h_i=1\right) \mbox{ when } e_{i} = 1\\
\end{eqnarray*}
where $h$ is a parity-check equation chosen according to the binomial model and the probability is taken over the random choice of $h$
in this model (and $\mathbb{P}^{\text{bin}}$ means that we take the probabilities according to the binomial model).
These quantities do not depend on $i$. It will also be convenient to define
$\varepsilon^{\text{bin}}_0$ and $\varepsilon^{\text{bin}}_1$ as
$$
q^{\text{bin}}_0 = \frac{1}{2} + \varepsilon^{\text{bin}}_0 \;\;; \;\; q^{\text{bin}}_1 = \frac{1}{2} + \varepsilon^{\text{bin}}_1.
$$
The computations of \cite[Sec II. B]{FKI07} show that
\begin{eqnarray*}
q^{\text{bin}}_0 & = & \frac{1 - \left(1-\frac{2w}{n}\right)^t}{2}\\
q^{\text{bin}}_1 & = & \frac{1 + \left(1-\frac{2w}{n}\right)^{t-1}}{2}
\end{eqnarray*}
This implies that
$$
\varepsilon^{\text{bin}}_0 = - \frac{\left(1-\frac{2w}{n}\right)^t}{2} \;\;;\;\; \varepsilon^{\text{bin}}_1 = \frac{\left(1-\frac{2w}{n}\right)^{t-1}}{2}.
$$
It is also convenient in order to distinguish both models to rename the quantities $q_0$, $q_1$, $\varepsilon_0$ and $\varepsilon_1$ that were introduced before by
referring to them as $q^{\text{con}}_0$, $q^{\text{con}}_1$, $\varepsilon^{\text{con}}_0$ and $\varepsilon^{\text{con}}_1$ respectively.
We can perform the same statistical test as before by computing from $m$ parity-check equations $h^1,\dots,h^m$ all involving the bit $i$ we want to decode, the quantity
$$
V_m = \sum_{k=1}^m \sgn(\varepsilon^{\text{bin}}_1 -\varepsilon^{\text{bin}}_0) \langle y,h^{k}\rangle= \sum_{k=1}^m \langle y,h^{k}\rangle.
$$
The expectation of this quantity is $E_b \mathop{=}\limits^{\triangle} m\left( \frac{1}{2} + \varepsilon^{\text{bin}}_b \right)$ depending
on the value $b\in \{0,1\}$ of the bit we want to decode.
We decide that the bit we want to decode is equal to $0$ if $V_m < \frac{E_0+E_1}{2}$ and $1$ otherwise. As before, we observe that by Chernoff's bound we make a wrong decision with probability at most $2 \cdot 2^{-m \frac{(\varepsilon^{\text{bin}}_1- \varepsilon^{\text{bin}}_0)^2}{2 \ln(2)}}$.
This probability can be made to be of order $o(1/n)$ by choosing $m$ as
$m = K \log n \frac{1}{(\varepsilon^{\text{bin}}_1- \varepsilon^{\text{bin}}_0)^2}$ for a suitable constant $K$. In this case, decoding the whole sequence succeeds with probability $1-o(1)$.
In other words, naive statistical decoding succeeds for $m = O\left( \log n \frac{1}{(\varepsilon^{\text{bin}}_1- \varepsilon^{\text{bin}}_0)^2}\right)$.
We may observe now that
\begin{eqnarray*}
\frac{1}{(\varepsilon^{\text{bin}}_1- \varepsilon^{\text{bin}}_0)^2} & = & O\left( (1-2w/n)^{-2(t-1)}\right)\\
& = & O(J_{\text{min}})
\end{eqnarray*}
This means that naive statistical decoding needs only marginally more equations in the binomial model (namely a multiplicative factor of order $O(\log n)$).
To summarize the whole discussion,
the number of parity-checks needed for decoding is
\begin{itemize}
\item with iterative statistical decoding over the binomial model
$$O\left(\frac{1}{(\varepsilon^{\text{bin}}_1- \varepsilon^{\text{bin}}_0)^2}\right),$$
\item with naive statistical decoding over the binomial model
$$O\left(\frac{\log n}{(\varepsilon^{\text{bin}}_1- \varepsilon^{\text{bin}}_0)^2}\right)$$
\item with naive statistical decoding over the constant weight model
$$O\left(\frac{\log n}{(\varepsilon^{\text{con}}_1- \varepsilon^{\text{con}}_0)^2}\right).$$
\end{itemize}
One might wonder now whether there is a difference between both models. It is very tempting to conjecture that both models are very close to each other since the expected weight of the parity-checks is $w$ in both cases. However this is not the
case: we are really in a large-deviations situation, where the bias of some extreme weights takes over the bias corresponding to the typical weight of the parity-check equations.
To illustrate this point, we choose the weight to be $w = \omega n$, the number of errors as $t= \tau n$ for some fixed $\omega$ and $\tau$, and then let $n$ go to infinity.
The normalized exponent\footnote{Here the number of equations is a function of the form
$\tilde{O}\left(2^{\alpha(\omega,\tau)n}\right)$ and we mean here the coefficient $\alpha(\omega,\tau)$.}
of the number of parity-check equations which is needed is
$$
\lim\limits_{n \to +\infty} \frac{1}{n} \log_{2}\left(\frac{1}{(\varepsilon^{\text{bin}}_1 - \varepsilon^{\text{bin}}_0)^{2}}\right) = -2 \tau \log_{2}\left(1- 2 \omega\right)
$$
in the binomial case, whereas
$
\lim\limits_{n \to +\infty} \frac{1}{n} \log_{2}\left(\frac{1}{(\varepsilon^{\text{con}}_1 - \varepsilon^{\text{con}}_0)^{2}}\right)
$
is given by Theorem \ref{biasSDecoding} in the constant weight case and both terms are indeed different in general. One case which is particularly interesting is when
$\tau$ and $\omega$ are chosen as $\tau = H^{-1}(1-R)$ and $\omega = R/2$, where $R$ is the code rate we consider.
This corresponds to the hardest case of syndrome decoding, and parity-check equations of this weight can
be easily obtained, as we will see in Section \ref{sec:naive}.
The two normalized exponents are compared on Figure \ref{fig:KF} as a function of the rate $R$. As we see, there is a huge difference.
The problem with the model chosen in \cite{FKI07} is that it is a very favorable model for statistical decoding. To the best of our knowledge there are no
efficient algorithms for producing such parity-checks when $\omega \leq R/2$.
Note that even if such an algorithm were to exist, appropriately selecting only one weight would not change the exponential complexity of the algorithm
(this will be proved in Section \ref{sec:single}). In other words, in order to study statistical decoding we may restrict ourselves, as we do here, to
considering only one weight and not a whole range of weights.
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.6]{biasApproximation.png}
\caption{Comparison of the normalized exponents with $\tau = H^{-1}(1-R)$ of the number of parity-check equations which are needed in the binomial and the constant weight case. \label{fig:KF}}
\end{figure}
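The comparison of Figure \ref{fig:KF} is easy to reproduce. The sketch below (ours) evaluates both normalized exponents at $\tau = H^{-1}(1-R)$ and $\omega = R/2$; at these parameters the second case of Theorem \ref{biasSDecoding} applies, which the sketch checks explicitly:
\begin{verbatim}
# Sketch: normalized exponents at tau = H^{-1}(1-R), omega = R/2,
# for the binomial and constant-weight models.
import math

def H(x):
    return -x*math.log2(x) - (1 - x)*math.log2(1 - x)

def H_inv(y, lo=1e-12, hi=0.5, it=60):
    # inverse of H on (0, 1/2] by bisection
    for _ in range(it):
        mid = (lo + hi)/2
        lo, hi = (mid, hi) if H(mid) < y else (lo, mid)
    return (lo + hi)/2

for R in (0.2, 0.4, 0.6, 0.8):
    omega, tau = R/2, H_inv(1 - R)
    assert tau > 0.5 - math.sqrt(omega - omega**2)  # second case
    e_bin = -2*tau*math.log2(1 - 2*omega)
    e_con = H(omega) + H(tau) - 1
    print("R = %.1f: binomial %.4f, constant weight %.4f"
          % (R, e_bin, e_con))
\end{verbatim}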
The difference between both formulas is even more apparent when considering the slopes at the origin as shown in Figure \ref{fig:KFS}.
\begin{figure}
\centering
\includegraphics[scale = 0.6]{biasApproximationClose0.png}
\caption{Comparison of the complexities with $\tau = H^{-1}(1-R)$ for rate close to $0$ \label{fig:KFS}}
\end{figure}
However both models get closer when the error weight decreases. For instance when considering a relative error $\tau=H^{-1}(1-R)/2$, we see in Figure \ref{fig:KFDGV2} that the difference between both models
gets significantly smaller. Actually the difference vanishes when the relative error tends to $0$, as shown by Proposition \ref{subcpx}.
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.6]{dgvsur2.png}
\caption{Comparison of the normalized exponents with $\tau = H^{-1}(1-R)/2$ of the number of parity-check equations which are needed
in the binomial and the constant weight case. \label{fig:KFDGV2}}
\end{figure}
\begin{proposition} [Asymptotic complexity of statistical decoding for a sub-linear error weight]\label{subcpx}$ $
\[ \pi(\omega,\tau) \mathop{=}\limits_{\tau \rightarrow 0} -2\tau \log_{2}(1-2\omega) + o(\tau) \]
\end{proposition}
\begin{proof}
As $\tau$ decreases to $0$, we consider for $\pi(\omega,\tau)$ the first formula which is given in Theorem \ref{biasSDecoding}. We have:
\begin{align}
\pi(\omega,\tau) &= 2 \omega \log_{2}(r) - 2 \tau \log_{2}(1-r) - 2(1 - \tau) \log_{2}(1+r) + 2H(\omega) \nonumber \\
&= 2 \omega \log_{2}(r) - 2 \log_{2}(1+r) -2 \tau \log_{2}\left( \frac{1-r}{1+r} \right) + 2 H (\omega) \label{eq:a}
\end{align}
with
\[ r = \frac{1-2\tau - \sqrt{ (1 - 2 \tau)^{2} - 4 \omega (1-\omega)}}{2(1-\omega)} \]
Let us now compute the Taylor expansion of $r$ as $\tau \rightarrow 0$. We start with
\begin{align*}
r&= \frac{1-2\tau - \sqrt{ 1 - 4 \omega (1-\omega) -4\tau + 4\tau^{2}}}{2(1-\omega)} \\
&= \frac{1-2\tau - \sqrt{ (1 - 2 \omega)^{2} - 4 \tau + o(\tau) }}{2(1-\omega)}
\end{align*}
Now using the fact that:
\[ (A^{2} - \varepsilon)^{1/2} \mathop{=}\limits_{\varepsilon \rightarrow 0} A- \frac{\varepsilon}{2A} + o(\varepsilon) \]
we have:
\begin{align*}
r&= \frac{1-2\tau - (1-2\omega) + \frac{2\tau}{1-2\omega} + o(\tau)}{2(1-\omega)} \\
&= \frac{\omega}{1- \omega} + \frac{ \tau \omega}{(1-\omega)(1-2\omega)} +o(\tau)
\end{align*}
And we deduce that:
\[ 1-r = \frac{1-2\omega}{1-\omega}- \frac{ \tau \omega}{(1-\omega)(1-2\omega)} +o(\tau) \]
\[1+r = \frac{1}{1-\omega} + \frac{ \tau \omega}{(1-\omega)(1-2\omega)} +o(\tau) \]
and therefore
\begin{equation}
\label{eq:b}
-2 \tau \log_{2}\left( \frac{1-r}{1+r} \right) \mathop{=}\limits_{\tau \rightarrow 0} -2 \tau \log_{2}(1-2\omega) + o(\tau)
\end{equation}
Now using the fact that:
\[ \log_{2}(A + \varepsilon) \mathop{=}\limits_{\varepsilon \rightarrow 0} \log_{2}(A) + (1/\ln(2)) \left( \frac{\varepsilon}{A} + o(\varepsilon) \right) \]
we have the asymptotic expansions with the logarithms:
\[\log_{2}(r) = \log_{2}\left( \frac{\omega}{1-\omega} \right) + (1/\ln(2)) \left( \frac{\tau}{1 - 2 \omega} + o(\tau) \right) \]
\[ \log_{2}(1+r) = \log_{2}\left( \frac{1}{1-\omega} \right) + (1/\ln(2)) \left( \frac{\tau \omega}{1 - 2 \omega} + o(\tau) \right) \]
So we deduce that:
\begin{align*}
2 \omega \log_{2}(r) - 2 \log_{2}(1+r) &= 2 \omega \log_{2}\left( \frac{\omega}{1-\omega} \right) - 2 \log_{2}\left( \frac{1}{1-\omega} \right) + o(\tau) \\
&= 2\omega \log_{2}(\omega) + 2 (1-\omega) \log_{2}(1-\omega) + o(\tau) \\
&= -2 H(\omega) + o(\tau)
\end{align*}
So by plugging this expression with \eqref{eq:b} in \eqref{eq:a} we have the result.
\end{proof}
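This expansion can be double-checked symbolically. A minimal sympy sketch (ours), with $\omega$ fixed to $1/4$ so that the expected slope is $-2\log_{2}(1-2\omega)=2$:
\begin{verbatim}
# Sketch: expand pi(omega, tau) to first order in tau for omega = 1/4.
import sympy as sp

tau = sp.symbols('tau', positive=True)
w = sp.Rational(1, 4)
r = (1 - 2*tau - sp.sqrt((1 - 2*tau)**2 - 4*w*(1 - w))) / (2*(1 - w))
H = -w*sp.log(w, 2) - (1 - w)*sp.log(1 - w, 2)
pi = (2*w*sp.log(r, 2) - 2*tau*sp.log(1 - r, 2)
      - 2*(1 - tau)*sp.log(1 + r, 2) + 2*H)
poly = sp.series(pi, tau, 0, 2).removeO()
print(float(poly.subs(tau, 0)))                # constant term: ~0
print(float(sp.diff(poly, tau).subs(tau, 0)))  # slope: ~2
\end{verbatim}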
The sublinear case is also relevant to cryptography since several
McEliece cryptosystems actually operate in this regime; this is true for the original McEliece system with fixed-rate binary Goppa codes \cite{M78} as well as for the
MDPC-McEliece cryptosystem \cite{MTSB13}. In this regime, \cite{CS16} showed that all ISD algorithms have the same asymptotic complexity when the number $t$ of errors to correct is $o(n)$, namely:
\[ 2^{-t \log_{2}(1-R)(1+ o(1))} \]
Let us compare the exponents of statistical decoding and of the ISD algorithms when we want to correct a sub-linear error weight.
When $t=o(n)$, the complexity we are after is subexponential in the length.
The only algorithm we found that produces moderate-weight parity-check equations in subexponential time is Algorithm \ref{alg:gauss}. It produces parity-check equations of weight $Rn/2$ in amortized time $\tilde{O}(1)$. With this algorithm, the exponent of statistical decoding is therefore $-2\tau \log_{2}(1-R)$, which is twice the exponent of all the ISDs. We could not conclude for a relative weight $< R/2$ since
all the algorithms we found need exponential time to output enough equations to perform statistical decoding. So unless one comes up with
an algorithm that is able to produce parity-check equations of relative weight $< R/2$ in subexponential time, statistical decoding is not better than any ISD when we have to correct $t=o(n)$ errors.
\section{Studying the single weight case is sufficient}
\label{sec:single}
The previous section showed that, when it comes to performing statistical decoding, it is much more favorable to produce parity-check equations following the binomial model of weight $w$
rather than parity-checks of constant weight $w$. The problem is that, as far as we know, there is no efficient way of producing
moderate-weight parity-check equations (let us call moderate any weight $\leq 1+Rn/2$) which would follow such a model. Even the ``easy case'', where $w = 1+Rn/2$ and where it is trivial to
produce such equations by simply putting the parity-check matrix in systematic form and taking the rows of this matrix\footnote{For more details
see Section \ref{sec:naive}.}, does not follow the binomial model: the standard deviation
of the parity-check equation weight is easily seen to differ between what is actually produced by the algorithm and the binomial model of weight $1+Rn/2$. Of course, this does not mean that
we should rule out the possibility that such efficient algorithms exist. We will however prove that, under very mild conditions, even if such an algorithm were to exist,
it would by nature produce parity-checks of different weights, and we would obtain a statistical decoding algorithm of the same exponential complexity by keeping only {\em one very
specific weight}. In other words, it is sufficient to consider the single weight case, as we do here, when we study just the exponential complexity of statistical decoding.
To verify this, we fix an arbitrary position we want to decode and assume that some algorithm has produced in time $T$,
$m = \sum_{j=1}^n m_j$
parity check equations involving this position where $m_j$ denotes the number of parity-check equations of weight $j$.
The equations of weight $j$ are denoted by $h_{1}^{j},\dots,h_{m_j}^j$. Statistical decoding is based on simple statistics involving
the values
$\langle y,h_{s}^{j}\rangle$. To simplify a little bit the expressions we are going to manipulate, let us introduce
\[ X_{s}^{j} \mathop{=}\limits^{\triangle} \langle y,h_{s}^{j} \rangle \]
Similarly to Assumptions \ref{ass:one} and \ref{ass:two}, we assume that the distribution of $\langle y,h_{s}^{j} \rangle$ is well approximated by its distribution when $h_{s}^{j}$ is drawn uniformly at random among the words of weight $j$, and that the $\langle y,h_{s}^{j} \rangle$'s are independent. So we have $X_{s}^{j} \sim \mathcal{B}(1/2 + \varepsilon_{l}(j))$ under hypothesis $\mathcal{H}_{l}$, where $\varepsilon_{l}(j)$ is the bias defined in Subsection \ref{bias} for weight $j$. Our aim now is to find a test distinguishing the hypotheses $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$. As in Subsection \ref{bias}, it will be the Neyman-Pearson test.
We define the following quantity where $\mathbb{P}_{\mathcal{H}_{l}}$ denotes the probability under the hypothesis $\mathcal{H}_{l}$:
\[ q \mathop{=}\limits^{\triangle} \ln \left( \frac{\mathbb{P}_{\mathcal{H}_{0}} \left( X_{1}^{1} = x_{1}^{1},\cdots,X_{m_{1}}^{1}= x_{m_1}^{1},\cdots, X_{1}^{n} = x_{1}^{n},\cdots,X_{m_{n}}^{n}= x_{m_n}^{n} \right) }{\mathbb{P}_{\mathcal{H}_{1}} \left( X_{1}^{1} = x_{1}^{1},\cdots,X_{m_{1}}^{1}= x_{m_1}^{1},\cdots, X_{1}^{n} = x_{1}^{n},\cdots,X_{m_{n}}^{n}= x_{m_n}^{n} \right)} \right) \]
The Neyman-Pearson lemma tells us to proceed as follows: if $q>\Theta$, where $\Theta$ is some threshold, choose $\mathcal{H}_{0}$, and choose $\mathcal{H}_{1}$ otherwise. In this case, no other statistical test leads simultaneously to lower false detection probabilities. In our case, it is enough to set the threshold $\Theta$ to $0$, since it is easily verified that no other choice would change the exponent of the number of samples we need to get vanishing false detection probabilities.
We set $p_{l}(j) \mathop{=}\limits^{\triangle} 1/2 + \varepsilon_{l}(j)$, $I_{0}(j) \mathop{=}\limits^{\triangle} \#\{ s \,:\, x_{s}^{j} = 0 \}$ and $I_{1}(j) \mathop{=}\limits^{\triangle} \#\{ s \,:\, x_{s}^{j} = 1 \}$. We have:
\begin{align*}
\frac{\mathbb{P}_{\mathcal{H}_{0}} \left( X_{1}^{1} = x_{1}^{1},\cdots,X_{m_{1}}^{1}= x_{m_1}^{1},\cdots, X_{1}^{n} = x_{1}^{n},\cdots,X_{m_{n}}^{n} = x_{m_n}^{n}\right) }{\mathbb{P}_{\mathcal{H}_{1}} \left( X_{1}^{1} = x_{1}^{1},\cdots,X_{m_{1}}^{1}= x_{m_1}^{1},\cdots, X_{1}^{n} = x_{1}^{n},\cdots,X_{m_{n}}^{n}= x_{m_n}^{n} \right)} &= \prod_{j=1}^{n} \frac{ p_{0}(j)^{I_{1}(j)}\cdot (1-p_{0}(j))^{I_{0}(j)}}{ p_{1}(j)^{I_{1}(j)}\cdot (1-p_{1}(j))^{I_{0}(j)}}
\end{align*}
Therefore, by taking the natural logarithm of this expression and using $\sum_{k=1}^{m_{j}} X_{k}^{j} = I_{1}(j)$ and $I_{1}(j)+ I_{0}(j) = m_{j}$, we have:
\begin{align*}
q &= \sum_{j=1}^{n} I_{0}(j) \left[ \ln(1-p_{0}(j)) - \ln(1-p_{1}(j)) \right] + I_{1}(j) \left[ \ln(p_{0}(j)) - \ln(p_{1}(j)) \right] \\
&= \sum_{j=1}^{n} (m_{j} - I_{1}(j)) \left[ \ln(1-p_{0}(j)) - \ln(1-p_{1}(j)) \right] + \sum_{s=1}^{m_{j}} X_{s}^{j} \left[ \ln(p_{0}(j)) - \ln(p_{1}(j)) \right] \\
&= \sum_{j=1}^{n} \sum_{s=1}^{m_{j}} X_{s}^{j} \left[ \ln(p_{0}(j)) - \ln(1-p_{0}(j)) + \ln(1-p_{1}(j)) - \ln(p_{1}(j)) \right] \\
& \qquad \qquad + m_{j} \ln \frac{1-p_{0}(j)}{1-p_{1}(j)}
\end{align*}
We now use the Taylor series expansion around $0$: $\ln(1/2 + x) = -\ln(2) + 2x - 2x^{2} + \frac{8x^{3}}{3} + o(x^{3})$, and we deduce
for $i \in \{0,1\}$:
\begin{align*}
\ln(p_{i}(j)) &= \ln(1/2 + \varepsilon_{i}(j)) \\
&=-\ln(2) + 2\varepsilon_{i}(j) - 2\varepsilon_{i}(j)^2 + (8/3) \varepsilon_{i}(j)^3 + o(\varepsilon_{i}(j)^3)
\end{align*}
\begin{align*}
\ln(1-p_{i}(j)) &= \ln(1/2 - \varepsilon_{i}(j)) \\
&=-\ln(2) -2\varepsilon_{i}(j) - 2\varepsilon_{i}(j)^2 - (8/3) \varepsilon_{i}(j)^3 + o(\varepsilon_{i}(j)^3)
\end{align*}
We have,
\begin{align*}
q &= \sum_{j=1}^{n} \sum_{s=1}^{m_{j}} X_{s}^{j} \cdot \left( 4\varepsilon_{0}(j) + (16/3) \varepsilon_{0}(j)^{3} - 4 \varepsilon_{1}(j) - (16/3) \varepsilon_{1}(j)^{3} + o(\varepsilon_{0}(j)^{3}) + o(\varepsilon_{1}(j)^{3}) \right) \\
&\qquad \qquad - 2 m_{j} \cdot \left( \varepsilon_{0}(j) - \varepsilon_{1}(j) +o(\varepsilon_{0}(j)) + o(\varepsilon_{1}(j) ) \right) \\
&= 4 \sum_{j=1}^{n} \sum_{s=1}^{m_{j}} X_{s}^{j} \left( \varepsilon_{0}(j) - \varepsilon_{1}(j) + O(\varepsilon_0(j)^3)
+ O(\varepsilon_1(j)^3) \right)\\
& \qquad \qquad + m_{j} \ln \frac{1-p_{0}(j)}{1-p_{1}(j)} \\
&\approx 4 \sum_{j=1}^{n} \sum_{s=1}^{m_{j}} Y_{s}^{j} + C
\end{align*}
where
\[Y_{s}^{j} \mathop{=}\limits^{\triangle} (\varepsilon_{0}(j) - \varepsilon_{1}(j)) X^{j}_{s} \]
and $C$ is the constant defined by:
\[ C \mathop{=}\limits^{\triangle} \sum_{j=1}^{n} m_{j} \ln \frac{1-p_{0}(j)}{1-p_{1}(j)} \]
This computation suggests to use the random variables $Y_{s}^{j}$ to build our distinguisher with the Neyman-Pearson likelihood test. By the assumptions on the $X_{s}^{j}$'s, the $Y_{s}^{j}$'s are independent and we have under $\mathcal{H}_{l}$:
\[ \mathbb{P} \left( Y_{s}^{j} =0 \right) = \frac{1}{2} - \varepsilon_{l}(j) \quad ; \quad \mathbb{P} \left( Y_{s}^{j} = (\varepsilon_{0}(j) - \varepsilon_{1}(j)) \right) = \frac{1}{2} + \varepsilon_{l}(j) \]
The expectation of $Y_{s}^{j}$ under $\mathcal{H}_{l}$ is given by:
\[\mathbb{E}\left( Y_{s}^{j} \right) = (\varepsilon_{0}(j) - \varepsilon_{1}(j))\cdot \left( \frac{1}{2} + \varepsilon_{l}(j) \right) \]
As for our previous distinguisher we define the random variable $V_{m}$ for $m=\sum_{j=1}^{n} m_{j}$ uniform and independent draws of vectors $h_{s}^{j}$ in ${\mathscr H}_{w_{j},i}$:
\[ V_{m} \mathop{=}\limits^{\triangle} \sum_{j=1}^{n} \sum_{s=1}^{m_{j}} Y_{s}^{j} \]
The expectation of $V_{m}$ depends on which hypothesis $\mathcal{H}_{l}$ holds. When hypothesis $\mathcal{H}_{l}$ holds,
we denote the expectation of $V_m$ by $E_l$. The difference $E_{0} - E_{1}$ is given by:
\begin{align*}
E_{0} - E_{1} &= \sum_{j=1}^{n} \sum_{k=1}^{m_{j}} (\varepsilon_{0}(j) - \varepsilon_{1}(j)) \left( \frac{1}{2} + \varepsilon_{0}(j) \right) - (\varepsilon_{0}(j) - \varepsilon_{1}(j)) \left( \frac{1}{2} + \varepsilon_{1}(j) \right) \\
&= \sum_{j=1}^{n} \sum_{k=1}^{m_{j}} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2} \\
&= \sum_{j=1}^{n} m_{j} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2}
\end{align*}
The deviations of $V_m$ around its expectation will be quantified through Hoeffding's bound, which in this case captures, up to constant factors
in the exponent, the right behavior of the probability that $V_m$ deviates from its expectation.
\begin{proposition}[Hoeffding's Bound]
Let $Y_{1},\cdots,Y_{m}$ be independent random variables, and let $a_{1},\cdots,a_{m}$ and $b_{1},\cdots,b_{m}$ be reals with $a_{s} < b_{s}$ such that:
\[ \forall s, \mbox{ } \mathbb{P} \left( a_{s} \leq Y_{s} \leq b_{s} \right) = 1 \]
We set $Z_{m} = \sum_{s=1}^{m} Y_{s}$, then:
\[ \mathbb{P} \left( |Z_{m} - \mathbb{E}(Z_{m}) | \geq t \right) \leq 2\exp\left( - \frac{2t^{2}}{\sum_{s=1}^{m} (b_{s} - a_{s})^{2}} \right) \]
\end{proposition}
In order to distinguish both hypotheses, we set $t = \frac{E_{0} - E_{1}}{2}$. So under $\mathcal{H}_{l}$, we have
\begin{eqnarray*}
\mathbb{P} \left( \left|V_{m} - E_l\right| \geq \frac{E_{0} - E_{1}}{2} \right) & = & \mathbb{P} \left( \left|V_{m} - E_l\right| \geq \sum_{j=1}^{n} \frac{m_{j}}{2} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2} \right) \\
&\leq & 2 \exp \left( - \frac{2/4 \left( \sum_{j=1}^{n} m_{j} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2} \right)^{2} }{\sum_{j=1}^{n} m_{j} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2}} \right) \\
& = &2\exp \left( -\frac{1}{2} \left( \sum_{j=1}^{n} m_{j} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2} \right) \right)
\end{eqnarray*}
We decide that hypothesis $\mathcal{H}_{1}$ holds if $V_m < \frac{E_0+E_1}{2}$ and that $\mathcal{H}_{0}$ holds otherwise.
It is clear that the probability $P_{e}$ of making a wrong decision with this distinguisher is smaller than $2 e^{- \frac{1}{2} \left( \sum_{j=1}^{n} m_{j} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2} \right) }$. If we want $P_{e} \leq 2e^{-\eta}$ for some fixed $\eta$, then $m_{1},\cdots,m_{n}$ have to be such that:
\begin{equation}
\label{eq:probError}
\frac{1}{2} \sum_{j=1}^{n} m_{j} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2} \geq \eta \Rightarrow \sum_{j=1}^{n} m_{j} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2} \geq 2\eta
\end{equation}
Note that this is really the right order (up to some constant factor) for the number of equations that is needed (the Hoeffding bound captures well, up to constant factors,
the error probability of the distinguisher in this case), and using an optimal Bayesian decision does not change, up to multiplicative factors,
the number of equations that are needed for a fixed relative error weight.
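To illustrate this behavior, here is a small Monte Carlo sketch of the threshold test (the biases $\varepsilon_{l}(j)$ and the numbers $m_{j}$ below are made-up toy values, not derived from an actual code); the empirical error it prints can be compared with the Hoeffding bound above:
\begin{verbatim}
import random
random.seed(0)

def V(eps, m, eps0, eps1):
    # one draw of V_m when X_s^j ~ Bernoulli(1/2 + eps[j])
    return sum((eps0[j] - eps1[j])
               * sum(random.random() < 0.5 + eps[j] for _ in range(m[j]))
               for j in range(len(m)))

eps0, eps1 = [0.06, 0.02], [0.03, 0.01]   # toy biases for two weights
m = [2000, 4000]                          # equations available per weight
E0 = sum(mj*(a-b)*(0.5+a) for mj, a, b in zip(m, eps0, eps1))
E1 = sum(mj*(a-b)*(0.5+b) for mj, a, b in zip(m, eps0, eps1))
trials = 2000
errors = sum(V(eps1, m, eps0, eps1) >= (E0 + E1)/2 for _ in range(trials))
print("empirical error under H_1:", errors / trials)
\end{verbatim}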
Now assume that
\begin{ass}
\label{ass:polyEqPar}
If we can compute $m$ parity-check equations of weight $w$ in time $T$, we are able to compute $n \cdot m$ parity-check equations of this weight in time $O(nT)$.
\end{ass}
This assumption holds for all ``reasonable'' randomized algorithms producing random parity-checks with uniform/quasi uniform
probability as long as $n \cdot m$ is at most some constant fraction (with a constant $<1$) of the total number of parity-check equations.
Now we set $j_{0}$ such that:
\begin{equation}
\label{eq:leadingWeight}
m_{j_{0}}(\varepsilon_{0}(j_0) - \varepsilon_{1}(j_0))^{2} = \mathop{\max}\limits_{1 \leq j \leq n} \{ m_{j}(\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2} \}
\end{equation}
Clearly, if we now take, instead of the original $m$ parity-check equations, just the $n \cdot m_{j_0}$ parity-check equations of
weight $j_0$, the probability of error does not get larger than the bound $2e^{-\eta}$ that we had before, since
\[ n \cdot m_{j_{0}}(\varepsilon_{0}(j_0) - \varepsilon_{1}(j_0))^{2} \geq \sum_{j=1}^{n} m_{j} (\varepsilon_{0}(j) - \varepsilon_{1}(j))^{2} \Rightarrow 2 e^{-\frac{1}{2}n\cdot m_{j_{0}} (\varepsilon_{0}(j_0) - \varepsilon_{1}(j_0))^{2}} \leq 2e^{-\eta} \]
So, under Assumption \ref{ass:polyEqPar}, if our distinguisher with several weights has enough parity-check equations available, we are able to compute in polynomial time $n \cdot m_{j_{0}}$ parity-check equations of weight $w_{j_{0}}$, where $j_{0}$ is chosen such that (\ref{eq:leadingWeight}) holds, and with these parity-check equations the distinguisher of Subsection \ref{bias} works too. The complexity of statistical decoding, without the phase of computation of the parity-check equations, is the number of parity-check equations that are needed.
So, under Assumption \ref{ass:polyEqPar}, its complexity with our first distinguisher will be, for each codelength $n$, the same up to a polynomial multiplicative factor as the
complexity with the second distinguisher. Moreover, under Assumption \ref{ass:polyEqPar}, the complexity of the computation of the parity-check equations that is needed for both distinguishers is the same up to a polynomial factor. As the differences $\varepsilon_{0}(j) - \varepsilon_{1}(j)$ are exponentially small in $n$, in order to have a probability of success which tends to $1$, the $m_{j}$'s of both distinguishers have to be of order $\tilde{O} \left( \frac{1}{(\varepsilon_{0}(j)-\varepsilon_{1}(j))^{2}}\right)$. This leads to the conclusion that the asymptotic exponent of statistical decoding is the same whether we consider
one well-chosen weight or several weights. We stress that this conclusion concerns the asymptotic complexity of statistical decoding. Indeed, in practice Algorithms \ref{alg:gauss} and \ref{alg:fusion} can output many parity-check equations of weight ``close'' to $Rn/2$ and $r + (Rn-l)/2$. It would be counter-productive not to keep them and use them with the distinguisher we just described.
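In code, the reduction to a single weight is a one-liner: given the numbers $m_{j}$ and the biases (hypothetical inputs here), the following sketch returns the index $j_{0}$ of \eqref{eq:leadingWeight}:
\begin{verbatim}
def dominant_weight(m, eps0, eps1):
    # argmax over j of m[j] * (eps0[j] - eps1[j])^2
    return max(range(len(m)), key=lambda j: m[j] * (eps0[j] - eps1[j]) ** 2)
\end{verbatim}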
\section{A simple way of obtaining moderate weight parity-check equations}
\label{sec:naive}
As we are now able to give a formula for $\pi(\omega,\tau)$, we come back to the algorithm \texttt{ParityCheckComputation}$_{w}$ in order to estimate $\pi^{complete}(\omega,\tau)$.
There is an easy way of producing parity-check equations of moderate weight by Gaussian elimination.
This is given in Algorithm \ref{alg:gauss} that provides a method for finding parity-check equations of weight $w =\frac{Rn}{2}$ of an $[n,Rn]$ random code.
Gaussian elimination (\texttt{GElim}) of an $Rn \times n$ matrix $G_{0}$ consists in finding a non-singular $U \in \mathbb{F}_{2}^{Rn \times Rn}$ such that:
\[ UG_{0} = \lbrack I_{Rn} | G' \rbrack \]
$L_{j}(G)$ denotes the $j$-th row of $G$ in Algorithm \ref{alg:gauss}.
\begin{algorithm}[H]
\caption{\texttt{ParityCheckComputation}$_{Rn/2}$ \label{alg:gauss}}
\begin{algorithmic}[1]
\State Input : $G \in \mathbb{F}_{2}^{Rn \times n}, i \in \mathbb{N}$
\State Output : ${\mathscr S}_i$ /*\textit{$P_{Rn/2}$ parity-check equations}*/
\State ${\mathscr S}_i \leftarrow \lbrack \mbox{ } \rbrack$
\While{$|{\mathscr S}_i| < P_{Rn/2}$}
\State $P \leftarrow$ random $n \times n$ permutation matrix
\State $\lbrack I_{Rn} | G' \rbrack \leftarrow$ \texttt{GElim}($GP$) and if it fails return to line 5
\State $H \leftarrow$ $\lbrack {G'}^T | I_{n(1-R)} \rbrack$ /*Parity-check matrix of the code*/
\For{$j=1$ to $n(1-R)$}
\If{$L_{j}(H)_{i} = 1$ and $w_{H}(L_{j}(H)) = Rn/2$}
\State ${\mathscr S}_i \leftarrow {\mathscr S}_i \cup \{ L_{j}(H) P^T\}$
\EndIf
\EndFor
\EndWhile
\State \Return ${\mathscr S}_i$
\end{algorithmic}
\end{algorithm}
Algorithm \ref{alg:gauss} is a randomized algorithm. Randomness comes from the choice of the permutation $P$.
It is straightforward to check that this algorithm returns $P_{Rn/2}$ parity-check equations of weight $Rn/2$ in time $\tilde{O}\left( P_{Rn/2} \right)$.
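For illustration, here is a minimal Python sketch of this procedure on a toy random code. It is not meant as an actual implementation: rows are stored as integers (bit $c$ = column $c$), a single permutation pass is shown (the outer loop collecting $P_{Rn/2}$ equations is omitted), and we keep every row of $H$ of weight $Rn/2+1$ without filtering on a position $i$:
\begin{verbatim}
import random

def systematic_form(rows, n):
    # Try one random column permutation; return ([I_k | G'] rows, perm)
    # or None if the first k permuted columns are not invertible.
    perm = list(range(n))
    random.shuffle(perm)
    rows = [sum(((r >> perm[c]) & 1) << c for c in range(n)) for r in rows]
    k = len(rows)
    for i in range(k):
        piv = next((j for j in range(i, k) if (rows[j] >> i) & 1), None)
        if piv is None:
            return None
        rows[i], rows[piv] = rows[piv], rows[i]
        for j in range(k):
            if j != i and (rows[j] >> i) & 1:
                rows[j] ^= rows[i]
    return rows, perm

def parity_checks(gen_rows, n):
    # One pass: rows of H = [G'^T | I_{n-k}] of weight k/2 + 1,
    # mapped back to the original column order.
    k = len(gen_rows)
    res = None
    while res is None:                      # retry on a rank failure
        res = systematic_form(gen_rows, n)
    srows, perm = res
    out = []
    for a in range(n - k):
        h = 1 << (k + a)                    # identity part of H
        for i in range(k):                  # column (k+a) of G'
            h |= ((srows[i] >> (k + a)) & 1) << i
        if bin(h).count("1") == k // 2 + 1:
            out.append(sum(((h >> c) & 1) << perm[c] for c in range(n)))
    return out

n, k = 24, 12
G = [random.getrandbits(n) for _ in range(k)]
checks = parity_checks(G, n)
assert all(bin(h & g).count("1") % 2 == 0 for h in checks for g in G)
\end{verbatim}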
Now we set $\tau = H^{-1}(1-R)$. This relative weight, which corresponds to the Gilbert-Varshamov bound, is usually used to measure the efficiency of decoding algorithms.
Indeed it corresponds to the critical error weight below which we still have with probability $1-o(1)$ a unique solution to the decoding problem. It can be viewed as
the weight for which the decoding problem is the hardest, since the larger the weight the more difficult the decoding
problem seems to be (this holds at least for all known decoding algorithms of generic linear codes).
As a consequence of Propositions 2 and 4, we have the following theorem:
\begin{theorem}
\label{theobias}
[Naive Statistical Decoding's asymptotic complexity]$ $
With the computation of parity-check equations of weight $Rn/2$ thanks to
\\ \emph{\texttt{ParityCheckComputation}}$_{Rn/2}$, we have:
\[ \pwta{R/2}{\tau} = \pwtca{R/2}{\tau}\]
where $\pwta{R/2}{\tau}$ is given by Theorem \ref{biasSDecoding}.
\end{theorem}
Exponents (as a function of $R$) of Prange's ISD and statistical decoding are given in Figure \ref{prstatdec}. As we can see, the difference is huge. This version of statistical decoding cannot
be considered as an improvement over ISDs. However, as $\omega \mapsto \pwta{\omega}{\tau}$ for fixed $\tau$ is an increasing function of $\omega$, we have to study the case $\omega < R/2$. This is the subject of the next section,
where we give an algorithm that efficiently computes parity-check equations of weight smaller than $Rn/2$.
However, we also prove there that no matter how efficiently we perform the pre-computation step, any version of statistical decoding is worse than Prange's ISD.
\begin{figure}
\centering
\includegraphics[scale = 0.6]{PrangeStatisticalDecoding.png}
\caption{Asymptotic Exponents of Prange ISD and Statistical Decoding for $\tau=H^{-1}(1-R)$ and $\omega=R/2$} \label{prstatdec}
\end{figure}
\section{Improvements and limitations of statistical decoding}
\label{impvlim}
\subsection{Framework}
\label{fram}
Before giving an improvement and lower bounds on the complexity of statistical decoding, we would like to come back to the problem of computing
the ${\mathscr S}_i$'s and its contribution to the complexity of statistical decoding. Our aim is to clarify
the picture a little bit.
We stress that, if the ${\mathscr S}_{i}$'s are already computed and stored, the complexity of statistical decoding is (up to a polynomial factor) the number of equations we use to take our decision.
We denote by ${\mathcal D}_w$ the part of statistical decoding which uses these parity-check equations to perform the decoding and by
${\mathcal A}_w$ the randomized algorithm used for outputting a certain number of random parity-check equations of weight $w$.
\texttt{ParityCheckComputation}$_{w}$ is assumed to make a certain number of calls to ${\mathcal A}_w$. It is assumed that ${\mathcal A}_w$ outputs $N_w$ parity-check equations of weight $w$ in time $T_w$ each time we run it.
We assume that statistical decoding needs $\tilde{O}(P_w)$ equations. If we consider the computations of parity-check equations as part of statistical decoding, its complexity is given by:
\[ \tilde{O} \left( P_w + T_w \cdot \max (1,\frac{P_w}{N_w}) \right) \]
When $\frac{T_w}{N_w} = \tilde{O}(1)$, we say that $\mathcal{A}_w$ gives equations in amortized time $\tilde{O}(1)$. Under this condition, if we assume $P_w \geq N_w$, the complexity is the number of equations needed.
In any case, the complexity of statistical decoding is lower-bounded by $\tilde{O}(P_w)$, and the lower the equation weight $w$, the
lower the number of equations $P_w$ we need for performing statistical decoding.
The goal of this section is to show how to find many parity-check equations of weight $<Rn/2$ in an efficient way, and to give a minimal weight for which this operation makes sense.
\subsection{A lower bound on the complexity of statistical decoding}
\label{lim}
As we just
pointed out, statistical decoding needs $\tilde{O}\left(P_{w} \right)$ parity-check equations of weight $w$ to work. Its complexity is therefore always greater than $\tilde{O}\left(P_{w}\right)$. We assume
again that the code we want to decode is a random code. This assumption is standard in the cryptographic context.
The expected number of parity-check equations of weight $w$ in an $[n,Rn]$ random binary linear code is $\frac{\binom{n}{w}}{2^{Rn}}$.
Obviously, if $w$ is too small, there are not enough equations for statistical decoding to work; namely, we need that
$$
P_w \leq \frac{\binom{n}{w}}{2^{Rn}}.
$$
The minimum $\omega_{0}(R,\tau)$ such that this holds is clearly given by the minimal $\omega$ for which the following equation holds:
\[ \pwta{\omega}{\tau} = H\left( \omega \right) - R \]
So $\omega_{0}(R,\tau)$ gives the minimal relative weight such that asymptotically the number of parity-check equations needed for decoding is exactly the number of parity-check equations of
weight $w_{0}(R,\tau)$ in the code, where $w_0(R,\tau) \mathop{=}\limits^{\triangle} \omega_0(R,\tau)n$.
Below this weight, statistical decoding can not work (at least not for random linear codes).
In other words the asymptotic exponent of statistical decoding is always lower-bounded by $\pwta{w_{0}(R,\tau)}{\tau}$.
In the case of a relative error weight given by the Gilbert-Varshamov bound $\tau_{\text{DGV}}=H^{-1}(1-R)$, Theorem \ref{theobias} leads to the conclusion that
\[ \omega_{0}(R,\tau_{\text{DGV}}) = \frac{1}{2} - \sqrt{\tau_{\text{DGV}} - \tau_{\text{DGV}}^2} \]
Moreover for all relative weights greater than $\omega_{0}(R,\tau_{\text{DGV}})$ the number of parity-check equations that are needed is exactly the number of parity-check equations of this weight that exist in a random code. This result is rather intriguing and does not seem to have a simple interpretation.
The relative minimal weight $\omega_{0}(R,\tau_{\text{DGV}})$ is related to the first linear programming bound of McEliece-Rodemich-Rumsey-Welch and can be interpreted
through its relationship with the zeros of Krawtchouk polynomials. This bound arises from the fact that, from Theorem \ref{theobias}, we know that $\omega_{0}(R,\tau_{\text{DGV}})$ corresponds to the relative
weight where we switch from the complex case to the real case, and this happens precisely when we leave the region of zeros of
the Krawtchouk polynomials.
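For concreteness, the following short Python sketch (with a bisection-based inverse of the binary entropy as a helper of our own) evaluates $\omega_{0}(R,\tau_{\text{DGV}}) = \frac{1}{2} - \sqrt{\tau_{\text{DGV}} - \tau_{\text{DGV}}^2}$ for a few rates:
\begin{verbatim}
from math import log2, sqrt

def H(x):
    return 0.0 if x in (0.0, 1.0) else -x*log2(x) - (1-x)*log2(1-x)

def Hinv(y):  # inverse of H on [0, 1/2], by bisection
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if H(mid) < y else (lo, mid)
    return (lo + hi) / 2

for R in (0.1, 0.3, 0.5, 0.7, 0.9):
    tau = Hinv(1 - R)
    print(R, round(0.5 - sqrt(tau - tau * tau), 4))
\end{verbatim}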
Figure \ref{fig:limita}, which compares Prange's ISD, statistical decoding with parity-check equations of relative weight $R/2$, and $\omega_{0}(R,\tau)$ with $\tau = H^{-1}(1-R)$, clearly shows on the one hand that there is some room for improving upon naive statistical decoding based on parity-check equations of weight $Rn/2$, but on the other hand that even with the best improvement upon statistical decoding we might hope for, we would still be above the most naive information set decoding algorithm, namely Prange's algorithm.
\begin{figure}
\centering
\includegraphics[scale = 0.6]{OptiStatDec.png}
\caption{Asymptotic exponents of Prange ISD, naive statistical decoding and optimal/optimistic statistical decoding for $\tau=H^{-1}(1-R)$ \label{fig:limita}}
\end{figure}
\subsection{An improvement close to the lower bound}
\label{impv}
The goal of this subsection is to present an improvement to the computation of parity-check equations and to give its asymptotic complexity. R. Overbeck in \cite[Sec. 4]{O06} showed how to compute parity-check equations thanks to Stern's algorithm. We are going to use this algorithm too. However, whereas Overbeck used many iterations of this algorithm to produce a few parity-check equations of small weight, we observe that this algorithm produces in a natural
way during its execution a large number of parity-check equations of relative weight smaller than $R/2$.
We will analyze this process here and show that it yields an algorithm $\mathcal{A}_w$ that gives equations in amortized time $\tilde{O}(1)$.
To find parity-check equations, we described an algorithm which just performs Gaussian elimination and
selects sufficiently sparse rows.
This is in fact the main idea of Prange's algorithm. As we stressed in the introduction, this algorithm has been improved rather significantly over the years (the ISD family). Our idea to improve the search for parity-check equations is to use precisely these improvements. The first significant improvement is due to Stern and Dumer \cite{S88,D91}. The main idea is to solve a sub-problem with the birthday paradox. We are going to describe this process and show how it allows us to improve upon naive statistical decoding.
We begin by choosing a random permutation matrix $P \in \mathbb{F}_{2}^{n \times n}$ and putting the matrix $GP$ into the systematic form:
\[
\begin{bmatrix}
I_{Rn-l} & G_{1} \\
0 & G_{2}
\end{bmatrix} \mbox{ where } G_{1} \in \mathbb{F}_{2}^{(Rn-l) \times (n(1-R)+l)} \mbox{ and } G_{2} \in \mathbb{F}_{2}^{l \times (n(1-R)+l)}
\]
1. We solve CSD($G_{2},r,0_{\lbrack l \rbrack}$).
2. For each solution $e$, we output $e_{s} = (eG_{1}^T,e)P^T$.
\begin{rem}
We recall that solving CSD($G_{2},r,0_{\lbrack l \rbrack}$) means finding a word $e$ of weight $r$ such that $G_{2}e^{T} = 0$, i.e., finding $r$ columns of $G_{2}$ that sum to $0$.
\end{rem}
$\cdot$ \textbf{Soundness:} Writing, with a slight abuse of notation, $GP$ for its systematic form $UGP$ (which is harmless here since $U$ is non-singular and does not change the kernel), we have
$$Ge_s^T=GP\begin{bmatrix} G_1e^T \\ e^T \end{bmatrix} = \begin{bmatrix}
I_{Rn-l} & G_{1} \\
0 & G_{2}
\end{bmatrix}\begin{bmatrix} G_1e^T \\ e^T \end{bmatrix} = \begin{bmatrix} G_1e^T + G_1e^T \\ G_2e^T \end{bmatrix}=0$$ and therefore $e_s$ is a parity-check equation of
$\mathcal{C}$.
$\cdot$ \textbf{Number of solutions:} The number of solutions is given by the number of solutions of Step 1. Furthermore, the complexity of this algorithm is, up to a polynomial factor, given by the complexity of Step 1.
\begin{rem}
This algorithm may not provide enough solutions in one step. In this case, we have to put $G$ in another systematic form ({\em i.e.} choose another permutation). The randomness of our
algorithm will come from this choice of permutation matrix.
\end{rem}
$\cdot$ \textbf{Solutions' weight:} In our model, $G$ is supposed to be random, so we may assume the same for $G_{1}$. As $eG_{1}^{T}$ is then a random vector of length $Rn - l$, we asymptotically get parity-check equations of weight:
\[ r+\frac{Rn-l}{2}(1 + o(1)) \]
The first part of this algorithm can be viewed as the first part of ISD algorithms. A general presentation of these algorithms is given in \cite[Sec. 3]{FS09}. All the efforts that have been spent to improve Prange's ISD can be applied to solve the first step of our algorithm. To solve it, Dumer suggested putting $G_{2}$ in the following form:
\[ G_{2} = \lbrack G_{2}^{(1)} | G_{2}^{(2)} \rbrack \mbox{ where } G_{2}^{(i)} \in \mathbb{F}_{2}^{l \times \frac{n(1-R) + l}{2}} \]
and to build the lists:
\[ \mathcal{L}_{1} = \left\{ \left( e_{1},G_{2}^{(1)} e_1^T\right) \mbox{ } | \mbox{ } e_{1} \in \mathbb{F}_{2}^{\frac{n(1-R)+l}{2}} \mbox{ and } w_{H}(e_{1}) = \frac{r}{2} \right\} \]
\[ \mathcal{L}_{2} = \left\{ \left( e_{2},G_{2}^{(2)}e_2^T \right) \mbox{ } | \mbox{ } e_{2} \in \mathbb{F}_{2}^{\frac{n(1-R)+l}{2}} \mbox{ and } w_{H}(e_{2}) = \frac{r}{2} \right\} \]
Then we intersect these two lists with respect to the second coordinate and we keep the associated first coordinate. In other words, we get:
\[ \{ (e_{1},e_{2}) \mbox{ } | \mbox{ } w_{H}(e_{i}) = \frac{r}{2} \mbox{ and } G_{2}^{(1)}e_1^T = G_{2}^{(2)}e_2^T \} \]
\begin{rem}
This process is called a fusion.
\end{rem}
Algorithm \ref{alg:fusion} summarizes this formally.
\begin{algorithm}[H]
\caption{\texttt{DumerFusion}\label{alg:fusion}}
\begin{algorithmic}[1]
\State Input : $G \in \mathbb{F}_{2}^{Rn \times n},l,r$.
\State Output : ${\mathscr S}$ /*\textit{subset of ${\mathscr H}_{w}$}*/
\State ${\mathscr S} \leftarrow \lbrack \mbox{ } \rbrack$ /*\textit{Empty list}*/
\State ${\mathscr T} \leftarrow \lbrack \mbox{ } \rbrack$ /* \textit{Hash table}*/
\State $P \leftarrow$ random $n \times n$ permutation matrix
\State We find $U \in \mathbb{F}_{2}^{Rn\times Rn}$ non-singular such that $UGP = \begin{bmatrix}
I_{Rn-l} & G_{1} \\
0 & G_{2}
\end{bmatrix}$
\State We partition $G_{2}$ as $\lbrack G_{2}^{(1)}|G_{2}^{(2)}\rbrack$ where $G_{2}^{(i)} \in \mathbb{F}_{2}^{l \times \left( \frac{n(1-R)+l}{2} \right)}$
\ForAll{$e_{1} \in \mathbb{F}_{2}^{(n(1-R)+l)/2}$ of weight $r/2$}
\State $x \leftarrow G_{2}^{(1)}e_1^T $
\State ${\mathscr T}\lbrack x \rbrack \leftarrow {\mathscr T} \lbrack x \rbrack \cup \{ e_{1} \}$
\EndFor
\ForAll{$e_{2} \in \mathbb{F}_{2}^{(n(1-R)+l)/2}$ of weight $r/2$}
\State $x \leftarrow G_{2}^{(2)} e_2^T$
\ForAll{$e_{1} \in {\mathscr T}\lbrack x \rbrack$}
\State $e \leftarrow (e_{1},e_{2})$
\State ${\mathscr S}\leftarrow {\mathscr S} \cup \{ (eG_{1}^T,e)P^T \}$
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
As we neglect polynomial factors, the complexity of Algorithm \ref{alg:fusion} is given by:
\[ \tilde{O} \left( \binom{(n(1-R)+l)/2}{r/2} + \# {\mathscr S} \right) \]
Indeed, we only have to account for the construction of the hash table (first term) and the construction of ${\mathscr S}$ (second term). In order to estimate $\#{\mathscr S}$ we use the following classical proposition:
\begin{prop}
Let $L_{1},L_{2} \subseteq \{0,1\}^{l}$ be two lists whose elements are assumed to be drawn independently and uniformly at random. Then, the expectation of
the cardinality of their intersection is given by:
\[ \frac{\# L_{1} \cdot \# L_{2}}{2^{l}} \]
\end{prop}
As we supposed $G_{2}$ random, we can apply this proposition to \texttt{DumerFusion}. Therefore,
\begin{prop}[\texttt{DumerFusion}'s complexity]$ $
\emph{\texttt{DumerFusion}}'s complexity is given by:
\[ \tilde{O}\left( \binom{(n(1-R)+l)/2}{r/2} + \frac{\binom{(n(1-R)+l)/2}{r/2}^{2}}{2^{l}} \right) \]
and it provides on average
\[ \frac{\binom{(n(1-R)+l)/2}{r/2}^{2}}{2^{l}} \]
solutions.
\end{prop}
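As an illustration of this count, the following Python sketch (toy parameters; for exposition only) runs the fusion step on a small random $G_{2}$, whose columns are stored as $l$-bit integers, and compares the number of solutions found with the average predicted above:
\begin{verbatim}
import itertools, random
from collections import defaultdict
from math import comb
random.seed(0)

def fuse(cols, half, r):
    # All supports of e = (e1, e2) with w(e_i) = r/2 and
    # G2^(1) e1^T = G2^(2) e2^T (columns of G2 given as l-bit ints).
    table = defaultdict(list)
    for supp1 in itertools.combinations(range(half), r // 2):
        x = 0
        for c in supp1:
            x ^= cols[c]
        table[x].append(supp1)
    out = []
    for supp2 in itertools.combinations(range(half, 2 * half), r // 2):
        x = 0
        for c in supp2:
            x ^= cols[c]
        for supp1 in table[x]:
            out.append(supp1 + supp2)
    return out

l, half, r = 10, 16, 4
cols = [random.getrandbits(l) for _ in range(2 * half)]
print(len(fuse(cols, half, r)), "solutions; expected about",
      comb(half, r // 2) ** 2 / 2 ** l)
\end{verbatim}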
In order to study this algorithm asymptotically, we introduce the following notations and relative parameters:
\begin{nota}$ $
$\cdot$ $N_{r,l} \mathop{=}\limits^{\triangle} \frac{\binom{(n(1-R)+l)/2}{r/2}^{2}}{2^{l}}$ ;
$\cdot$ $T_{r,l} \mathop{=}\limits^{\triangle} \binom{(n(1-R)+l)/2}{r/2} + \frac{\binom{(n(1-R)+l)/2}{r/2}^{2}}{2^{l}}$ ;
$\cdot$ $\rho = \frac{r}{n}$ ;
$\cdot$ $\lambda = \frac{l}{n}$.
\end{nota}
We may observe that $N_{r,l}$ gives the number of parity-check equations that \texttt{DumerFusion} outputs in one iteration, while $T_{r,l}$ is the running time of one iteration.
There are many ways of choosing $r$ and $l$.
However,
in any case (see Subsection \ref{lim}), as the weight of the parity-check equations we get with \texttt{DumerFusion} is $\left(r + \frac{Rn - l}{2}\right)(1+o(1))$,
we have to choose $r$ and $l$ such that
$$w_{0}(R,\tau) \leq r + (Rn-l)/2 $$
which is equivalent to
\begin{equation}
\label{asym}
\omega_{0}(R,\tau) \leq \rho + \frac{R- \lambda}{2}
\end{equation}
The following lemma gives
an asymptotic choice of $\rho$ and $\lambda$ that allows us
to get parity-check equations in amortized time $\tilde{O}(1)$:
\begin{lemma}
If
\begin{equation}
\label{amtime}
\rho = (1-R+\lambda) \cdot H^{-1}\left(\frac{2\lambda}{1-R+\lambda} \right)
\end{equation}
\emph{\texttt{DumerFusion}} provides parity-check equations of relative weight $\rho + \frac{R-\lambda}{2}$ in amortized time $\tilde{O}(1)$. Moreover, with this constraint we have asymptotically:
\[ N_{r,l} = \tilde{O} \left( 2^{\lambda \cdot n} \right) \]
\end{lemma}
\begin{proof} We remark that $T_{r,l} = N_{r,l} + \binom{(n(1-R)+l)/2}{r/2}$. Our goal is to find $\rho, \lambda$ such that asymptotically $\frac{T_{r,l}}{N_{r,l}} = \tilde{O}(1)$. The constraint
\eqref{amtime} follows from $\binom{u}{v} = \tilde{O}\left( 2^{u \cdot H (v/u)} \right)$: it ensures that $\binom{(n(1-R)+l)/2}{r/2} = \tilde{O}\left(2^{\lambda n}\right) = \tilde{O}(N_{r,l})$.
\end{proof}
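The following numerical sketch (illustrative; it reuses the bisection-based \texttt{H} and \texttt{Hinv} helpers shown earlier) checks this computation: with $\rho$ chosen according to \eqref{amtime}, the exponent of the per-iteration cost coincides with the exponent $\lambda$ of the number of equations produced, so that $T_{r,l}/N_{r,l} = \tilde{O}(1)$:
\begin{verbatim}
R, lam = 0.5, 0.02
rho = (1 - R + lam) * Hinv(2 * lam / (1 - R + lam))
u, v = (1 - R + lam) / 2, rho / 2     # binom(u*n, v*n) per iteration
print("rho =", round(rho, 5),
      "cost exponent =", round(u * H(v / u), 5),
      "output exponent =", lam)
\end{verbatim}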
We are now able to give the asymptotic complexity of statistical decoding with the use of the \texttt{DumerFusion} strategy.
\begin{theorem} With the constraints (\ref{asym}), (\ref{amtime}) and
\begin{equation}
\label{oneite}
\lambda \leq \pwta{\rho + \frac{R-\lambda}{2}}{\tau}
\end{equation}
for $(\rho,\lambda)$ we have:
\[ \pwtca{\rho + (R-\lambda)/2}{\tau}= \pwta{\rho + (R- \lambda)/2}{\tau} \]
\end{theorem}
\begin{proof} Thanks to (\ref{amtime}) and (\ref{oneite}), we use Subsection \ref{fram} and conclude that under these constraints we have $\pwta{\rho + (R-\lambda)/2}{\tau}=\pwtca{\rho + (R-\lambda)/2}{\tau}$.
\end{proof}
\begin{rem} We summarize the meaning of the constraints as:
\begin{itemize}
\item With (\ref{asym}) we are sure there exists enough parity-check equations for statistical decoding to work;
\item With (\ref{amtime}) \texttt{DumerFusion} gives parity-check equations in amortized time $\tilde{O}(1)$;
\item With (\ref{oneite}), \texttt{DumerFusion}
never provides more equations in one iteration
than we need.
\end{itemize}
\end{rem}
In order to get the optimal statistical decoding complexity we minimize $\pi(\rho + (R-\lambda)/2,\tau)$
(with $\pi(\rho + (R-\lambda)/2,\tau)$ given by Theorem \ref{biasSDecoding}) under constraints \eqref{asym}, \eqref{amtime} and \eqref{oneite}. The exponent of statistical decoding with this strategy is given in Figure \ref{fig:limit}.
\begin{figure}
\centering
\includegraphics[scale = 0.6]{DumerStat.png}
\caption{Asymptotic exponents of naive statistical decoding and with the use of optimal \texttt{DumerFusion} and optimal/optimistic statistical decoding for $\tau=H^{-1}(1-R)$ \label{fig:limit}}
\end{figure}
As we can see, \texttt{DumerFusion} with our strategy allows statistical decoding to be optimal for rates close to $0$.
We can further improve \texttt{DumerFusion} with the ideas of \cite{MMT11} and \cite{BJMM12}; however, this comes at the expense of a much more involved analysis, and it would not allow us to go beyond the barrier of the lower bound on the complexity of statistical decoding given in the previous subsection. Nevertheless, with the same strategy, these improvements extend the range of rates for which statistical decoding achieves this optimal behavior.
\section{Conclusion}
\label{concl}
In this article we have revisited statistical decoding with a rigorous study of its asymptotic complexity. We have shown that, under Assumptions 1 and 2, this algorithm is, regardless of the strategy chosen for producing the moderate-weight parity-check equations it needs,
always worse than Prange's ISD for the hardest instances of decoding (i.e., for a number of errors given by the Gilbert-Varshamov bound).
In this case a very intriguing phenomenon happens: for a large range of parity-check weights, we need all the parity-checks available
in the code to be able to decode with this technique.
It seems very hard to come up with choices of rate, error weight and length for which
statistical decoding might be able to compete with ISD, even if this cannot be totally ruled out by the study we have made here.
However there are clearly more sophisticated techniques which could be used to improve upon statistical decoding.
For instance using other strategies by grouping positions together and
using all parity-check equations involving bits in this group could be another possible interesting generalization of statistical decoding.
\section{Introduction}
\vspace{-4px}
\textcolor{blue}{While autonomous robots are able to accomplish an increasing variety of tasks, a key challenge that still remains is \emph{how} they should pursue and trade off between their goals.
In recent years,} there has been significant work on interactively learning user preferences of robot behaviors~\cite{jeon2020reward,sadigh2017active,biyik2020active, biyik2019asking, wilde2020active,basu2018learning,holladay2016active,li2021roial,shah2020interactive,wilde2019bayesian,wilde2020improving,bajcsy2017learning,palan2019learning,li2021learning,cakmak2011human}.
Usually, the user is provided with one or more robot trajectories, and is asked to provide feedback through pairwise comparisons \cite{sadigh2017active,biyik2020active,biyik2019asking,wilde2020active,basu2018learning,holladay2016active,li2021roial}, rankings of the trajectories~\cite{myers2021learning,brown2019extrapolating}, or physical feedback~\cite{kollmitz2020learning,bajcsy2017learning,li2021learning}. The underlying reward function governing human preferences can then be learned through this implicit feedback.
Specifically, one framework with minimal complexity for the user is \emph{learning from choice feedback}~\cite{sadigh2017active,biyik2020active, biyik2019asking, wilde2020active,basu2018learning,holladay2016active,li2021roial}, where the robot demonstrates two alternative trajectories for some task. The user then simply chooses their preferred behavior allowing the robot to infer an underlying reward function for the user preferences.
Choice feedback, although simple to collect, is limiting in a number of ways. Consider the example shown in Fig.~\ref{fig:intro_example}, where a robot is tasked to serve a drink to a customer. The customer might have different preferences over the type of drink to have (milk, orange juice, or water), or the specifics of the trajectory the robot takes (e.g., if it goes over the stove or around it, which can affect the temperature of the drink or the likelihood of the robot accidentally hitting the pan handle). A strict choice between two trajectories does not really capture these intricacies of human preferences. We thus need a more expressive way of collecting data from humans. Our key insight is that allowing users to respond on a sliding scale (as shown in Fig.~\ref{fig:intro_example}) can provide a more expressive medium for learning from humans and capture nuances in their preferences.
In this work, we propose \emph{scale feedback} as a new mode of interaction: Instead of a strict question on which of the two proposed trajectories the user prefers, we allow for more nuanced feedback using a slider bar. We design a Gaussian model for how users provide scale feedback, and learn a reward function capturing human preferences. Similar to prior work in robotics, we assume this reward is a linear function of a set of features~\cite{abbeel2004apprenticeship, wilde2020improving, palan2019learning, holladay2016active}, where the main task of learning from scale feedback is to recover the weights of this reward function. To learn in a data-efficient manner, we actively generate our queries to the user, i.e., pairs of trajectories demonstrated to a user similar to Fig.~\ref{fig:intro_example}, by optimizing two well-known objectives of information gain~\cite{biyik2019asking} and max regret~\cite{wilde2020active}.
We demonstrate the performance benefit of scale feedback over choice in a driving simulation. Further, we investigate its practicality in \textcolor{blue}{two user studies} with the real robot experiment shown in Fig.~\ref{fig:intro_example}. Our results suggest scale feedback leads to significant improvements in learning performance.
\begin{figure}[!t]
\centering
\includegraphics[width=.95\textwidth]{fig/front_fig_v2.png}
\vspace{-9px}
\caption{Scale feedback allows users to provide finely detailed comparisons between different options.}
\label{fig:intro_example}
\vspace{-19px}
\end{figure}
\section{Related Work}
\vspace{-6px}
Learning from human feedback is an important problem in developing interactive robots that work alongside humans.
Researchers study learning from demonstrations \cite{abbeel2004apprenticeship, ziebart2008maximum, gonzalez2018modeling}, corrections \cite{losey2018including, zhang2019learning, li2021learning}, ordinal feedback \cite{chu2005gaussian,li2021roial}, rankings~\cite{myers2021learning,brown2019extrapolating, brown2020better, chen2020learning}, critiques \cite{argall2007learning, cui2018active}, and choices \cite{sadigh2017active, biyik2020active, biyik2019asking, wilde2020active, wirth2017survey}.
\textcolor{blue}{While demonstrations usually are very informative, they are not always viable:} Demonstrating the desired behavior might require a high level of expertise \cite{villani2018survey}, or can be difficult in high-order systems \cite{akgun2012trajectories, losey2020controlling,akgun2012keyframe}.
Choice questions minimize interface complexity and mental effort for the user. However, when the user is indifferent towards both options, learning becomes difficult since users may become noisier in their responses. Thus, \cite{holladay2016active, biyik2019asking, basu2018learning} investigate modifications of learning from choice where users can also answer \emph{About Equal}. These two forms of choice feedback are usually referred to as \emph{strict} and \emph{soft} choice.
When the user chooses the neutral answer, the robot learns to assign about equal reward to the presented trajectories.
While choice feedback provides an easy medium for learning from humans, \textcolor{blue}{it provides at most one bit of information}. Thus, to effectively learn from choice feedback, we often need to actively generate the queries made to humans. Previous work has investigated different auxiliary measures that are greedily optimized to enable efficient learning, including expected volume removal \cite{sadigh2017active}, information gain \cite{biyik2019asking}, and regret \cite{wilde2020active}.
In the proposed scale feedback framework, we take the soft choice approach one step further: Instead of three discrete values for feedback (prefer A, prefer B, neutral) users give quasi-continuous feedback. This allows the user to indicate by \emph{how much} they prefer one option over the other.
Slider bars have been used in robotics for tuning parameters \cite{racca2020interactive}. More related to our work, \citet{cabi2020scaling} proposed using them for \emph{reward sketching}. Instead of assigning a numerical preference between presented options, users continuously indicate the robot's progress towards some goal. However, this requires users to assign scores to different parts of trajectories. Developing the scale feedback for preference-based learning, we retain the ease of comparing trajectories.
\vspace{-6px}
\section{Problem Formulation}
\vspace{-6px}
We now introduce the notation we use in this paper and formulate the learning problem.
\vspace{-3px}
\noindent\textbf{Reward function.} We consider the scenario where a robot needs to customize its behavior to the preferences of a user Alice.
We assume Alice evaluates robot paths $P\in \mathcal{P}$ based on a vector of features $\vect{\phi}^P=\begin{bmatrix}\phi_1(P), \ldots,\phi_n(P)\end{bmatrix}$. Similar to prior works in robotics \cite{abbeel2004apprenticeship, wilde2020improving, palan2019learning, holladay2016active}, we define a linear reward function $r$ that assigns a numerical value to a path $P$ by weighting a set of features:
\begin{equation}
r(P, \vect{w}) = \vect{\phi}^P \cdot \vect{w}.
\end{equation}
These features are usually provided by a domain expert incorporating the core factors that the reward needs to capture, e.g., collision with other objects, or distance to the goal.
Further, the robot has access to a motion planner that finds an optimal path given a set of weights, i.e., the planner is a (deterministic) function $\rho:\mathbb{R}^n\to\mathcal{P}$ where $\rho(\vect{w}) = \arg\max_{P\in\mathcal{P}}r(P, \vect{w})$.
\vspace{-3px}
\noindent\textbf{Regret.}
Similar to \cite{wilde2020active}, we define the \emph{regret} between any two weights $(\vect{w},\vect{w}')$ as the difference in the reward $\vect{w}'$ assigns to the paths $\rho(\vect{w})$ and $\rho(\vect{w}')$:
\begin{equation}
R(\vect{w}, \vect{w}') = \vect{\phi}(\rho(\vect{w}'))\cdot \vect{w}' - \vect{\phi}(\rho(\vect{w}))\cdot \vect{w}'\:,
\label{eq:regret}
\end{equation}
which quantifies the suboptimality when the true weights are $\vect{w}'$, but the path is optimized using $\vect{w}$.
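For a finite set of candidate paths represented by their feature vectors, this quantity is straightforward to compute; in the sketch below (our illustrative helper, not part of the formal setup), the planner $\rho$ is realized as an argmax over the candidate set:
\begin{verbatim}
import numpy as np

def regret(w, w_prime, paths):
    # paths: array of feature vectors phi(P); rho(v) = argmax_P phi(P).v
    rho = lambda v: paths[np.argmax(paths @ v)]
    return float(rho(w_prime) @ w_prime - rho(w) @ w_prime)
\end{verbatim}
By construction the returned value is non-negative, and it is zero whenever $\vect{w}$ and $\vect{w}'$ induce the same optimal path.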
\vspace{-3px}
\noindent\textbf{Learning.}
Let $\w^*$ denote Alice's weights for the reward function. These weights are not known to the robot; the only information initially available is a prior distribution $\mathbb{P}(\vect{w}=\w^*)$.
The robot learns $\w^*$ by iteratively presenting her with two paths $P$ and $Q$ for $K$ iterations.
We extend the \emph{learning from choice} framework, where users simply indicate the path they prefer, \textcolor{blue}{to a setting where they instead provide a more finely detailed \emph{scale feedback}.}
\begin{definition}[Scale feedback]
Presented with two paths $P$ and $Q$, Alice returns numerical feedback $\psi\in[-1,1]$. If $\psi=0$, Alice has no preference between the paths, while $\psi=1$ corresponds to a strong preference for path $P$ and $\psi=-1$ to a strong preference for path $Q$.
\end{definition}
\vspace{-8px}
From an interface design and expressiveness perspective, it is undesirable to have users give a numerical value for $\psi$. Instead, they can express such feedback with a slider bar offering a more fine-grained set of options. An example is illustrated in Fig.~\ref{fig:intro_example}.
We let $D_K=\{(P_1,Q_1,\psi_1),\dots, (P_K,Q_K,\psi_K)\}$ be the set of recorded user feedback.
\noindent\textbf{Performance Measures.} Let $\hat{\w}$ be the robot's estimate of $\w^*$, and $\xi(\hat{\w}, \w^*)$ be a performance measure for the learning process. Previous works focused on the \emph{alignment} of weights \cite{sadigh2017active,biyik2019asking}, $\mathtt{Alignment}\!=\!\nicefrac{\hat{\w}\cdot \w^*}{||\hat{\w}||\cdot ||\w^*||}$, measuring the cosine similarity of vectors $\hat{\w}$ and $\w^*$, i.e., how well the parameters of Alice's reward function are learned.
Alternatively, \citet{wilde2020active} proposed the relative error in \emph{cost}. We adapt this as the $\mathtt{Relative\_Reward}=\nicefrac{\vect{\phi}(\rho(\hat{\w}))\cdot \w^*}{\vect{\phi}(\rho(\w^*))\cdot \w^*}$, measuring how much Alice likes the trajectory optimized for $\hat{\w}$ compared to the one optimized for $\w^*$.
\noindent\textbf{Problem Statement.} Let $\pi$ be an adaptive policy for designing queries $(P,Q)$, and let $D_K(\pi \mid \w^*)$ be the expected set of user feedback when a user $\w^*$ is queried by $\pi$ for $K$ iterations. Given a robot motion planner $\rho$, a user with preferences $\w^*$, and a budget of $K$ rounds to query the user about their \emph{scale feedback} on two presented paths, our goal is to find an adaptive policy $\pi$ that solves
\begin{equation}
\max_{\pi}
\xi\left(
\mathbb{E}\left[ \vect{w} \mid
D_K(\pi\mid\w^*)
\right],
\w^*
\right).
\label{eq:problem}
\end{equation}
\vspace{-12px}
\section{Approach}
\vspace{-4px}
We now briefly review learning from choice, and then extend the framework to scale feedback.
\vspace{-2px}
\subsection{Choice Feedback}
\vspace{-2px}
When presented with two paths $P$ and $Q$, a user returns an ordering $P\succeq Q$ ($P$ is preferred) or $P\preceq Q$ ($Q$ is preferred). In a noiseless setting, we have
\begin{equation}
r(P,\w^*) - r(Q,\w^*)\geq 0 \iff P\succeq Q.
\label{eq:learch_from_choice}
\end{equation}
\begin{wrapfigure}{R}{0.6\textwidth}
\centering
\vspace{-15px}
\includegraphics[width=\linewidth]{fig/choice_scale_scale.png}
\vspace{-15px}
\caption{Different feasible sets learned from choice and scale feedback. Shown is the updated weightspace (green) after observing user feedback for one $(P,Q)$ pair. If $\psi=1$ scale feedback learns a tighter halfspace; when $\psi\in(0,1)$ scale feedback learns an equality, i.e., a hyperplane.}
\vspace{-20px}
\label{fig:feasible_sets}
\end{wrapfigure}
That is, the path $P$ has a reward that is at least as high as that of $Q$ with respect to the hidden true user weights $\w^*$.
Using $r(P,\vect{w})=\vect{\phi}^P\cdot\vect{w}$, we can tighten our notation and write $(\vect{\phi}^P-\vect{\phi}^Q)\cdot\w^*$ instead of $r(P,\w^*) - r(Q,\w^*)$.
Equation \eqref{eq:learch_from_choice} already contains an observation model: If the user chooses path $P$, the robot can infer that $P$ has a higher reward with respect to $\w^*$.
This inequality defines a halfspace $\Lambda(P,Q)=\{\vect{w} \mid (\vect{\phi}^P-\vect{\phi}^Q)\cdot\w^*\geq 0\}$ containing all weights that are \emph{feasible} given the observed user choice. Over $k$ iterations, we can intersect the sets $\Lambda(P_1,Q_1), \dots,\Lambda(P_k,Q_k)$ to obtain the \emph{feasible set} $\mathcal{F}_k$ shown in Fig.~\ref{fig:feasible_sets}(a). By definition, this feasible set is convex.
\vspace{-2px}
\subsection{Scale Feedback}
\vspace{-2px}
Scale feedback allows the robot to gain more information: the robot can also infer by \emph{how much} the user prefers $P$, allowing for learning tighter feasible sets.
We extend the model in \eqref{eq:learch_from_choice} and show how a noiseless user would provide scale feedback and then study how a robot can learn from it.
\begin{definition}[Maximum Reward Gap]
Given a user $\w^*$, the maximum reward gap is
\begin{equation}
\delta^* = \max_{P,Q\in \mathcal{P}}r(P,\w^*) - r(Q,\w^*) = \max_{P,Q\in \mathcal{P}}(\vect{\phi}^P-\vect{\phi}^Q)\cdot\w^*.
\label{eq:max_reward_gap}
\end{equation}
\end{definition}
\vspace{-8px}
We notice that the maximum reward gap cannot be computed, since $\w^*$ is unknown to the robot. Nevertheless, we can formulate the user choice model and then derive an observation model.
\noindent\textbf{User model.}
The maximum reward gap helps to define when a noiseless user would indicate a strong preference.
We assume this occurs if and only if the difference in reward of $P$ and $Q$ with respect to $\w^*$ is at least $\alpha^*\delta^*$ for some $0\!<\!\alpha^*\!\leq\!1$. Here $\alpha^*$ is a saturation parameter which governs at what reward difference (w.r.t. to the maximum gap) the user's feedback gets saturated to a strong preference.
For any other $(P,Q)$ where $\lvert(\vect{\phi}^P\!-\!\vect{\phi}^Q)\cdot\w^*\rvert\!\in\! [0,\alpha^*\delta^*)$ we assume the user to linearly scale $\psi$ between $-1$ and $1$, which leads to the following model.
\begin{definition}[Noiseless User Model]
Presented with two paths $P$ and $Q$, a noiseless user with parameter $\alpha^*\!\in\!(0,1]$ will always provide the following feedback:
\begin{equation}\begin{aligned}
\psi =
\begin{cases}
1
\quad\text{ if } r(P,\w^*) - r(Q,\w^*)
\geq \alpha^*\delta^*,\\
-1
\quad\text{ if }
r(Q,\w^*)-r(P,\w^*)\geq \alpha^*{\delta^*},\\
\nicefrac{(r(P,\w^*) - r(Q,\w^*))}{\alpha^*\delta^*}
\quad\text{ otherwise }.
\end{cases}
\end{aligned}
\label{eq:noisefree_model_saturated}
\end{equation}
\end{definition}
\vspace{-8px}
We illustrate the noiseless user model in Fig.~\ref{fig:saturated_model_alpha} under different saturation parameters $\alpha^*$. In Fig.~\ref{fig:saturated_model_examples}, we show a simulated example: for a fixed $\w^*$ we simulate how users with different values for $\alpha^*$ would provide scale feedback to the same $20$ queries. For larger $\alpha^*$, they position the slider closer to the neutral position.
Finally, we derive an observation model for the noiseless user:
\begin{equation}
\begin{aligned}
\psi=-1 &\implies r(P,\w^*) - r(Q,\w^*)\leq \psi\alpha^*\delta^*\\
\psi\in(-1,1) & \implies r(P,\w^*) - r(Q,\w^*)= \psi\alpha^*\delta^*,\\
\psi=1 &\implies r(P,\w^*) - r(Q,\w^*)\geq \psi\alpha^*\delta^*.
\end{aligned}
\label{eq:obs_model_det_saturated}
\end{equation}
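In code, the noiseless model \eqref{eq:noisefree_model_saturated} is simply a clipped linear map, as the following Python sketch (with made-up toy features, weights, and saturation parameter) makes explicit:
\begin{verbatim}
import numpy as np

def scale_feedback(phi_P, phi_Q, w_star, alpha_star, delta_star):
    gap = float((phi_P - phi_Q) @ w_star)
    return float(np.clip(gap / (alpha_star * delta_star), -1.0, 1.0))

paths = np.array([[0.9, 0.1], [0.4, 0.8], [0.1, 0.3]])  # feature vectors
w_star, alpha_star = np.array([0.6, 0.4]), 0.7
rewards = paths @ w_star
delta_star = rewards.max() - rewards.min()   # maximum reward gap
print(scale_feedback(paths[0], paths[1], w_star, alpha_star, delta_star))
\end{verbatim}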
\begin{figure}[!t]
\centering
\centering
\begin{subfigure}[t]{0.43\textwidth}
\includegraphics[width=.99\textwidth]{fig/scale_noiseless_feedback.png}
\caption{User model for providing scale feedback with $\alpha^*=1$ (blue) and $\alpha^*=0.7$ (green).}
\label{fig:saturated_model_alpha}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.56\textwidth}
\centering
\includegraphics[width=.99\textwidth]{fig/Saturation_example_noiseless.png}
\caption{Example slider feedback for different $\alpha$. The boxplots indicate the four quartiles of the absolute slider values.}
\label{fig:saturated_model_examples}
\end{subfigure}
\vspace{-5px}
\caption{Noiseless user model.}
\label{fig:sat_model}
\vspace{-22px}
\end{figure}
Figures \ref{fig:feasible_sets}(b) and \ref{fig:feasible_sets}(c) illustrate the resulting feasible sets from \eqref{eq:obs_model_det_saturated}.
Moreover, we notice the user-specific and unknown parameters $\alpha^*$ and $\delta^*$ always appear as a product.
Thus, we can introduce an auxiliary parameter $\beta=\alpha^*\delta^*$ to write \eqref{eq:obs_model_det_saturated} as $\begin{bmatrix}-\psi,\vect{\phi}^P\!-\!\vect{\phi}^Q\end{bmatrix} \cdot \begin{bmatrix}\beta,\w^*\end{bmatrix} \lesseqgtr0$.
As the model remains linear, the notion of halfspaces and the feasible set $\mathcal{F}$ can be extended to the augmented vector space.
\subsection{Probabilistic User Feedback}
\label{sec:prob_model}
In practice, users are often noisy; they might consider additional or slightly different features than the robot, not follow the linear reward function, or simply be uncertain in some answers. Since we cannot expect users to always provide slider feedback following \eqref{eq:noisefree_model_saturated}, we introduce a probabilistic model where we add uncertainty to the placement of the slider.
Another practical limitation is the fact that we cannot collect truly continuous feedback from the users. Instead, the slider bar has a step size $\epsilon \in (0,1]$ such that the user provides feedback of the form $n\epsilon$ for $n\in\mathbb{Z}$ and $-\epsilon^{-1}\leq n\leq\epsilon^{-1}$. Note that $\epsilon\to0$ retains the continuous scale feedback, whereas $\epsilon=1$ gives the soft choice model where the feedback is always in $\{-1,0,1\}$.
\begin{definition}[Probabilistic User Model]
Given a user $\w^*$ and a query $(P,Q)$, let $\psi$ be the user feedback defined in the noiseless user model in \eqref{eq:noisefree_model_saturated}.
A probabilistic user using a slider bar with a step size of $\epsilon$ then provides feedback
\begin{equation}\begin{aligned}
\mu = \textrm{round}\!\left(\psi + \nu, \epsilon\right)
\end{aligned}
\label{eq:noisy_model}
\end{equation}
where $\nu$ is a zero-mean Gaussian noise, i.e., $\nu\sim \mathcal{N}(0, \sigma^2)$ with standard deviation $\sigma$, and $\textrm{round}\!\left(x,\epsilon\right)$ outputs $n\epsilon$ closest to $x$ such that $n\in\mathbb{Z}\cap [-\epsilon^{-1},\epsilon^{-1}]$.
\end{definition}
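A direct transcription of this model is given below (illustrative; the clipping realizes the restriction $n\in\mathbb{Z}\cap [-\epsilon^{-1},\epsilon^{-1}]$, assuming $\epsilon^{-1}$ is an integer):
\begin{verbatim}
import numpy as np

def noisy_feedback(psi, sigma, eps, rng=None):
    # mu = round(psi + nu, eps) with nu ~ N(0, sigma^2)
    rng = rng or np.random.default_rng()
    mu = round((psi + rng.normal(0.0, sigma)) / eps) * eps
    return float(np.clip(mu, -1.0, 1.0))

print(noisy_feedback(0.07, sigma=0.1, eps=0.05))
\end{verbatim}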
\vspace{-5px}
\noindent\textbf{Probabilistic Observation Model.}
Given the probabilistic user model, we now show how a robot can infer $\w^*$ from scale feedback.
In the noiseless case, user feedback defines a feasible set. For the probabilistic case, we instead derive a distribution over $\vect{w}$ and $\alpha$.
Let $\delta(\vect{w})\!=\!\max_{P,Q\in\mathcal{P}}(\vect{\phi}^P\!-\!\vect{\phi}^Q)\!\cdot\!\vect{w}$, similar to \eqref{eq:max_reward_gap}. Then for $0\!<\!\alpha\!\leq\!1$, the belief is defined as
\begin{equation}
f(\vect{w}, \alpha \mid \psi, P, Q) =
\begin{cases}
\tilde{f}(\vect{w}, \alpha \mid \psi, P, Q) & \textrm{if } \psi \in(-1,1),\\
f^+(\vect{w}, \alpha \mid \psi, P, Q) & \textrm{if } \psi=1,\\
f^-(\vect{w}, \alpha \mid \psi, P, Q) & \textrm{if } \psi=-1,
\end{cases}
\label{eq:user_observation_noisy}
\end{equation}
where
\begin{equation}
\begin{aligned}
\tilde{f}(\vect{w}, \alpha \mid \psi, P, Q) &\propto
\begin{cases}
1 \text{ if } \textcolor{blue}{\begin{bmatrix}-\psi, \vect{\phi}^P-\vect{\phi}^Q\end{bmatrix} \cdot \begin{bmatrix}\alpha\delta(\vect{w}), \vect{w}\end{bmatrix}}=0,\\
0\text{ otherwise }.
\end{cases}\\
f^+(\vect{w} , \alpha \mid \psi, P, Q) &\propto
\begin{cases}
1 \text{ if } \textcolor{blue}{\begin{bmatrix}-\psi, \vect{\phi}^P-\vect{\phi}^Q\end{bmatrix} \cdot \begin{bmatrix}\alpha\delta(\vect{w}), \vect{w}\end{bmatrix}}\geq 0,\\
0\text{ otherwise }.
\end{cases}\\
f^-(\vect{w}, \alpha \mid \psi, P, Q) &\propto
\begin{cases}
1 \text{ if } \textcolor{blue}{\begin{bmatrix}-\psi, \vect{\phi}^P-\vect{\phi}^Q\end{bmatrix} \cdot \begin{bmatrix}\alpha\delta(\vect{w}), \vect{w}\end{bmatrix}}\leq 0,\\
0\text{ otherwise }.
\end{cases}\\
\end{aligned}
\label{eq:prob_obs_beta}
\end{equation}
Given noisy user feedback $\mu$ as in \eqref{eq:noisy_model}, we can define a probability density function $f(\psi \mid \mu)$.
Together with \eqref{eq:user_observation_noisy} we derive a compound probability distribution
\begin{equation}
f(\vect{w}, \alpha \mid \mu, P, Q)
=
\int_{-1}^1 f(\vect{w}, \alpha \mid \psi, P, Q)f(\psi \mid \mu)\, d\psi,
\label{eq:compound}
\end{equation}
where we can write $f(\psi \mid \mu)$ for $\psi\in[-1,1]$ as
\begin{equation}
f(\psi \mid \mu) \propto \begin{cases}
\Phi\left(\frac{\mu-\psi+\epsilon/2}{\sigma}\right) & \textrm{if }\mu=-1,\\
\Phi\left(\frac{\psi-\mu+\epsilon/2}{\sigma}\right) - \Phi\left(\frac{\psi-\mu-\epsilon/2}{\sigma}\right) & \textrm{if }\mu\in(-1,1),\\
\Phi\left(\frac{\psi-\mu+\epsilon/2}{\sigma}\right) & \textrm{if }\mu=1,\\
\end{cases}
\label{eq:slider_from_noisy}
\end{equation}
and $f(\psi \!\mid\! \mu)\!=\!0$ for $\psi\not\in[-1,1]$. Here, $\Phi$ denotes the cdf of a standard normal distribution.
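A direct Python transcription of \eqref{eq:slider_from_noisy} (unnormalized, using \texttt{scipy}'s standard normal cdf):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def slider_likelihood(psi, mu, sigma, eps):
    # Unnormalized f(psi | mu): the Gaussian noise mass that rounds
    # to mu, with the two saturated cases at the ends of the slider.
    if psi < -1.0 or psi > 1.0:
        return 0.0
    if mu <= -1.0:     # saturated low end
        return norm.cdf((mu - psi + eps / 2) / sigma)
    if mu >= 1.0:      # saturated high end
        return norm.cdf((psi - mu + eps / 2) / sigma)
    return (norm.cdf((psi - mu + eps / 2) / sigma)
            - norm.cdf((psi - mu - eps / 2) / sigma))
\end{verbatim}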
Finally, given a sequence $D_K\!=\!\{(P_k,Q_k,\mu_k)\}_{k=1}^K$ and some prior $f(\vect{w},\alpha)$, the joint posterior is
\begin{equation}
f(\vect{w}, \alpha \mid D_K) \propto f(\vect{w}, \alpha)
\prod_{k=1}^K f(\vect{w}, \alpha \mid \mu_k, P_k, Q_k).
\label{eq:joint_post}
\end{equation}
Here, we can factor $f(\vect{w}, \alpha)$ as $\mathbb{P}(\vect{w})\mathbb{P}(\alpha)$ by assuming $\vect{w}$ and $\alpha$ are independent and we also have a prior for $\alpha^*$. We then take the expectation of the posterior $f(\vect{w}, \alpha \mid D_K)$ as our learned user model.
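The posterior \eqref{eq:joint_post} has no closed form; a common sample-based approach, sketched below under the assumption that samples are drawn from the prior and that a per-query log-likelihood routine (the hypothetical \texttt{log\_lik}) numerically approximates the compound distribution \eqref{eq:compound}, weights prior samples by the data likelihood:
\begin{verbatim}
import numpy as np

def posterior_mean(samples, log_lik, data):
    # samples: list of (w, alpha) drawn from the prior f(w, alpha)
    # log_lik: (w, alpha, (P, Q, mu)) -> log f(w, alpha | mu, P, Q)
    # data:    observed queries [(P_1, Q_1, mu_1), ...]
    logw = np.array([sum(log_lik(w, a, q) for q in data)
                     for (w, a) in samples])
    weights = np.exp(logw - logw.max())   # stabilize, then normalize
    weights /= weights.sum()
    ws = np.array([w for (w, _) in samples])
    return weights @ ws                   # approximate E[w | D_K]
\end{verbatim}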
\vspace{-4px}
\section{Algorithm Design}
\vspace{-4px}
\label{sec:algorithm}
We now outline the learning algorithm.
Over $K$ iterations:
(i) the robot \emph{actively} generates a query $(P_k,Q_k)$ given previous observations $D_{k-1}$, (ii) the user provides feedback to the query in the form of the slider value $\mu_k$ (in the noiseless case, $\mu_k=\psi_k$), and (iii) the robot updates its dataset $D_k$ using \eqref{eq:joint_post}.
After iteration $K$, the algorithm returns the expected weight $\hat{\w}=\mathbb{E}[\vect{w} \mid D_K]$.
\vspace{-1px}
\subsection{Worst Case Error Bound}
\vspace{-1px}
To compare scale feedback to choice feedback, we establish a worst case bound on the performance measures for both frameworks.
We introduce the \emph{worst-case error} as the maximum of the negated performance measure, $1-\xi(\vect{w}, \w^*)$; the constant in front ensures a positive value, which we then weight by the posterior belief, given observations $D$:
\begin{equation}
\mathtt{Err}^{\max}(\w^*, D)=\max_{\vect{w}} f(\vect{w} \mid D) (1-\xi(\vect{w}, \w^*)).
\label{eq:upper_bound_true_error}
\end{equation}
This describes the worst $\vect{w}$ the robot could pick, discounted by the posterior distribution $f$ learned from data $D$.
In the noiseless setting, this simplifies to $\max_{\vect{w}\in\mathcal{F}}\left(1-\xi(\vect{w}, \w^*)\right)$.
\begin{proposition}[Upper error bound]
\label{prop:upper_bound}
Let $D^S$ denote the observation made from scale feedback and $D^C$ be the observation from choice feedback for the same set of queries. For any user weights $\w^*$, it holds in the noiseless setting that $\mathtt{Err}^{\max}(\w^*, D^S)\leq \mathtt{Err}^{\max}(\w^*, D^C)$.
\end{proposition}
\vspace{-6px}
\textcolor{blue}{The proof follows from the observation $\mathcal{F}^{\mathtt{Scale}}\subseteq\mathcal{F}^{\mathtt{Choice}}$, i.e., scale feedback removes more volume from the weight set. Hence, the worst choice of an estimate $\hat{\w}$ given observations is guaranteed to have a smaller worst case error when using scale feedback. The full proof is in Appendix~\ref{app:proposition_proof}.}%
\subsection{Active Query Generation}
To learn $\w^*$ efficiently, the robot chooses the query $(P,Q)$ it presents to the user. While randomly selected queries often lead to some learning progress, actively designing a query can drastically improve learning when the number of iterations is limited.
Two recent approaches for learning from choice are information gain \cite{biyik2019asking} and max regret \cite{wilde2020active}.
Information gain seeks to reduce the robot's uncertainty over $\vect{w}$ while choosing queries that are easy to answer for the user. \textcolor{blue}{Max regret, on the other hand, minimizes} the maximum regret by showing mutual worst case paths, which also results in easy queries. We leverage both of these methods for our active query generation in scale feedback.
We start with the information gain. Let $H$ denote Shannon's information entropy \cite{wasserman2010all}.
As the outcome of the query is yet unknown, a greedy step takes the expectation over $\mu$:
\begin{equation}
\underset{P,Q}{\max}\;
H(\vect{w},\alpha \mid P,Q)
- \mathbb{E}_{\mu\mid P,Q}\big[H(\vect{w},\alpha \mid \mu,P,Q)\big].
\label{eq:entropy_greedy}
\end{equation}
We approximate the computation of entropy by summing over a set $\Omega$ of $M$ samples of $(\vect{w},\alpha)\sim f$. Thus, following the derivation in \citet{biyik2019asking}, the new query $(P,Q)$ solves
\begin{equation}
\underset{P,Q}{\max}
\sum_{\mu}\sum_{(\vect{w},\alpha)\in\Omega}
\frac{\mathbb{P}(\mu \mid P,Q,\vect{w},\alpha)}{M}
\log_2\left(
\frac{M \cdot \mathbb{P}(\mu \mid P,Q,\vect{w},\alpha)}
{\sum_{(\vect{w}',\alpha')\in\Omega} \mathbb{P}(\mu \mid P,Q,\vect{w}',\alpha')}
\right).
\label{eq:entropy_greedy_extended}
\end{equation}
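A minimal sketch of this objective, assuming a candidate query set, a sample set $\Omega$, and a routine \texttt{lik} returning $\mathbb{P}(\mu \mid P,Q,\vect{w},\alpha)$ (all hypothetical names):
\begin{verbatim}
import numpy as np

def info_gain_query(queries, omega, lik, eps=0.1):
    # omega: M samples (w, alpha) from the current belief
    # lik(mu, P, Q, w, alpha) = P(mu | P, Q, w, alpha)
    M = len(omega)
    mus = np.arange(-1.0, 1.0 + eps / 2, eps)   # slider grid
    best_query, best_val = None, -np.inf
    for (P, Q) in queries:
        val = 0.0
        for mu in mus:
            p = np.array([lik(mu, P, Q, w, a) for (w, a) in omega])
            m = p > 0
            if m.any():
                val += np.sum(p[m] / M * np.log2(M * p[m] / p.sum()))
        if val > best_val:
            best_query, best_val = (P, Q), val
    return best_query
\end{verbatim}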
The max regret policy generates queries $(P,Q)$ that form a mutual worst case: the robot would incur maximal regret if it learned $P$ while the user's optimal solution were $Q$. With a symmetric perspective over $P$ and $Q$, we have
\begin{equation}
\underset{\vect{w}^P,\alpha^P,\vect{w}^Q,\alpha^Q}{\max}
\mathbb{P}(\vect{w}^P,\alpha^P \mid D_k)\mathbb{P}(\vect{w}^Q, \alpha^Q \mid D_k)
\bigg(
R(\vect{w}^P,\vect{w}^Q) + R(\vect{w}^Q,\vect{w}^P)
\bigg),
\label{eq:maxregret_greedy}
\end{equation}
where $R(\cdot,\cdot)$ is the reward difference defined in \eqref{eq:regret}.
By observing feedback to such queries, the robot greedily improves the probabilistic worst-case error. In contrast to the information gain approach, maximum regret requires $P$ and $Q$ to be optimal trajectories for some users $(\vect{w}^P,\alpha^P)$ and $(\vect{w}^Q,\alpha^Q)$.
On the other hand, maximum regret does not require a one-step look-ahead and thus no summation over potential feedback values $\mu$, making it computationally lighter.
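A corresponding sketch for \eqref{eq:maxregret_greedy} over a sampled user set; \texttt{post}, \texttt{regret}, and \texttt{traj\_of} are stand-ins for the posterior density, the reward difference $R(\cdot,\cdot)$ of \eqref{eq:regret}, and the optimal-trajectory lookup, respectively:
\begin{verbatim}
import numpy as np

def max_regret_query(samples, post, regret, traj_of):
    # Pick the pair of sampled users maximizing the posterior-weighted
    # mutual regret; their optimal trajectories form the query (P, Q).
    best, best_val = None, -np.inf
    for (wP, aP) in samples:
        for (wQ, aQ) in samples:
            val = post(wP, aP) * post(wQ, aQ) * (
                regret(wP, wQ) + regret(wQ, wP))
            if val > best_val:
                best_val = val
                best = (traj_of(wP, aP), traj_of(wQ, aQ))
    return best
\end{verbatim}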
Equations \eqref{eq:entropy_greedy_extended} and \eqref{eq:maxregret_greedy} now give us two different policies for solving the initial problem \eqref{eq:problem}. In the simulations, we compare how the performance of both benefits from scale feedback.
\vspace{-4px}
\section{Simulation Results}
\label{sec:simulations}
\vspace{-4px}
\begin{wrapfigure}{R}{0.45\textwidth}
\centering
\vspace{-35px}
\includegraphics[width=\linewidth]{fig/simulations/plot_pairs_1.png}
\vspace{-15px}
\caption{Comparison of scale feedback and soft choice for different active query methods.}
\vspace{-15px}
\label{fig:driver_extended}
\end{wrapfigure}
We now present our main simulation results. Additional results can be found in the Appendix.
\noindent\textbf{Experiment Setup.}
We simulate the presented framework using the Driver experiment used in \cite{sadigh2017active, biyik2019asking, wilde2020active,basu2018learning}. We modify the setup by adding $6$ new features, obtaining a more challenging $10$-dimensional problem (details on the features, as well as results for the original driver can be found in the Appendix).
$71$ distinct user preferences $\w^*$ are drawn uniformly at random, and each user is simulated with $\alpha^*\!\in\!\{.25, .5, .75, 1\}$, yielding $284$ runs for each method.
We set $\sigma\!=\!0.1$ for the noise level.
We generate a set of $200$ distinct sample trajectories by drawing random weights $\vect{w}$ and then computing their optimal trajectories. The active query generation methods then optimize over this set.
We evaluate learning using the alignment metric and the relative reward.
As a baseline we use soft choice (strict choice showed a slightly poorer performance). To ensure a fair comparison, we emulate soft choice by setting the step size to $\epsilon=1$ and use the same noise model for both forms of feedback.
\noindent\textbf{Results.}
Fig.~\ref{fig:driver_extended} shows the alignment and relative reward for the driver experiment for information gain, max regret and random query generation.
We observe that in all cases scale feedback significantly improves the performance over soft choice in both metrics ($p<.001$ in all cases with two-sample $t$-test). When using the proposed scale feedback, the alignment after $20$ iterations improves from $.77$ to $.86$ for information gain, from $.67$ to $.76$ for max regret, and from $.64$ to $.75$ for random queries. The relative reward improves for information gain and max regret similarly from $.97$ to $1$, i.e., the learned solution is optimal. Both methods make most progress \textcolor{blue}{during} the first $10$ iterations. Random queries improve the final relative reward from $.94$ to $.97$.
Overall, \textcolor{blue}{the simulation showcases that} scale feedback improves learning, independent of the query selection method. For information gain and max regret, scale feedback allows for finding an optimal solution, i.e., collecting $100\%$ of the reward, within a small budget of iterations.
In Appendix~\ref{app:simulations}, we show additional simulation results for higher noise.
\vspace{-4px}
\section{User Study}
\label{sec:study}
\vspace{-4px}
Finally, we analyze the scale feedback in comparison with choice feedback and under different active querying methods with \textcolor{blue}{two user studies}.\footnote{We have IRB approval from a research compliance office under the protocol number IRB-52441. A summary video is at \url{https://sites.google.com/view/reward-learning-scale-feedback}, and the code at \url{https://github.com/Stanford-ILIAD/reward-learning-scale-feedback}.} In both studies, we used $\epsilon=0.1$ for scale queries.
\noindent\textbf{Experiment Setup.} We designed a serving task with a Fetch robot \cite{wise2016fetch} as shown in Fig.~\ref{fig:intro_example}, and generated a dataset of $120$ distinct trajectories. Human subjects were told they should train the robot to bring the drink to the customer in the manner they prefer, paying attention to the following five factors: the drink (out of $3$ options) to be served, the orientation of the pan in front of the robot, moving the drink behind or over the pan, the maximum height of the path, and the speed. The subjects were also informed about the types of queries they will respond to.
\noindent\textbf{Independent Variables.} \textcolor{blue}{In the first experiment, we wanted to compare scale and soft choice under random querying, and scale under random and information gain querying. Hence, we varied the query type and the querying algorithm among: (i) soft choice with random querying, (ii) scale with random querying, and (iii) scale with information gain querying. In the second experiment, we wanted to compare scale and soft choice under information gain querying. Hence, we employed: (i) soft choice with information gain querying, and (ii) scale with information gain querying. For all, we took $\sigma=0.35$ based on pilot trials with different users (see Appendix~\ref{app:sigma_selection}).}
\noindent\textbf{Procedure.} We recruited $18$ participants ($5$ female, $13$ male, ages 20 -- 55) \textcolor{blue}{for the first, and $14$ participants ($5$ female, $9$ male, ages 20 -- 56) for the second experiment.} Due to the pandemic conditions, the subjects participated in the study remotely with an online interface as in Fig.~\ref{fig:intro_example}. The study started with an instructions page with a two-question quiz to make sure the participants understood how to use the interface. After reading the instructions, we had the subjects fill a form where they indicated their preferences for each of the five individual factors described above, to encourage them to be consistent in their responses during the data collection.
\textcolor{blue}{In the experiments,} each participant responded to $10$ queries generated with each of the algorithms. After each of these $10$-query sets, they were shown the optimal trajectory from the dataset with respect to their learned reward function. The participants responded to a $5$-point Likert scale survey (1-Strongly Disagree, 5-Strongly Agree) for this trajectory: ``The displayed trajectory fits my preferences on the task." We also collected scale feedback for $10$ more randomly-generated queries for validation in each experiment. We randomized the order of these sets (of $10$ queries) to prevent any bias. The interface provided a ``Sync Videos" button to restart both videos for easier comparison.
\noindent\textbf{Dependent Measures.}
As an objective measure of the learning performance, we calculated the log-likelihood of the validation set (of $10$ scale queries\footnote{\textcolor{blue}{We present results with a validation set that consists of both scale and soft choice feedback in Appendix~\ref{app:validation_mixture_data}.}}) under the posterior $f(\vect{w},\alpha \mid D)$ learned using the $10$ queries generated via each algorithm, i.e., we calculated:
\begin{equation}
\mathtt{Log\!-\!Likelihood} = \log \mathbb{P}(D_{\textrm{validation}} \mid D) = \log \mathbb{E}_{\vect{w} \mid D}\left[\mathbb{P}(D_{\textrm{validation}} \mid \vect{w})\right]
\end{equation}
We also used the responses to the $5$-point Likert scale survey questions to measure how well the learned rewards achieve the task. Finally, the users took a post-experiment survey where they rated (from $1$ to $5$) the easiness and expressiveness of soft choice and scale questions.
\noindent\textbf{Hypotheses.} We test the following hypotheses.\\
\textbf{H1.} \textit{Scale feedback leads to faster learning than soft choice feedback.}\\
\textbf{H2.} \textit{Querying based on information gain accelerates learning compared to random querying.}\\
\textcolor{blue}{\textbf{H3.} \textit{Users will prefer information gain over random querying in terms of the optimized trajectories.}}\\
\textcolor{blue}{\textbf{H4.} \textit{Users will prefer scale feedback over soft choice feedback in terms of the optimized trajectories.}}\\
\textcolor{blue}{\textbf{H5.} \textit{Users will rate the scale feedback as easy as soft choice feedback.}\\
\textbf{H6.} \textit{Users will rate the scale feedback as expressive as soft choice feedback.}}
\begin{figure}[t]
\includegraphics[width=1\textwidth]{fig/user_study_results.png}
\vspace{-20px}
\caption{All results are shown for the first experiment (mean$\pm$s.e. over $18$ subjects).}
\label{fig:user_study_results}
\vspace{-13px}
\end{figure}
\begin{figure}[t]
\includegraphics[width=1\textwidth]{fig/user_study2_results.png}
\vspace{-20px}
\caption{\textcolor{blue}{All results are shown for the second experiment (mean$\pm$s.e. over $14$ subjects).}}
\label{fig:user_study2_results}
\vspace{-22px}
\end{figure}
\noindent\textbf{Results.}
\textcolor{blue}{We present results of the first and the second experiments in Figs.~\ref{fig:user_study_results} and \ref{fig:user_study2_results}, respectively. It can be seen that the log-likelihood of the validation set after learning the reward function via scale feedback is higher than learning via soft choice feedback, under both random and information querying.} Besides, information gain based query generation accelerates the learning and leads to higher log-likelihood values compared to random querying. All of these comparisons are statistically significant with $p<.001$ (paired-sample $t$-test), so they strongly support \textbf{H1} and \textbf{H2}.
In Fig.~\ref{fig:user_study_results}(b), it can be seen that active querying led to learning reward functions that better optimize trajectories compared to random querying -- this comparison was somewhat significant with $p\approx .05$, supporting \textbf{H3}. \textcolor{blue}{In fact, when we fit a Gaussian distribution to the ratings, we observe that it is $1.95$ times as likely to get a better rating with information gain querying than random querying.} Surprisingly, learning via soft choice achieved slightly higher reward than learning via scale \textcolor{blue}{when queries were randomly selected, and slightly lower reward when queries were generated based on information gain. However, these comparisons are not statistically significant. This is indeed analogous to the relative reward comparisons in Fig.~\ref{fig:driver_extended}: more complex tasks might be needed to better analyze the difference between the two methods.} Thus, we neither reject nor accept \textbf{H4}.
Finally, the subjective results in Fig.~\ref{fig:user_study_results}(c) \textcolor{blue}{and \ref{fig:user_study2_results}(c) suggest} that users find the soft choice feedback slightly, but consistently, easier than the scale feedback ($p<.01$), rejecting \textbf{H5}. This is not surprising, as it is often easier to make a pairwise comparison and the ``About Equal" option in the soft choice questions makes them even easier \cite{biyik2019asking}. On the other hand, there was no statistically significant difference in terms of expressiveness of scale and soft choice feedback, partially supporting \textbf{H6}. In summary, it is interesting that our users perceived the soft choice as easier and even more expressive at times; even though quantitatively, the scale feedback significantly outperforms the soft choice.
\vspace{-4px}
\section{Discussion}
\label{sec:discussion}
\vspace{-4px}
\noindent\textbf{Summary.}
We proposed scale feedback for reward learning where users provide more nuanced feedback than choice. We introduced a user model and showed how a robot can infer reward from noisy scale feedback. We adapted state-of-the-art query generation methods to accelerate learning. In simulations and a user study, scale feedback significantly improved learning. Users ranked choice feedback as slightly easier, but rated both forms of feedback as equally expressive. However, the minor decrease in ease of use is outweighed by a strong improvement in learning performance.
\noindent\textbf{Future Work.} \textcolor{blue}{We proposed scale queries as a way to give nuanced feedback between two trajectories. It is possible to extend them to $n+1$ trajectories, with specialized user interfaces that allow users to select a point from an $n$-simplex instead of a slider bar. Future work should investigate this and if users can still give reliable feedback to these more complex queries.}
\textcolor{blue}{In our experiments, we used a pre-computed trajectory set. Alternatives, e.g., optimizing queries over action sets as in \cite{sadigh2017active}, or using planners as in \cite{wilde2020active}, should be studied for real-time online learning systems.}
The high estimate of $\sigma$ in the user studies suggests the proposed probabilistic model may be inaccurate. Future work should refine the user model, including interactively learning $\sigma$; \textcolor{blue}{or fit a new user model that does not necessarily assume Gaussian noise}.
Surprisingly, users did not perceive scale feedback as more expressive. This could be addressed by improving the interface design as well as by designing a query generation method that actively exploits \textcolor{blue}{the slider's expressiveness}.
\acknowledgments{This research is partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors would also like to acknowledge funding by NSF grants \#1849952 and \#1941722, FLI grant RFP2-000, and DARPA.}
\section{Introduction}
Quantum computation is believed to be more powerful
than classical computation, mainly due to the celebrated
Shor algorithm\cite{Shor94}.
It is yet unclear whether and how quantum computers will
be physically realizable,\cite{Loyd93,DiV95,CZ}
but as any physical system, they {\em in principle}
will be subjected to noise, such as decoherence\cite{Zur91,Unr94,PSE95},
and inaccuracies.
Without error corrections, the effect of noise can
ruin the entire computation\cite{Unr94,Chu95}, so
we need to protect the computation against quantum noise.
Already the question of protecting quantum information is harder
than the classical analog because one should also protect the
quantum correlations between the quantum bits (qubits).
However, it was shown \cite{CS95,Ste}
that good quantum error correcting codes
exist, a result which was followed by many explicit examples (e.g.,
\cite{Sho95,LMPZ}).
This does not immediately imply the existence of noise resistant
quantum computation, since the computation causes the effect
of faults to spread.
Recently Shor\cite{Sho96}
showed how to use quantum codes in order to perform
fault tolerant quantum computation when the {\it noise rate}, or the
fault probability each time step, per qubit or gate,
is polylogarithmically small.
In this paper, we close the gap
and show how to perform fault tolerant quantum
computation in the presence of constant noise rate, with polylogarithmic
cost in time and size.
The error corrections which were described so far used a combination
of classical and quantum operations.
We would like to define a model of noisy quantum computation\cite{AB96},
such that errors and error corrections can be described
entirely inside this model.
Working in the correct model will enable us to prove the result of this paper.
Sequential quantum computation can not be noise resistant\cite{AB96},
so we work with quantum circuits\cite{Deu89,Yao93}, and since
the state of a noisy quantum system
is in general a
probability distribution over pure states, i.e. a {\it mixed state},
and not merely a pure state as in
the standard model, we use quantum circuits with mixed states\cite{AN96}.
Since noise is a dynamic process which depends on time,
the circuit will be divided to levels, or time steps.
Unlike Yao\cite{Yao93} we allow
qubits to be input and output at different times to the circuit.
This is crucial, since with the restriction that all qubits
are initialized at time $0$, it is not possible to compute
fault tolerantly without an exponential blowup in the size of
the circuit\cite{ABIN}.
Between the time steps, we add the noise process, which is a probabilistic
process:
each qubit or gate undergoes a fault
with independent probability $\eta$ per step,
and $\eta$ is referred to as the {\it noise rate}.
The list of times and places where faults had occurred, namely
the {\it fault path},
is random, and naturally, the function that the
circuit computes is a weighted average
over the noise process.
{~}
\noindent{\bf Computing on encoded states:}
Let us first describe how one can protect quantum information against noise,
using quantum error correcting codes\cite{CS95,Ste}.
The idea behind these codes is, as in classical linear block codes,
to spread the state of one qubit
on a number of qubits, called a ``block'',
such that even if some of the qubits in a block are damaged,
the state of the system can be recovered from the other correct qubits.
If we want to preserve the state of $n$ qubits, we encode it on $n$
blocks. The whole state should be recoverable if
not too many errors occurred in each block.
Quantum codes that can correct $d$ faults
have the property that if not more than $d$ faults
occurred in each block, there is some operation that can recover
the encoded state given the damaged state.
The difference from classical codes is that the quantum correlations should
be recovered as well, and not only the logical states of the qubits.
However, it turns out that applying classical error corrections in one
basis of the Quantum space can correct the logical states, while applying
classical error corrections in some other basis corresponds to corrections
of the quantum correlations.
In order to protect a quantum computation against faults, one
can try to compute on encoded states, and not on the states themselves,
using quantum codes.
The circuit $M_1$ which will compute on encoded states
is defined as a simulation of the original circuit $M_0$.
A qubit in the original circuit transforms to a block of qubits
in the simulating circuit,
and each time step transforms
to a {\it working period} of several time steps.
To simulate the computation done in the $t'th$ time step in $M_0$,
we will apply in $M_1$ the analog computation, on encoded states.
Each gate in $M_0$ will transform to some ``procedure'' in $M_1$,
which computes this gate on encoded states.
The procedure might require ancilla
qubits, so these are added to the circuit $M_1$, and are initialized only
when needed.
If $M_1$ is initialized with some encoded state,
we expect this state to evolve by the computation
along some ``trajectory'', such that at the end of each working period
it encodes the correct corresponding state of $M_0$.
The input and output to $M_1$ will simply be a duplication of the
inputs and outputs of $M_0$, on the corresponding blocks.
We will therefore need to add on each block, before
computation begins, an {\it initialization procedure},
that transforms the duplicated input, i.e., a
string of $0$'s or a string of $1$'s, to the encoding state $|S_0>$
or $|S_1>$.
At the end of the computation we will need the opposite transformation,
so we will use a {\it reading procedure} that transforms a block in the
state $|S_0>$ or $|S_1>$ back to a string of $0$'s or $1$'s.
Computing on encoded quantum states does not
automatically provide protection against faults, since
errors accumulate, and
when the damage has effected too many qubits in one block,
the correct state is no longer recoverable.
In order to be able to correct the state,
error corrections should be applied all the time,
so that the error can not accumulate.
Therefore in each working period in $M_1$ we will add a step of error
corrections of each block.
The working period will be divided into two stages: a computation stage
and a correction stage.
The idea is therefore to apply alternately computation stages and
correction stages, hoping that during the computation stage
the damage that accumulated is still small enough so that
the corrections are still able to correct it.
One should notice that this ``hope'' does not always come true.
During the
computation faults can ``spread'' to places not in the fault path.
This spread can happen if a gate operates on a damaged qubit and some
``correct'' qubits -- in general, this can cause all the qubits that
participate in the gate to become damaged.
If, for example, a gate procedure consists of one gate operating on the
whole block, then one fault is already unrecoverable.
The procedures should be designed in such a way that
a fault cannot affect all the qubits in the block.
In general, a fault at time $t$ in qubit $q$ can affect
qubit $q'$ at time $t'>t$ if there is a path in the circuit
connecting the points $(q,t)$ and $(q',t')$.
Since we want to be able to correct after each computation stage,
we can define the ``spread'' of a procedure as the maximal number of qubits
in each block in the output of the procedure which are affected by
a fault that happened in this procedure.
If we use only procedures with small spread,
then if not too many errors happened during the computation stage
in each procedure, the error corrections will still be able to
correct the damage using the other undamaged qubits.
We are actually looking for a pair of a
quantum code which can correct $d$ errors,
and a corresponding set of {\it universal} gates, such that
their procedures, with respect to the code,
allow one fault to spread to at most $d$ qubits.
Since the error corrections, initialization, and reading procedures
are also subjected to faults, we need them to
have small spread too.
Such codes, with the corresponding universal set of gates,
will be called {\it quantum computation codes}.
We now want to show that the reliability of the simulating circuit
is larger than that of the original circuit.
In the original circuit, if one fault occurred the
computation failed, but the simulating circuit can tolerate
a number of errors, say $k$, in each procedure, since the error corrections
correct them afterwards.
The effective noise rate of $M_1$
is thus the probability for more than $k$ errors in
a procedure, and it will be smaller than the actual noise rate $\eta$,
if the parameters are chosen correctly.
However, it seems that an improvement from a constant noise rate
to polynomially small effective noise rate, as we need for fault tolerance,
is hard to get in the above scheme.
In \cite{Sho96} it is shown how to apply the above scheme, with a specific
quantum computation code, to achieve
fault tolerance when the noise rate $\eta$ is taken to be
logarithmically small.
{~}
\noindent{\bf Improvement to constant noise rate}
To improve this result, we use {\it concatenated simulations},
which generalizes the work of Gacs \cite{Gacs89} to the quantum case.
The idea
is that since the simulating circuit is
also a circuit, its effective noise rate
can be improved by simulating it again,
and so on
for several levels.
Even a very small improvement in each level will suffice since the
improvement is exponential in the number of levels.
Such a small improvement can be achieved when using a code of constant
block size, if the noise rate is smaller than some threshold $\eta_0$.
Note that each level simulates the error corrections in the
simulated level, and adds error corrections in the current level.
The final circuit thus includes error corrections of all the levels,
where the computation of error corrections of high levels is corrected all
the time in small levels.
The lower the level, the more often are
error corrections of this level applied. This is in correspondence with
the fact that lower levels work on smaller blocks,
that are more likely to be quickly damaged.
The whole scheme relies heavily on the fact that the places where
errors occur are random and independent.
Most of the faults will probably be corrected by error corrections
in the first level,
and if this does not work, then, using other blocks which were
corrected in the first level, the error will probably be corrected
in the second level. The probability that the highest-level blocks
are not recoverable is polynomially small.
Not every quantum computation code can be used in the above scheme.
There are two restrictions: (1)
When applying the simulation, we replace the gates by fault tolerant
procedures.
Since we want to simulate the new circuit as well, we need
that these procedures use gates that can be replaced by
fault tolerant procedures as well.
Hence the universal set of gates associated with the
quantum computation code that we use, must have fault tolerant
procedures which use gates from the same universal set of gates.
This is the ``closeness'' restriction.
(2) Let us consider a two level simulation.
If a simulated error correction operates on an encoded wrong word, it clearly
corrects it.
But what happens if it gets as an input some state which
does not encode any word?
The simulated error correction ``understands'' only encoded words.
If we demand that the error correction in the lower level
corrects any state to some word in the code, then the input for the
error correction of the upper level may be wrong, but it will be
an encoded word, so it can be understood and corrected by the upper level.
The second restriction is therefore that the error correction
takes any state to some word in the code.
Quantum computation codes which satisfy both restrictions are called
{\it proper quantum computation codes}.
{~}
\noindent{\bf Explicit proper codes over $F_p$}
We describe two classes of proper quantum computation codes.
We first describe the class of quantum codes in \cite{CS95}
generalized to codes over $F_p$, for any prime $p$.
These codes are defined for general quantum circuits which
consist of particles with $p\ge 2$ possible states.
We call such quantum particles
{\it qupits}, as a generalization to qubits.
Non binary codes where defined independently also
by Chuang\cite{Chu96} and Knill\cite{Knill}.
The proofs that these are quantum codes\cite{CS95}
transform smoothly from $F_2$ to $F_p$.
For $p=2$, we show how to convert these codes to proper quantum codes, using
the set of gates and procedures described in \cite{Sho96},
which are modified to fit the definition of
proper quantum codes.
The second example of proper quantum codes
uses a special case of the linear quantum codes,
which is the quantum analog of random polynomial codes\cite{BGW}.
The requirements imposed on quantum computation codes are actually
very similar
to the requirements that are imposed on the secret sharing schemes
that are used to perform secure fault-tolerant distributed computation
\cite{BGW}. In both cases measuring a few qubits, or reading a few
secret shares, must give no information about the unencoded data,
and in both cases we must show how to perform computation
in a fault-tolerant way on the encoded data.
To adopt the techniques of \cite{BGW} to the quantum setting one
can use the same encoding but instead of selecting a random
polynomial to share a secret we simply pick the superposition
of all those polynomials.
The noise rate threshold is shown to be larger than $\simeq 10^{-6}$.
{~}
The results of this paper hold also with
a more general noise model, in which
for all integer $k$, for any set of $k$
points, the probability that a fault occurred in all the points is bounded by $\eta^k$,
and an adversary can pick a
transformation that operates on all the damaged qubits together.
The same result is true for quantum circuits which are
allowed to operate only on nearest neighbor qubits
(in this case the threshold will be smaller).
Similar results to those of this paper
were independently discovered by Knill, Laflamme and Zurek\cite{KLZ}.
{\bf Organization of paper:} In section 2 we define the model of noisy
quantum circuits.
Section 3 is devoted to describe one step of the simulation,
given any quantum computation code.
In section 4 we present concatenated simulations and prove that
this scheme provides noise resistant computation in the presence of
a constant noise rate, given any proper quantum code.
In section 5 we present two explicit classes of
proper quantum computation code.
Section 6 discusses possible extensions, conclusions and open questions.
\section{Noisy Quantum Circuits}
In this section we
recall the definitions of quantum circuits\cite{Saq,Deu89,Yao93}
with mixed states\cite{AN96}.
We then define noisy quantum circuits, which model
a physical realization of quantum
circuits\cite{AB96}.
The faults are probabilistic, and occur in single qubits and in gates.
\subsubsection{Pure states}
We deal with systems of $n$ two-state quantum
particles, or ``qubits''. The {\em pure state} of such a system
is a unit vector, denoted $|\alpha\rangle$,
in the $2^{n}$ dimensional complex space $\cal{C}$$^{2^{n}}$.
We view $\cal{C}$$^{2^{n}}$ as a
tensor product of $n$ two dimensional spaces, each corresponding to a qubit:
$\cal{C}$$^{2^{n}}= \cal{C}$$^{2}\otimes...\otimes\cal{C}$$^{2}$.
As a basis for $\cal{C}$$^{{2}^{n}}$,
we use the $2^{n}$ orthogonal {\it basic states}:
$|i\rangle=|i_{1}\rangle\otimes
|i_{2}\rangle....\otimes|i_{n}\rangle,0\le i< 2^{n}$,
where $i$ is in binary representation,
and each $i_{j}$ gets 0 or 1.
A general unit vector $|\alpha\rangle$ in $\cal{C}$$^{2^{n}}$ is
called a ``pure state'', and
is a {\em superposition}
of the basic states:
$|\alpha\rangle = \sum_{i=1}^{2^{n}} c_{i}|i\rangle$,
with $\sum_{i=1}^{2^{n}} |c_{i}|^{2}=1$. $|\alpha\rangle$ corresponds
to the vector
$v_{\alpha}=(c_{1},c_{2},...,c_{2^{n}})$.
$v_{\alpha}^{\dagger}$, the complex conjugate of $v_{\alpha}$,
is denoted $\langle\alpha|$.
The inner product between $|\alpha\rangle$ and $|\beta\rangle$
is $\langle\alpha|\beta\rangle=
(v_{\alpha},v^{\dagger}_{\beta})$.
The matrix $v_{\alpha}^{\dagger}v_{\beta}$
is denoted as $|\alpha\rangle\langle\beta|$.
An isolated system of n qubits
develops in time by a unitary matrix,
of size $2^{n} \times 2^{n}$:
\( |\alpha(t_{2})\rangle = U|\alpha(t_{1})\rangle.\)
A quantum system in $\cal{C}$$^{{2}^{n}}$ can be {\em observed} by
{\em measuring} the system.
An important
measurement is a {\it basic
measurement} of
a qubit $q$, whose possible outcomes are $0,1$.
For the state $|\alpha\rangle=\sum_{i=1}^{2^{n}} c_{i}|i\rangle$,
the probability for outcome $0$ is $p_{0}= \sum_{i, i|_{q}=0}|c_{i}|^{2} $
and the state of the system
will {\em collapse} to
$|\beta\rangle=\frac{1}{p_{0}}\sum_{i, i|_{q}=0} c_{i}|i\rangle$,
(the same for $1$).
A unitary operation $U$ on $k$ qubits
can be applied on n qubits,
$n\geq k$, by taking the {\bf extension} $\tilde{U}$ of $U$,
i.e. the tensor product of $U$ with an identity matrix on
the other qubits.
The definition can be generalized to circuits which operate on $p$-state
quantum particles, or {\it qupits} (simply replace
$2$ by $p$ in the
definitions above).
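As a small numerical illustration of the extension $\tilde{U}$ (our own sketch; the convention below places $U$ on the first $k$ qubits):
\begin{verbatim}
import numpy as np

def extend(U, k, n):
    # Tensor product of a k-qubit unitary U with the identity on the
    # remaining n - k qubits, giving a 2^n x 2^n unitary.
    return np.kron(U, np.eye(2 ** (n - k)))

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
H3 = extend(H, 1, 3)                           # acts on qubit 0 of 3
\end{verbatim}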
\subsubsection{ Mixed states}
A system which is not ideally isolated from
it's environment is described by a {\em mixed state}.
There are two equivalent descriptions of mixed states:
mixtures and density matrices.
We use density matrices in this paper.
A system in the {\bf mixture} $\{\alpha\}=\{p_{k},|\alpha_{k}\rangle\}$
is with probability $p_{k}$
in the pure state $|\alpha_{k}\rangle$.
The rules of development in time and measurements
for mixtures are
obtained by applying {\bf classical} probability to
the rules
for pure states.
A density matrix $\rho$
on $\cal{C}$$^{2^{n}}$ is a Hermitian positive semidefinite complex matrix
of dimensions $2^{n}\times 2^{n}$,
with $tr(\rho)=1$.
A pure state $|\alpha\rangle=\sum_{i} c_{i}|i\rangle$
is associated with the density matrix
\(\rho_{|\alpha\rangle} = |\alpha\rangle\langle\alpha|\), i.e.,
\(\rho_{|\alpha\rangle}(i,j)= c_{i}c_{j}^{*}.\)
A mixture
$\{\alpha\}=\{p_{l},|\alpha_{l}\rangle\}$,
is associated with the density matrix:
\(\rho_{\{\alpha\}} = \sum_{l} p_{l} \rho_{|\alpha_{l}\rangle}.\)
The operations on a density
matrix are defined such that the correspondence to mixtures is preserved.
If a unitary matrix $U$ transforms the mixture
\(\{\alpha\}=\{p_{l},|\alpha_{l}\rangle\}\) to
\(\{\beta\}=\{p_{l},U|\alpha_{l}\rangle\},\)
then
\(\rho_{\{\beta\}} = \sum_{l} p_{l}
U|\alpha_{l}\rangle\langle\alpha_{l}|U^{\dagger}=
U\rho_{\{\alpha\}}U^{\dagger}.\)
A basic measurement of the $j$'th qubit in $\rho$
gives the outcome $0$ with probability equal to
the sum of the diagonal terms of $\rho$ which relate to
the basic states $i$ with $i_j=0$:
$pr(0)=\sum_{i=1}^{2^{n}} \rho_{i,i} \delta(i_j=0)$.
Conditioned on the outcome being $0$,
the resulting density matrix is $O_{0}\circ(\rho)$, which is the minor
of $\rho$ which includes only rows and columns which relate
to basic states $i$ with $i_j=0$.
(This minor should of course be normalized to have trace one).
Without conditioning on the outcome
the resulting density matrix will be
\(O\circ(\rho)=
Pr(0) O_{0}\circ(\rho)+Pr(1) O_{1}\circ(\rho), \)
which differs from $\rho$ only in
that the entries of $\rho$ which connect
$0$ and $1$ on the measured qubit are set to zero.
Given a density matrix $\rho$ of n qubits,
the reduced density matrix of a subsystem $A$
of, say, $m$ qubits is defined as an average over the states of
the other qubits:
\( \rho|_{A}(i,j)= \sum_{k=1}^{2^{n-m}} \rho(ik,jk)\).
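For concreteness, a short numpy sketch of this reduction (a partial trace), assuming the subsystem $A$ consists of the first $m$ qubits:
\begin{verbatim}
import numpy as np

def reduce_to_first(rho, m, n):
    # rho|_A(i, j) = sum_k rho(ik, jk): trace out the last n - m qubits.
    d_a, d_b = 2 ** m, 2 ** (n - m)
    rho = rho.reshape(d_a, d_b, d_a, d_b)
    return np.trace(rho, axis1=1, axis2=3)
\end{verbatim}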
\subsection{\bf Quantum circuits with mixed states}
{\em A quantum unitary gate}, $g$, of order $k$ is a complex unitary
matrix of size $2^{k} \times 2^{k}$.
A density matrix $\rho$ will transform
by the gate to \(g\circ\rho = \tilde{U}\rho\tilde{U}^{\dagger}\),
where $\tilde{U}$ is the extension of $g$.
{\em A Quantum circuit}
is a directed acyclic graph
with $n$ inputs and $n$ outputs.
Each node $v$ in the graph is labeled by a quantum gate $g_{v}$.
The in-degree and out-degree of $v$ are equal to the order of $g_{v}$.
Some of the outputs are labeled ``result'' to indicate that
these are the qubits that will give the output of the circuit.
The wires in the circuit correspond to qubits.
An initial density matrix $\rho$
transforms by a circuit $Q$ to a
final density matrix $Q\circ \rho =
g_{t}\circ...\circ g_{2}\circ g_{1}\circ \rho$,
where the gates $ g_{t}...g_{1}$ are applied in a topological order.
For an input string $i$,
the initial density matrix is $\rho_{|i\rangle}$.
The output of the circuit is the outcome of applying basic measurements
of the result qubits, on the final density matrix
$Q\circ\rho_{|i\rangle}$. Since the outcomes of measurements
are random, the function that the circuit computes is a
{\it probabilistic function}, i.e. for input $i$ it outputs
strings according to a distribution which depends on $i$.
\subsection{Noisy Quantum Circuits}
As any physical system, a quantum system is subjected to noise.
The process of noise is dynamic and depends on time.
We therefore divide the quantum circuit to levels, or time steps.
We permit that qubits are input and output at different
times, and we say a qubit is {\it alive}
from $t_{1}$ to $t_{2}$ if it is input to the circuit
at $t_{1}$ and output at $t_{2}$.
The ``space time'' of the noisy quantum circuit is a two-dimensional array,
consisting of all
the pairs $(q,t)$,
of a qubit $q$ and time $t$, where the qubit $q$ is alive at time $t$.
The volume of the circuit $M$ is the number of points in its
space time, and it is denoted by $V(M)$.
In our model of noisy quantum circuits
each qubit and each gate are
damaged with probability $1/2>\eta>0$ each time step.
The damage operates as follows:
A unitary operation operates on the qubit (or, in the case of a gate
damage, on the qubits that are output from the gate)
{\it and} on a state of
the environment (The environment can be represented by $m$
qubits in some state).
This operation results in a density matrix of the $n+m$ qubits.
We reduce this density matrix to the $n$ qubits of the
circuit to get the new density matrix after the damage.
A noise step happens between two levels, or two time steps of computation,
and in a noise step each gate or qubit is damaged with probability $\eta$.
The density matrix of the circuit develops by applying alternately
computation steps and noise steps.
Each ``run'' of the computation is subjected to a specific ``fault path'',
which indicates where and when faults occurred.
Each run ends up with some output.
The function computed by the noisy quantum circuit is naturally
the average over the outputs, on the probabilistic process of noise.
\section{Quantum computation on encoded states}
In the following section we define quantum codes
and quantum computation codes.
Then we describe how to improve reliability of computation using
quantum computation codes.
Quantum computation codes are used
abstractly in this section, while
explicit examples of such codes are given in section \ref{explicit}.
\subsection{Quantum block codes}
A quantum linear block code is a function $\phi$
from the Hilbert space
of a qubit to a Hilbert space of $m$ qubits:
$\phi: \cal{C}$$^{2} \longmapsto \cal{C}$$^{2^{m}}.$
$m$ is called the size of the block in the code.
Such a code induces a linear function from $\cal{C}$$^{2^{n}}$
to $\cal{C}$$^{2^{mn}}$ in the following way:
a pure state in $\cal{C}$$^{2^{n}}$, $|\alpha\rangle=\sum_{i} c_i |i\rangle$
will transform to
$|\beta\rangle=\sum_{i} c_i
\phi|i_{1}\rangle\phi|i_{2}\rangle...\phi|i_{n}\rangle$.
A pure state in the image of $\phi$ is called a word in the code.
The above definition can be extended to density matrices:
A mixed state of $n$ qubits will be encoded by the corresponding
probability distribution over the encodings of the pure states.
A mixture of words in the code is also said to be in the code.
The error in the encoded state is sparse if not too many qubits in each block
are damaged.
Using this notion of sparse sets, we can define a metric
on block density matrices.
They will be close if the deviation between them
is confined to a sparse set of qubits:
\begin{deff}
Let $B$ be the qubits in $n$ blocks of $m$ qubits.
A set $A\in B$ of qubits is said to be $k-$sparse
if in each block there are not more than $k$ qubits in $A$.
Two density matrices $\rho_1$ and $\rho_2$ of the qubits $B$
are said to be $k-$deviated
if there is an $k-$sparse set of qubits, $A$, such that
reduced on all qubits except those in $A$ they are equal:
$\rho_1|_{B-A}=\rho_2|_{B-A}$.
\end{deff}
The deviation is a metric since it satisfies the triangle inequality,
and it is zero only for equal density matrices.
The quantum code corrects $d$ errors if there is some recovery procedure
$R$ such that, when applied on all the blocks of any density matrix that
$d$-deviates from a code word $w$, the output is the recovered
word $w$ (perhaps tensored
with ancilla qubits used by the correction procedure).
\subsection{Quantum computation codes}
A computation code is a quantum code which provides a way to perform
gates on the encoded states fault tolerantly.
The procedure $Pg$ that simulates a gate $g$ with respect to a quantum code
is a sequence of gates which transforms the encoded state to the
encoded output of the gate: $Pg(\phi(i)\otimes |0>)=\phi(g(i))\otimes |a>$,
where we have used extra ancilla qubits.
These qubits are not counted as the inputs or outputs of the procedure.
A quantum procedure
is said to have spread $l$ if no qubit or gate affects more than
$l$ outputs of the procedure.
We will need procedures with small spread for fault tolerant computation.
Since we want to convert any arbitrary circuit to a more reliable one,
we need the set of gates that have $l$-spread
procedures to be universal.
\begin{deff}
A quantum block code $C$ is said to be a quantum computation code
with spread $l$
if there exists a universal set of quantum gates $G$ such that (1)
for any gate $g\in G$ there exists a
procedure $P_g$ with respect to $C$, with spread $ l$,
and (2) There exist
correction, initialization and reading procedures with spread $l$.
\end{deff}
\subsection{Improving reliability by block simulations}
To simulate some circuit by a quantum computation code,
we first convert it to a circuit which uses only gates from
the universal set of the code. Then we simulate this new circuit
as explained above:
we convert each qubit to a block of qubits, each time step to
a working period, and each gate
to the corresponding procedure, and besides that we
add in each working period
a correction procedure on each block.
Apart from all that, we also add initialization procedures before the first
working period of each computation block and a reading procedure
after the last working period of each result block.
The space-time of the simulating circuit $M_1$ can be divided into
rectangles, where each rectangle will correspond to one procedure,
in the following way:
First, divide the time into alternating stages:
computation stages, in which one time step of $M_0$ is simulated,
i.e., one level of gate procedures is applied, and
correction stages, in which one level of
error correction procedures is applied.
Each stage is a ``strip'' in the space time.
Each strip can be divided into rectangles by dividing the qubits into sets:
a correction strip will be divided such that in each rectangle
the correction of one block is computed.
In a computation strip, we divide the strip into rectangles
by dividing the qubits into sets, where each set of qubits participates
in exactly one procedure.
Each rectangle thus corresponds to one procedure.
We show that if a fault path in $M_1$ is such that no more than a few
faults occurred in each rectangle, then indeed the
computation succeeds. The number of faults allowed in one rectangle
is bounded so that, when taking into account the spread of the faults,
the number of qubits affected in each block
at the end of one working period
is not too large, so that the density matrix can still be recovered.
\begin{deff}
A fault path of a circuit $M_1$ that block simulates $M_0$
is said to be a ``$k-$sparse fault path''
if no more than $k$ faults occured in each rectangle.
\end{deff}
\begin{lemm}\label{onelevel}
Let $C$ be a quantum computation code that corrects $d$ errors,
with spread $l$.
Let $M_1$ be a block simulating circuit.
Consider a computation of $M_1$ subjected to a $k-$sparse fault path
with $d\ge 2kl$.
At the end of each working period
the density matrix is $d-$deviated from the correct one.
\end{lemm}
{\bf Proof:}
We will prove by induction on $t$ a stronger assertion,
that at the end of the $t'$th working period
the density matrix is $d/2-$deviated from the correct one.
For $t=0$ the deviation is zero.
Suppose that the density matrix at the end of the $t'$th working period
is $d/2$-deviated from the correct
matrix. After the computation stage, not more than
$kl$ qubits are effected in each block, so the
density matrix is $kl+d/2$-deviated. Since $kl+d/2\le d$, the correction
procedure indeed corrects the error, except that during
the corrections new errors occur.
Again, the number of affected qubits is not more than $kl$
in each block, and all the other qubits transform as they should, so they
are corrected. Hence after the correction procedure the matrix is
$kl$-deviated. Since $kl\le d/2$ this proves the induction step.\bbox
We can now compute the effective noise rate of $M_1$.
This will be smaller than $\eta$ if the parameters are chosen correctly:
\begin{theo}
Let $M_1$ simulate $M_0$ by the computation code $C$,
which corrects $d$ errors,
has spread $l$, and has all rectangles smaller than $a$.
The effective noise rate of $M_1$ is
\(\le
2\left(\begin{array}{c}a\\d/2l+1\end{array}\right)\eta^{\frac{d}{2l}+1}\).
\end{theo}
{\bf Proof:}
If the fault path in $M_1$ is $d/2l$ sparse, the final density matrix
is $d-$deviated from the correct one by lemma \ref{onelevel}.
Measuring all the qubits in the result blocks, and taking
majority in each block, gives the correct answer by lemma \ref{maj}
in the appendixes.
The number of the rectangles in $M_1$ is less than twice the number
of points where faults can occur in $M_0$.
Therefore the effective noise rate is smaller than the probability for
not more than $d/2l$ faults in two rectangles of $M_1$.
The probability for a rectangle to have more than
$k$ faults is smaller
than the number of possibilities to choose $k+1$ points in the rectangle,
times the probability for these points to have a fault, which gives the result.
\bbox
\section{Concatenated simulations}
In this section, we define proper quantum code,
and concatenated simulations by such codes.
We prove that the reliability of the computation
can be improved to a constant using
$log(log(n))$ levels of simulations, when the noise
is smaller than some constant imposed by the parameters of the code.
\subsection{Improving reliability to a constant}
We define a proper quantum code:
\begin{deff}
A quantum computation code which is associated a set of gates $G$
is proper if (1) The gate procedures, and the
initialization, reading
and correction procedures use only gates from $G$, and
(2) The correction procedure
takes any density matrix to some word in the code.
\end{deff}
Let $M_0$ be a quantum circuit.
We define recursively $M_r$,
an $r$-simulating circuit of a circuit $M_0$ by the proper quantum
computation code $C$, as the simulation by $C$ of $M_{r-1}$.
The recursive simulations induce a definition of $s$-blocks:
Every qubit in one stage of the simulation transforms to a block of $m$
qubits in
the next stage of the simulation, and this block transforms to $m$ blocks and
so on.
One qubit in $M_{r-s}$ transforms to $m^s$ qubits in $M_r$.
This set of qubits in $M_r$ is called an $s$-block.
A $0$-block in $M_r$ is simply a qubit.
The recursive simulations also induce a definition of $s$-rectangles:
Each space time point in $M_{r-s}$ transforms to a
set of space time points in the following simulation $M_{(r-s+1)}$,
which in their turn transform to more points in the following
stages of the simulation.
The set of all these points in $M_r$ that originated from one
space time point in $M_{(r-s)}$ are called an $s$-rectangle.
The definition of $s$-rectangles defines a division of the
space time of $M_r$, and this division is
a refinement of the division into $(s+1)$-rectangles.
A $0$-rectangle is just a space time point in $M_r$.
A density matrix of $M_r$ is recoverable if it deviates on a
``sparse'' set of qubits, i.e., if in each level there are enough
blocks that can be recovered. If at some level there are enough blocks that
can be recovered, the other blocks will be corrected by the error corrections
in the upper level.
\begin{deff}
Let $B$ be the set of qubits in $n$ $r$-blocks.
An $(r,k)$-sparse set of qubits $A$ in $B$ is
a set of qubits in which for every $r-$block in $B$, there are at most
$k$ $(r-1)-$blocks such that the set $A$ in these blocks
is not $(r-1,k)$ sparse.
A $(0,k)$-sparse set of qubits $A$ is the empty set of qubits.
Two density matrices $\rho_1,\rho_2$, are said to be
$(r,k)$-deviated if there exist an $(r,k)$-sparse set of qubits $A\in B$,
such that $\rho_1|_{B-A}=\rho_2|_{B-A}.$
\end{deff}
The deviation is a metric, as is shown in the appendixes, lemma
\ref{sparsemetric}.
We define ``sparse'' fault paths, that do not increase the deviation
in this metric too much:
\begin{deff}
A set of space time points in an $r-$rectangle
is said to be $(r,k)$-sparse if there are no more than $k$
$(r-1)-$ rectangles, in which the set is not $(r-1,k)$-sparse.
A $(0,k)$-sparse set in a $0$-rectangle (which is one space time point)
is the empty set. A fault path in $M_r$ is $(r,k)$-sparse
if in each
$r$-rectangle, the set of faults is $(r,k)$-sparse.
\end{deff}
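The recursive structure of this definition is easy to state in code; a sketch (with faults represented as booleans at the $0$-rectangles, a representation we choose purely for illustration):
\begin{verbatim}
def is_sparse(rectangle, r, k):
    # rectangle: nested lists whose depth-r leaves are booleans marking
    # whether a fault occurred at that space-time point (0-rectangle).
    if r == 0:
        return not rectangle   # a (0,k)-sparse set is the empty set
    bad = sum(1 for sub in rectangle if not is_sparse(sub, r - 1, k))
    return bad <= k            # at most k non-sparse (r-1)-rectangles
\end{verbatim}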
We claim that if the fault path is sparse enough,
then the error corrections keep the deviation small.
\begin{lemm}\label{Theproof}
Let $C$ be a proper code that corrects $d$ errors,
with spread $l$.
Let $M_r$ be a medium which $r-$simulates $M_0$ by $C$.
Consider a computation subjected to an $(r,k)$-sparse fault path
with $2kl\le d$.
At the end of each $r-$working period
the density matrix is $(r,d/2)$-deviated from the correct one.
\end{lemm}
{\bf Proof:}The proof is by induction on $r$.
The structure of the proof goes as follows: we
first prove by induction on the number of levels, $r$,
three assertions together. Assuming that the fault path is $(r,k)$-sparse,
an $r$-computation stage
does not cause too much $r-$deviation,
an $r-$correction corrects a small enough $r-$deviation,
and an $r-$correction brings any state to a density matrix which is not
too $r-$deviated from some word in the code.
The first two assertions applied
alternately for the $r-$computation and $r-$correction stage in $M_r$,
will give the desired result.
The proof involves many details, and
is given in the appendixes.
\bbox
We can now prove the main result of this paper:
\begin{theo}\label{Thetheo}
Let $C$ be a quantum computation code
which corrects $d$ errors,
has spread $l$, and whose procedures are all of size smaller than $a$.
Let $M_0$ be a quantum circuit, with size $s$ and depth $t$.
There exists a quantum circuit
$M_r$ of size $O(s\cdot polylog(s))$ and depth $O(t\cdot polylog(t))$,
such that in the presence of noise $\eta$ which satisfies
\(\left(\begin{array}{c}a\\d/2l+1\end{array}\right)\eta^{d/2l}< 1\)
$M_r$ outputs the correct answer with probability $\ge 2/3$.
\end{theo}
{\bf Proof:}
If the fault path is $(r,d/2l)$-sparse, then the final density matrix is
indeed $(r,d)$-deviated from the correct one, by lemma \ref{Theproof}.
Measuring all result blocks in a density matrix
$(r,d)$-deviated from the correct final density matrix, and taking
majority in each $r$-block, gives the correct answer by lemma \ref{maj}
in the appendixes.
Hence the probability for $M_r$
to succeed is larger than the probability for a fault
path to be $(r,d/2l)$-sparse. With the assumption on $\eta$,
the probability for one rectangle to have a set of faults which is
$(r,d/2l)$-sparse can be shown to be exponentially (in $r$) close to one.
Again, details can be found in the appendix.
Taking $r$ to be $O(\log(\log(V(M_0))))$ makes this probability high enough.
Since the growth in cost is exponential in $r$,
(we use codes with a constant size), and the number of levels is
$O(\log(\log(V(M_0))))$, the cost
is polylogarithmic.
\bbox
{~}
{\bf Remark:} Theorem \ref{Thetheo} requires
that the code can correct $d>1$ errors.
A similar result holds for $d=1$, with the threshold
\(\left(\begin{array}{c}b\\2\end{array}\right)\eta< 1\)
where $b$ is the maximal size of slightly different rectangles,
defined to contain a computation and a correction procedure together.
The proof is almost the same.
In some cases this threshold is better. \bbox
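Numerically, the condition of Theorem \ref{Thetheo} gives a threshold $\eta_0=\binom{a}{k+1}^{-1/k}$ with $k=d/2l$; a sketch with illustrative parameters (not those of any specific code):
\begin{verbatim}
from math import comb

def threshold(a, d, l):
    # Largest eta with C(a, k+1) * eta^k < 1, where k = d / (2l).
    k = d // (2 * l)
    return comb(a, k + 1) ** (-1.0 / k)

print(threshold(a=300, d=2, l=1))   # ~2.2e-5 for these sample values
\end{verbatim}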
\section{Explicit proper quantum computation codes}\label{explicit}
Linear quantum codes~\cite{CS95} are presented, constructed from classical codes
over $F_p$, and shown to be proper for $p=2$.
A subclass of linear codes, polynomial quantum codes, is defined
and shown to be proper.
\subsection{Linear quantum codes over $F_p$. }
A linear code of length $m$ and dimension $k$ over the field $F_p$
is a subspace of dimension $k$
of $F_p^m$, the $m$-dimensional
vector space over the field of $p$ elements.
Given two linear codes $C_1$ and $C_2$ such that
$\{0\}\subset C_2 \subset C_1 \subset F_p^m$ consider
the following set of quantum states in the Hilbert
space ${\cal C}^{p^{m}}$:
\[\forall a\in C_1: |S_a>=p^{-(m-k)/2}\sum_{v\in C_2}|a+v>.\]
If $(a_1-a_2)\in C_2$ then $|S_{a_1}>=|S_{a_2}>$; otherwise
$<S_{a_1}|S_{a_2}>=0.$
Hence these states form a basis for a linear subspace of
the Hilbert space ${\cal C}^{p^{m}}$,
with dimension $z=p^{\dim(C_1)-\dim(C_2)}$. This subspace is our quantum code.
Define a second basis of this subspace to be:
\[\forall a\in C_2^{\perp}: |C_a>=\frac{1}{\sqrt{z}}
\sum_{b\in C_1/C_2}w^{a\cdot b}|S_b>~ ,~ w=e^{\frac{2\pi i}{p}}.\]
If $C_1$ and $C_2^{\perp}$ both have minimum weight $d$,
then the quantum code can correct $t=\lfloor\frac{d-1}{2}\rfloor$ errors,
by applying classical error correction with respect to the code
$C_1$, first in the $S$-basis and then in the $C$-basis.
The proofs in \cite{CS95} carry over smoothly
to this general case.
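
The coset structure behind these states is easy to check numerically.
The following sketch (ours; the generator matrices are toy choices for
$p=2$, $m=3$, picked only for illustration) builds the states $|S_a>$
and verifies that equal cosets of $C_2$ give equal states while
different cosets give orthogonal ones:
\begin{verbatim}
import itertools
import numpy as np

p, m = 2, 3

def span(gens):
    # All F_p linear combinations of the generator rows.
    vecs = {(0,) * m}
    for coeffs in itertools.product(range(p), repeat=len(gens)):
        v = sum(c * np.array(g) for c, g in zip(coeffs, gens)) % p
        vecs.add(tuple(v))
    return sorted(vecs)

C1 = span([(1, 1, 0), (0, 1, 1)])   # dimension 2
C2 = span([(1, 1, 0)])              # dimension 1, C2 inside C1

def S(a):
    # |S_a> = |C2|^(-1/2) * sum over v in C2 of |a+v>
    psi = np.zeros(p ** m)
    for v in C2:
        idx = int("".join(str(x) for x in (np.add(a, v) % p)), 2)
        psi[idx] += 1
    return psi / np.sqrt(len(C2))

# a1 - a2 in C2  ->  identical states
assert np.allclose(S((0, 0, 0)), S((1, 1, 0)))
# a1 - a2 not in C2  ->  orthogonal states
assert abs(np.dot(S((0, 0, 0)), S((0, 1, 1)))) < 1e-12
print("code dimension z =", len(C1) // len(C2))   # -> 2
\end{verbatim}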
\begin{theo}
For $p=2$, linear codes are proper and have spread $l=1$.
\end{theo}
{\bf Proof:}
The universal set of gates which is associated with the above codes is:
$|a,b>\longmapsto |a,a+b>$,
$|a>\longmapsto\frac{1}{\sqrt{2}}\sum_{b}
(-1)^{ab}|b>$,
$|a>\longmapsto |1-a>$,
$|a>\longmapsto (i)^a|a>$, and
$|a,b,c>\longmapsto |a,b,c+ab>$,
where all additions and multiplications are in $F_2$ (i.e., mod $2$).
This set is universal\cite{Sho96}.
The correction procedure is done by applying classical error correction
with respect to the code $C_1$, then transforming to the $C$-basis by
applying bitwise the gate
$|a>\longmapsto\frac{1}{\sqrt{2}}\sum_{b} (-1)^{ab}|b>$, correcting
classically again, and rotating back to the $S$-basis.
The initialization is an error correction, with respect to the code $C_2$
and not $C_1$, which corresponds to the quantum error correction to the code
which consists of $|S_0>$ alone.
The reading procedure is applied by computing, independently $m$ times, the
value $a$ from the state $|S_a>$.
To apply the procedures of all the gates, except the Toffoli gate, we simply
apply the gate bitwise on all the qubits (if the block size is $m$,
we apply the gate $m$ times).
A detailed description
is given in the appendixes.
The spread of all these procedures is $l=1$.\bbox
\subsection{Polynomial quantum codes}
To correct $d$ errors, set $m=4d+1$
and set $p>m+1$.
Let $\alpha_1,\alpha_2,...,\alpha_m$ be $m$ distinct non zero elements
of $F_p$ such that the polynomial
\(G(x)=\Pi_{i=1}^{m}(x-\alpha_i)\)
has a non-zero coefficient of $x^{2d}$.
(Such $\alpha_i$ exist because $|F_p|>m+1$). Denote by
\(V_1=\left\{f(x)\in F_p[x]~|~\deg f(x)\le d\right\},\)
\(V_2=\left\{f\in V_1~|~f(0)=0\right\},\)
\(C_1=\left\{(f(\alpha_1),...,f(\alpha_m))~|~f\in V_1\right\}\subset F_p^m,\)
\(C_2=\left\{(f(\alpha_1),...,f(\alpha_m))~|~f\in V_2\right\}\subset C_1.\)
As before, we use the codes $C_1$ and $C_2$ to define the quantum code:
\[\forall a\in F,~~~~~~
|S_a>=\frac{1}{\sqrt{|V_2|}}\sum_{f\in V_1,f(0)=a}
|f(\alpha_1),...,f(\alpha_m)>.\]
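
The classical part of this construction is a polynomial evaluation code,
which is easy to instantiate. The sketch below (ours; the toy parameters
$d=1$, $p=7$ and the evaluation points $\alpha_i=1,\dots,5$ are chosen
only for illustration) builds $C_1$ and $C_2$ and checks the condition
on $G(x)$:
\begin{verbatim}
import itertools

d = 1
m = 4 * d + 1            # 5
p = 7                    # p > m + 1
alphas = [1, 2, 3, 4, 5] # distinct non-zero elements of F_7

def poly_mul(a, b):      # coefficient lists, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

# G(x) = prod (x - alpha_i) must have a non-zero x^(2d) coefficient.
G = [1]
for a in alphas:
    G = poly_mul(G, [(-a) % p, 1])
assert G[2 * d] != 0

def ev(f, x):            # evaluate the polynomial f over F_p
    return sum(c * pow(x, i, p) for i, c in enumerate(f)) % p

# V1: polynomials of degree <= d; V2: those with f(0) = 0.
V1 = [list(f) for f in itertools.product(range(p), repeat=d + 1)]
C1 = {tuple(ev(f, a) for a in alphas) for f in V1}
C2 = {tuple(ev(f, a) for a in alphas) for f in V1 if f[0] == 0}
assert C2 <= C1
print(len(C1), len(C2))  # 49 and 7: p^(d+1) and p^d codewords
\end{verbatim}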
\begin{theo}
Polynomial codes are proper quantum computation codes with spread $l=1$.
\end{theo}
{\bf Proof:}
The universal set of gates used is:
$\forall~c\in F$, $|a>\longmapsto|a+c>$,
$|a>|b>\longmapsto |a>|a+b>$,
$0\ne c\in F$: $|a>\longmapsto|ac>$,
$|a>|b>|c>\longmapsto |a>|b>|c+ab>$,
$\forall c\in F$ $|a>\longmapsto w^{ca}|a>$, and
for $0<r<p$, the Fourier transform:
$|a>\longmapsto\frac{1}{\sqrt{p}}\sum_{b\in F}w^{rab}|b>.$
Clearly, all classical reversible functions can be generated by this set.
We find an explicit unitary matrix of infinite order
in the group generated by this set.
We then use group representation theory to show that this group is dense
in $SU(n)$. By \cite{Sol},
any given matrix can be approximated at an exponentially fast rate.
The initialization, reading, and correction procedures are exactly
as in the general linear code, where transforming between the
$S$-basis and the $C$-basis is
done by the Fourier transform.
The procedures $|S_a>\longmapsto|S_{a+c}>$, $|S_a>|S_b>\longmapsto |S_a>|S_{a+b}>$,
and $|S_a>\longmapsto|S_{ac}>$
can be performed by applying bitwise
the corresponding gates.
Other procedures use interpolation techniques\cite{BGW}.
\bbox
\section{Generalizations and open problems}
The result implies that quantum computation might be practical if the
noise in the system can be made very small.
The results should
motivate physicists to achieve lower noise rates, and
theoreticians to develop a theory for proper quantum codes, and seek
such codes with
better parameters, to push the threshold as high as possible.
The point at which the physical data meets the theoretical threshold
is where quantum computation becomes
practical.
The results of this paper also hold in the case of
circuits which allow operations only on nearest neighbors.
(We thank Richard Cleve for pointing that out to us.)
This is true since the procedures we use, which are of constant size,
can be made, with constant cost, to operate only on nearest neighbors, by
adding gates that swap between qubits.
However, the bound on $\eta$ in this case will be smaller.
We are grateful to I. Cirac and P. Zoller for the following idea:
In most quantum systems, it is reasonable to assume that different
types of faults occur with different frequencies.
In such systems, one can improve the bound on the noise rate significantly,
by using in most levels of the simulation a quantum code which can correct
only for the most frequent errors,
while for less frequent errors it is enough to correct only once in
a few levels.
Examples will be given in the final version.
It is therefore important to have a good understanding of the
different noise rates for
different types of faults,
for specific potential physical realizations of quantum computers.
Our scheme requires a polylogarithmic blow-up in the
depth of the circuit.
Multilinear quantum codes can reduce
the depth
from a multiplicative factor of $O(\log(n))$ to a factor of
$O(\log(\log(n)))$,
but reducing this to a constant, as in the classical case,
remains an open problem.
\section{Acknowledgments}
We wish to thank Noam Nisan and Peter Shor
for helpful discussions and essential remarks.
We wish to thank Thomas Beth for helpful suggestions, and Richard Cleve
for solving the question of nearest neighbor gates.
We are grateful to Ignacio Cirac and Peter Zoller for the nice idea of
how to improve the results for specific systems.
\small
\bibliographystyle{plain}
\section*{Abstract}
In this note we discuss the possibility of detecting the accompanying X-ray emission from sources of fast radio bursts with the eROSITA telescope onboard the Spektr-RG observatory.
It is shown that during four years of the survey program about 300 bursts are expected to appear in the field of view of eROSITA.
About 1\% of them will be detected by ground-based radio telescopes.
For a total energy release $\sim~10^{46}$~ergs depending on the spectral parameters and absorption in the interstellar and intergalactic media, an X-ray flare can be detected from distances from $\sim 1$~Mpc (thermal spectrum with $kT=200$~keV and strong absorption) up to $\sim1$~Gpc (power-law spectrum with photon index $\Gamma=2 $ and realistic absorption).
Thus, eROSITA observations might help to provide important constraints on parameters of sources of fast radio bursts, or may even allow to identify the X-ray transient counterparts, which will help to constrain models of fast radio bursts generation.
\section{Introduction}
Fast radio bursts (FRBs) are short ($\sim$ms) bright (peak fluxes up to $\sim$100~Jy) radio flashes (for a review, see \cite{Popov:2018}).\footnote{On-line arXiv:1806.03628.} The first event from this class of transients was introduced in 2007 in \cite{lorimer2007}.
Since then, several dozens of such bursts have been detected \cite{petroff2016}. \footnote{See the online FRB catalogue http://www.frbcat.org.}
A large dispersion measure and other considerations suggest an extragalactic origin of this phenomenon. By now, reliable identification has been made only for a single source of repeating fast radio bursts --- FRB 121102 (the notation: year-month-day)\footnote{Since this paper was accepted a new repeating source of FRBs has been discovered (The CHIME/FRB Collaboration 2019).}.
The source is located in a dwarf galaxy with active star formation at redshift $z=0.193$ (corresponding photometric distance is 972 Mpc) \cite{tendulkar2017}.
Nowadays, there are many models explaining the nature of FRB sources (see the catalogue of theories in \cite{platts2018}).
This reflects a wide uncertainty in the description of the nature of these objects.
Nonetheless, the main current models associate FRB with neutron stars.
Magnetic energy dissipation in young neutron stars is one of the most promising hypotheses about the nature of FRBs. Particularly, FRB generation may be associated with magnetar hyperflares (see the review \cite{2015RPPh...78k6901T} for this type of sources and the forms of their activity). This model was proposed immediately after the discovery of the first FRB \cite{popov2007}\footnote{Originally this paper was published just as an e-print. It can be found at arXiv.org: 0710.2006.}.
An important prediction of this model is a simultaneous radiation pulse from FRBs in X-rays and, possibly, gamma-ray ranges (see, for example, \cite{lyubarsky2014} and \cite{murase2016}). The luminosity of the only observed hyperflare from SGR 1806-20 was $\sim10^{47}~$erg s$^{-1}$, and the total energy release was $\sim10^{46}$~erg \cite{2005Natur.434.1107P}.
In this paper we discuss the possibility of registration of X-ray radiation accompanying FRBs with the eROSITA telescope, the detailed description of which is presented in \cite{merloni2012}.
Obtaining a positive result in such observations will give an opportunity to verify (or derive strict limits) the hyperflare model of FRBs.
eROSITA (extended ROentgen Survey with an Imaging Telescope Array) is the primary instrument on the forthcoming Spectrum-Roentgen-Gamma (SRG) mission.
This telescope will be used to study the entire sky in the X-ray band. Over 4 years of operation eROSITA will make 8 full surveys of the sky in the energy range from a few tenths up to 10 keV.
In the soft X-ray band ($\sim$0.5-2 keV) this instrument will be about 20
times more sensitive than the ROSAT satellite. In the hard band (2-10 keV) it will provide the first imaging survey of the whole sky at those energies.
\section{Estimating the number of bursts}
In this section we estimate the number of FRB flares that will fall in the field of view of eROSITA.
To date, various estimates of the number of FRBs qualitatively converge with each other, giving a rate of about several thousand events per day in the entire sky with a flux of more than a few tenths of Jy. In the estimates below, we will use the value of $N_{\Sigma}=10^4$ bursts per day, which matches the analysis carried out in \cite{thornton2013}, \cite{vander2016}.
Given that the field of view of eROSITA is 0.833 square degrees,
approximately 0.2 bursts per day fall into it; these bursts can potentially also be detected by ground-based radio telescopes. On average, one burst will fall into the field of view of the telescope every 5 days, and over 4 years of survey observations the number of such events will be about 300.
It should be noted that a comparable number of bursts should have fallen within the XMM-Newton field of view in $\sim18$ years of operation. However, it is difficult to identify a short non-repeating weak burst. Due to the small number of FRBs registered in the radio band, there were no cases where XMM-Newton or another instrument would have observed the radio burst area at the time of the event. This is primarily due to the low rate of registration of radio bursts during the work of XMM-Newton. In the next few years, the number of detected bursts will increase significantly. It is therefore essential to estimate the number of future detected radio bursts that fall within the field of view of eROSITA.
Currently, both radio telescopes already operating for a long time (64-m Parkes telescope, the Arecibo antenna, Green Bank Telescope) and new instruments such as ASKAP (Australian Square Kilometre Array Pathfinder) and UTMOST (Molonglo Observatory Synthesis Telescope) are actively used to search for FRBs. In addition, the search will be conducted with the new 500-m FAST antenna in China. It is expected that in the near future HIRAX (Hydrogen Intensity Real-time Analysis eXperiment) and CHIME (Canadian Hydrogen Intensity Mapping Experiment) will be able to detect several dozen of flashes per day \cite{rajwade2017}. Optimistic estimates raise this number to one hundred bursts per day for each telescope \cite{walters2018}.
Therefore, a reasonable assumption is that in total ground-based radio telescopes will detect $\sim$1\% of all bursts.
If bursts are distributed uniformly across the sky (a good assumption given the extragalactic origin of FRBs and the absence of correlation with known local extragalactic structures), the number of detected radio bursts falling into the field of view of eROSITA will be
\begin{equation}
N = \frac{N_X N_R}{N_{\Sigma}},
\end{equation}
where $N_R$ is the number of flashes detected by ground-based telescopes and $N_X$ is the number of flashes detected by eROSITA. Hence, it can be estimated that, over the total duration of the survey observations on the SRG satellite, several events ($\sim3$) recorded by ground-based radio telescopes (HIRAX, CHIME, ASKAP, UTMOST, etc.) will fall in the field of view of eROSITA.\footnote{
In addition, one can expect to have approximately one of the FRB recorded by radio telescopes within the ART-XC field of view during the time of the satellite operation.}
This makes it relevant to conduct a more detailed evaluation of the ability of eROSITA to register X-ray flares that may accompany the radio bursts.
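
For transparency, the arithmetic behind these estimates can be
reproduced in a few lines (all input values are the ones quoted above):
\begin{verbatim}
N_sigma = 1e4           # bursts per day over the whole sky
fov = 0.833             # eROSITA field of view, sq. deg.
full_sky = 41253.0      # whole sky, sq. deg.

per_day = N_sigma * fov / full_sky
print(per_day)              # ~0.2 bursts/day in the field of view
N_X = per_day * 4 * 365
print(N_X)                  # ~300 bursts over the 4-year survey

# Eq. (1), assuming radio telescopes catch ~1% of all bursts:
N_tot = N_sigma * 4 * 365
N_R = 0.01 * N_tot
print(N_X * N_R / N_tot)    # ~3 joint radio/X-ray events
\end{verbatim}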
\section{The possibility of recording hyperflares}
In this section we consider parameters of X-ray flares in order to evaluate whether the sensitivity of eROSITA is high enough for their registration.
The duration of the main peak of giant flares and hyperflares from magnetars is $\gtrsim0.1-0.2$ s \cite{1999Natur.397...41H, 2005Natur.434.1098H}. This exceeds the nominal integration time for eROSITA, which is 50 ms. Thus, although some of the incoming photons may not be registered separately, it still can be expected that the hyperflare will lead to several counts on the detector.
We accept that for reliable signal detection, the telescope must register 10 photons from the source for the entire time of the X-ray flash.\footnote{
If we talk about the presence of a radio trigger, i.e. if the arrival time and the coordinates of the flare are known, then the criterion of 10 photons can be significantly softened to 2-3 photons from a compact area corresponding to the angular resolution of the telescope.
On the other hand, the finite integration time (50 ms) of the detector can lead to the situation when those $\sim$10 photons, which arrived during the 100-200 ms flash, will ``overlap'' each other (the pile-up effect) in individual pixels, reducing the number of actually recorded events. Thus, our criterion of 10 arrived photons seems reasonable.}
Assuming that the telescope detects 10 photons, we construct the dependence of the source energy release on distance for several spectral models.
We will examine several options representing a wide range of X-ray burst spectra that can potentially accompany FRBs (primarily those that can correspond to magnetar hyperflares). These are blackbody spectra for temperatures $kT = 30$ keV and $kT = 200$ keV and power-law spectra with photon indexes $\Gamma = 0.5$ and $\Gamma = 2$.
Since the observations are carried out in a fairly soft part of the X-ray spectrum, the photon flux will be significantly attenuated due to interstellar absorption
(in the interstellar medium of the Galaxy, in the intergalactic medium and in the interstellar medium of the source's host galaxy).
At a column density of hydrogen atoms $N_H$ the flux weakens by a factor of $e^{-\sigma N_H} $. To calculate $\sigma$, we use data from \cite{morrison1983}:
\begin{equation}
\sigma=\frac{1}{E}C_2+\frac{1}{E^2}C_1+\frac{1}{E^3}C_0,
\end{equation}
where $E$ is the photon energy and the coefficients $C_0, C_1, C_2$ were taken from the abovementioned paper. Given the evaluative nature of our work, and the fact that we are interested only in the total energy release in a quite wide X-ray band, not using more detailed results on interstellar absorption (see, for example, \cite{wilms2000}) is not critical for our purposes. Note also that in the case of FRBs, a significant part of the absorption should be unrelated to the matter inside our Galaxy, so accurate calculations of the $\sigma$ parameter become even more uncertain.
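
As a sketch of how this enters the computation (our illustration; the
coefficients are the piecewise-constant values tabulated per energy
interval in \cite{morrison1983}, which we do not reproduce here, so
they must be supplied by the caller):
\begin{verbatim}
import numpy as np

def transmission(E_keV, N_H, C0, C1, C2):
    # sigma = (C0 + C1*E + C2*E^2) / E^3, i.e. the formula above;
    # with the coefficients in the units of Morrison & McCammon
    # (1983) the cross section comes out in 1e-24 cm^2 per H atom.
    sigma = (C0 + C1 * E_keV + C2 * E_keV**2) / E_keV**3 * 1e-24
    # Flux attenuation factor exp(-sigma * N_H), N_H in cm^-2.
    return np.exp(-sigma * N_H)
\end{verbatim}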
Also, we take into consideration the dependence of the effective area of the telescope on the wavelength (see Fig. 1).
Several of eROSITA mirror systems will be covered by filters that cut off the soft part of the spectrum. We use data on the effective area assuming that five systems out of seven are covered by such filters.
Relevant data are taken from the site https://wiki.mpe.mpg.de/eRosita.
\begin{figure}
\includegraphics[scale=0.8]{eff-sq-eng.pdf}
\caption{Effective area of the eROSITA telescope versus photon energy. The data are presented under the assumption that five out of seven mirror systems are covered with filters cutting off the softest part of the X-ray spectrum.}
\end{figure}
\subsection{Power-law spectrum}
Models of X-ray flares with power-law spectrum in the application to FRBs have recently been considered in \cite{scholz2017}.
We suggest that the spectrum of an object is given by the equation
\begin{equation}
dN = C E^{ - \Gamma} e^{-(E/E_{cutoff})} dE,
\end{equation}
where $E_{cutoff} = 500$ keV is the cutoff energy of the spectrum, $C$ is the dimensional constant determined from the normalization of the total energy release to $10^{47}$ erg.
Here and below, $E$~is the photon energy in keV.
The value of $\Gamma$ is taken to be 0.5 and 2 as extreme cases.
For $\Gamma = 0.5 $ we get $C = 1.01 \times 10^{43}$~erg$^{-1/2}$.
At $\Gamma = 2$, the integral diverges at $E \rightarrow 0$, so it is necessary to choose a nonzero lower limit of integration. We vary it from $10^{-5}$ to 0.1 keV, and it turns out that $C$ changes by no more than an order of magnitude, which does not significantly affect further estimates. The lower limit is chosen to be $0.001$ keV, with $C = 9.7 \times 10^{45}$~erg. This value is used in subsequent estimations.
Taking into account absorption and dependence of the eROSITA effective area on the photon energy, the radiation energy arriving at the detector from the source at a distance of $r$ (fluence multiplied by the effective area of the detector) will be:
\begin{equation}
F_d = \int_{E_1}^{E_2} {\frac{C E^{1 - \Gamma} e^{-E/E_{cutoff}} e^{- \sigma N_H} S_{eff}(E)\, dE} {4 \pi r^2}}.
\end{equation}
Here we neglect the redshift effect because the distances for potentially detectable flares do not exceed 1 Gpc, which corresponds to $z<0.2$. In addition, it is important to emphasize the approximate nature of our estimates (for example, due to the uncertainties with absorption on the line of sight).
The number of detected photons is
\begin{equation}
N_d = \int_{E_1}^{E_2} {\frac{C E^{- \Gamma} e^{-E/E_{cutoff}} e^{- \sigma N_H} S_{eff}(E)\, dE} {4 \pi r^2}}.
\label{r10}
\end{equation}
An example for a distance of 100 Mpc and a flare energy of $10^{47}$~erg is given in Table 1 for several spectral models and column densities.
\begin{table*}[t]
\caption{Number of registered photons for a burst at the distance 100 Mpc with the energy $10^{47}$~erg.}
\label{tabular:timesandtenses}
\begin{center}
\begin{tabular}{l l l l l}
\hline
$N_H$, cm$^{-2}$ & $kT = 30$ keV & $kT = 200$ keV & $\Gamma = 0.5$ & $\Gamma = 2$\\
\hline \hline
0 & 1.9 & 0.017 & 12.6 & $13000$ \\
$10^{22}$ & 1.35 & 0.012 & 4.8 & $1600$\\
$10^{24}$ & 0.087 & 0.0008 & 0.087 & 4.7 \\
\hline
\end{tabular}
\end{center}
\end{table*}
Assuming that 10 photons were registered from the flare, we can calculate the dependence of the total energy of the flare on the distance.
The distance $r_{10}$ from which 10 photons arrive from a flare with an energy of $10^{47}$~erg can be determined from equation (\ref{r10}).
Next, the total energy is $E_{total}=10^{47} (r/r_{10})^2$ erg.
Here we consider $N_H$ to be a fixed parameter, i.e. a change in $r$ does not lead to a change in absorption, so the quadratic dependence is preserved.
This simplification is possible due to the large uncertainty of the parameter $N_H$.
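
Numerically, the procedure amounts to one quadrature and one
closed-form rescaling. A minimal sketch (ours) follows;
\texttt{S\_eff} and \texttt{sigma} are callables assumed to be
interpolated by the caller from the effective-area curve of Fig.~1 and
from the absorption fit above, and the default integration limits are
taken as the nominal eROSITA band:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Mpc = 3.086e24          # cm
E_cut = 500.0           # keV

def n_detected(r_cm, C, gamma, N_H, S_eff, sigma,
               E1=0.3, E2=10.0):
    # Photon count of Eq. (5): the photon spectrum, attenuated by
    # absorption, weighted by the effective area (cm^2) and
    # diluted by 4*pi*r^2.
    f = lambda E: (C * E**(-gamma) * np.exp(-E / E_cut)
                   * np.exp(-sigma(E) * N_H) * S_eff(E))
    return quad(f, E1, E2)[0] / (4 * np.pi * r_cm**2)

def r_10(C, gamma, N_H, S_eff, sigma):
    # N_d scales as 1/r^2, so the distance where N_d = 10 follows
    # in closed form from any reference distance.
    r_ref = 100 * Mpc
    n_ref = n_detected(r_ref, C, gamma, N_H, S_eff, sigma)
    return r_ref * np.sqrt(n_ref / 10.0)

# The total detectable energy then scales as
#   E_total(r) = 1e47 * (r / r_10)**2   [erg]
\end{verbatim}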
\begin{figure}
\includegraphics[scale=0.7]{444-eng.pdf}
\caption{Dependence of the burst energy on distance for different models of the hyperflares spectra under the assumption that 10 photons are registered. The number of hydrogen atoms on the line of sight is taken to be $N_H = 10^{22}$cm$^{-2}$.}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{666-eng.pdf}
\caption{Dependence of the burst energy on distance for power-law (thick green lines) and thermal spectra (thin red lines) under the assumption that 10 photons are registered. Solid lines correspond to $N_H=0$, dashed --- to $N_H=10^{22}$cm$^{-2}$, and dot-dashed --- to $N_H=10^{24}$cm$^{-2}$.
}
\end{figure}
\subsection{Blackbody spectrum}
Similar calculations are made also with the assumption that the spectrum of the X-ray burst is thermal. Observations show that the spectra of giant magnetar flares can be well described by a blackbody spectrum with a temperature from $\sim$30 to $\sim200$ keV (see, for example, \cite{2018arXiv180305716E} and references therein).
The X-ray radiation is associated with the fireball retained by the magnetosphere with a characteristic size of a few hundred kilometers.\footnote{
Certainly, the radio emission corresponding to FRBs is non-thermal (and coherent) and is generated, apparently, in another spatial domain, for instance, in a shell similar to the pulsar nebula surrounding magnetars (see \cite{lyubarsky2014}). Several Galactic magnetars and high-B pulsars have such shells.}
In the case of such a spectrum, we have
\begin{equation}
B_E(T, E) = \frac{2}{c^2 h^3} \frac{E^3}{e^{E/kT} - 1},
\end{equation}
\begin{equation}
F_d = \int_{E_1}^{E_2} {\frac{ C_p \pi B_E(T,E)\, e^{- \sigma N_H} S_{eff}(E)\, dE} {4 \pi r^2}},
\end{equation}
where $C_p$ can be found from the normalization to the total energy release $10^{47}$ erg.
The number of detected photons is:
\begin{equation}
N_d = \int_{E_1}^{E_2}{\frac{ C_p \pi B_E(T,E)\, E^{-1} e^{- \sigma N_H} S_{eff}(E)\, dE} {4 \pi r^2}}.
\end{equation}
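
The only change with respect to the power-law sketch above is the
spectral shape and its normalization, e.g. (our sketch; the prefactor
of $B_E$ is absorbed into $C_p$, since the two always appear as a
product):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def planck_shape(E, kT):
    # E^3 / (exp(E/kT) - 1); the constant 2/(c^2 h^3) is dropped
    # here, being absorbed into C_p anyway.
    return E**3 / np.expm1(E / kT)

def C_p(kT, E_total=1e47):
    # Fix C_p by normalizing the energy spectrum C_p*pi*B_E(T,E)
    # to the total energy release (E in keV).
    norm, _ = quad(lambda E: np.pi * planck_shape(E, kT),
                   1e-3, 100 * kT)
    return E_total / norm

# N_d is then computed exactly as in the power-law case, with
# C*E^(-gamma)*exp(-E/E_cut) replaced by
# C_p * pi * planck_shape(E, kT) / (photon energy in erg).
\end{verbatim}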
The results for all the considered spectra are shown in Figs.~2 and 3 for different values of the column density $N_H$.
Note that flares can also be registered by the ART-XC telescope.
Preliminary estimates show that for a thermal spectrum with $kT=30$~keV ($N_H=10^{22}$~cm$^{-2}$) one can expect several counts in the case of
a flare with an energy of $\sim 10^{46}$~erg from distances of the order of tens of Mpc.
Nevertheless, since the field of view of ART-XC is about three times smaller than that of eROSITA, the expected number of potentially detectable events decreases correspondingly.
\section{Discussion}
Estimates of distances to the FRBs show that if the intergalactic medium (rather than the matter in the immediate neighborhood of the source) dominates the dispersion measure, then typical distances might be about 1~Gpc, and the minimum distances about 100~Mpc (see, e.g., the dispersion measure estimates in the intergalactic medium for FRBs detected by ASKAP \cite{shannon2018}).
This means that most X-ray flares can potentially be recorded with eROSITA only for soft power-law spectra and a sufficiently large energy release.
However, it is essential that there are alternative models in which a significant part of the FRB dispersion measure is gathered in the environment near the source (see, for example, \cite{2016MNRAS.462..941L} and the references given there).
In this instance, the average distance will be smaller.
Note, however, that in \cite{2016MNRAS.462..941L} the authors consider the pulsar model, not the magnetar one. In that model, no significant X-ray emission is expected. On the other hand, in the scenario of magnetar hyperflares, the interstellar medium around the source most probably will be insufficiently dense to make a significant contribution to the dispersion measure. Thus, it is unlikely that if FRBs are relatively close ($\lesssim100$~Mpc), the radio burst will be accompanied by a powerful X-ray flare.
In this paper, we did not examine the option with a soft thermal spectrum $kT=10$~keV used, for example, in \cite{scholz2017}.
This is due to the fact that such a soft spectrum must be atypical for hyperflares from magnetars.
For example, in the case of the SGR 1806-20 hyperflare, the spectrum can be described by a blackbody with a temperature of $\sim$200 keV \cite{2005Natur.434.1098H}. Surely, for a softer spectrum and the same total energy release, the flux in the eROSITA band increases, and the flare will be detectable from distances $\gtrsim 100$ Mpc.
Note that the calculations presented above are also applicable to estimates of the detectability of hyperflares from extragalactic magnetars, regardless of any possible connection with fast radio bursts. Certainly, in the vast majority of cases it will be difficult to identify a faint flare as an event associated with the activity of a distant magnetar. However, with good localization allowing to identify the host galaxy, and in the presence of repetitions, it will be possible to uncover the nature of such an event.
It should also be noted that when the survey program on the Spectrum-Roentgen-Gamma satellite is finished, it will be possible to perform long-term simultaneous observations with the eROSITA and ART-XC instruments and radio telescopes.
This will be especially important in case of detection of sources of repeating bursts at distances less than a few hundred Mpc.
\section{Conclusions}
We examined the possibility of detecting the hyperflares accompanying the FRB emission with the eROSITA telescope. We have shown that approximately one hyperflare per year can appear within the eROSITA field of view simultaneously with the registration of the FRB by ground-based radio telescopes. At the same time, the sensitivity of the X-ray telescope turned out to be quite sufficient to detect a hyperflare with an energy of $10^{46}$ erg from distances about tens or even hundreds of Mpc for realistic spectral parameters.
\section*{Acknowledgements}
We are grateful to the referee for several useful comments that helped to improve the paper.
We thank prof. N.I. Shakura and dr. K.L. Malanchev for their comments.
A.D. Khokhryakova and D.A. Lyapina thank the Moscow State University Development Program in the nomination ``Outstanding Scientific Schools of the Moscow State University''.
The work of S.B. Popov was supported by the RSF grant no. 19-12-00084.
\vskip 0.1cm
This is an authors' version of translation of the paper which appeared in Astronomy Letters (in Russian: Pis'ma v Astronomicheskij Zhurnal) in 2019 (N3).
\section{Introduction} \label{sec:intro}
Over the last three decades, software development has been revolutionized under
the combined effect of the massive adoption of free and open source software
(FOSS\xspace), and the popularization of collaborative development platforms like
GitHub, Bitbucket, and SourceForge~\cite{Squire17}, which have substantially reduced
the cost of collaborative software development and offered a place where
historical software can be stored~\cite{SpinellisUnix2017}. One important
consequence of this revolution is that the source code and development history
of tens of millions of software projects are nowadays public, making an
unprecedented corpus available to software evolution scholars. We will refer to
this corpus as \emph{public source code} in this paper.
Many research studies have been conducted \emph{on subsets} of all public
source code, looking for patterns of interest for software engineering, ranging
from the study of code clones~\cite{SvajlenkoR17, SemuraYCI17,
ThummalapentaCAP10} to automated vulnerability detection and
repair~\cite{Li2017,Grieco2016, MartinezM15}, from code
recommenders~\cite{Zeller2007, ZimmermannWDZ04} to software licence analysis
and compliance~\cite{GermanLicense17, VendomeLicence2015}.
Scaling up similar studies to the whole corpus, and making them reproducible,
is a significant challenge. In the absence of a common infrastructure providing
a \emph{reference archive} of public source code development, scholars have used
popular development platforms like GitHub as surrogates. But development
platforms are not archives: projects on GitHub come and go,\footnote{For
example, hundreds of thousands of projects migrated from GitHub to GitLab.com
in the days following the acquisition of GitHub by Microsoft in summer 2018,
see~\url{https://about.gitlab.com/2018/06/03/movingtogitlab/}.} making
reproducibility a moving target. And while GitHub is the most popular
development platform today, millions of projects are developed elsewhere,
including very high profile ones like GNOME.\footnote{See
\url{https://www.gnome.org/news/2018/05/gnome-moves-to-gitlab-2/}}
Software Heritage\xspace~\cite{swh-ipres-2017, swh-cacm-2018}---with its mission to collect,
preserve, and make accessible all public source code together with its
development history---offers an opportunity to change this state of affairs.
The project has amassed the largest public source code corpus to date, with
more than 80 million software projects archived from GitHub, GitLab, PyPI, and
Debian, growing by the day.
In this paper we leverage Software Heritage\xspace to perform the first study on the evolution of
public source code. First, we look into the production of \emph{original}
source code artifacts over time, that is, the amount of source code files or
commits that have never been published before (e.g., in other VCS repositories
or distributed tarballs/packages) across the entire corpus. Our first research
question is:
\begin{description}
\item[{\bf RQ1}\xspace] How does the public production of \emph{original}, i.e., never
published before, source code artifacts, and in particular files and commits,
evolve over time? What are the respective growth rates?
\end{description}
To answer this we perform an extensive study of the Software Heritage\xspace archive, continuing a
long tradition of software evolution studies~\cite{SurveyCrowston2008,
LawsEvolutionHerraizRRG13, debsources-ese-2016, HattonSG17,MLgitarchive18},
which we extend by several orders of magnitude and perform over a period of
more than 40 years. We show evidence of stable \emph{exponential growth} of
original commits and files published over time.
Second, we study the number of \emph{different contexts} in which original code
artifacts re-appear over and over again, e.g., the same unmodified source code
file found in different commits, or the same commit distributed by different
repositories. By doing so we quantify the \emph{multiplication} of public
source code artifacts, addressing our second research question:
\begin{description}
\item[{\bf RQ2}\xspace] To what extent can the same source code artifacts, and in particular
files and commits, be found in different contexts (commits and
repositories, respectively) in public source code?
\end{description}
We find evidence of a combinatorial explosion in the number of contexts in
which original source code artifacts appear, which is particular significant in
the multiplication of identical source code files across different commits.
In the last part of the paper, we explore the implications of such
multiplication on the problem of \emph{software provenance
tracking}~\cite{Provenance2011, Godfrey15-provenance} for public source
code. We ask ourselves: is it feasible to keep track of all the different
contexts in which a given file or commit occur across the entire corpus?
To address this practical question we evaluate three different data models for
storing provenance information, which offer different space/time trade-offs.
We evaluate them on more than 40 years of public source code development
history and find that one of them---which we call the \emph{compact
model}---allows to concisely track provenance across the entire body of
public source code, both today and in the foreseeable future.
\paragraph*{Paper structure}
we review related work in Section~\ref{sec:related} and address {\bf RQ1}\xspace in
Section~\ref{sec:growth}; in Section~\ref{sec:provenancestudy} we attack {\bf RQ2}\xspace,
studying the multiplication factor of original source code artifacts across
different contexts; provenance tracking representations are studied in
Section~\ref{sec:provenance}, leading to the compact model, which is
experimentally validated in Section~\ref{sec:validation}; threats to validity
are discussed in Section~\ref{sec:threats} before concluding in
Section~\ref{sec:conclusion}.
\paragraph*{Reproducibility note} given the sheer size of the Software Heritage\xspace archive
($\approx$200~TB and a $\approx$100~B edges graph), the most practical way to
reproduce the findings of this paper is to first obtain a copy of the official
Software Heritage\xspace Graph Dataset~\cite{msr-2019-swh-dataset} and then focus on the source
code revisions that we have analyzed for this paper. The full list of their
identifiers is available on Zenodo (DOI
\href{https://dx.doi.org/10.5281/zenodo.2543373}{10.5281/zenodo.2543373})
(20~GB); the selection criteria are described in Section~\ref{sec:growth}.
\section{Related Work}
\label{sec:related}
\begin{figure*}[t]
\centering \includegraphics[width=\linewidth]{swh-merkle-dag}
\caption{Software Heritage\xspace Merkle DAG with crawling information.}
\label{fig:data-model-detailed}
\end{figure*}
The study of software evolution has been at the heart of software engineering
since the seminal ``Mythical Man Month''~\cite{BrooksMMM} and Lehman's
laws~\cite{LehmanLaw80}. The tidal wave of FOSS\xspace, making available a growing
corpus of publicly available software, has spawned an impressive literature of
evolution studies. Some 10 years ago a comprehensive
survey~\cite{SurveyCrowston2008} showed predominance of studies on the
evolution of individual projects. Since then large scale studies have become
frequent and the question of how Lehman's laws need to be adapted to account
for modern software development has attracted renewed attention, as shown in a
recent survey~\cite{LawsEvolutionHerraizRRG13} that advocates for more
empirical studies to corroborate findings in the literature.
While Mining Software Research (MSR) research~\cite{hassan2008road} is
thriving, realizing large-scale empirical studies on software growth remains a
challenging undertaking depending on complex tasks such as collecting massive
amounts of source code~\cite{MockusAmassing09} and building suitable platforms
for analyzing them~\cite{dyer2013boa, Candoia2016}.
Hence, up to now, most studies have resorted to selecting relatively small
subsets\footnote{Some studies have analyzed up to a few million projects, but
this is still a tiny fraction of all public source code.} of the full corpus,
using different criteria, and introducing biases that are difficult to
estimate.
For instance, an analysis of the growth of the Debian distribution spanning two
decades has been performed in~\cite{debsources-ese-2016}, observing initial
superlinear growth of both the number of packages and their size. But Debian is
a collection maintained by humans, so the number of packages in it depends on
the effort that the Debian community can consent.
A recent empirical study~\cite{HattonSG17} has calculated the compound annual
growth rate of over \num{4000} software projects, including popular FOSS\xspace
products
as well as closed source ones. This rate is consistently in the range of
1.20--1.22, corresponding to a \emph{doubling in size every 42 months}. In this
study, though, the size of software projects was measured using lines of code,
without discriminating between original contents and refactored or exogenous
code reused as-is from other projects.
Not many of these studies take into account the amount of code duplication
induced naturally by the now popular pull-request development
technique~\cite{gousios2014exploratory} and more generally by the ease with
which one can create copies of software components, even without forking them
explicitly.
The amount of exogenous code in a project can be extremely important, as shown
in~\cite{DejaVuVitek2017}, which analyzed over 4 million non-fork projects from
GitHub,
and showed that almost 70\% of the code consists of file-level exact
clones. This paints a very interesting picture of cloning in a subset of GitHub
at the time it was performed; it would be interesting to know how cloning
evolves over time, and how it impacts the growth of the global source code
corpus.
Software provenance tracking is an essential building block of several studies,
in particular on vulnerability tracking, license
analysis~\cite{GermanPGA09-siblings}, and reuse~\cite{ishio_software_2016}.
Provenance can be looked at different
granularities~\cite{BertillonnageGerman13}. On one end of the spectrum,
tracking the origin of code \emph{snippets} is useful when studying coding
patterns across repositories~\cite{AllamanisS13a, GermanPGA09-siblings}. On the
opposite end, tracking the origin of whole \emph{repositories} is useful when
looking at the evolution of forks or project popularity~\cite{Borges16}. In
between, tracking \emph{file}-level provenance has been for more than a decade
a key element of industrial tools for license compliance offered by companies
like BlackDuck, Palamida, Antelink, nexB, TripleCheck, or FossID, leading to
patent portfolios~\cite{rousseau_computer_2010}.
With few exceptions~\cite{Provenance2011}, though, file-level provenance has
received little attention in the research community. We believe this is due to
the lack of a reference archive of public source code on which file-level
provenance tracking can be implemented once and then reused by other
researchers. In the final part of this paper we discuss the implications of our
findings about public source code and explore the feasibility of such
``provenance service'' approach, relying on Software Heritage\xspace as a proxy of public source
code.
\section{Public Source Code Growth}
\label{sec:growth}
Current development practices rely heavily on duplicating and reusing
code~\cite{gousios2014exploratory, DejaVuVitek2017}, which makes it non-trivial
to estimate how much \emph{original} software is being produced: summing up the
usual metrics---such as number of source code files or revisions (also known as
\emph{commits})---across a wealth of software projects will inevitably end up
in counting the same original source code artifacts multiple times.
In this section we report on the first large scale analysis of the growth of
original software artifacts, in terms of revisions and contents, that we have
performed leveraging the fully-deduplicated data model that underlies Software Heritage\xspace,
briefly recalled below.
{\small \textbf{Terminological note}: we adopt in the following a
technology-neutral terminology to refer to source code artifacts: we use
``content'' for ``[source code] file'' and ``revision'' for commit. The next
subsection can be referred to for the intended meaning of those terms.}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{revision_content_growth_wide}
\caption{Global production of original software artifacts over time, in terms
of never-seen-before revisions and file contents (lin-log scale). Major
events in the history of version control systems and development forges are
materialised by vertical bars.}
\label{fig:original-production}
\end{figure*}
\subsection{The Software Heritage\xspace data model}\label{sec:swh}
A toy yet detailed example of the Software Heritage\xspace data model is shown in
\figurename~\ref{fig:data-model-detailed}; full details can be found
in~\cite{swh-ipres-2017,swh-ipres-2018-doi}. The key principle is to deal with
code artifacts duplication by storing them in a single, huge Merkle direct
acyclic graph (DAG)~\cite{Merkle}, where every node is thoroughly
deduplicated. Different types of nodes are present in the graph:
\paragraph{Contents} raw file contents as byte sequences. Contents are
anonymous; ``file names'' are given to them by directories and are context
dependent.
\paragraph{Directories} lists of named directory entries. Each entry can point
to content objects (``file entries''), to other directories (``directory
entries''), or even to other revisions (``revision entries''), capturing links
to external components like those supported by Git submodules and Subversion
externals. Each entry is associated to a name (i.e., a relative path) as well
as permission metadata and timestamps.
\paragraph{Revisions} (or \emph{commits}) point-in-time states in the
development history of a software project. Each revision points to the root
directory of the software source code at commit time, and includes additional
metadata such as timestamp, author, and a human-readable description of the
change.
\paragraph{Releases} (or \emph{tags}) particular revisions marked as noteworthy
by developers and associated to specific, usually mnemonic, names (e.g.,
version numbers or release codenames). Releases point to revisions and might
include additional descriptive metadata.
\paragraph{Snapshots} lists of pairs mapping development branch names (e.g.,
``master'', ``bug1234'', ``feature/foo'') to revisions or releases.
Intuitively each snapshot captures the full state of a development repository,
allowing to recursively reconstruct it if the original repository gets lost or
tampered with.
\emph{Deduplication} happens at node granularity for all source code artifacts:
each file content is stored exactly once and referred to via its cryptographic
checksum from multiple directories; each commit is stored once, no matter
how many repositories include it; and so on up to snapshots, each stored once no
matter how many identical copies of repositories in exactly the same state
(e.g., pristine forks on GitHub) exist.
This arrangement allows to store in a uniform data model both specific versions
of archived software (pointed by release nodes), their full development
histories (following the chain of revision nodes), and development states at
specific points in time (pointed by snapshot nodes).
In addition to the Merkle DAG, Software Heritage\xspace stores \emph{crawling information}, as
depicted in the top left of \figurename~\ref{fig:data-model-detailed}. Each
time a source code origin is visited, its full state is captured by a snapshot
node (possibly reusing a previous snapshot node, if an identical repository
state has been observed in the past) plus a 3-way mapping between the origin
(as a URL), the visit timestamp, and the snapshot object, which is then added
to an append-only journal of crawling activities.
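
A toy rendition of the deduplication mechanism may help fix ideas:
nodes are addressed by a hash of their type and (already hashed)
children, so identical artifacts collapse into a single node by
construction. This is a simplified stand-in for the actual Software Heritage\xspace
identifier scheme, shown for illustration only:
\begin{verbatim}
import hashlib

storage = {}   # intrinsic id -> node; dedup happens by construction

def node_id(node_type, payload):
    h = hashlib.sha1()
    h.update(node_type.encode())
    h.update(payload)
    return h.hexdigest()

def add_content(data):
    i = node_id("content", data)
    storage.setdefault(i, ("content", data))
    return i

def add_directory(entries):
    # entries: list of (name, child_id) pairs; order-independent.
    payload = b"".join(name.encode() + bytes.fromhex(cid)
                       for name, cid in sorted(entries))
    i = node_id("directory", payload)
    storage.setdefault(i, ("directory", sorted(entries)))
    return i

# The same file in two directories is stored exactly once:
c = add_content(b"int main() { return 0; }\n")
add_directory([("main.c", c)])
add_directory([("copy.c", c), ("main.c", c)])
assert sum(1 for t, _ in storage.values() if t == "content") == 1
\end{verbatim}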
\subsection{Key figures on the Software Heritage\xspace dataset}
At the time we used it for this paper, the Software Heritage\xspace archive was the largest
available corpus of public source code~\cite{swh-ipres-2017, swh-cacm-2018},
encompassing:
\begin{itemize}
\item a full mirror of GitHub, constantly updated
\item a full mirror of Debian packages, constantly updated
\item a full import of the Git and Subversion repositories hosted on Google
Code at shutdown time
\item a full import of Gitorious at shutdown time
\item a one-shot import of all GNU packages (\emph{circa} 2016)
\end{itemize}
\begin{table}
\caption{Graph characteristics of the reference dataset: a Software Heritage\xspace archive copy
as of February 13th, 2018.}
\label{tab:ref-dataset}
\centering
\subfigure[archive coverage]{
46.4 M software origins
}\\
\subfigure[nodes]{
\begin{tabular}{l|l}
\multicolumn{1}{c|}{\bf node type}
& \multicolumn{1}{c}{\bf quantity}
\\\hline
content & 3.98 B \\
revision & 943 M \\
release & 6.98 M \\
directory & 3.63 B \\
snapshot & 49.9 M \\
\hline
\it total & 8.61 B \\
\end{tabular}
}
\subfigure[edges]{
\begin{tabular}{l|l}
\multicolumn{1}{c|}{\bf edge type}
& \multicolumn{1}{c}{\bf quantity}
\\\hline
revision $\to$ directory & 943 M \\
release $\to$ revision & 6.98 M \\
snapshot $\to$ release & 200 M \\
snapshot $\to$ revision & 635 M \\
snapshot $\to$ directory & 4.54 K \\
directory $\to$ directory & 37.3 B \\
directory $\to$ revision & 259 M \\
directory $\to$ file & 64.1 B \\
\hline
\it total & 103 B \\
\end{tabular}
}
\end{table}
For this paper we used the state (called \emph{reference dataset} in the
following) of the full Software Heritage\xspace archive as it was on February 13th, 2018. In terms
of raw storage size, the dataset amounts to about 200~TB, dominated by the size
of content objects. As a graph, the DAG consists of $\approx$9~B nodes and
$\approx$100~B edges, distributed as shown in Table~\ref{tab:ref-dataset}; note
how this corpus is orders of magnitude larger than previously analyzed
ones~\cite{debsources-ese-2016, DejaVuVitek2017, MLgitarchive18}.
\subsection{Evolution of original revisions and file contents}
We have analyzed the entire reference dataset (see
Table~\ref{tab:ref-dataset}), processing revisions in increasing timestamp
order, and keeping track for each file content the timestamp of the
\emph{earliest} revision that contains it, according to the commit timestamp.
A revision is \emph{original} if the combination of its properties (or,
equivalently, its identifier in the Merkle DAG) has never been encountered
before.
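
In pseudo-Python, the computation looks as follows (our sketch;
\texttt{revisions} stands for a traversal of the archive graph yielding
commit timestamp, revision identifier, and contained content
identifiers):
\begin{verbatim}
def earliest_occurrences(revisions):
    first_seen = {}        # content id -> earliest commit time
    seen = set()           # revision ids already encountered
    for ts, rev_id, content_ids in sorted(revisions,
                                          key=lambda r: r[0]):
        if rev_id in seen:
            continue       # not an original revision
        seen.add(rev_id)   # original: first time this id shows up
        for c in content_ids:
            first_seen.setdefault(c, ts)
    return first_seen
\end{verbatim}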
Results are shown in \figurename~\ref{fig:original-production}. They provide
very rich information, answering {\bf RQ1}\xspace for both revisions and file contents.
We discuss first a few outliers that jump out. Data points at the \emph{Unix
epoch} (1/1/1970) account for 0.75\% of the dataset and are clearly
over-represented. They are likely due to forged revision timestamps introduced
when converting across version control systems (VCSs). This is probably also
the main reason behind revisions with timestamps in the ``future'', i.e., after
the dataset timestamp (0.1\% of the dataset). The sharp drop before the dataset
timestamp is a consequence of the lag of Software Heritage\xspace crawlers w.r.t.~its data sources.
Focusing on the core part of the figure we remark that in the early years,
before the introduction of forges and advanced VCSs, the number of revisions is
relatively small (tens to hundreds of thousands only), and their evolution is
rather irregular.
After the creation of the first popular forge, SourceForge (1999), we observe
on the other hand a remarkably regular exponential growth lasting twenty years.
For original revisions, growth can be accurately approximated by the fit line
$60 e^{0.27(t-1970)}$; at this rate \textbf{the amount of original revisions in
public source code doubles every $\approx$30 months}. For original contents,
growth is accurately approximated by the fit line $2.5 e^{0.37(t-1970)}$; at
this rate \textbf{the amount of original public source code files doubles every
$\approx$22 months}.
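
The doubling times quoted above follow directly from the exponents of
the fit lines, since a growth $e^{k(t-1970)}$ doubles every $\ln(2)/k$
years:
\begin{verbatim}
import math

for label, k in [("revisions", 0.27), ("contents", 0.37)]:
    months = 12 * math.log(2) / k
    print(label, round(months), "months")
# -> about 31 and 22 months, respectively

# The contents-per-revision ratio doubles every ln(2)/(0.37-0.27):
print(round(math.log(2) / 0.10, 1), "years")   # ~6.9 years
\end{verbatim}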
This information is precious to estimate the resources needed for
\emph{archiving} publicly developed software: taking into account the long term
evolution of storage costs\footnote{see, e.g.,
\url{https://hblok.net/blog/storage/}} this growth looks manageable, provided
that deduplication is applied. The sustainability of provenance tracking
remains potentially challenging, though, because artifact \emph{occurrences} in
different contexts cannot be deduplicated. We will quantify source code
artifact multiplication in the next section.
The growth rate of original contents and revisions suggests that both the
production of source code and its derived graphs are interesting evolving
complex networks~\cite{albert2002statistical, dorogovtsev2002evolution}. Their
nature---scale-free or not---as well as the role of phenomena like preferential
attachment in the growth dynamics between edges and nodes, potentially leading
to accelerating growth~\cite{albert2002statistical}, are important subjects for
further investigation.
Finally, we remark that the \emph{difference} in the growth rates of original
revisions and original file contents means that over the past twenty years
\textbf{the average number of original file contents per revision has been
doubling every $\approx$7 years}. Whether this comes from the availability of
better tools that can easily handle large commits, from different development
practices, or other causes is another interesting open question.
\section{Public Source Code Multiplication}
\label{sec:provenancestudy}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{DuplicationLayer}
\caption{The three layers of multiplication in public source code: SLOCs
occurring in source code files (contents), contents occurring in commits
(revisions), revisions found at different distribution places (origins).}
\label{fig:multiplication}
\end{figure}
We now look into \emph{public source code multiplication}, i.e., how often the
same artifacts (re-)occur in different contexts.
\figurename~\ref{fig:multiplication} depicts the three layers of this
phenomenon: a given line of code (SLOC) may be found in different source code
files; a given file content may appear in different revisions (e.g., different
commits in the same repository); and a given revision may be found at multiple
origins (e.g., the same commit distributed by multiple repositories and source
packages).
To study this phenomenon and answer {\bf RQ2}\xspace we perform in the following focused
analyses on the Software Heritage\xspace Merkle DAG. They will lead to quantitatively evaluate the
\emph{multiplication factor} of source code artifacts at each multiplication
layer of \figurename~\ref{fig:multiplication}.
\subsection{Content multiplication factor}
\label{sec:contentdup}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{content_nr_aaa_all}
\includegraphics[width=0.7\linewidth]{content_nr_aa_c_bis}
\caption{Top: cumulative (upper curve) and simple (lower curve)
multiplication factor of unique file contents across unique revisions.
Bottom: normalized cumulative content multiplication factor for the same
sample (solid line) and two random samples of about 1~M contents each, with
content sizes up to 100~bytes (dashed line) and between $10^5$ and $10^6$
bytes (dotted line).}
\label{fig:content-duplication}
\end{figure}
In order to assess the \emph{content} multiplication factor, i.e., the amount
of duplication of file contents among revisions, we took a random sample of
about 1 million unique contents (all contents whose hash identifiers start with
\texttt{aaa}). For each content in that sample we counted how many revisions
contain it in the reference dataset. The resulting distribution of the
multiplication factor is shown in the upper part of
\figurename~\ref{fig:content-duplication}, together with the corresponding
cumulative distribution.
Looking at the cumulative distribution it jumps out that the average
multiplication factor is very high. It exhibits a characteristic decreasing
power law ($\alpha\simeq -1.5$), only limited by an exponential cut-off. There
are hence over a hundred thousand contents that are duplicated more than one
hundred times; tens of thousands of contents duplicated more than a thousand times;
and there are still thousands of contents duplicated \emph{more than a hundred
thousand times}!
content$\to$revision layer of \figurename~\ref{fig:multiplication} is a highly
nontrivial task.
We could not resist investigating the side question of whether the \emph{size} of
a file content impacts the multiplication factor. We hence took two new random
samples of about 1 million contents each, one with content sizes up to 100
bytes and one with sizes between $10^5$ and $10^6$ bytes, and performed the
same analysis as for the previous sample.
The resulting normalized cumulative multiplication factors are shown on the
bottom of \figurename~\ref{fig:content-duplication}. We can see that \emph{the
multiplication factor of small contents is much higher} than that of
average-sized and large contents. Hence, keeping track of the
content$\to$revision occurrences only for files larger than, say, 100 bytes, is
a significantly simpler problem than its fully general variant. Omitting small
files is indeed a technique often used by state-of-the-art industry solutions
for software provenance tracking: we provide evidence on why it is effective
(at the expense of completeness).
\subsection{SLOC length and multiplication factor}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{loc_l_20050101}
\caption{Distribution of normalized SLOC lengths in a sample of 2.5~M
contents that appear at least once with \texttt{.c} extension.}
\label{fig:loc-size}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{loc_occ_20050101}
\caption{Multiplication factor of normalized SLOCs as the number of unique
contents they appear in. Dataset: same of \figurename~\ref{fig:loc-size}.
}
\label{fig:loc-duplication}
\end{figure}
We now turn our attention to the bottom layer of
\figurename~\ref{fig:multiplication}: SLOC$\to$content. Since lines of code are
hardly comparable across languages, we focused on the C language, which is
well-represented in the corpus. We took a random sample of $\approx$11.4~M
unique contents occurring in revisions between 1980 and 2001, and selected from
it contents that appear at least once with \texttt{.c} extension and with sizes
between $10^2$ and $10^6$ bytes, obtaining $\approx$2.5~M contents. We then
split contents by line and, to remove equivalent formulations of the same SLOC,
\emph{normalized} lines by removing blanks and trailing \texttt{";"}
(semicolon). We obtained $\approx$64~M normalized SLOCs.
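
The normalization step is deliberately simple; a direct rendition of
the transformation described above (modulo implementation details) is:
\begin{verbatim}
def normalized_slocs(content):
    out = []
    for line in content.splitlines():
        line = "".join(line.split())  # drop all blanks
        line = line.rstrip(";")       # drop trailing ';'
        if line:
            out.append(line)
    return out

# Equivalent formulations of a statement collapse together:
assert normalized_slocs("x = 1 ;\n  x=1;\n") == ["x=1", "x=1"]
\end{verbatim}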
The multiplication factor of SLOCs across unique contents is shown in
\figurename~\ref{fig:loc-duplication}. We observe a much faster decrease
w.r.t.~the multiplication factor of contents in revisions ($\alpha\simeq -2.2$),
hinting that keeping track of SLOC$\to$content occurrences may be less
problematic than content$\to$revision ones.
We also computed the distribution of normalized SLOC lengths between 4 and
1000, shown in \figurename~\ref{fig:loc-size}. We observe that lines with
length 15 to 60 normalized characters are the most represented, with a fairly
stable presence within that range, and a steep decrease for longer lines.
Hence, for SLOC$\to$content occurrences there does not seem to exist any
obvious length-based threshold that would reduce their amount.
\subsection{Origin size and multiplication factor}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{rev_occ_265712178_5428086}
\caption{Duplication of revisions across origins.}
\label{fig:origin-duplication}
\end{figure}
Finally we look into the revision$\to$origin layer of
\figurename~\ref{fig:multiplication}. To that end we took a sample of
$\approx$12\% of the origins, which contain $\approx$29\% of the revisions (or
about 5.4~M origins and 272.5~M revisions) and replicated the previous study of
content duplication onto revisions. Results are shown in
\figurename~\ref{fig:origin-duplication}.
Revision multiplication shows an erratic behavior near the end of the range,
but decreases steadily before that, and way more steeply ($\alpha\simeq -1.9$)
than it was the case for content$\to$revision multiplication (see
\figurename~\ref{fig:content-duplication} for comparison): the multiplication
factor of revisions in origins is way smaller than that of contents in
revisions.
While this result is sufficient to assess the respective impact on public
source code multiplication of the considered layers, we dug further into origin
sizes to better understand \emph{which} origins participate into
revision$\to$origin multiplication.
We have considered two different measures of origin size: one that simply
counts the number of revisions found at each origin, and another that
associates revisions found at multiple origins \emph{only to the origin that
contains the largest number of revisions}, and then counts them as before. When a project is
forked, the second measure would always report a revision as belonging to the
fork with the most active development, which is a good approximation of the
``most fit fork'', while stale forks would decay. This measure has many good
properties: it will follow forks that resurrect projects abandoned at their
original development places, it does not rely on platform metadata for
recognizing forks, and is hence able to recognize \emph{exogenous forks} across
unrelated development platforms (e.g., GitHub-hosted forks of the Linux kernel,
which is not natively developed on GitHub).
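As an illustration, the ``most fit fork'' measure can be computed as in the
following Python sketch (an in-memory toy assuming the origin$\to$revisions
mapping fits in RAM; it is not our production code, and all names are ours):
\begin{verbatim}
from collections import defaultdict

def most_fit_fork_sizes(origin_revisions):
    # origin_revisions: origin URL -> set of revision ids it hosts.
    # Rank origins by raw size; ties broken by URL for determinism.
    rank = {o: (len(revs), o) for o, revs in origin_revisions.items()}
    best = {}  # revision id -> the single origin it is attributed to
    for origin, revs in origin_revisions.items():
        for rev in revs:
            if rev not in best or rank[origin] > rank[best[rev]]:
                best[rev] = origin
    counts = defaultdict(int)
    for origin in best.values():
        counts[origin] += 1
    return dict(counts)
\end{verbatim}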
\figurename~\ref{fig:origin-size} shows the impact that the ``most fit fork''
measure has on the number of revision$\to$origin occurrences. Starting with
relatively small repositories ($\approx$100 revisions), the number of
occurrences to track is lower than for the simpler measure, with a difference
growing up to a full order of magnitude for repositories hosting 10~K
revisions.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{origin_nr}
\caption{Distribution of origin size as the number of revisions they host.}
\label{fig:origin-size}
\end{figure}
\section{Compact Provenance Modeling}
\label{sec:provenance}
We now consider the problem of tracking software provenance across a corpus as
large (and as fast growing) as all public source code. In short, the goal is to
keep track of all the different places (contents, revisions, origins) in which
any given source code artifact (SLOC, content, revision) can be found---more
detailed requirements are given below in Section~\ref{sec:requirements}.
What are the implications of our findings on public source code growth and
multiplication, on the \emph{feasibility} of maintaining such a complete
provenance index? An important fact that emerges from the analyses is that,
size-wise, the most challenging part is the layer associating file contents to
all the revisions they appear in, because contents are duplicated across
revisions much more than revisions across origins.
Hence in the following we will focus on concisely representing the
content$\to$revision mappings; the revision$\to$origin ones will be a
straightforward and fully modular addition. SLOC-level provenance tracking is
left as future work.
\subsection{Requirements}
\label{sec:requirements}
\paragraph{Supported queries} At least two queries should be supported: first
occurrence and all occurrences. The \emph{first occurrence} query shall return
the earliest occurrence of a given source code artifact in any context,
according to the revision timestamp. The \emph{all occurrences} query will
return all occurrences. The two queries answer different use cases: first
occurrence is useful for prior art assessment and similar intellectual property
needs; all occurrences is useful for impact/popularity analysis and might be
used to verify first occurrence results in case of dubious timestamps.
\paragraph{Granularity} It should be possible to track the provenance of source
code artifacts at different granularities including at least file contents and
revisions.
\paragraph{Scalability} It should be possible to track provenance at the scale
of at least Software Heritage\xspace and keep up with the growth rate of public source code. Given
that the initial process of populating provenance mappings might be onerous,
and that some use cases require fresh data (e.g., impact/popularity), we also
require \emph{incrementality} as part of scalability: the provenance index must
support efficient updates of provenance mappings as soon as source code
artifacts (old or new) are observed in new contexts.
\paragraph{Compactness}
It should be possible to store and query provenance information using
state-of-the-art consumer hardware, without requiring dedicated hardware or
expensive cloud resources.
\paragraph{Streaming} For the \emph{all occurrences} query a significant
performance bottleneck is the transfer time required to return the potentially
very large result. A viable provenance solution should hence allow returning
results incrementally, queuing up the rest for later.
\subsection{Provenance data models}
\begin{figure}
\centering
\subfigure[flat model]{
\includegraphics[width=0.6\linewidth]{flat-model}
\label{fig:flat-model}
}
\subfigure[recursive model]{
\includegraphics[width=\linewidth]{recursive-model}
\label{fig:recursive-model}
}
\subfigure[compact model]{
\includegraphics[width=\linewidth]{compact-model}
\label{fig:compact-model}
}
\caption{Provenance tracking models, entity-relationship (E-R) views}
\label{fig:provenance-models}
\end{figure}
We study three different data models for provenance tracking, that we call
respectively \emph{flat}, \emph{recursive}, and \emph{compact}. Their
Entity-Relationship (E-R) representations are shown in
\figurename~\ref{fig:provenance-models}.
\paragraph{Flat model}
this is our baseline for tracking provenance, shown in
\figurename~\ref{fig:flat-model}. In this model provenance mappings are
``flattened'' using a single \errel{C(ontent) occur in R(evision)} relation,
that also keeps track of file paths relative to the root directory of the
associated revision. The cardinality of \errel{C occur in R} is n-m (rather
than 1-n), because the same content might appear multiple times in a given
revision at different paths. Each revision carries as attribute the revision
timestamp, in order to answer the question of \emph{when} the occurrence
happened. Each content carries as attribute the timestamp of its earliest
occurrence, i.e., the minimum timestamp among all associated revisions.
Given suitable indexing on content identifiers (e.g., using a B-tree), the flat
model adds no read overhead for the all occurrences query. Same goes for first
occurrence, given suitable indexing on timestamp attributes, which is required
to retrieve path and revision.
Updating provenance mappings when a new revision comes in requires traversing
the associated directory in full, no matter how many sub-directories or
contents in it have been encountered before, and adding a relationship entry
for each of its nodes.
\paragraph{Recursive model}
while the flat model shines in access time at the expense of update time and
compactness, the recursive model shown in \figurename~\ref{fig:recursive-model}
does the opposite. It is intuitively a ``reverse'' Merkle DAG representation,
which maps contents to directories and directories to revisions.
Each entity has a timestamp attribute equal to the timestamp of the earliest
revision in which the entity has been observed thus far. When processing an
incoming revision $r_{t_2}$ (with timestamp $t_2$) it is no longer necessary to
traverse in full the associated directory: if a node $n$ is encountered that
is already present in the model with a timestamp $t_1<t_2$, recursion can stop because the
subtree rooted at $n$, which is already present due to the Merkle DAG properties,
has already been labeled with timestamps earlier than $t_2$ and need not be updated;
we just need to add an entry in the corresponding occurrence table for $n$ with timestamp $t_2$.
Thanks to the sharing offered by the directory level, the recursive model is as
compact as the original Merkle structure, with no flattening involved. The all
occurrences query is slow in this model though, as for each content we need to
walk up directory paths before finding the corresponding revisions. Response
time will hence depend on the average directory depth at which queried contents
will be found. First occurrence is faster, but still incurs some read overhead:
given a content we have to walk up all directories and then lookup the
corresponding revisions whose timestamps equate the timestamp of the content
being queried.
\paragraph{Compact model}
\figurename~\ref{fig:compact-model} shows a compromise version between the flat
and recursive models, which is both storage-compact and capable of quickly
answering the required queries. The tables for the content, directory, and revision entities
are progressively populated as the structure is built, with a timestamp attribute denoting the earliest
known occurrence, as before. To understand how the compact model is built and used we introduce the following
notion:
\begin{definition}[Isochrone subgraph] \it given a partial provenance mapping $\mathcal{P}$ associating a timestamp of
first occurrence to each node in a Merkle DAG, the \emph{isochrone subgraph}
of a revision node $R$ (with timestamp $t_R$) is a subgraph rooted at $R$'s
directory that only contains directory nodes whose timestamps in
$\mathcal{P}$ are equal to $t_R$.
\end{definition}
Intuitively, when processing revisions chronologically to update the entity tables and the provenance
mappings, the isochrone subgraph of a revision starts with its root directory
and extends through all directory nodes containing never-seen-before source
code artifacts. Due to Merkle properties each directory containing at least one
novel element is itself novel. Everything outside the isochrone subgraph has
been observed before, in at least one previously processed revision.
Given this notion, the upper part of the compact model (\errel{C occur early in
R} in \figurename~\ref{fig:compact-model}) is filled with one entry for each
content attached to any directory in the isochrone subgraph. As a consequence
of this, the first occurrence of any given content will always be found in
\errel{C occur early in R} although other occurrences---depending on the order
in which revisions are processed to update provenance mappings---may also be
found there.
The relation \errel{D occur in R} is filled with one entry, pointing to the
revision being processed, for each directory \emph{outside} the isochrone
subgraph that is referenced by directories \emph{inside} it, i.e., \errel{D
occur in R} contains one entry for each directory$\to$directory edge crossing
the isochrone frontier. Finally, the relation \errel{C occur in D} is filled
with one entry for each content (recursively) referenced by any directory added
to the \errel{D occur in R} relation.
Filling the compact model is faster than filling the flat one: when we reach a
directory $d$ at the frontier of an isochrone subgraph, we only need to
visit it in full the first time, to fill \errel{C occur in D}, and we
need not visit $d$ again when we see it at the frontier of another
isochrone subgraph in the future.
It is slower than the recursive model case, though, as we still need
to traverse the isochrone subgraph of each revision. Read overhead for first
occurrence is similar to the flat model: provided suitable indexing on
timestamps we can quickly find first occurrences in \errel{C occur early in R}.
Read overhead for all occurrences is lower than the recursive model because all
content occurrences will be found via \errel{C occur in D} without needing to
recursively walk up directory trees, and from there directly linked to
revisions via \errel{D occur in R}.
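The following Python sketch summarizes how the three relations are filled when
revisions are processed in timestamp order. It is a simplified in-memory
rendition of the procedure described above, not our actual implementation: a
production version would persist the tables and memoize already-walked shared
subtrees.
\begin{verbatim}
def process_revision(rev_id, t_rev, root, dirs, ts, filled,
                     c_early_r, d_r, c_d):
    # dirs:   dir id -> {"dirs": [dir ids], "contents": [content ids]}
    # ts:     node id -> earliest known timestamp (the mapping P)
    # filled: dir ids whose "C occur in D" entries already exist
    def contents_below(d):
        out = list(dirs[d]["contents"])
        for sub in dirs[d]["dirs"]:
            out.extend(contents_below(sub))
        return out

    def walk(d):  # d lies inside the isochrone subgraph
        for c in dirs[d]["contents"]:
            ts.setdefault(c, t_rev)
            c_early_r.append((c, rev_id))   # C occur early in R
        for sub in dirs[d]["dirs"]:
            if sub in ts and ts[sub] < t_rev:
                d_r.append((sub, rev_id))   # frontier edge: D occur in R
                if sub not in filled:       # flatten sub only once
                    c_d.extend((c, sub) for c in contents_below(sub))
                    filled.add(sub)
            else:
                ts.setdefault(sub, t_rev)
                walk(sub)

    ts.setdefault(root, t_rev)
    walk(root)
\end{verbatim}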
\subsection{Discussion}
Intuitively, the reason why the compact model is a good compromise is that we
have many revisions and a very high number of file contents that occur over and
over again in them, as discussed in Section~\ref{sec:contentdup}. Consider now
two extreme cases: (1) a set of revisions all pointing to the same root
directory but with metadata differences (e.g., timestamp or author) that make
all revisions unique; (2) a set of revisions all pointing to different root
directories that have no file contents or (sub)directories in common.
In case (1) the flat model would explode in size due to maximal duplication.
The recursive model will need just one entry in \errel{D occur in R} for each
revision. The compact model remains small as the earliest revision will be
flattened (via \errel{C occur early in R}) as in the flat model, while each
additional revision will add only one entry to \errel{D occur in R} (as in the
recursive model).
In case (2) the flat model is optimal in size for provenance tracking
purposes, as there is no sharing. The recursive model will have to store all
deconstructed paths in \errel{D occur in D}. The compact model will be
practically as small as the flat model: every revision is entirely isochrone,
so the \errel{C occur early in R} relation will be the same as the \errel{C occur in R} relation of the flat model, and the only extra item is the \errel{Directory} table.
Reality will sit in between these two extreme cases, but as the compact model
behaves well in both, we expect it to perform well on the real corpus too. The
experimental evaluation reported in the next section validates this intuition.
\section{Evaluation}
\label{sec:validation}
To compare the size requirements of the provenance data models described in
Section~\ref{sec:provenance}, we have monitored the growth of each model while
processing incoming revisions to maintain provenance mappings up to date.
Specifically, we have processed in chronological order revisions from the
reference dataset with timestamps strictly greater than the Unix epoch (to
avoid the initial peak of forged revisions discussed in
Section~\ref{sec:growth}) and up to January 1st, 2005, for a total of
$\approx$38.2~M revisions. For each revision we have measured the number of
entities and relationship entries according to the model definitions, that is:
\paragraph{Flat model} one entity for each content and revision; plus one
\errel{\small C occur in R} entry for each content occurrence
\paragraph{Recursive model} as it is isomorphic to the Merkle DAG, we have
counted: one entity for each content, directory, and revision; plus one
relationship entry for each revision$\to$directory, directory$\to$directory,
and directory$\to$content edge
\paragraph{Compact model} after identifying the isochrone subgraph of each
revision, we counted: one entity for each content and revision, plus one entity
for each directory outside the isochrone graph referenced from within; as well
as one relationship entry for each content attached to directories in the
isochrone graph (\errel{C occur early in R}), one \errel{D occur in R} entry
for each directory$\to$directory edge crossing the isochrone frontier, and one
\errel{C occur in D} entry for each content present in directories appearing in
\errel{D occur in R}.
Processing has been done running a Python implementation of the above
measurements on a commodity workstation (Intel Xeon 2.10GHz, 16 cores, 32 GB
RAM), parallelizing the load on all cores. Merkle DAG information has been
read from a local copy of the reference dataset, which had been previously
mirrored from Software Heritage\xspace. In total, revision processing took about 4 months, largely
dominated by the time needed to identify isochrone subgraphs.
\begin{table}
\centering
\caption{Size comparison for provenance data models, in terms of entities and
relationship entries. Same dataset as in \figurename~\ref{fig:model-sizes}.}
\label{tab:model-sizes}
\begin{tabular}{l|r|r|r}
& \multicolumn{1}{c|}{\textbf{Flat}}
& \multicolumn{1}{c|}{\textbf{Recursive}}
& \multicolumn{1}{c}{\textbf{Compact}} \\
\hline
entities & \num{80118995} & \num{148967553} & \num{97190442} \\
& rev: 38.2 M & rev: 38.2 M & rev: 38.2 M \\
& cont: 41.9 M & cont: 31.9 M & cont: 31.9 M \\
& & dir: 68.8 M & dir: 17.1 M \\
\hline
rel.~entries & \num{654390826907} & \num{2607846338} & \num{19259600495} \\
& & cont--dir: 1.29 B & cont--dir: 13.8 B\\
& & dir--rev: 38.2 M & dir--rev: 2.35 B \\
& & dir--dir: 1.28 B & cont--rev: 3.12 B \\
\hline\hline
rel.~ratios
& \multicolumn{1}{c|}{$\frac{flat}{compact}$ = 34.0}
& \multicolumn{1}{c|}{$\frac{flat}{rec.}$ = 251}
& \multicolumn{1}{c}{$\frac{compact}{rec.}$ = 7.39}
\\
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{nodes}
\includegraphics[width=0.7\linewidth]{edges}
\caption{Evolution over time of the sizes of different provenance data
models, in terms of entities (top) and relationship entries (bottom). Data
for Software Heritage\xspace revisions up to 2005-01-01, excluding Unix epoch.
}
\label{fig:model-sizes}
\end{figure}
Final sizes, measured in terms of entities and relationship entries, are given
in Table~\ref{tab:model-sizes}. They show, first, that the number of
relationship entries dominates that of entities in all models, by a factor
ranging from 18 (recursive model) up to \num{8000} (flat). Dealing with mappings
between source code artifacts remains the main volumetric challenge in
provenance tracking. As further evidence of this, and as a measure of the
overall amplitude of provenance tracking for all public source code, we have
also computed the number of relationship entries for the flat data model
\emph{on the full reference dataset}, obtaining a whopping $8.5\cdot 10^{12}$
entries in \errel{C occur in R}.
Second, sizes show that the Merkle DAG representation, isomorphic to the
recursive model, is indeed the most compact representation of provenance
information, although not the most efficient one to query. The compact model is
the next best, 7.39 times larger than the recursive model in terms of
relationship entries. The flat model comes last, respectively 251 and 34 times
larger than recursive and compact.
\figurename~\ref{fig:model-sizes} shows the evolution of model sizes over time,
as a function of the number of unique contents processed thus far. After an
initial transition period, trends and ratios stabilize, which bodes well for
the long-term viability of storage resources for the compact model.
Furthermore, the comparison between the compact (orange line) and flat (blue)
model shows that, at the cost of a small increase in the number of entities,
the compact model performs much better in terms of relationship entries. Even
though a small divergence in the number of entities can be observed over time
($1/10$ of an order of magnitude), the gain in terms of relationship entries
(1.5 orders of magnitude) makes it worthwhile.
In order to relate these figures to real-world storage requirements, we have
also filled a MongoDB-based implementation of the compact model---including all
attributes of \figurename~\ref{fig:compact-model} and needed indexes---while
processing revisions to perform the above measurements. Extrapolating the final
MongoDB size to the full reference dataset we obtain an on-disk size of
13~TB. While large, such a database can be hosted on a consumer workstation
equipped with $\approx$\$\num{4000} worth of SSDs, without having to resort to
dedicated hardware or substantial investments in cloud resources. Using the
compact model, universal source code provenance tracking can lay at the
fingertips of every researcher and industrial user!
\section{Threats to Validity}
\label{sec:threats}
\paragraph{Internal validity}
The main concern for internal validity is that we did not have the resources
available to perform all estimates and experiments on the full Software Heritage\xspace archive.
While our main growth results are measured on the full reference dataset, other
results are extrapolated from smaller subsets. To counter potential bias, we
have used random samplings and sizeable samples.
When comparing provenance data models we have quantitatively estimated sizes,
but only qualitatively estimated read overhead---rather than benchmarking it in
production---in order to remain technology-neutral.
Finally, we have trusted commit timestamps to determine first occurrences, even
though commit timestamps can be forged. This approach is consistent with
previous software evolution studies that consider timestamp forging a marginal
phenomenon. We also remark that for evaluating the performance of the
provenance model we only need to single out \emph{a} ``first'' occurrence, no
matter how it is determined.
\paragraph{External validity}
While Software Heritage\xspace does not cover all existing free/open source software, it is the
largest source code archive in the world and spans the most popular code
hosting and development platforms. We therefore consider that this is the best
that can be done at present.
Finally, we acknowledge the habit of using \emph{software} development
platforms for collaboration tasks other than software development (e.g.,
collaborative writing), particularly on GitHub, but we did not try to filter
out non-software projects. On the one hand we expect software development to be
the dominant factor, and on the other hand non-software projects might still
contain interesting code snippets that are worth tracking. Also, as
demonstrated in the paper, it is not \emph{necessary} to filter out
non-software projects in order to build a practical provenance tracking
solution.
\section{Conclusion}
\label{sec:conclusion}
The emergence of Software Heritage\xspace as a comprehensive archive of public source code,
spanning tens of millions of software projects over more than 40 years, enables
analysis of the evolution of software development at a novel scale.
The first contribution of this paper is a quantitative analysis of the growth
of public software development, factoring out exact code clones. Since the
advent of version control systems, the production of unique original revisions
doubles every 30 months, and the production of unique original file contents
grows even faster, doubling every 22 months. Besides confirming the perceived overall
growth of the public software ecosystem, these results open up a wealth of new
research questions.
The second contribution is a quantitative assessment of the amount of
duplication of both original file contents across different commits and of
original commits across different software origins, gaining precious
preliminary insights into the deep structure of public software development and
distribution.
The third and final contribution is the development and comparison of three
data models designed to answer the software provenance questions of ``what are
the first/all occurrences of a given file content/commit?''. The \emph{compact}
data model, based on the novel notion of isochrone subgraphs, provides a
time/space trade-off that allows tracking software provenance at the scale of
Software Heritage\xspace on consumer hardware.
In future work we intend to extend the compact data model to allow tracking
provenance at the granularity of individual lines of code, and explore how
other types of original source code artifacts evolve over time. We also intend
to study the characteristics of provenance graphs as naturally-occurring,
evolving complex networks.
\subsection{Notice}
We strongly believe that it is essential to make available to other researchers
the data on which this kind of analysis is based. The core of the work presented
here was performed between January 2017 and August 2018, but the
unprecedented size of this dataset has required significant time and effort to
comply with our principles, and this has delayed the disclosure of our work much
longer than we wanted or expected. Now that the Software Heritage\xspace Graph Dataset is
available to all~\cite{msr-2019-swh-dataset}, we are finally able to share
results that can be independently verified by other researchers.
\section{Introduction}
\label{IntroSect}
The article discusses one of the basic objects of Interval Analysis, namely, the concept
of interval. Recall that classical intervals are closed, connected, and bounded subsets
of the real line $\mbb{R}$, i.e., sets of the form
\begin{equation}
\label{UsualInterval}
[a, b] = \bigl\{\,x\in\mbb{R} \mid a\leq x\leq b\,\bigr\}
\end{equation}
(see \cite{AlefeldHerzberger, HansenWalster,MayerBook,MooreBakerCloud,NeumaierBook,
SharyBook} and other books on Interval Analysis). The set of all intervals (usually
with arithmetic operations on it) is denoted by $\mbb{IR}$. Multidimensional intervals
are their generalizations, in one sense or another.
The issues covered below were raised in an online discussion on the
\verb|reliable_computing| mailing list (see \cite{RCmaillist}) that took
place in the spring of 2018. Its starting point was the question of whether it is
advisable to introduce and further use open and half-open intervals, such as $[a, b[\,$,
$]a, b[\,$, $]a, b]$, in addition to the existing closed intervals \eqref{UsualInterval}
(we denote various types of intervals in the style of N.\,Bourbaki). The author had
experience working with similar objects and therefore took an active part in
the ensuing discussion. A summary of the views on these issues constitutes the core
of this article. Some of the ideas in the following text were previously published
in the book \cite{SharyBook}, as Section~1.11.
Hereafter, by ``traditional intervals'', we mean usual intervals of the form
\eqref{UsualInterval} that constitute the classical interval arithmetic $\mbb{IR}$,
and ``non-traditional'' intervals in Section~\ref{ClosedIntsSect} are open and
half-open intervals. Further, in Section~\ref{OtherNonTraditSect}, we consider
improper (``reversed'') intervals from Kaucher interval arithmetic $\mbb{KR}$
(algebraic and order completion of $\mbb{IR}$) as non-traditional intervals.
A short Section~\ref{EmptyIntSect} discusses usefulness of a special ``empty
interval''. Our notation follows the informal international standard \cite{INotation}.
Infinite and semi-infinite intervals of the form $[-\infty, p]$, $[q, \infty]$, and
$[-\infty, p]\cup[q, \infty]$ can also be classified as non-traditional. They were
first considered by W.\,Kahan \cite{Kahan-68} and further developed in detail
in the works \cite{Laveuve, Ratz} and some others. Usually, the interval arithmetic
of such infinite intervals is called ``Kahan interval arithmetic'', although other
terms may also be used (see, e.\,g. \cite{KearfottBook}). Over the years, Kahan
interval arithmetic has received numerous applications in Interval Analysis, and
hence it does not need additional justification from our side. For this reason,
we do not consider these non-traditional intervals in our work.
\section{Why are intervals closed in Interval Analysis?}
\label{QuestionSect}
One of the popular myths about Interval Analysis and interval arithmetic $\mbb{IR}$
(widespread among beginners and veterans alike) is that it has the disadvantage of
supporting only intervals which contain their endpoints. This allegedly makes it
impossible to perform some basic set operations (like complement), limits its
expressive power, restricts arithmetic operations, and so on.
Well, is it really a drawback that intervals are closed and have no open
endpoints? Or do we, on the contrary, simply fail to understand the underlying
reasons behind $\mbb{IR}$?
Questions of this kind were asked by W.\,Kahan more than half a century ago (see
\cite{Kahan-68}), and he was the first to propose the introduction of non-closed
intervals (although he did not carry his idea through to the end). The same issues
came up again in 1998 and then in 2018.
Of course, in the practical application of Interval Analysis, these questions are not
so relevant. The fact is, all numerical values of continuously varying quantities
encountered in practice are approximate, all measurements are performed with non-zero
error, etc. Therefore, it is almost impossible to trace the difference between a point
and other values that differ from it only infinitesimally. As a consequence, for
engineering practice, the difference between open or closed intervals seems rather
abstract and unimportant. But this question is important in theory and in calculations
with intervals.
There are arguments in favor of admitting non-traditional open and half-open intervals
in computation and in our reasoning in general, in favor of giving them ``citizenship
rights'' in Interval Analysis. Sometimes such intervals seem to significantly expand
our capabilities. For example, let $[0, 1]$ be a traditional closed interval and let
$]0, 1]$ be half-open at zero, then
\begin{equation}
\label{ZeroDivision}
\frac{[0, 1]}{[0, 1]} = [-\infty, \infty], \qquad
\text{ but }\quad\frac{[0, 1]}{\,]0, 1]} = [0, \infty],
\end{equation}
since the first fraction must contain the result of the division $0/0$, and this is
not the case for the second fraction.
In other words, the benefit from the fact that, in \eqref{ZeroDivision}, the denominator
is open at one of its endpoints is half the real line! This is very helpful in the interval
Newton method for enclosing zeros of functions. Its iterations are defined by the formula
\begin{equation}
\label{INewtonIters}
\mbf{X}^{(k+1)}\gets \mbf{X}^{(k)}\cap
{\,\eus{N}}\bigl(\mbf{X}^{(k)},\tilde{x}^{(k)}\bigr),
\quad \tilde{x}^{(k)}\in\mbf{X}^{(k)},
\qquad k = 0,1,2, \ldots ,
\end{equation}
where
\begin{equation}
\label{INewtonOperator}
\rule[-6mm]{0mm}{14mm}
{\eus{N}}(\mbf{X},\tilde{x})
:= \tilde{x} - \frac{f(\tilde{x})}{\mbf{f}'(\mbf{X})}
\end{equation}
is the interval Newton operator (see e.g. \cite{HansenWalster, MayerBook,
MooreBakerCloud, NeumaierBook, SharyBook}). If, in the fraction from
\eqref{INewtonOperator}, the numerator and denominator coincide with those
from \eqref{ZeroDivision}, then we would get a very large improvement of the result:
the interval Newton method finds solutions to equations much faster, regions
that obviously contain no solutions are eliminated more effectively, etc.
A similar situation can arise when implementing the Gauss-Seidel interval method
(see e.g. \cite{HansenWalster, KearfottBook, MooreBakerCloud, NeumaierBook, SharyBook}).
One also needs to divide by an interval there and then intersect the result with another
interval, and in this case we can again get the same huge improvement of the final
result.
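To make the effect of \eqref{ZeroDivision} concrete, here is a toy Python
sketch (the representation and all names are mine). It hard-codes the
numerator $[0, 1]$ of the example; a real extended division would handle all
sign cases and use outward rounding:
\begin{verbatim}
import math

class XInterval:
    # [lo, hi] with flags saying whether each endpoint is included.
    def __init__(self, lo, hi, lo_closed=True, hi_closed=True):
        self.lo, self.hi = lo, hi
        self.lo_closed, self.hi_closed = lo_closed, hi_closed

    def contains_zero(self):
        if self.lo < 0 < self.hi:
            return True
        if self.lo == 0 and self.lo_closed:
            return True
        return self.hi == 0 and self.hi_closed

def divide_01_by(den):
    # [0, 1] / den, for denominators den with 0 <= den.lo < den.hi.
    if den.contains_zero():
        return (-math.inf, math.inf)  # must contain 0/0
    return (0.0, math.inf)            # zero excluded from the denominator

print(divide_01_by(XInterval(0, 1)))                   # (-inf, inf)
print(divide_01_by(XInterval(0, 1, lo_closed=False)))  # (0.0, inf)
\end{verbatim}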
More than 20 years ago, I worked for the Novosibirsk software company UniPro, which
then carried out orders for Sun Microsystems, Inc. Our team was implementing
Sun's interval Fortran-95 (see \cite{SunIASpecification}), a programming language
with built-in interval data type and operations with it. Just at that time, our project
manager, Bill Walster from Sun Microsystems, was writing a book \cite{HansenWalster}
with E.\,Hansen, which was devoted to interval methods for global optimization and
equation solving. He was greatly impressed by the above observation with the interval
Newton method. Hence, it was decided to implement something like open or half-open
intervals, at least partially, insofar as the signed zero, which is inherent
to floating-point computer arithmetic according to the IEEE 754/854 standards, made it
possible to easily implement open and closed endpoints at zero.
In a computer, an interval $[a, b]$ is naturally represented by a pair of real numbers,
$(a, b)$, and, for zero endpoints, we can take that
\begin{center}
\tabcolsep=5mm
\begin{tabular}{lll}
$(a, +0)\,$ means $\;[a, 0]$, & $(a, -0)\,$ means $\;[a, 0[$ & for $a < 0$,\\[2mm]
$(-0, b)\,$ means $\;[0, b]$, & $(+0, b)\,$ means $\;]0, b]$ & for $b > 0$.
\end{tabular}
\end{center}
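This signed-zero trick can be demonstrated directly, since IEEE 754
floating-point numbers distinguish $-0.0$ from $+0.0$ (a Python sketch with my
own names, decoding machine pairs per the table above):
\begin{verbatim}
import math

def neg_zero(x):
    # -0.0 == 0.0 numerically, but copysign exposes the sign bit.
    return x == 0.0 and math.copysign(1.0, x) < 0

def decode(lo, hi):
    # Render a machine pair (lo, hi) per the zero-endpoint convention.
    left = "]" if (lo == 0.0 and not neg_zero(lo)) else "["  # +0 below: open
    right = "[" if neg_zero(hi) else "]"                     # -0 above: open
    return "%s%g, %g%s" % (left, lo + 0.0, hi + 0.0, right)

print(decode(-1.0, +0.0))  # [-1, 0]
print(decode(-1.0, -0.0))  # [-1, 0[
print(decode(-0.0, 1.0))   # [0, 1]
print(decode(+0.0, 1.0))   # ]0, 1]
\end{verbatim}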
Approximately the same was proposed for the implementation of intervals not closed
at zero by W.\,Kahan in his work \cite{Kahan-68}. Recently, J.\,Gustafson proposed
a modern computer implementation of open intervals based on the construction of
\emph{universal numbers}, Unums, developed in \cite{Gustafson}. In general,
the possibility of implementing open intervals makes available the modified division
from \eqref{ZeroDivision} and all the bonuses that follow from it. As a result,
in 1999, Bill Walster immediately jumped at the idea.
In our team, I was responsible for semantic testing and general mathematical consulting
of the works on interval Fortran-95, so I had to think deeply about the consequences
of introducing a new construction into the language. As I delved into the question,
I realized that implementation of open intervals, even partially, was hardly possible
and did not make much sense. It turned out that there are very important and even
fundamental mathematical reasons why the intervals should be closed. The fact is that,
with their mathematical properties, bounded closed intervals essentially differ from
non-closed, open and half-open, intervals. That was exactly what I reported to Bill
Walster, and the implementation of non-closed intervals in Fortran-95 was canceled.
Then the discussion in our team acquired a broader context, and the participants began
to discuss questions about whether any other intervals, besides the classical intervals,
are needed at all. Is it necessary to introduce the empty set into our algebraic systems
of intervals? These topics are discussed in Sections~\ref{OtherNonTraditSect} and
\ref{EmptyIntSect} of the work, but next we will look at what is so good about closed
intervals.
\section{Closed intervals are compacts, \\*
complete lattices and complete metric spaces}
\label{ClosedIntsSect}
Let us recall that a subset $S$ of a topological space $X$ is called \emph{compact},
if for every open cover of $S$ there exists a finite subcover of $S$ \cite{Dieudonne,
Rudin,Shilov}. In fact, the concept of compactness formalizes the idea that it is
possible to exhaust a set by a finite number of its arbitrarily small open subsets.
Bounded closed intervals are compact sets in $\mbb{R}$, while open and half-open
intervals are non-compact, with all the ensuing consequences.
Considered as metric spaces, i.\,e. sets with an abstract distance (metric), non-closed
intervals are not \emph{complete spaces} (see \cite{Dieudonne, Rudin}): the fundamental
sequences (also called \emph{Cauchy sequences}) of elements from such non-closed
intervals do not necessarily converge within such intervals. For example, the sequence
of numbers $1/k$, $k = 1,2, \ldots\,$, has no limit within the half-open interval
$\,]0, 1]$.
A partially ordered set in which we can freely take the supremum and infimum of each
pair of elements is called a \emph{lattice} \cite{Birkhoff}. A partially ordered set
is called a \emph{complete lattice} if all of its subsets have a supremum and an infimum.
Non-closed intervals are not complete lattices with respect to the standard order
``$\leq$'' on the real line, i.\,e. one cannot take infima and suprema, with respect
to the order ``$\leq$'', of every subset of a non-closed interval. An example is
again the sequence of numbers $1/k$, $k = 1,2, \ldots\,$, in the half-open interval
$\,]0, 1]$.
The above facts have many unpleasant consequences for practice, which we list below.
\paragraph{Non-compactness.}
On compact closed intervals, continuous functions reach their extrema, i.\,e. their
minimal and maximal values (the Weierstrass extreme value theorem). But on non-compact,
open and half-open intervals, continuous functions may not reach their extreme values:
for instance, $f(x) = x$ never attains the infimum of its values, $0$, on the half-open
interval $\,]0, 1]$.
\paragraph{Brouwer fixed point theorem and Banach fixed-point theorem.}
The Brouwer's fixed point theorem (see \cite{GranasDugundji,OrtegaRheinboldt,Zeidler})
states that for any continuous function $\phi$ mapping a compact convex set of
$\mbb{R}^n$ to itself, there is a point $\tilde{x}$ such that $\tilde{x} = \phi(\tilde{x})$
(a ``fixed point'' that remains unchanged). Obtained in 1909--1912, this result has become
one of the cornerstones of computational Interval Analysis, since intervals in $\mbb{R}$
and their multidimensional analogs are convex compact sets. Given an equation $f(x) = 0$,
we can always reduce it to a fixed-point form $x = \phi(x)$ and then proceed as follows.
If, using interval methods for enclosing the ranges of functions, we check the fulfillment
of the Brouwer fixed-point theorem on an interval $\mbf{X}$ for the mapping $\phi$, i.\,e.
that the inclusion $\phi(\mbf{X})\subseteq\mbf{X}$ is valid, then we rigorously prove
the existence of a solution to the fixed-point equation $x = \phi(x)$ within $\mbf{X}$.
The above technique, which constructively proves existence of solutions to equations, is
an integral part of important interval methods, in particular, the Krawczyk method,
the interval Newton method, and the Hansen-Sengupta method (see e.g. \cite{HansenWalster,
KearfottBook, MayerBook, MooreBakerCloud, NeumaierBook, SharyBook}).
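As a toy illustration of this technique (my own example: the exact range of
$\phi$ is used instead of an interval extension, and the outward rounding a
rigorous code would need is omitted), take $\phi(x) = \cos x$, which is
decreasing on $[0, \pi]$:
\begin{verbatim}
import math

def phi_range(lo, hi):
    # Range of phi(x) = cos x on [lo, hi], 0 <= lo <= hi <= pi (decreasing).
    return math.cos(hi), math.cos(lo)

def brouwer_test(lo, hi):
    # phi([lo, hi]) inside [lo, hi]  ==>  a fixed point exists there.
    rlo, rhi = phi_range(lo, hi)
    return lo <= rlo and rhi <= hi

print(brouwer_test(0.6, 0.9))  # True: x = cos x is solvable in [0.6, 0.9]
\end{verbatim}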
The Banach fixed-point theorem (also known as the \emph{contraction mapping theorem};
see e.\,g. \cite{GranasDugundji, OrtegaRheinboldt, Zeidler}) states that a complete
metric space $X$ with a contraction mapping $\phi: X\to X$ has a unique fixed point
$\tilde{x}$, i.\,e. a point such that $\tilde{x} = \phi(\tilde{x})$. It is also an important
tool of computational Interval Analysis, because traditional intervals are complete
metric spaces and hence the Banach fixed-point theorem allows one to prove
the existence of solutions to equations and even their uniqueness.
For non-closed intervals and their multidimensional analogs, the Brouwer fixed point
theorem and the Banach fixed-point theorem are not valid.
Therefore, the interval tests for the existence of solutions to equations and
systems of equations, that is, the interval Newton method, the Krawczyk method,
the Hansen-Sengupta method do not work in full with non-closed intervals.
Thus, with non-closed intervals, computational Interval Analysis is deprived
of its most powerful tools, widely used in solving equations, systems of linear and
nonlinear equations as well as in global optimization.
\paragraph{The nested intervals principle.}
It is known to be one of the popular interval
tools for both theory and verified computing. Let us recall its formulation:
\begin{quote}
Every nested interval sequence $\{\mbf{X}_{k}\}$, i.\,e.
such that $\mbf{X}_{k+1}\subseteq \mbf{X}_{k}$ \\*
for all $k$, converges and has the limit $\cap_{k=1}^{\infty} \mbf{X}_{k}$.
\end{quote}
Here the $\mbf{X}_k$ may be either one-dimensional intervals or interval boxes in $\mbb{R}^n$.
This principle turns out to be incorrect for non-traditional intervals: a nested sequence
of half-open intervals does not necessarily converge to anything and may have an empty
intersection. For example,
\begin{equation*}
\bigcap_{k=1}^\infty \hspace{1.1ex}
\bigl]\hspace{0.1ex}0,\, \tfrac{1}{k}\,\bigr] \ = \ \varnothing.
\end{equation*}
This is a great loss. Let us recall that the theory of the interval integral and
of interval estimates of the integral of a real function is based on this principle
(see \cite{CapraniMadsenRall, Rall-82}).
The most practical and efficient interval methods for solving operator equations
(integral and differential) are based on the nested intervals principle and
some fixed point theorems, in particular, the Banach fixed-point theorem and
the Schr\"{o}der fixed-point theorem \cite{Collatz, NeumaierBook}. They also become
invalid, since non-closed intervals are not complete metric spaces.
\paragraph{The Birkhoff-Tarski theorem and the Kantorovich lemma.}
These are popular fixed-point theorems for partially ordered sets, analogs
of topological fixed-point theorems that we formulated earlier. Birkhoff-Tarski
principle (also called Knaster-Tarski principle; see \cite{Birkhoff, GranasDugundji,
Tarski}) states that if $X$ is a complete lattice and $\phi: X\to X$ is an isotone
(order preserving) function, then $\phi$ has a fixed point $\tilde{x}\in X$, i.\,e.,
such that $\tilde{x} = \phi(\tilde{x})$. A feature of this result is the absence of
special requirements for the continuity of the function $\phi$, the form of the set
$X$, and its topological properties.
The Birkhoff-Tarski theorem fails for non-closed intervals, which are not complete
lattices with respect to the standard order ``$\leq$'' on $\mbb{R}$, nor with respect
to the inclusion ordering between intervals.
The Kantorovich lemma (see \cite{OrtegaRheinboldt}, Section 13.2) is a similar useful
result, a fixed-point theorem for isotone mappings. It also becomes
invalid for non-closed intervals.
\paragraph{Distance between various types of intervals.}
How should we calculate the distance between $[\alpha, \beta[$ and $[\alpha, \beta]$,
two intervals that differ in only one endpoint?
In mathematics, distance (deviation) is usually formalized by the concept of
a \emph{metric}, a function that gives a distance between each pair of elements of
a set. The metric is defined axiomatically, as a nonnegative function that satisfies
three axioms: identity of indiscernibles, symmetry, and triangle inequality
(see details e.g. in \cite{Collatz, Dieudonne, Engelking}).
The distance between intervals is known to be determined as follows (see
\cite{AlefeldHerzberger, MayerBook, MooreBakerCloud, NeumaierBook, SharyBook})
\begin{equation*}
\dist(\mbf{a}, \mbf{b})
= \max\bigl\{|\un{\mbf{a}} - \un{\mbf{b}}|, |\ov{\mbf{a}} - \ov{\mbf{b}}|\bigr\}.
\end{equation*}
It should be equal to zero for $\mbf{a} = [\alpha, \beta[$ and $\mbf{b} =
[\alpha, \beta]$. Thus, one of the main purposes of distance, which is to distinguish
between elements of a set that do not coincide with each other, is not fulfilled.
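In code the failure is immediate (a trivial Python check; the pair
representation is mine and deliberately ignores endpoint openness, which is
precisely the point):
\begin{verbatim}
def dist(a, b):
    # Standard interval metric: maximal endpoint deviation.
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

# [alpha, beta[ and [alpha, beta] share both endpoints, so any
# endpoint-based metric returns 0 and cannot tell them apart.
print(dist((0.0, 1.0), (0.0, 1.0)))  # 0.0
\end{verbatim}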
Moreover, a metric (distance) cannot be introduced in any way on the set of all
closed and non-closed intervals, that is, this set is essentially non-metrizable
as a topological space.
The Arkhangelskii metrization theorem (see \cite{Engelking}) asserts that the topology
of a space can be determined by a metric if and only if this space satisfies the first
axiom of separation (the so-called T1-axiom) and has a countable fundamental family
of open neighborhoods. Axiom T1 is the weakest separation axiom; it requires that
each of any two distinct points of the space has a neighborhood not containing the other.
It is easy to see that the space of all closed and non-closed intervals does not satisfy
even this weakest axiom: a half-open interval $[a, b[$ and its closure $[a, b]$ cannot
be surrounded by such neighborhoods.
The failure of the axiom T1 is very serious evidence that the topological
space under consideration is very exotic, even pathological. For us, in Interval Analysis,
it implies that a meaningful calculus on the set of closed and non-closed intervals
is most likely never to be constructed. Of course, this does not exclude individual
episodic applications of non-closed intervals in certain particular situations.
But in general, alas \ldots
\section{Can other non-traditional intervals \\* be useful in Interval Analysis?}
\label{OtherNonTraditSect}
Next, let us turn to the other kinds of non-traditional intervals, such as improper
(``reversed'') intervals, like $[2, 1]$, $[1, -2]$, etc.
Some time ago, A.\,Neumaier reviewed in \cite{NeumaierDraft} the properties and
applications of some non-traditional interval arithmetics (he called them by the unfortunate
term ``non-standard arithmetics'', as if someone had issued a standard on various types
of intervals). But since A.\,Neumaier himself is a pessimist who does not believe much
in the usefulness of these arithmetics, his review of applications also turned out
to be pessimistic, almost like an obituary. Below, we try to give another overview
of the capabilities of improper intervals from a more general point of view and show
that, sooner or later, they will win their rightful place among the mathematical
tools of natural and social science.
\subsection{Algebra I}
In the previous section, we considered the intervals from the viewpoint of the science
of Topology. Let us now consider sets of intervals, more precisely, various interval
arithmetics, from the viewpoint of another great science --- Algebra.
The science of Algebra is often called the science of algebraic systems, i.\,e.
the science that studies sets with certain operations and relations defined on them.
Let us look at the operations existing on the set of intervals.
In terms of algebra, operations can be of different kinds. If an associative binary operation
is defined on a set of elements, then this set is called a semigroup, a monoid,
or a group, depending on the properties of this operation. Strictly speaking,
interval arithmetic is an algebraic system on which more than one operation is defined,
but for our analysis it is sufficient to consider these operations one at a time.
A \emph{semigroup} is the weakest such structure, where almost nothing is required of
the binary associative operation between elements.
A \emph{monoid} is a semigroup with a neutral element. A reminder: a neutral
(identity) element is a special element that leaves every other element unchanged
when combined with it under the binary operation.
A \emph{group} is an algebraic system where the operation in question is reversible,
that is, for any element, there is an inverse element with respect to this operation.
In general, it is much more comfortable to work in a group than in a monoid or
a semigroup. Implicit awareness of this fact has been one of the driving forces
behind the expansion of popular and well-known algebraic systems over the past
millennia. Recall that this is why the simplest natural numbers were once expanded
to integers, then to rational numbers, and then to real and complex numbers, and
so on (although this process was not linear).
Why? The fact is, the operation in a group is ``predictable'' by its results and
``invertible''. We can restore the operands from the result of the operation. We can
perform algebraic manipulations in a group more easily and with less restrictions.
In other words, our mathematical tools are richer in a group than in a semigroup
or monoid.
Specifically, in a group with the operation ``$\ast$'', if we have an equality
\begin{equation*}
a*c = b*c,
\end{equation*}
then we can conclude that
\begin{equation*}
a = b.
\end{equation*}
And if
\begin{equation*}
a*b = c,
\end{equation*}
then
\begin{equation*}
a = b^{-1}*c,
\end{equation*}
where $b^{-1}$ is the inverse to $b$ with respect to the operation ``$\ast$''.
Additionally, we can solve equations in the group, which is not possible
in semigroups and monoids in the general case.
Do we really need such capabilities in Interval Analysis? My answer is definitely
``yes''. The author, for example, needs them, and he certainly knows that many others
also need such things. The above is especially important when we solve the so-called
``inverse problems'', when it is necessary to restore the preimage of a function
by its value. A special case of ``inverse problems'' is the well-known problem
of solving equations and systems of equations.
If we cannot restore the preimage in elementary interval operations, then we do
not have adequate tools to solve the ``inverse problems'' in general.
Yet another obvious example where the above algebraic properties prove to be
indispensable is metrology and measurement theory, namely the fundamental concept of
\emph{measurement error}. Recall that by definition, an error is the difference between
the approximate value of a quantity and its exact ideal value. In the natural sciences
and engineering, this latter is understood as the true value of a physical quantity,
that is, the value that ideally reflects the considered quantity or phenomenon within
the framework of the model (theory) we have adopted to describe it. Anyway,
the difference in the above formulation means an algebraic difference, i.e. addition
with the opposite element with respect to addition. If the measurement result and/or
the true value of a quantity are of an interval type, then it is not possible
to correctly find the error in the classical interval arithmetic, since there is
neither algebraic subtraction nor elements that are opposite to the proper intervals
with respect to addition.
In a similar situation in Geometry, when the Minkowski sum of sets is considered, and
it is required to ``inverse'' it, the so-called Hukuhara difference was introduced
\cite{Hukuhara} according to the following rule:
\begin{equation*}
A\ominus B = C \qquad \Leftrightarrow \qquad A = B + C.
\end{equation*}
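For classical intervals, the Hukuhara difference, when it exists, is computed
endpoint-wise, and it exists only when the minuend is at least as wide as the
subtrahend (a small Python sketch with my own names):
\begin{verbatim}
def hukuhara_diff(a, b):
    # C with B + C = A (Minkowski sum), i.e. C = [a1 - b1, a2 - b2].
    lo, hi = a[0] - b[0], a[1] - b[1]
    if lo > hi:
        return None  # wid(A) < wid(B): no difference in classical intervals
    return (lo, hi)

print(hukuhara_diff((1, 5), (0, 2)))  # (1, 3): indeed [0,2] + [1,3] = [1,5]
print(hukuhara_diff((0, 1), (0, 2)))  # None; in KR it would be [0, -1]
\end{verbatim}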
With respect to the classical interval arithmetic $\mbb{IR}$, it is better not to limit
ourselves to partial reversal of operations, but to correct the situation fundamentally
by performing algebraic completion of $\mbb{IR}$.
\subsection{Algebra II}
Let us consider further facts from Algebra. Even if an algebraic system with
an associative binary operation is not a group, we can judge how good or bad
this operation is in terms of its ``invertibility''.
The condition
\begin{equation}
\label{CanceLaw}
a*c = b*c \quad\Rightarrow \quad a = b
\end{equation}
is called \emph{cancellation law}. If it holds in a semigroup or monoid, then this is
a sign that the operation ``$*$'' has good ``invertibility properties'', and it is
almost as that from a group. Moreover, such a semigroup can often be enlarged to
a group, or, in other words, this semigroup can be isomorphically embedded in a group.
The corresponding result from Algebra is as follows:
\bigskip\noindent
\textbf{Theorem.}
\textit{Every commutative semigroup that satisfies the cancellation law can be
isomorphically embedded in a commutative group.}
\bigskip
The reader can find details, for example, in the book \cite{Kurosh}, Chapter 2, Section 5.
The result we cite is at the bottom of page 47 of the Pergamon Press edition published
in 1965 (an electronic version is available on the Internet).
Well, what about our interval arithmetic $\mbb{IR}$?
It is an Abelian (commutative) semigroup, with respect to both addition and
multiplication. For addition, the cancellation law is evidently satisfied, but,
for multiplication, the cancellation law is fulfilled only for intervals that do not
contain zero. In the general case, the cancellation law is not valid, as one can see
from the following example:
\begin{equation*}
[-1, 2]\cdot[2, 3] = [-3, 6] = [-1, 2]\cdot[1, 3].
\end{equation*}
As a consequence, we can embed interval arithmetic in a broader algebraic system
in which every element has an additive inverse (opposite) element, and any interval
that does not contain zero has a multiplicative inverse.
This is the well-known Kaucher interval arithmetic $\mbb{KR}$, developed in the PhD
thesis of Edgar Kaucher \cite{Kaucher}, which was defended in Karlsruhe,
Germany, in 1973, under the supervision of Prof.~Ulrich Kulisch. The main results
of this dissertation were included in the articles \cite{Kaucher77, Kaucher80}.
Earlier, the idea of algebraic extension and completion of the classical interval
arithmetic had also been implemented in a preprint by H.-J.\,Ortolf \cite{Ortolf},
although it was not elaborated in detail.
The operation opposite to the addition in $\mbb{KR}$, the so-called ``algebraic
subtraction'', is an analog to the Hukuhara difference \cite{Hukuhara} and is usually
denoted by the same symbol ``$\ominus$''. In the example of interval measurement error
from the preceding subsection, we can therefore define it as the algebraic difference
$(\tilde{\mbf{x}}\ominus\mbf{x}^\ast)$ between the measured value $\tilde{\mbf{x}}$
and the true value $\mbf{x}^\ast$ of a physical quantity.
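Since addition in $\mbb{KR}$ remains endpoint-wise even for improper intervals,
the opposite element and the algebraic subtraction are straightforward to
compute (a minimal Python sketch, with intervals as pairs of endpoints):
\begin{verbatim}
def kadd(x, y):
    # Kaucher addition: endpoint-wise, for proper and improper intervals.
    return (x[0] + y[0], x[1] + y[1])

def opp(x):
    # Additive inverse in KR: kadd(x, opp(x)) == (0, 0).
    return (-x[0], -x[1])

def ominus(x, y):
    # Algebraic subtraction x (-) y = x + opp(y).
    return kadd(x, opp(y))

x = (2.0, 3.0)
print(ominus(x, x))            # (0.0, 0.0); classical x - x gives [-1, 1]
print(ominus((1, 5), (0, 2)))  # (1, 3): agrees with the Hukuhara difference
\end{verbatim}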
A lot is said about the Kaucher interval arithmetic in the standard IEEE 1788-2015
for the implementation of interval arithmetic on digital computers \cite{IEEE1788},
although this arithmetic itself has yet to become a daily working tool for people
using interval computation.
It should be said that E.\,Kaucher's work was quite nontrivial: owing to the lack
of the multiplicative cancellation law, he had to extend interval multiplication
with the help of not only algebraic considerations, but also the inclusion order
relation. This also results in the fact
that Kaucher interval arithmetic has some ``strange'' features, such as e.\,g.
nontrivial zero divisors:
\begin{equation*}
[-1, 1]\cdot[2, -3] = 0,
\end{equation*}
which can be easily explained and interpreted from a more advanced standpoint.
Namely,
\begin{equation*}
[-1, 1]\cdot[2, -3] \
= \ \bigvee_{x\in[-1, 1]} \ \bigwedge_{y\in[-3 ,2]} (x\cdot y) \ = \ 0,
\end{equation*}
according to the min-max definition of arithmetic operations in the Kaucher interval
arithmetic \cite{Kaucher, SharySurvey, SharyBook}.
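The $\vee\wedge$ formula can be checked numerically by brute force (a naive
Python sketch over a grid; this is, of course, not the closed-form Kaucher
multiplication, just a sanity check of the formula above):
\begin{verbatim}
def minmax_product(x_lo, x_hi, y_lo, y_hi, n=401):
    # sup over x in [x_lo, x_hi] of inf over y in [y_lo, y_hi] of x*y.
    def grid(lo, hi):
        return [lo + k * (hi - lo) / (n - 1) for k in range(n)]
    return max(min(x * y for y in grid(y_lo, y_hi))
               for x in grid(x_lo, x_hi)) + 0.0   # + 0.0 normalizes -0.0

# [-1, 1] * [2, -3]: x runs over [-1, 1], y over pro([2, -3]) = [-3, 2].
print(minmax_product(-1.0, 1.0, -3.0, 2.0))  # 0.0
\end{verbatim}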
Moreover, as is often the case in mathematics, progress in one area immediately
leads to advances in other areas related to the first one. For intervals in the new
algebraically completed interval arithmetic, new logical interpretations of arithmetic
operations are possible. They were discovered in the works of Spanish researchers
in the 1970s--1990s and summarized in the book \cite{ModalIntAnal}.
An alternative presentation of this theory can be found in the works
\cite{Goldsztejn11, Goldsztejn22}.
Additionally, the Kaucher arithmetic is also a lattice with respect to inclusion,
but this is achieved in a more elegant way than simply assigning the empty set
to the minimum of two nonintersecting intervals. Namely, in $\mbb{KR}$
\begin{equation*}
\text{ minimum of $\mbf{a}$ and $\mbf{b}$ with respect to inclusion } \
= \ \bigl[\,\max\{\un{\mbf{a}}, \un{\mbf{b}}\},
\,\min\{\ov{\mbf{a}}, \ov{\mbf{b}}\}\,\bigr].
\end{equation*}
If the intervals $\mbf{a}$ and $\mbf{b}$ do not intersect, the minimum is
an ``improper interval''.
Anyway, it makes sense to conclude this section by stressing the crucial role
of the cancellation law in semigroups. As we have already said, this is a sign
of partial ``invertibility'' of the operation under study, and this fact greatly
simplifies the solution of various inverse problems in specific semigroups.
\subsection{Algebra and beyond}
We can give the arguments of the previous subsections a slightly different context
and show a different standpoint.
For several thousand years, there has existed a very general and very powerful method
for solving various mathematical problems, called the ``method of equations''.
Its essence is
\begin{list}{}{\itemsep=0pt\topsep=2pt\parsep=0pt}
\item
to designate the sought-for value through a special symbol \\
(usually a letter called ``unknown variable'') \\
and then
\item
to write out an equality (or several equalities, i.\,e., their system) \\
that the solution to the problem of interest must satisfy.
\end{list}
An equality with unknown variable whose value we have to find is called \emph{equation}.
Further, to solve the original problem, it is necessary to solve the equation, i.\,e.
find, in one way or another, the value of the unknown variable (which can be a number,
a function, etc.) that satisfies the constructed equation or system of equations.
The convenience and generality of this method is that the equation can be ``very
implicit'' with respect to the unknown quantity. Moreover, the ways in which we search
for its solution do not necessarily have to make meaningful practical sense with respect
to the unknown variable. Instead, they can be very formal manipulations that are only
mathematical in nature. It is only important that the resulting solution of the equation
has a practical meaning; the kind of mathematics with which we obtained it matters
much less.
A nontrivial fact that some readers (and even experts in Interval Analysis) may not
realize: in Interval Analysis it is also useful to solve equations, namely interval
equations, i.\,e. to find their solutions in the general mathematical
sense described above. We call these ``formal solutions'', since the nature of the
operations involved in the equation is not necessarily algebraic. Anyway, doing
this, of course, is best in an algebraically completed interval arithmetic, that is,
in $\mbb{KR}$.
``Formal solutions'' to interval equations were first considered in 1969, in the work
of the Romanian mathematician S.\,Berti \cite{SBerti}, where they were not named in
any way. Berti studied an interval quadratic equation and simply drew attention to the
fact that the concept of solving an interval equation can also be given such a meaning.
Then H.\,Ratschek and W.\,Sauer \cite{RatschekSauer} studied such solutions for a single
interval linear equation, and they used the term ``algebraic solution''. In \cite{KNickel},
K.\,Nickel considered formal solutions to interval linear systems of equations in complex
interval arithmetics, but did not name them in any specific way. Both the author himself
and other researchers have also previously used the term ``algebraic solutions''
\cite{Markov-1999, Popova, Shary1996, SharyRC97}, but we now strongly recommend the term
``formal solutions'' (see \cite{SharySurvey,ModalIntAnal,SharyBook,Shary-arXiv} and many
others).
For example, the interval $[0, 1]$ is a formal solution to the interval quadratic
equation
\begin{equation*}
[1,2]\,x^2 + [-1,1]\,x = [-1,3].
\end{equation*}
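This is easy to verify in classical interval arithmetic (a short Python check
of my own; $x^2$ is evaluated as the range of the square, which for $[0, 1]$
coincides with $x\cdot x$):
\begin{verbatim}
def imul(a, b):
    # Classical interval multiplication.
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isqr(a):
    # Range of x^2 over the interval a.
    lo, hi = a
    if lo >= 0:
        return (lo*lo, hi*hi)
    if hi <= 0:
        return (hi*hi, lo*lo)
    return (0.0, max(lo*lo, hi*hi))

x = (0.0, 1.0)
print(iadd(imul((1, 2), isqr(x)), imul((-1, 1), x)))  # (-1.0, 3.0)
\end{verbatim}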
Interval function $\mbf{x}(t) = 10.5\,[\,e^t, e^{2t}\,]$ of the real argument $t$
is a formal solution of the interval differential equation
\begin{equation*}
\frac{dx(t)}{dt} = [1, 2]\;x(t).
\end{equation*}
The interval function $\mbf{x}(t) = [\,0, 2t\,]$ on $[0,1]$ is a formal solution
to the Fredholm interval integral equation of the second kind
\begin{equation*}
x(t) + \int_0^1 (1.5s+t)\,x(s)\, ds = [\,0, 3t+1\,].
\end{equation*}
The last two (purely illustrative) examples show the main drawback of the term
``algebraic solution'': it emphasizes the algebraic nature of the operations
that form the interval equation in question, so that talking about an ``algebraic''
solution of interval differential, integral, and similar equations is at least
incorrect.
Let us recall some results obtained in the 1960s--90s, showing
the usefulness of ``formal solutions''.
\paragraph{Enclosing the united solution set.}
Let us be given an interval system of linear algebraic equations $\mbf{A}x = \mbf{b}$,
with an interval $m\times n$-matrix $\mbf{A}$ and interval right-hand side $m$-vector
$\mbf{b}$. Its united solution set is known to be the set
\begin{equation*}
\varXi_{\mathrm{uni}} (\mbf{A}, \mbf{b}) =
\bigl\{\,x\in\mbb{R}^n \mid Ax = b \;
\text{ for some $A\in\mbf{A}$ and $b\in\mbf{b}$}\;\bigr\}
\end{equation*}
i.\,e., the set of solutions to all point systems $Ax = b$ with $A\in\mbf{A}$ and
$b\in\mbf{b}$. Interval estimation of the united solution set is an important
practical problem, which is also one of the classic problems of Interval Analysis.
Hundreds of articles have been devoted to it, from the 1960s
up to the present.
It is easy to show that the united solution set of the original system of equations
coincides with the united solution set of the system in a fixed-point form
\begin{equation*}
x = (I - \mbf{A})\,x + \mbf{b}.
\end{equation*}
Next, a formal solution to the above fixed-point interval system gives an enclosure
(outer interval box) of the united solution set, if the spectral radius of the matrix
$|I - \mbf{A}|$, composed of the moduli of elements from $(I-\mbf{A})$, is less
than $1$. This is the well-known result of Apostolatos and Kulisch
\cite{ApostolatosKulisch}, obtained in 1968, which we reformulate in new terms
convenient for our purposes. The reader can also find this result at the beginning
of Chapter~12 of \cite{AlefeldHerzberger}.
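For illustration, here is a minimal sketch of the iteration with hypothetical
$2\times 2$ data (our own illustration, not an example from
\cite{ApostolatosKulisch}); for the matrix below, the spectral radius of
$|I-\mbf{A}|$ is $0.3 < 1$, so the iteration converges.
\begin{verbatim}
# Sketch (hypothetical 2x2 data): iterating x <- (I - A)x + b in
# classical interval arithmetic.  If the spectral radius of |I - A|
# is less than 1, the iteration converges to a formal solution that
# encloses the united solution set of A x = b.
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

ImA = [[(-0.1, 0.1), (0.0, 0.2)],      # the interval matrix I - A
       [(0.0, 0.2), (-0.1, 0.1)]]
b = [(1.0, 2.0), (0.5, 1.5)]

x = list(b)
for _ in range(100):
    x = [iadd(iadd(imul(ImA[i][0], x[0]),
                   imul(ImA[i][1], x[1])), b[i]) for i in range(2)]
print(x)   # an outer interval box for the united solution set
\end{verbatim}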
\paragraph{Inner estimation of the united solution set.}
If an interval system of linear equations $\mbf{A}x = \mbf{b}$ is given, then
a proper formal solution to the interval system
\begin{equation*}
(\dual\mbf{A})\,x = \mbf{b},
\end{equation*}
where ``$\dual$'' denotes dualization in Kaucher arithmetic, provides an inner box
of the united solution set. This inner box is almost always inclusion maximal
(that is, it touches the boundaries of the united solution set).
\paragraph{Inner estimation of the tolerable solution set.}
The tolerable solution set for an interval linear system $\mbf{A}x = \mbf{b}$
is known to be
\begin{equation*}
\varXi_{\mathrm{tol}} (\mbf{A}, \mbf{b}) =
\bigl\{\,x\in\mbb{R}^n \mid Ax\in\mbf{b} \;
\text{ for every $A\in\mbf{A}$}\;\bigr\}
\end{equation*}
i.\,e., the set of all vectors $x$ such that the product $Ax$ falls within
the right-hand side vector $\mbf{b}$ for every $A\in\mbf{A}$. This is the second
most important among the solution sets of interval systems of equations.
Any proper formal solution to the interval system
\begin{equation*}
\mbf{A}x = \mbf{b}
\end{equation*}
(with the same form as the initial interval system) gives an inner box of the tolerable
solution set. It is also inclusion maximal in most cases.
\paragraph{Enclosing the tolerable solution set.}
Any proper formal solution to the interval system
\begin{equation*}
x = (I - (\dual\mbf{A}))\,x + \mbf{b}
\end{equation*}
gives an enclosure of the tolerable solution set, if the spectral radius
of $|I - \mbf{A}|$ is less than $1$.
\bigskip
And so on. The list is indeed very extensive, and we could continue it, but the above
is enough for our short note. Naturally, there exist generalizations of the above
results to nonlinear systems (see, e.\,g., \cite{ModalIntAnal}).
In conclusion, it is worth noting that, historically, the short term ``solution''
as applied to interval equations has taken on a slightly different meaning. Since
the early 60s of the last century, when speaking about a \emph{solution} of an interval
equation, one has been referring to the solution of some extended problem statement
related to this equation. For example, ``to find an interval enclosure for the united
solution set to an interval equation'' (a typical example of such terminology is the work
\cite{PolyakNazin}). Or, ``to find an inner interval box within the tolerable solution
set'' (this is the so-called interval tolerance problem). And so on. In other words,
the situation at this point is similar to what we have in the theory of differential
equations, where we do not talk about solutions to individual differential equations,
per se. Usually some problem related to the differential equation in question is
formulated (initial value problem, boundary value problem, etc.), which imposes
additional conditions on the desired solution, without which the statement would be
incomplete and meaningless. Then the solutions for this extended problem are considered,
not for the single equation itself. The same is true for Interval Analysis.
\section{Empty intervals}
\label{EmptyIntSect}
Empty intervals are useful in some cases, although this is mostly applicable
in the classical interval arithmetic, equipped with set-theoretic operations.
The empty interval usually results from the intersection operation, e.\,g.
\begin{equation*}
[1, 2]\cap[3, 4] = \varnothing.
\end{equation*}
In Kaucher interval arithmetic, it is sometimes advisable to use the minimum with
respect to inclusion instead. Taking the minimum by inclusion is an operation similar
in purpose to the intersection, but ``more friendly''. For example,
\begin{equation*}
[1, 2]\wedge[3, 4] = [3, 2],
\end{equation*}
and we thus get a non-empty result which can lead to nontrivial conclusions
in the further reasoning.
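In the endpoint representation, the minimum with respect to inclusion is computed
in one line; a small sketch:
\begin{verbatim}
# Sketch: minimum with respect to inclusion in Kaucher arithmetic.
# Improper results such as [3, 2] are perfectly legal in KR.
def inclusion_min(a, b):
    return (max(a[0], b[0]), min(a[1], b[1]))

print(inclusion_min((1, 2), (3, 4)))   # (3, 2), not empty
\end{verbatim}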
What happens when we add the empty set (``empty interval'') to an interval arithmetic?
We have then
\begin{equation}
\label{EmptyInterval}
\begin{array}{ccccc}
\mbf{a} + \varnothing &=& \varnothing + \mbf{a} &=& \varnothing, \\[2mm]
\mbf{a} - \varnothing &=& \varnothing - \mbf{a} &=& \varnothing, \\[2mm]
\mbf{a}\cdot\varnothing &=& \varnothing\cdot\mbf{a} &=& \varnothing, \\[2mm]
\mbf{a} / \varnothing &=& \varnothing / \mbf{a} &=& \varnothing,
\end{array}
\end{equation}
and the cancellation law \eqref{CanceLaw} is ruined for both addition and multiplication
in interval arithmetics.
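A tiny sketch makes the trouble visible: as soon as the empty set enters a
computation, it absorbs everything, so equal results no longer imply equal operands.
\begin{verbatim}
# Sketch: the empty interval absorbs every operation, so the
# implication a + x = b + x  =>  a = b (cancellation) fails.
EMPTY = None

def iadd(a, b):
    if a is EMPTY or b is EMPTY:
        return EMPTY
    return (a[0] + b[0], a[1] + b[1])

print(iadd((1, 2), EMPTY))                         # None (empty)
print(iadd((1, 2), EMPTY) == iadd((3, 4), EMPTY))  # True, yet [1,2] != [3,4]
\end{verbatim}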
During the 2018 discussion, one of the participants, John Gustafson, compared the empty
set to zero, i.\,e., to $0$. That was the wrong metaphor: unlike the noble identity
elements $0$ and $1$, the empty set is a kind of ``vampire'' in the algebraic
sense, judging by equalities \eqref{EmptyInterval}.
No invertibility of operations. No embedding in a larger and more complete
algebraic system. Interval arithmetics with the empty set are suitable mostly for
solving ``direct problems'' and performing chains of calculations in the forward
direction. In particular, Kaucher interval arithmetic is incompatible with the empty
set, as we can see from the above reasoning.
\section*{Acknowledgements}
The author thanks R. Baker Kearfott for his proposal to organize the outcomes
of the on-line discussion to the present text.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
The existence of an exotic form of energy with negative
pressure, dubbed ``dark energy'', is one of the most widely invoked mechanisms to explain the accelerating universe. The most popular dark energy models mainly include the $\Lambda$CDM model and the scalar-field dark energy model. The $\Lambda$CDM model is preferred by most observations, though a small number of observations display slight deviations \citep{Bull_et_al_2016, Bullock_and_Boylan-Kolchin_2017}. However, on the theoretical level the $\Lambda$CDM model is troubled by the well-known cosmological constant problems \citep{Weinberg_1989,Carroll_1992}, i.e., the ``coincidence'' and ``fine-tuning'' problems. The ``coincidence problem'' asks why the present epoch is so special that the energy density of dark energy is of the same order of
magnitude as that of matter only at this period. Several possible approaches have been adopted to explain or alleviate the coincidence problem, mainly including the anthropic principle \citep{Weinberg_2000,Vilenkin_2001,Garriga_et_al_2000,Garriga_and_Vilenkin_2001}, the slowly evolving and spatially homogeneous scalar field with ``tracker'' properties (see, for instance, \citep{Copeland_et_al_2006} for a review), and the interaction between dark energy and dark matter \citep{Amendola_2000,Caldera-Cabral_et_al_2009}.
In this work, we choose to explore the coincidence problem in a different perspective. A phenomenological model with minimal underlying theoretical assumptions is adopted, where the ratio of the energy densities of dark energy and matter is parameterized as $\rho_{X} \propto \rho_{m} a^{\xi}$ \citep{Dalal_et_al_2001, Chen_et_al_2010}. This model originates from two special cases, i.e., $\rho_{X} \propto \rho_{m} a^{3}$ for the $\Lambda$CDM model and $\rho_{X} \propto \rho_{m} a^{0}$ for the self-similar solution without the coincidence problem, where $\xi = 3$ and $0$, respectively.
The estimated value of $\xi$ obtained from the observational data can directly reveal the severity of the coincidence problem. In addition, the standard cosmology without interaction between dark energy and dark matter is
characterized by $\xi + 3\omega_{X}= 0$, while $\xi + 3\omega_{X} \neq 0$ indicates a non-standard cosmology. Furthermore,
any solution with a scaling parameter $0<{\xi}<3$ makes the coincidence problem less severe \citep{Pavon_et_al_2004}.
Besides the case of $\xi = Constant$, which has been studied in previous works, we also explore the possible evolution of $\xi$ with the parametrization $\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z}$. Previous studies have placed observational constraints on the scenario of $\xi = Constant$ with several different cosmological probes (see, for instance, \citep{Pavon_et_al_2004,Guo_et_al_2007,Chen_et_al_2010,Cao_et_al_2011,Zhang_et_al_2014}), including the SNe Ia, CMB, BAO, Hubble parameter $H(z)$ versus redshift, and Sandage-Loeb test data sets.
In this work, by considering the cases of $\xi = Constant$ and $\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z}$, we explore the cosmic coincidence problem and its possible evolution with the recent observations, including the SNe Ia data from the Pantheon sample \citep{Scolnic_et_al_2018}, the CMB power spectrum data from the Planck 2018 final analysis \citep{Aghanim_et_al_2018}, and the BAO data from the measurements of the 6dFGS survey \citep{Beutler2011}, SDSS DR7 MGS \citep{Ross2015the}, and BOSS DR12 \citep{Alam2017}.
This paper is organized as follows. In Section 2, we briefly introduce the phenomenological model under consideration. Section 3 presents the observational data adopted in this work. The results from observational constraints and the corresponding analyses are displayed in Section 4. In the last section, we summarize the main conclusions.
\section{Phenomenological model: basic equations}
The model under consideration is characterized by a phenomenological form for the ratio of the dark energy and matter densities \citep{Dalal_et_al_2001,Chen_et_al_2010},
\begin{equation}
\rho_{X} \propto \rho_{m} a^{\xi}, \qquad \text{or} \qquad \Omega_{X} \propto \Omega_{m} a^{\xi},
\end{equation}
where $\Omega_{X}$ and $\Omega_{m}$ are the fractions of the energy density of the universe contributed from dark energy and matter, respectively. The scaling parameter ${\xi}$ can be constrained from observational data and used to reveal the severity of the coincidence problem.
Considering a flat FLRW universe with $\Omega_{X}+\Omega_{m}=1$, we can obtain
\begin{equation}
\Omega_{X} =\frac{\Omega_{X,0} a^{\xi}}{1-\Omega_{X,0}\left(1-a^{\xi}\right)},
\end{equation}
where $\Omega_{X,0}=\Omega_{X}(z=0)$. According to the energy conservation equation, we have
\begin{equation}
\frac{d \rho_{\mathrm{tot}}}{d a}+\frac{3}{a}\left(1+\omega_{X} \Omega_{X}\right) \rho_{\mathrm{tot}}=0,
\label{eq:equation2}
\end{equation}
where $\rho_{\mathrm{tot}}=\rho_{m}+\rho_{X}$ is the total energy density, and $\omega_{X}$ specifies the equation of state of the dark energy. Meanwhile, Eq.(\ref{eq:equation2}) can be rewritten as
\begin{equation}
\frac{d \rho_{m}}{d a}+\frac{3}{a} \rho_{m} = -\left[\frac{d \rho_{X}}{da}+\frac{3}{a}\left(1+\omega_{X}\right) \rho_{X}\right] = Q,
\end{equation}
where $Q = -(\xi+3\omega_X)\rho_m \kappa a^{\xi-1}/(1+\kappa a^{\xi})$ and $\kappa = \rho_X/(\rho_m a^{\xi})$, and the interaction term
$Q = 0\;(\neq 0)$ denotes the cosmology without (with) interaction between dark energy and matter.
Based on Eq.(\ref{eq:equation2}), we obtain
\begin{equation}
\frac{\rho_{\mathrm{tot}}}{\rho_{0}}=\exp \left(\int_{a}^{1} \frac{d a}{a} 3\left(1+\omega_{X} \Omega_{X}\right)\right).
\label{eq:equation3}
\end{equation}
Assuming that $\omega_{X}$ is a constant, we can rewrite the above equation as
\begin{equation}
E^{2}(z)=\exp \left(\int_{a}^{1} \frac{d a}{a} 3\left(1+\omega_{X} \Omega_{X}\right)\right),
\label{eq:equation4}
\end{equation}
where $E^2(z)\equiv [H(z)/H_0]^2 = \rho_{\mathrm{tot}}/\rho_{0}$, and $E(z)$ is the dimensionless Hubble parameter. When $\xi = Constant$, we can solve Eq.(\ref{eq:equation4}) and get
\begin{equation}
E^{2}(z;\textbf{p})=a^{-3}\left(1-\Omega_{X,0}\left(1-a^{\xi}\right)\right)^{-3 \omega_{X} / \xi},
\label{eq:equation5}
\end{equation}
where the parameter set is $\textbf{p} \equiv \left(\Omega_{X,0}, \omega_{X}, \xi\right)$.
However, for a variable $\xi(z)$,
\begin{equation*}
\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z},
\end{equation*}
we cannot obtain an analytical solution of Eq.(\ref{eq:equation4}); we therefore solve it numerically, with the parameter set
$\textbf{p}\equiv \left(\Omega_{X,0}, \omega_{X}, \xi_{0},\xi_{z}\right)$.
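As a minimal illustration (with purely illustrative parameter values, not the code of our actual analysis pipeline), Eq.(\ref{eq:equation4}) can be integrated numerically for the evolving scaling parameter, assuming that $\Omega_X(a)$ keeps the form given above with $\xi \rightarrow \xi(a)$; setting $\xi_z = 0$ recovers the analytic solution Eq.(\ref{eq:equation5}).
\begin{verbatim}
# Sketch (illustrative values): solving Eq.(4) numerically for
# xi(z) = xi0 + xi_z * z/(1+z), assuming Omega_X(a) keeps the form
# of the expression above with xi -> xi(a).  xi_z = 0 recovers Eq.(5).
import numpy as np
from scipy.integrate import quad

Om_X0, w_X, xi0, xi_z = 0.7, -1.0, 3.0, 0.9

def xi_of_a(a):
    z = 1.0 / a - 1.0
    return xi0 + xi_z * z / (1.0 + z)

def Omega_X(a):
    ax = a ** xi_of_a(a)
    return Om_X0 * ax / (1.0 - Om_X0 * (1.0 - ax))

def E2(z):
    a = 1.0 / (1.0 + z)
    f = lambda lna: 3.0 * (1.0 + w_X * Omega_X(np.exp(lna)))
    return np.exp(quad(f, np.log(a), 0.0)[0])

for z in (0.5, 1.0, 2.0):
    print(z, E2(z))
\end{verbatim}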
\section{Data sample}
The observational data sets used in our cosmological analyses are described as follows, including the Pantheon SNe Ia sample, the CMB power spectrum data from the final Planck 2018 results, and the BAO data from the 6dFGS survey, the SDSS DR7 MGS, and the BOSS DR12 measurements.
\subsection{SNe Ia data set}
SNe Ia, as standard candles, have proved to be a sensitive probe of cosmology (see, e.g., \citep{Branch_and_Miller_1993, Riess_Press_and_Kirshner_1996, Filippenko_2005}).
The population of confirmed SNe Ia has increased dramatically over the last two decades; in the meantime, the techniques for measuring the light curve parameters have also been continually improved to reduce the systematic uncertainties. At present,
the most popular techniques include the SALT/SALT2 \citep{Guy_et_al_2005, Guy_et_al_2007} and SiFTO \citep{Conley_et_al_2008} models, which fit the light curves of SNe Ia using spectral templates.
The SNe Ia sample adopted in this work is the Pantheon sample \citep{Scolnic_et_al_2018}, which consists of 1048 SNe Ia ($0.01 \le z \le 2.3$) combined from the Pan-STARRS1 (PS1) Medium Deep Survey, SDSS, SNLS, and various low-$z$ and HST samples.
In the Pantheon sample, the distances for each of these SNe Ia are determined after fitting their light-curves with the most up-to-date published version of SALT2 \citep{Betoule_et_al_2014}, then applying the BEAMS with Bias Corrections (BBC) method \citep{Kessler_and_Scolnic_2017} to determine the nuisance parameters and adding the distance bias corrections. The uniform analysis procedure conducted on the SNe Ia of Pantheon sample has significantly reduced the systematic uncertainties related to photometric calibration.
The observable given in the Pantheon sample can be deemed as a correction to the apparent magnitude (see Table A17 of \citep{Scolnic_et_al_2018}), i.e.,
\begin{eqnarray}
Y^{obs} &=& m_B+K \nonumber\\
&=& \mu+M,
\label{eq:Y_obs}
\end{eqnarray}
where $\mu$ is the distance modulus, $m_B$ is the apparent B-band magnitude, $M$ is the absolute B-band magnitude of a fiducial SN Ia, and the correction term $K = \alpha x_1-\beta c+\Delta_M+\Delta_B$ includes the corrections related to four different sources (see \citep{Scolnic_et_al_2018} for more details). The corresponding theoretical (predicted) value is
\begin{eqnarray}
Y^{th}&=& 5\log(d_L)+25 +M \nonumber\\
&=&5\log[(1+z)D(z)]+ Y_0,
\label{eq:Y_th}
\end{eqnarray}
where the constant term $Y_0$ is written as $Y_0 = M+5\log\left(cH_0^{-1}/\mathrm{Mpc}\right)+25$, and the luminosity distance $d_L$ and the normalized comoving distance $D(z)$ are related with each other through the following formula, i.e.,
\begin{equation}
d_L(z) = \frac{c(1 + z)}{H_0}D(z),
\end{equation}
where $c$ is the speed of light.
In a flat universe, $D(z)$ can be expressed as
\begin{equation}
D(z) = \int_0^z\frac{d\tilde{z}}{E(\tilde{z})},
\label{eq:D_z}
\end{equation}
where $E(z)$ can be worked out with Eq. (\ref{eq:equation4}) for the model under consideration.
The chi-square statistic for the Pantheon sample can be constructed as
\begin{equation}
\label{eq:chi2SNe}
\chi^2_{\textrm{SNe}}={\Delta \overrightarrow{Y}}^T\cdot\textbf{C}^{-1}\cdot{\Delta \overrightarrow{Y}},
\end{equation}
where the residual vector for the SNe Ia data in the Pantheon sample is $\Delta \overrightarrow{Y}_i = [Y^{obs}_i-Y^{th}(z_i; Y_0,\textbf{p})]$. The covariance matrix $\textbf{C}$ of the sample includes the contributions from both the statistical and systematic errors. The nuisance parameter, i.e., the constant term $Y_0$, is marginalized over with the analytical methodology presented in \citep{Giostri_et_al_2012}.
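A minimal sketch of this marginalized chi-square (the standard $\chi^2 = A - B^{2}/C$ construction, written here up to an additive constant; the inputs are placeholders):
\begin{verbatim}
# Sketch: SNe chi-square with the constant offset Y0 marginalized
# analytically (the standard A - B^2/C form, up to a constant).
import numpy as np

def chi2_sne(Y_obs, Y_th, Cinv):
    # Y_th is the model prediction *without* the offset Y0
    d = Y_obs - Y_th
    one = np.ones_like(d)
    A = d @ Cinv @ d
    B = d @ Cinv @ one
    C = one @ Cinv @ one
    return A - B**2 / C
\end{verbatim}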
\subsection{BAO data set}
\begin{table}
\caption{The BAO data adopted in this work. Distances are given in Mpc and $H(z)$ in $\mathrm{km\,s^{-1}\,Mpc^{-1}}$; the 6dFGS measurement $r_s/D_V$ is dimensionless.}
\centering
\begin{tabular}{ccccc}
\hline
Survey & $z_{eff}$ & Measurement & Value & $\sigma$ \\
\hline
6dFGS & 0.106 & $r_{s}/D_{V}$ & 0.336 & 0.015 \\
SDSS DR7 MGS & 0.15 & $D_{V}\left(r_{s,fid}/r_{s}\right)$ & 664 & 25 \\
BOSS DR12 & 0.38 & $D_{M}\left(r_{s,fid}/r_{s}\right)$ & 1518 & -- \\
& 0.38 & $H(z)\left(r_{s} / r_{s,fid}\right)$ & 81.5 & --\\
& 0.51 & $D_{M}\left(r_{s,fid}/r_{s}\right)$ & 1977 & -- \\
& 0.51 & $H(z)\left(r_{s} / r_{s,fid}\right)$ & 90.4 & -- \\
& 0.61 & $D_{M}\left(r_{s,fid}/r_{s}\right)$ & 2283 & -- \\
& 0.61 & $H(z)\left(r_{s} / r_{s,fid}\right)$ & 97.3 & -- \\
\hline
\end{tabular}
\label{tab:baodata}
\end{table}
The BAO data extracted from galaxy redshift surveys are also a kind of powerful cosmological
probe \citep[]{eisenstein1998baryonic,eisenstein2005detection}.
The BAO data set used here is
a combination of measurements from the
6dFGS at $z_{\rm{eff}}=0.106$ \citep[]{Beutler2011}, the SDSS DR7 Main
Galaxy Sample (MGS) at $z_{\rm{eff}}=0.15$ \citep[]{Ross2015the}, and the BOSS DR12 at $z_{\rm{eff}} = (0.38,0.51,0.61)$ \citep[]{Alam2017}. The corresponding measurements are listed in Table \ref{tab:baodata}.
The observable quantities used in the measurements are expressed in terms of the transverse co-moving distance $D_M(z)$, the volume-averaged angular diameter distance $D_V(z)$, the Hubble rate $H(z)\equiv H_0E(z)$, the sound horizon at the drag epoch $r_s$, and its fiducial value $r_{\rm{s,fid}}$.
Following \citep{Ryan_Chen_Ratra_2019}, we use the fitting formula of \citep{eisenstein1998baryonic} to compute $r_s$, and $r_{\rm{s,fid}}$ is computed with the fiducial cosmology adopted in the paper in which the measurement is reported.
In a flat universe, the transverse co-moving distance $D_M(z)$ equals the line-of-sight comoving distance $D_{C}(z)$, which is expressed as
\begin{equation}
D_{C}(z) \equiv \frac{c}{H_{0}} D(z),
\label{eq:equationDC}
\end{equation}
and $c$ is the speed of light.
The volume-averaged angular diameter distance is
\begin{equation}
D_{V}(z)=\left[\frac{c z}{H_{0}} \frac{D_{M}^{2}(z)}{E(z)}\right]^{1 / 3}.
\label{eq:equationDVz}
\end{equation}
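These distance measures can be sketched as follows (an illustrative implementation assuming some model $E(z)$; the value of $H_0$ below is a placeholder):
\begin{verbatim}
# Sketch: BAO distance measures in a flat universe, for a given
# dimensionless Hubble rate E(z).  H0 is illustrative only.
import numpy as np
from scipy.integrate import quad

c_kms, H0 = 299792.458, 70.0     # km/s and km/s/Mpc (assumed)

def D_M(z, E):                   # flat universe: D_M = D_C
    return c_kms / H0 * quad(lambda zz: 1.0 / E(zz), 0.0, z)[0]

def D_V(z, E):                   # volume-averaged distance as above
    return (c_kms * z / H0 * D_M(z, E)**2 / E(z)) ** (1.0 / 3.0)
\end{verbatim}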
We employ the BAO data set in the analysis with the chi-squared statistic
\begin{equation}
\chi_{\mathrm{BAO}}^{2}(\textbf{p})=\left[\vec{A}_{\mathrm{th}}(\textbf{p})-\vec{A}_{\mathrm{obs}}\right]^{T} C^{-1}\left[\vec{A}_{\mathrm{th}}(\textbf{p})-\vec{A}_{\mathrm{obs}}\right],
\label{eq:chi2_BAO}
\end{equation}
where $C^{-1}$ is the inverse of the covariance matrix. The BOSS DR12 measurements listed in the last six lines of Table \ref{tab:baodata} are correlated, and the corresponding covariance matrix is presented in Eq.(20) of \citep{Ryan_Chen_Ratra_2019}; it is also available from the SDSS website\footnote{\url{https://sdss3.org/science/boss_publications.php}}.
\subsection{CMB data set}
Observations of the CMB power spectra provide another kind of independent test of the existence of dark energy.
It is remarkable that the CMB power spectra from the WMAP \citep{Hinshaw2013WMAP} and Planck projects \citep{Aghanim_et_al_2018} have provided strong constraints on cosmological parameters. Here, we use the combination of temperature and polarization CMB power spectra from the Planck 2018 release \citep{Aghanim_et_al_2018}, including the likelihoods at multipoles $\ell=2-2508$ in TT, $\ell=2-1996$ in EE, and $\ell=30-1996$ in TE.
In practice, different algorithms have been used to estimate the CMB power spectrum, such as Commander \citep{Planck_2018_A4,Planck_2018_A5}, SimAll \citep{Planck_2018_A5}, and Plik \citep{Aghanim_et_al_2018}.
The ``Commander'' component-separation algorithm is used to estimate the power spectrum over the range $\ell=2-29$ in TT. The ``SimAll'' approach is used to estimate the power spectrum over the range $\ell=2-29$ in EE. The ``Plik'' cross-half-mission likelihood \citep{Planck_2018_A5} is used to compute the high-$\ell$ part over the range $30 \leq \ell \leq 2508$ in TT and over the range $30 \leq \ell \leq 1996$ in TE and EE\footnote{For more details on the Planck CMB spectrum and likelihood code, see \url{https://wiki.cosmos.esa.int/planckpla/index.php/CMB_spectrum_\%26_Likelihood_Code} }. Hereafter, $\mathcal{L}_{Planck}$ denotes the likelihood of the Planck data described above.
\section{Analysis and Results}
\subsection{Observational constraints}
\begin{table*}
\caption{\label{tab:parameters}
The mean values with $68\%$ confidence limits for model parameters constrained from the Pantheon SNe sample, and from a joint sample of SNe, BAO and CMB data sets, respectively. The scenarios with $\xi = Constant$ and $\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z}$ are both considered.}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccc}
\hline
Model & Data set & $\Omega_{X,0}$ & $\omega_{X}$ & $\xi$ & $\xi_{0}$ & $\xi_{z}$ \\
\hline
$\xi = Constant$ & Pantheon & $0.75_{-0.08}^{+0.13}$ &$-0.96_{-0.14}^{+0.16}$ & $3.42_{-0.62}^{+1.22}$ & - & - \\
$\xi = Constant$ & Pantheon + BAO + CMB & $0.67\pm0.01$ & $-1.12\pm0.04$ & $3.28\pm0.15$ &- &- \\
$\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z}$ & Pantheon & $0.69^{+0.04}_{-0.15}$ & $-1.01_{-0.32}^{+0.04}$ & - & $3.00_{-0.78}^{+0.09}$ & $0.94_{-1.02}^{+0.58}$ \\
$\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z}$ & Pantheon + BAO + CMB & $0.69\pm 0.01$ & $-0.99_{-0.06}^{+0.03}$ & - & $2.78_{-1.01}^{+0.28}$ & $0.93_{-0.91}^{+1.56}$ \\
\hline
\end{tabular}}
\end{table*}
\begin{figure*}
\includegraphics[width=1.9\columnwidth]{fig1.pdf}
\caption{The 2D probability distributions of model parameters in the scenario of $\xi = Constant$, constrained from the Pantheon SNe sample (red solid lines), and from a joint sample of the SNe, BAO and CMB data (green dotted lines), respectively. The contours correspond to $68\%$ and $95\%$ CLs.}
\label{fig:fig1}
\end{figure*}
\begin{figure*}
\includegraphics[width=1.9\columnwidth]{fig2.pdf}
\caption{The 2D contours of parameters in the scenario of $\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z}$. The line styles have the same meanings as in Fig.~\ref{fig:fig1}.}
\label{fig:fig2}
\end{figure*}
In our analysis, the total likelihood for parameters is
\begin{equation}
\label{eq:LH_total}
\mathcal{L}(\mathbf{p})=\prod \mathcal{L}_{i},
\end{equation}
where $\mathcal{L}_{i}$ denotes the likelihood of each data set. When using the combination of the SNe Ia, BAO and CMB data sets, it reads
\begin{equation}
\mathcal{L}_{tot}(\mathbf{p})=\mathcal{L}_{SNe}\, \mathcal{L}_{BAO}\, \mathcal{L}_{Planck}.
\end{equation}
We derive the posterior probability distributions of parameters with Markov Chain Monte Carlo (MCMC) exploration using the May 2020 version of CosmoMC \citep{Lewis_et_al_2002}.
In the following analysis, we consider two different treatment schemes for the scaling parameter $\xi$, i.e., $\xi = Constant$ and $\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z}$.
The case of $\xi = Constant$ has been widely studied in the literature \citep[see e.g.][]{Pavon_et_al_2004,Guo_et_al_2007,Chen_et_al_2010,Cao_et_al_2011,Zhang_et_al_2014}. Here we re-explore this scenario with the latest data sets. In addition, the case of $\xi(z)$ is taken into account to explore the possible evolution. We place observational constraints on the model parameters with the recent Pantheon SNe Ia sample, as well as with a combination of the SNe Ia, BAO and CMB data sets, respectively. We present the mean values with $68\%$ confidence limits for the parameters of interest in Table~\ref{tab:parameters} for both scenarios. In the scenario of $\xi = Constant$, the constraints on $\Omega_{X,0}$, $\omega_X$ and $\xi$ from the combined sample are much tighter than those from the single Pantheon SNe Ia sample.
The constraints on the parameters $(\Omega_{X,0}, \omega_X, \xi)$ from the Pantheon SNe sample are consistent with those from the ``Constitution Set'' SNe sample adopted in \citep{Chen_et_al_2010} at $68\%$ CL. However, the constraints on $(\Omega_{X,0}, \omega_X, \xi)$ from our combined sample are inconsistent with those from the joint SNe + BAO + CMB sample adopted in \citep{Chen_et_al_2010} at $68\%$ CL, though they are consistent at $95\%$ CL.
Moreover, the $\Lambda$CDM scenario, i.e., $(\omega_X, \xi) = (-1, 3)$, is accepted by the Pantheon SNe sample at $68\%$ CL; however, it is ruled out by the combined sample at $99\%$ CL. In the scenario of $\xi(z)$, the constraints on $\Omega_{X,0}$ and $\omega_X$ from the combined sample are much tighter than those from the Pantheon SNe sample, but the constraint precisions on $\xi_0$ and $\xi_z$ from the combined sample show no significant improvement compared with those from the single Pantheon SNe sample. The $\Lambda$CDM scenario, i.e., $(\omega_X, \xi_0, \xi_z) = (-1, 3, 0)$, is accepted by the Pantheon SNe sample but ruled out by the combined sample at $68\%$ CL; nevertheless, it is accepted by the combined sample at $95\%$ CL. We then turn to the constraints on the parameter $\xi_z$, which indicates the degree of temporal evolution of the scaling parameter $\xi$. The mean values with $68\%$ confidence limits are $\xi_z =0.94^{+0.58}_{-1.02}$ from the Pantheon SNe sample and $\xi_z =0.93^{+1.56}_{-0.91}$ from the combined sample. This implies that the Pantheon SNe sample cannot distinguish between the evolving and non-evolving scenarios, while the combined sample supports the time-evolving scenario at $68\%$ CL.
The two-dimensional (2D) contours for the model parameters of interest are presented in Fig.~\ref{fig:fig1} for the scenario of $\xi=Constant$ and in Fig.~\ref{fig:fig2} for the scenario of $\xi(z)$.
From Fig.~\ref{fig:fig1}, one can also see that the constraints from the combined sample are much more restrictive than those from the Pantheon SNe sample, though there are degeneracies between some parameters. The $\omega_X-\xi$ plane does not show a significant degeneracy for the Pantheon SNe sample, but displays a negative correlation for the combined sample. The $\Omega_{X,0}-\xi$ plane demonstrates a positive correlation for both the single Pantheon SNe sample and the combined sample. In particular,
the $\Omega_{X,0}-\omega_X$ plane displays a positive correlation for the Pantheon sample but a negative correlation for the combined sample. From Fig.~\ref{fig:fig2}, one can see that
the contours constrained from the combined sample shrink significantly compared with those from the Pantheon SNe sample, except for the last panel, i.e., the $\xi_0 - \xi_z$ plane. This implies that the addition of the BAO and CMB data sets cannot greatly improve the constraint precisions on $\xi_0$ and $\xi_z$.
\subsection{Model selection statistics}
\begin{table}\caption{\label{tab:table_selection}We list the natural logarithms of the Bayesian evidences, $\ln B_{i}$, and of the Bayes factors, $\ln B_{i,0}$, from the joint sample of SNe+BAO+CMB, where the subscript ``0'' denotes the $\Lambda$CDM model.}
\centering
\begin{tabular}{ccc}
\hline
Model & $\ln B_{i}$ & $\ln B_{i,0}$ \\
\hline
$\Lambda$CDM & -1940.80 & 0 \\
$\rho_{X} \propto \rho_{m} a^{\xi}$ with $\xi = Constant$ & -2022.22 & -81.42 \\
$\rho_{X} \propto \rho_{m} a^{\xi}$ with $\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z}$ & -2030.04 & -89.24 \\
\hline
\end{tabular}
\end{table}
In the framework of Bayes' theorem, the probability that the model $M_{i}$ is true can be estimated with
\begin{equation}
P\left(M_{i} \mid D\right)=\frac{P\left(D \mid M_{i}\right) P\left(M_{i}\right)}{P(D)},
\end{equation}
where $P(M_{i} \mid D)$ is the posterior probability, $D$ denotes the observational data, $P(M_{i})$ is the prior probability of the model $M_{i}$, and $P(D)$ is a normalization constant. In addition, $P(D \mid M_{i})$ is the so-called Bayesian evidence \citep{roberto2008,limitation2008}, which can be written as
\begin{equation}
P\left(D \mid M_{i}\right)=\int P\left(D \mid \bar{\theta}, M_{i}\right) P\left(\bar{\theta} \mid M_{i}\right) d \bar{\theta},
\end{equation}
where $P(D \mid \bar{\theta},M_{i})$ is the likelihood function under the model $M_i$, and $P(\bar{\theta} \mid M_{i})$ is the prior probability for the parameters $\bar{\theta}$ under the model $M_i$. Hence, calculating the Bayesian evidence requires the evaluation of an integral of the likelihood function over the entire prior distribution of the model parameters.
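For intuition, the evidence integral can be approximated by naive Monte Carlo sampling from the prior (a sketch for illustration only; the actual computation below relies on a dedicated package):
\begin{verbatim}
# Naive sketch: Monte-Carlo estimate of the (log) evidence by
# averaging the likelihood over prior draws.  Illustration only;
# dedicated tools (e.g. MCEvidence) are used in practice.
import numpy as np

def log_evidence(loglike, prior_sampler, n=100000, seed=0):
    rng = np.random.default_rng(seed)
    ll = np.array([loglike(t) for t in prior_sampler(rng, n)])
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))
\end{verbatim}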
When comparing two models, e.g., $M_{i}$ versus $M_{j}$, the Bayes factor
\begin{equation}
B_{i j}=\frac{P\left(D \mid M_{i}\right)}{P\left(D \mid M_{j}\right)},
\end{equation}
which is defined as the ratio of the Bayesian evidences of the two models, can be employed as a judgment criterion, where $B_{ij}>1$ (i.e., $\ln B_{ij} > 0$) means that the observational data prefer $M_i$ over $M_j$, and $B_{ij}<1$ implies that $M_j$ is preferred \citep{bayesfactor}.
To compare the phenomenological models under consideration with the $\Lambda$CDM model, we calculate the Bayesian evidence for each model using
\texttt{MCEvidence} \citep{MCEvidence}, a popular Python package for computing the Bayesian evidence; the observational data correspond to the joint sample of SNe, BAO and CMB data.
In Table \ref{tab:table_selection}, we show the natural logarithm of the Bayesian evidence for each model, $\ln B_i$, as well as the natural logarithm of the Bayes factor, $\ln B_{i,0}$, where the subscript ``0'' denotes the $\Lambda$CDM model.
It turns out that the $\Lambda$CDM model is the one most supported by the joint sample, since $B_{1,0}$ and $B_{2,0}$ are both smaller than 1, where the subscripts ``1'' and ``2'' denote the scenarios with a constant $\xi$ and a variable $\xi(z)$, respectively. In addition, $B_{1,2} = B_1/B_2$ is greater than 1, since $\ln B_{1,2} = \ln B_1 - \ln B_2 = -2022.22-(-2030.04) = 7.82 > 0$, so the scenario with a constant $\xi$ is more competitive than the one with a variable $\xi(z)$.
\section{Summary and conclusions}
We have concentrated on a kind of phenomenological model of cosmology, in which the assumption $\rho_{X} \propto \rho_{m} a^{\xi}$ is adopted. As a key parameter, the scaling parameter $\xi$ reveals the severity of the coincidence problem, where the particular values $\xi = 3$ and $\xi = 0$ correspond to the $\Lambda$CDM scenario and the self-similar solution without the coincidence problem, respectively. Besides the scheme assuming $\xi = Constant$, we have also considered the scenario with a variable $\xi(z) = \xi_{0} + \xi_{z}\,\frac{z}{1+z}$ to explore the possible evolution. The observational constraints on the model parameters are conducted with both the single Pantheon SNe Ia sample and a joint sample of SNe, BAO and CMB data sets, where the CMB power spectrum data are from the Planck 2018 final analysis, and the BAO data are from the measurements of 6dFGS, SDSS DR7 MGS, and BOSS DR12.
The main conclusions can be summarized as follows:
(i) In the case of $\xi = Constant$, the $\Lambda$CDM scenario, i.e., $(\omega_X, \xi) = (-1, 3)$, is accepted by the Pantheon SNe sample and by the joint sample at $68\%$ CL and $95\%$ CL, respectively. Moreover, in the case of a variable $\xi(z)$, the $\Lambda$CDM scenario, i.e., $(\omega_X, \xi_0, \xi_z) = (-1, 3, 0)$, is also accepted by the Pantheon SNe sample and by the joint sample at $68\%$ CL and $95\%$ CL, respectively.
(ii) According to the observational constraints on the model parameters, the Pantheon SNe sample cannot distinguish between the scenarios of a constant $\xi$ and a variable $\xi(z)$, since $\xi_z\in [-2.60, 5.23]$ at $95\%$ CL; moreover, the joint sample also cannot determine whether the scaling parameter $\xi$ is variable, since $\xi_z\in [-0.67, 4.40]$ at $95\%$ CL.
(iii) According to the Bayesian evidences calculated from the joint sample, we find that the $\Lambda$CDM model is the one most supported by the joint sample; furthermore, the joint sample prefers the scenario with a constant $\xi$ over the one with a variable $\xi(z)$.
(iv) The inclusion of the BAO and CMB data sets provides only very limited improvements on constraining $\xi_0$ and $\xi_z$ in the scenario of $\xi(z)$, but it significantly reduces the allowed regions of the other parameters. Thus, to diagnose the evolution of the scaling parameter $\xi$ more robustly, it seems quite necessary to
explore other probes that can more efficiently improve the constraints on $\xi_0$ and $\xi_z$.
\section*{Acknowledgments}
This work has been supported by the National Natural Science Foundation of China
(Nos. 11988101, 11633001, 11920101003, 11703034, 11773032 and 11573031), the Strategic Priority Research
Program of the Chinese Academy of Sciences (No. XDB23000000), the Interdiscipline Research Funds
of Beijing Normal University, and the NAOC Nebula Talents Program.
\paragraph{Note added.} The data underlying this article will be shared on reasonable request to the corresponding author.
\section{Introduction}
Transiting exoplanets present unique opportunities for observational study. It is only for these planets that masses, radii, and atmospheric properties together may be determined. These data give important insight into the planets' structures, and thus, their formation and evolutionary history as well. Additionally, observations of transiting planets allow very precise characterization of their orbits, and this can be leveraged to investigate the architecture of the systems the planets reside in. The idea behind this is that another planet in the same system will perturb the transiting planet's orbit. These orbital variations could be detectable in transit observations over time, and they could be used to characterize the perturbing planet without the need to detect it using additional observational methods (e.g. radial velocities). On the other hand, the absence of such observed variations can be used to place limits on the mass and orbital properties of a hypothetical additional planet.
The main method used to look for perturbations arising from an additional planet in a transiting planet system is the search for transit timing variations \citep[TTVs,][]{agol05, holman05}. The premise is that perturbations to a transiting planet's orbit from another planet will cause deviations from strict periodicity. By regularly measuring the time of central transit for the transiting planet one would observe increasing deviation from the expected constant ephemeris. For example, a typical data set of measured transit times with precisions on the order of a few tens of seconds and spread out over a few years can be sensitive to perturbations from additional planets with masses of 1\,M$_{\oplus}$ or even lower in or near low-order mean motion resonances with the transiting planet. Therefore, TTV investigations can probe for planets in a unique region of parameter space.
To date, there have been no definitive detections of transit timing variations attributable to the existence of an additional planet in a transiting planet system, although there are some systems like OGLE-TR-111 \citep{diaz08} that do warrant further observation and study to determine whether some limited discrepant data are indicative of true TTVs. As a result of no definitive detected variations, most work in this area has been on establishing baselines for long-term monitoring \citep[e.g. the Transit Light Curve project,][]{holman06} and setting limits on the existence of additional planets using the observed constancy as a constraint.
The transiting planet systems for which detailed calculations have been carried out to place limits on additional planets based on no observed TTV variations are TrES-1 \citep{steffen05}, HD\,209458 \citep{agol07, miller-ricci08a}, HD\,189733 \citep{miller-ricci08b}, and GJ\,436 \citep{bean08}. The upper limits to additional planets in these systems are interesting for a variety of reasons. For the three gas giant transiting planets (TrES-1b, HD\,209458b, and HD\,189733b), the lack of observed TTVs rules out the existence of terrestrial-mass planets in or near interior, low-order mean motion resonances. Such a system architecture would have been the result of shepherding migration \citep{raymond08}. The obtained limits rule out this evolutionary scenario in these specific systems, unless some other physical process (e.g. tidal evolution) is responsible for driving the system out of resonance after the migration has stopped. If the same kind of limits are obtained for other similar systems in the future, then that would indicate that the shepherding of terrestrial-mass planets by inward migrating gas giants rarely plays a role in the evolutionary history of planetary systems.
In the case of the planet HD\,209458b, limits from the lack of TTVs and radial velocity variations together significantly constrain the existence of a perturbing planet as a cause for its ``inflated'' nature through eccentricity pumping and the subsequent dissipation of tidal energy \citep[e.g.][]{bodenheimer01}, although not all possible perturbing planets can be ruled out. For the GJ\,436 system, the absence of TTVs for the ``Hot Neptune'' planet allowed \citet{bean08} to disprove the existence of the additional 5\,M$_{\oplus}$ planet proposed by \citet{ribas08} to explain its eccentric orbit.
I present an analysis of the transit times for CoRoT-1b\footnote{Recently the CoRoT team changed the designation of this planet from CoRoT-Exo-1b.} to search for deviations arising from perturbations from an additional planet in the system, and to place limits on the mass and orbit of such a hypothetical planet. CoRoT-1b was one of the first two planets discovered using data from the CoRoT satellite \citep{barge08}. Therefore, it presents an interesting chance to make an assessment of the real TTV sensitivity of the long, continuous sequence of space-based transit photometry from this mission. The paper is organized as follows. In \S2 I describe the CoRoT data and reduction. In \S3 I present the light curve modeling to determine transit times for each of the individual observed events. I describe the analysis of these transit times to search for variations and place limits on additional planets in \S4. I conclude in \S5 with a discussion of the results.
\section{Observations and data reduction}
CoRoT-1 was observed as part of the first CoRoT observing run of 55 days between February 2 and April 6, 2007. Details of the observations and the original analysis that definitively established the star as a host to a transiting planet were presented by \citet{barge08}. Thirty-six transit events were observed. Part-way through the observing run it was realized that these were likely transiting planet events and the sampling rate for the photometric aperture containing CoRoT-1 was changed from once per 512\,s to once every 32\,s as the usual on-board binning of 16 exposures was switched off. The first 20 transits were observed using the nominal sampling, while the last 16 were obtained in the high-frequency mode.
I retrieved the pipeline reduced so-called ``N2'' chromatic photometric time series of CoRoT-1 from the CoRoT archive\footnote{http://idoc-corot.ias.u-psud.fr/index.jsp}. A description of the data processing steps leading to the N2 data is given by \citet{auvergne09}. From the retrieved data I extracted only the time series points flagged with a valid status and ignored those flagged as invalid (e.g. data taken while the satellite passed through the South Atlantic Anomaly or while it was entering or exiting the Earth's penumbra).
I made a few modifications to the extracted data before analyzing them to determine the transit parameters. For the data with the 512\,s sampling, the times given in the N2 data are at the end of the first 32\,s exposure in the sequence of the 16 exposures that are binned together. For the data with the 32\,s sampling, the times given in the N2 data are at the end of the exposure. I applied the appropriate corrections to the time stamps so that they corresponded to the midpoints of the exposures. I also converted the given heliocentric times to the reference frame of the barycenter, although this correction was relatively small (1.7\,s on average).
The chromatic N2 data contain time series obtained in three different spectral channels referred to as the blue, green, and red channels. I inspected these data for abnormalities indicative of systematic effects that were not fixed in the normal CoRoT pipeline processing. \citet{barge08} noted that their data, which were based on a preliminary reduction, were corrupted by strong cosmic ray events during two of the transits. In the version of the data that I worked with, I noticed several discontinuities attributable to cosmic ray strikes in the blue channel data when compared to the green and red channel data. I discarded some of these affected data and corrected the rest as described below.
I identified one transit event (\#30) for which the data were too corrupted by a strong cosmic ray strike for light curve modeling, and none of the data in the date range between 43.9 and 44.8\,d after the start of the observing run were included in further analysis. The blue channel data after this were normalized to the typical level seen before the event.
One other strong cosmic ray event was seen in the blue channel data at 1.27\,d after the beginning of the observing run. After this event, the blue channel flux exhibits an exponential decay back to the normal level over the next 18\,d. I corrected this part of the blue channel flux with a method like that used by \citet{aigrain08} to correct for a similar event in the data for CoRoT-Exo-4. The goal was to correct the blue channel data so that the blue-to-green and blue-to-red channel flux ratios were smoothly and slowly varying functions of time similar to the green-to-red channel flux ratio. To do this, I fit a power-law to the blue channel flux over the affected range. I limited the fit to data well outside of a transit. I then divided all the data in the affected range (i.e., including data during transits) by the best fit, with the overall normalization set by the typical flux just before the event.
After applying the described corrections to the blue channel flux, I summed the data from the three spectral channels to yield a ``white'' photometric time series. The N2 photometric counts are given as the number of detected photoelectrons per second, so I multiplied each sample by its effective exposure time to give the total number of counts. I took the square root of these values as the corresponding photon-limited uncertainty. The median uncertainty in the 512\,s samples was 94\,ppm, while the median uncertainty in the 32\,s samples was 375\,ppm.
As a check of the effect of the applied corrections to the blue channel on my final results, I applied the light curve modeling (see \S3) to different realizations of the data. In addition to the nominal analysis of the corrected data, I also fit a version of the data where no corrections were applied (but still ignoring the same parts of the data considered irrecoverably corrupted), and a version of the data where the blue channel data were not included in the white light curve. In all cases the determined transit parameters were consistent at the level expected from their a posteriori uncertainties. The residuals from the fit to the corrected data were the lowest, which is mainly due to the significantly improved data for the transits immediately following the cosmic ray event that the data were corrected for. Therefore, I conclude that the corrections have the desired effect of improving the photometric data, and that this simply results in increased precision on the determined transit parameters rather than a large systematic effect on the parameters themselves.
\section{Light curve modeling}
I modeled the white light curve specified above to determine the parameters that best describe the observed transits. For each of the transits with good data, I extracted the portion of the light curve that occurred within 0.4\,d from the central transit time predicted using the ephemeris given by \citet{barge08}. This yielded 35 individual light curves. I used the exact analytic formulas including quadratic limb darkening given by \citet{mandel02} to create the model that was fitted to each of these light curves.
The global parameters of the model were the ratio of the planet and host star radii ($R_{p}/R_{\star}$), the ratio of the planet orbital semi-major axis and host star radius ($a/R_{\star}$), the planet orbital inclination ($i$), and the quadratic limb darkening coefficients ($\gamma_{1}$ and $\gamma_{2}$). I determined unique central transit times ($T_{c}$), and flux normalizations and linear trends for each of the 35 transit events. All the transit light curves were fit at the same time to simultaneously determine the global parameters and the individual event parameters. For all the modeling I assumed the transiting planet was on a circular orbit with a fixed period. I first carried out the analysis using the orbital period given by \citet{barge08}. After this, I re-determined the orbital period based on the measured individual transit times (see \S4.1) and then repeated the light curve modeling with this new period.
I used a Levenberg-Marquardt algorithm to determine the parameters that yielded the best-fit model to the observed data. The standard $\chi^{2}$ parameter was used as the fit quality metric throughout. I applied the algorithm iteratively to reject outliers and revise the photometric error estimates. I began by first fitting the light curves assuming the photon-limited uncertainties. After the best-fit model was identified, I iteratively rejected highly deviant points and re-fit the data. The rejection threshold was set for each of the individual transit light curves to be four times the rms of the residuals around the best-fit model. This step resulted in 1.8\% of the points being eliminated.
In the next step, I calculated an adjustment factor for the photon-limited uncertainties. This factor was given by the square root of the reduced $\chi^2$ for the best fit to all the data together (minus the data points rejected in the previous step). The value was found to be 7.3, which indicates much larger true uncertainties in the photometry than those given by counting statistics alone. The reason for this discrepancy is unknown. I multiplied the photon-limited uncertainties by this factor and then re-fit the data a final time. After the adjustment, the median uncertainty in the 512\,s samples was 681\,ppm, while the median uncertainty in the 32\,s samples was 2724\,ppm.
The transit light curves and best-fit model are shown in Fig.~\ref{f1}. The gaps in the data are the sections of the time series that were flagged as invalid by the CoRoT pipeline. The fit residuals are Gaussian distributed, which validates the global adjustment to the photon-limited uncertainties based on the initial reduced $\chi^2$ value.
To estimate the uncertainties in the determined parameters, I used the residual permutation bootstrap or ``prayer bead'' method. I generated 10\,000 simulations of the transit light curves by adding to the best-fit model the original fit residuals cyclically shifted by a random number of points. I fitted each of these simulated data sets in the same way as I fit the real data. The standard deviations of the resulting parameter distributions were taken to be the parameter uncertainties. The best-fit global parameters and their corresponding uncertainties are given in Table~\ref{t1}. The transit times and uncertainties are given in Table~\ref{t2}.
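The resampling step can be sketched as follows (a schematic only, with \texttt{fit\_transit} standing in for the full Levenberg-Marquardt light curve fit):
\begin{verbatim}
# Sketch of the residual-permutation ("prayer bead") error
# estimate; fit_transit() is a placeholder for the full
# Levenberg-Marquardt light-curve fit.
import numpy as np

def prayer_bead(time, flux, best_model, fit_transit, n_sim=10000):
    resid = flux - best_model
    rng = np.random.default_rng(42)
    sims = []
    for _ in range(n_sim):
        shift = rng.integers(resid.size)
        fake = best_model + np.roll(resid, shift)   # cyclic shift
        sims.append(fit_transit(time, fake))
    return np.std(np.asarray(sims), axis=0)  # parameter uncertainties
\end{verbatim}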
My results for the physical and orbital parameters of the planet and star are slightly different from the values given by \citet{barge08}. I find a larger ratio of the planet and host star radii, a smaller ratio of the planet orbital semimajor axis and host star radius, and a lower planet orbital inclination, all at about 2\,$\sigma$ formal confidence. As discussed above, these results are rather insensitive to the additional reductions I applied to the pipeline processed data. Therefore, the difference between my result and that of \citet{barge08} probably arises from differences in the data themselves. The data I analyzed were processed with a more recent version of the CoRoT pipeline, whereas the data \citet{barge08} analyzed were processed with a preliminary version of the pipeline. It is likely the more recent pipeline-reduced data are of superior quality due to better corrections for systematic effects that were developed as the CoRoT mission has progressed \citep{auvergne09}. All of my determined transit parameters have lower uncertainties despite my using a similar error estimation method (residual permutation bootstrap) as \citet{barge08}. This suggests the more recently reduced data are indeed of better quality. I conclude that my determined transit parameters are probably also correspondingly more robust, and I utilize the individual transit times as described below.
\begin{table}
\caption{Global transit parameters for CoRoT-1b.}
\label{t1}
\centering
\begin{tabular}{ll}
\hline\hline\\[-3mm]
Parameter & Value \\
\hline\\[-3mm]
$R_{p}/R_{\star}$ & $0.1433\,\pm\,0.0010$ \\
$a/R_{\star}$ & $4.751\,\pm\,0.045$ \\
$i$ ($\degr$) & $83.88\,\pm\,0.29$ \\
$\gamma_{1}$ & $0.57\,\pm\,0.10$ \\
$\gamma_{2}$ & $-0.16\,\pm\,0.18$ \\
\hline\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Transit times and residuals from the mean ephemeris for CoRoT-1b.}
\label{t2}
\centering
\begin{tabular}{cc}
\hline\hline\\[-3mm]
$T_{c}$ & O - C \\
(BJD) & (s) \\
\hline\\[-3mm]
2454138.32761 $\pm$ 0.00047 & 21.3 \\
2454139.83712 $\pm$ 0.00059 & 68.3 \\
2454141.34485 $\pm$ 0.00062 & -38.2 \\
2454142.85425 $\pm$ 0.00039 & -0.7 \\
2454144.36411 $\pm$ 0.00163 & 76.6 \\
2454145.87255 $\pm$ 0.00042 & 31.5 \\
2454147.38076 $\pm$ 0.00048 & -34.1 \\
2454148.88942 $\pm$ 0.00038 & -60.5 \\
2454150.39899 $\pm$ 0.00026 & -8.1 \\
2454151.90798 $\pm$ 0.00045 & -6.4 \\
2454153.41661 $\pm$ 0.00041 & -35.3 \\
2454154.92615 $\pm$ 0.00035 & 14.5 \\
2454156.43527 $\pm$ 0.00026 & 28.1 \\
2454157.94481 $\pm$ 0.00073 & 77.5 \\
2454159.45331 $\pm$ 0.00050 & 36.9 \\
2454160.96252 $\pm$ 0.00029 & 58.4 \\
2454162.47066 $\pm$ 0.00045 & -13.2 \\
2454163.97926 $\pm$ 0.00044 & -44.4 \\
2454165.48852 $\pm$ 0.00047 & -18.9 \\
2454166.99747 $\pm$ 0.00054 & -20.6 \\
2454168.50619 $\pm$ 0.00028 & -41.4 \\
2454170.01596 $\pm$ 0.00020 & 27.3 \\
2454171.52372 $\pm$ 0.00028 & -76.5 \\
2454173.03365 $\pm$ 0.00026 & 7.1 \\
2454174.54240 $\pm$ 0.00033 & -11.6 \\
2454176.05183 $\pm$ 0.00026 & 28.2 \\
2454177.56026 $\pm$ 0.00023 & -18.2 \\
2454179.06932 $\pm$ 0.00029 & -9.9 \\
2454180.57844 $\pm$ 0.00025 & 3.7 \\
2454183.59609 $\pm$ 0.00024 & -20.5 \\
2454185.10512 $\pm$ 0.00031 & -15.3 \\
2454186.61471 $\pm$ 0.00031 & 38.4 \\
2454188.12338 $\pm$ 0.00050 & 12.8 \\
2454189.63227 $\pm$ 0.00038 & 6.3 \\
2454191.14143 $\pm$ 0.00026 & 23.5 \\
\hline\hline
\end{tabular}
\end{table}
\begin{figure*}[ht!]
\resizebox{\hsize}{!}{\includegraphics{fig1.eps}}
\caption{Individual normalized transit light curves for CoRoT-1b (points) with the best-fit model (lines). The number in each panel indicates which transit event is plotted.}
\label{f1}
\end{figure*}
\section{Transit time analysis}
\subsection{Search for perturbations}
To search for TTVs, I fit the determined transit times with a model assuming a constant period and examined the residuals. The obtained mean transit time and period are given in Table~\ref{t3}. The residuals from the fit are plotted in Fig.~\ref{f2}. The rms of the residuals is 37\,s and the maximum deviation is 78\,s. The $\chi^2$ of the fit is 43.2 for 33 degrees of freedom (reduced $\chi^{2}$\,=\,1.31). The probability for a value drawn from the $\chi^2$ distribution to equal or exceed this value is 11\%.
Although the reduced $\chi^{2}$ for the fit to the transit times is somewhat higher than would be expected for a constant periodicity and well estimated errors, the significance of the discrepancy is low. Furthermore, the determined time for one transit (\#23) is essentially solely responsible for the larger than expected $\chi^2$ because it is deviant by 3.2 times its uncertainty. I closely examined the light curve for this event and found that the data did not exhibit any obvious signs of systematic error. Removing this transit time from the data set and re-fitting yielded a reduced $\chi^{2}$\,=\,1.01. Additionally, the residuals closely follow a Gaussian distribution and no residual point exceeds the standard deviation of the group by more than a factor of 2.1. Therefore, I conclude that the estimated transit time errors are reasonable, and that there is no evidence for TTVs given the precision of the data.
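Schematically, the constant ephemeris fit is a weighted linear least-squares problem (a sketch with placeholder inputs, not the code used for the analysis):
\begin{verbatim}
# Sketch: weighted least-squares fit of a constant ephemeris
# T_c(n) = T0 + n * P, and the timing residuals (O - C).
import numpy as np

def linear_ephemeris(epochs, t_c, sigma):
    epochs = np.asarray(epochs, dtype=float)
    w = 1.0 / np.asarray(sigma) ** 2
    A = np.column_stack([np.ones_like(epochs), epochs])
    cov = np.linalg.inv(A.T @ (A * w[:, None]))
    T0, P = cov @ (A.T @ (w * t_c))
    resid = t_c - (T0 + P * epochs)
    chi2 = np.sum(w * resid ** 2)
    return T0, P, resid, chi2
\end{verbatim}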
\begin{table}
\caption{Ephemeris for CoRoT-1b.}
\label{t3}
\centering
\begin{tabular}{lc}
\hline\hline\\[-3mm]
Parameter & Value \\
\hline\\[-3mm]
$T_{c}$ (BJD) & 2454159.452879 $\pm$ 0.000068\\
$P$ (d) & 1.5089656 $\pm$ 0.0000060\\
\hline\hline
\end{tabular}
\end{table}
\subsection{Limits on additional planets}
As the transit times do not exhibit any evidence for perturbations to the transiting planet by another body, I turned my attention to placing limits on the mass and orbit of a hypothetical additional planet in the system. I began by first delineating the orbital parameter space such a planet could exist in based on a stability argument. To do this, I ran a long-term N-body simulation of CoRoT-1b and some massless test particles using the Mercury code \citep{chambers99}. The test particles were distributed between orbital periods of 0.1\,d and 15\,d ($a$\,=\,0.004 to 0.117\,AU) in steps of 0.02\,d. The simulation was run for 10$^{6}$ orbits of CoRoT-1b ($1.5 \times 10^{6}$\,d). At the end of the simulation I determined for which orbital periods the particles did not become destabilized, leading to collisions with the central star or the planet, or to ejection. I found that no test particles remained stable between the central star and the planet. Outside the planet's orbit, I found that test particles remained in stable orbits for periods longer than 2.77\,d ($a$\,$>$\,0.038\,AU).
With the region of stability for a hypothetical additional planet established, I then calculated the maximum mass such a planet could have for a given orbital period in this region without perturbing the transiting planet so much that its transit times would be inconsistent with the observed transit times. I followed the methodology used by \citet{miller-ricci08a, miller-ricci08b} for this step. The technique is based on the principle that TTVs must exhibit some non-linearity to be detectable. That is, they must be distinguishable from the effect of simply assuming an incorrect period for the transiting planet.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig2.eps}}
\caption{Transit timing residuals for CoRoT-1b.}
\label{f2}
\end{figure}
I integrated the orbits of the transiting planet and a test planet over the timespan of the observations using the Bulirsch-Stoer integrator in the Mercury code with the methodology described by \citet{bean08} to generate model transit times. The transiting planet's orbital parameters were initialized at their nominal values. The test planet was initialized on a circular orbit coplanar with the transiting planet. These simplifying assumptions about the orbit of the second planet are reasonable because an additional planet in a non-coplanar and/or eccentric orbit would tend to lead to even larger TTVs. The TTV signal was calculated over a grid of possible mean anomaly values for a given test planet orbital period and mass (0$\degr$ -- 360$\degr$ in steps of 1$\degr$) to marginalize over this parameter.
The transit times predicted from a given orbital integration were subtracted from the observed times to give the residuals. I fit these residuals with a first order polynomial (i.e. a linear trend). For a given period, I started with a test planet mass of 0.1\,M$_{\oplus}$ and increased this value until the smallest fit $\chi^2$ in the grid of mean anomaly values degraded by more than a certain amount from the $\chi^2$ of the best-fit constant period model to the observed transit times. I adopted limits corresponding to 3$\sigma$ confidence ($\Delta\chi^2$\,=\,9). The calculations were done for orbital periods between 2.77\,d (i.e. the shortest period for which a massless test particle was stable) and 10.0\,d in steps of 0.01\,d. A finer grid of points with steps of 0.002\,d was used around the 1:2, 1:3, and 1:4 mean motion resonances to better resolve the limits in these areas, where the perturbations are the most sensitive to the orbital configuration.
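In pseudocode form, the limit search reduces to nested loops over period, mass, and mean anomaly, with a linear trend removed before computing $\chi^2$. The sketch below is a schematic rendering of this loop; the placeholder model routine stands in for the N-body integration, and the mass-step factor is an arbitrary choice.
\begin{verbatim}
import numpy as np

def model_transit_times(period, mass, mean_anomaly, epochs):
    """Placeholder for the two-planet N-body integration; returns the
    perturbed mid-transit times of the transiting planet."""
    raise NotImplementedError

def detrended_chi2(t_model, t_obs, sigma, epochs):
    resid = t_obs - t_model                        # observed minus model
    coef = np.polyfit(epochs, resid, 1, w=1.0 / sigma)
    trend = np.polyval(coef, epochs)               # absorbs a wrong period
    return np.sum(((resid - trend) / sigma)**2)

def mass_limit(period, t_obs, sigma, epochs, chi2_best, dchi2=9.0):
    """Smallest test-planet mass excluded at 3 sigma for this period."""
    mass = 0.1                                     # Earth masses, start value
    while True:
        chi2 = min(detrended_chi2(
                       model_transit_times(period, mass, M, epochs),
                       t_obs, sigma, epochs)
                   for M in np.arange(0.0, 360.0, 1.0))  # marginalize over M
        if chi2 > chi2_best + dchi2:
            return mass
        mass *= 1.1                                # assumed mass step
\end{verbatim}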
\begin{figure*}[ht!]
\resizebox{\hsize}{!}{\includegraphics{fig3.eps}}
\caption{Upper mass limits (3\,$\sigma$ confidence) from the transit timing analysis for an additional planet in the CoRoT-1 system as a function of orbital period. The dashed lines indicate the orbital periods corresponding to the 1:2, 1:3, and 1:4 mean motion resonances with the transiting planet.}
\label{f3}
\end{figure*}
\begin{figure*}[ht!]
\resizebox{\hsize}{!}{\includegraphics{fig4.eps}}
\caption{Same as Fig.~\ref{f3} except that the data are from an analysis of simulated transit times for a system like CoRoT-1 observed during a long run.}
\label{f4}
\end{figure*}
The results of the limit calculations are shown in Fig.~\ref{f3}. Masses greater than 4\,M$_{\oplus}$ are ruled out for a planet in a 1:2 mean motion resonance with CoRoT-1b. Interestingly, the data yield less stringent limits on planets near the 1:3 mean motion resonance ($\sim$5\,M$_{Jup}$) than in the surrounding parameter space ($\sim$2\,M$_{Jup}$). Planets with masses of 1\,M$_{Jup}$ are ruled out for all orbital periods less than about 4.2\,d. Planets with masses of 10\,M$_{Jup}$ can only be ruled out for orbital periods less than 9.4\,d on the basis of the transit times. However, the presence of such a massive planet would likely lead to instability, so the true limits are probably lower than this.
\subsection{Simulation of possible CoRoT TTV sensitivity}
The data for CoRoT-1 were obtained during one of the so-called CoRoT ``short runs'' \citep{baglin06}, and for only part of the time with the high-cadence sampling. I investigated with a simulation what limits could be placed on additional planets in a similar system using transit times measured over the course of a ``long run'' of 150\,d with the high-cadence sampling the entire time. This situation represents the best possible scenario for the sensitivity of CoRoT alone to detect additional planets in systems with a transiting short-period Jovian planet.
For the simulation I generated a sequence of transit times from 100 consecutive orbits of a planet with the same parameters of CoRoT-1b. I added to these simulated times random noise with a standard deviation of 24\,s, which is the rms of the CoRoT-1b transit times from a constant ephemeris when the high-cadence sampling was used. I then analyzed these data to determine mass limits for additional planets using the same method as above. The results are shown in Fig.~\ref{f4}.
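Generating the synthetic data set is straightforward; a minimal sketch follows (the zero-point of the ephemeris is arbitrary here):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
epochs = np.arange(100)               # 100 consecutive orbits (~150 d run)
P = 1.5089656                         # days, CoRoT-1b period
sigma = 24.0 / 86400.0                # 24 s high-cadence scatter, in days
t_sim = P * epochs + rng.normal(0.0, sigma, epochs.size)
# t_sim and sigma then feed the same limit machinery as the real data
\end{verbatim}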
I find that for such a scenario, the transit times would typically be about two times more sensitive for a given period. Most interestingly, the data would be sensitive to planets in and near the 1:2 mean motion resonance with masses as small as twice that of Mars (0.2\,M$_{\oplus}$). Planets with masses of 1\,M$_{Jup}$ would yield significant TTVs for all orbital periods less than about 5.4\,d. In addition, such data would be useful to probe for planets with masses of 10\,M$_{Jup}$ and orbital periods less than 12.1\,d.
\section{Summary and discussion}
I have analyzed the light curve for the transiting Jovian planet host star CoRoT-1 that was obtained with the CoRoT satellite. My results for the physical and orbital parameters of the star and the transiting planet from modeling these data are somewhat inconsistent with the results from the original analysis of the data presented by \citet{barge08}. This is most likely due to those authors analyzing a version of the light curve from a preliminary reduction of the raw data, whereas my results are based on an analysis of a version of the data produced by a more recent, and likely more robust, version of the CoRoT pipeline. The most interesting discrepancy comes for the planet-to-star radius ratio, with my results indicating a slightly larger planet than \citet{barge08}. If I assume the radius of the host star is 1.11\,$\pm$\,0.05\,R$_{\odot}$ \citep{barge08}, then my result suggests the radius of the planet is 1.54\,$\pm$\,0.07\,R$_{Jup}$. CoRoT-1b is therefore likely another ``inflated'' Hot Jupiter in the mold of HD\,209458b \citep[e.g.][]{bodenheimer03}.
The transit times determined from the light curve analysis are consistent with a constant period and, therefore, exhibit no evidence of perturbations to the transiting planet. I used this observed constancy to set limits on the mass of a hypothetical additional planet in a nearby, stable orbit. I find that the data rule out planets with masses above 4\,M$_{\oplus}$ near the 1:2 mean motion resonance, although the upper mass limits are typically much higher over the orbital period range considered. Interesting limits can only be obtained for orbital periods less than 9\,d. I confirm the general result noted in previous TTV analyses \citep[][]{steffen05, agol07, miller-ricci08a, miller-ricci08b} that this kind of study is most sensitive to planets in or near the 1:2 mean motion resonance with the transiting planet.
I have also analyzed data simulated for a similar system observed during a CoRoT long run with the high-cadence light curve sampling. The purpose of this experiment was to study the best possible sensitivity of CoRoT alone to detect additional planets in systems with a transiting short-period Jovian planet. As expected, such data would yield increased sensitivity to additional planets over the short run data, and planets with masses down to the Mars level near the 1:2 mean motion resonance would produce high-confidence TTV signals.
The CoRoT data yield transit time precisions on the order of a few tens of seconds, whereas the transit times determined from the highest quality light curves obtained via ground-based \citep[e.g.][]{winn09} and space-based \citep[e.g.][]{knutson07} observations are $\sim$5\,s. Nevertheless, I have demonstrated that the CoRoT data are useful for TTV studies because of their unique continuous coverage. It would be interesting to follow up CoRoT-detected planets with high-precision ground-based observations to extend the time baseline of transit time measurements. Such a combination could yield unprecedented sensitivity to low-mass planets.
\begin{acknowledgements}
I thank the anonymous referee and Ansgar Reiners for helpful comments on a draft of this paper. Support for this work was provided by the DFG through grants GRK 1351 and RE 1664/4-1.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Heterodyne-detected optical Kerr effect (HD-OKE) spectroscopy has been widely used for more than twenty years for the investigation of dynamical properties of molecular liquids. Not surprisingly, a remarkable fraction of this work has been dedicated to the study of liquid water, often in relation to the hypothesized presence of two liquid phases. We recently published new HD-OKE experimental results covering a very broad temperature range, extended into the supercooled regime, characterized by very high accuracy and unprecedented signal-to-noise ratio \citep{taschin_13}. Here we report on the detailed analysis of those data, comparing the results of different theoretical models. Our intention is not to formulate a ranking of those models on the basis of their ability to reproduce the experimental data, but rather to highlight the pros and cons of the different approaches and, most of all, to point out the key features of each model responsible for its predictive capabilities.
In the following, we first recall the main aspects of HD-OKE measurements of liquid and supercooled water, and of the recovery of the sample response function, both in time and frequency domains. We dedicate special attention to the deconvolution of the femtosecond OKE data, and demonstrate the crucial importance of an accurate measurement of the instrumental function. In the second part, we present the detailed analysis of those data making use of three different theoretical models. We conclude by summarizing the main findings and pointing out the most relevant features of the models.
\section{HD-OKE measurements of liquid water}
In a HD-OKE experiment~\cite{righini_93,hunt_07,bartolini_08} a linearly polarized laser pulse induces a transient birefringence in a medium by means of a non-resonant non-linear effect. The induced birefringence can be probed by a second pulse of different polarization, spatially superimposed on the pump within the sample. The change in the polarization state of the probe pulse, measured as a function of the delay time from the pump, gives information about the non-linear response of the studied material. The signal is characterized by an instantaneous electronic contribution plus a decaying contribution; the latter constitutes the most interesting part, as it contains information about the relaxation and vibrational response of the molecules in the sample.
The time window probed in an OKE experiment can be very broad, extending from tens of femtoseconds to hundreds of picoseconds. This makes OKE a very powerful technique, capable of revealing very different dynamic regimes in a variety of samples, from simple liquids to supercooled liquids and glass formers~\cite{bartolini_99,torre_98,torre_00,prevosto_02,ricci_02,ricci_04}; it makes it possible to measure, at the same time, the slow relaxation processes and the fast intermolecular vibrations of the system. In the optical heterodyne detection configuration, the signal is directly proportional to the third order non-linear response function of the material, $R(t)$, convoluted with the instrumental function, $G(t)$.
Liquid water is a challenging sample for this kind of experiment because of its very weak OKE response, due to its nearly isotropic molecular polarizability.
For this reason, we implemented in our set-up, based on a Ti:Sapphire laser oscillator (wavelength 800 nm, pulse width 18 fs), two experimental improvements that enabled us to measure the fast vibrational dynamics and the slow structural relaxation in the same experiment, with very high signal-to-noise ratio and large dynamic range. The first feature is the independent and continuous motion of a translation stage, equipped with a linear encoder, which ensures the absolute control of the position~\cite{bartolini_07}. This substantially reduces the acquisition time and improves the signal statistics. The second one is the implementation of a peculiar configuration of the heterodyne detection~\citep{giraud_03,bartolini_09}, which makes use of a circularly polarized probe beam and of a differential detection of two opposite-phase signals on a balanced photodiode. As shown in fig.\ref{setup}, a quarter wave-plate between the two polarisers produces the circularly polarized probe field. Two signals, with opposite polarizations and with opposite phase with respect to the local field, emerge from the P2 polarizer and are sent to the balanced photodiode detector. The OKE signal measured in this way is automatically heterodyned and free from any spurious phase-independent signal. A further improvement is obtained by subtracting the two HD-OKE measurements obtained with left and right circular polarizations of the probe field. This procedure removes from the signal the dichroic contributions coming from possible misalignments of the wave-plate. The output signal of the photodiode is amplified by a lock-in amplifier and digitized by an acquisition board. Home-made software acquires the processed signal together with the reading of the delay-line encoder and reconstructs the final time-dependent HD-OKE signal.
\begin{figure*}[t]
\includegraphics[scale=0.6]{setup.pdf}
\caption{
The optical set-up for heterodyne-detected optical Kerr effect (HD-OKE) measurements. The laser pulses are produced by a Ti:Sapphire Kerr-lens mode-locked cavity and their group velocity dispersion is controlled by a prism compression stage. The laser beam is split by a beam splitter (BS) into the pump and probe beams. The probe pulse is delayed with respect to the pump pulse by a computer-controlled optical delay line. A half-wave plate ($\lambda/2$) fixes the polarization of the pump at $45^\circ$ from that of the probe, set vertical by the P1 polarizer. The probe polarization is then converted to circular by a quarter-wave plate ($\lambda/4$). Probe and pump beams are focused inside the sample S by the achromatic lens AL1. The probe beam is then re-collimated by the achromatic lens AL2 and sent to the Wollaston polariser P2. The horizontal and vertical linear polarizations, both present in the probe, are selected by P2.
This optical set-up produces twin OKE signals with vertical and horizontal polarizations having opposite phase with respect to the local field. The balanced photodiode detector BPD performs the electronic subtraction of the two polarization components and extracts the HD-OKE signal from the background. The analogue output of the photodiode is sent to a lock-in amplifier together with the reference signal of the chopper (C) placed on the pump beam. A DAQ board simultaneously acquires the translation stage position from the encoder board and the lock-in output signal. Finally, the signal and position are stored by the computer to build the final signal time profile.}
\label{setup}
\end{figure*}
Supercooling bulk water is not an easy task because water is prone to crystallization; special care in the preparation and manipulation of the samples is required in order to reach very low temperatures. We performed our measurements on a sealed vial of cylindrical shape, prepared for pharmaceutical purposes by the Angelini company: the lowest temperature reached for this sample was 247 K. The vial was inserted into a parallelepiped-shaped aluminum holder, whose central cylindrical cavity fits the diameter of the vial. A thin film of glycerol between the vial and the housing assured an efficient heat transfer. The holder was fixed to the cold plate of a Peltier cooler, whose temperature was controlled, with a stability of 0.1 K, by a platinum thermoresistance in thermal contact with the holder itself. Two fused silica windows inserted on two opposite sides of the aluminum holder allowed the beams to cross the sample.
\subsection{\label{sec:OKEsignal}HD-OKE signal}
The signal measured in an heterodyne-detected optical Kerr effect (HD-OKE) experiment is \cite{torre_99,prevosto_02,bartolini_08,bartolini_09,kinoshita_12}
\begin{equation} \label{signal1}
\begin{split}
S(\tau) & \propto \int_{-\infty}^{+\infty} dt I_{pr}(t-\tau) \int_{-\infty}^{+\infty} dt' R(t-t') I_{ex}(t') \\
& = \int_{-\infty}^{+\infty}dt_1 R(t_1)G(\tau - t_1)
\end{split}
\end{equation}
with
\begin{equation}
G(t)=\int_{-\infty}^{+\infty}dt_2 I_{pr}(t_2) I_{ex}(t_2+t)
\label{crosscor}
\end{equation}
where $I_{pr}$ and $I_{ex}$ are the probing and exciting laser intensities, respectively; $G(t)$ is their intensity cross-correlation, which determines the experimental time resolution and thus plays the role of the instrumental function; and $R(t)$ is the material response.
Since we are performing a non-resonant OKE experiment, the Born-Oppenheimer approximation applies and the response function can be cast in the form~\cite{hellwarth_77,foggi_92,torre_93,ricci_93,ricci_95,torre_98b}:
\begin{equation}
R(t)=\gamma\delta(t)+R_n(t)
\label{signal2}
\end{equation}
with $\gamma$ representing the instantaneous electronic response and $R_n(t)$ the nuclear response. The latter can be written in the classical limit as
\begin{equation}
R_n(t)\propto-\frac{\theta(t)}{kT}\frac{\partial}{\partial t}\Phi_{\chi\chi}
\label{Response}
\end{equation}
In eq. \ref{Response} $\theta(t)$ is the Heaviside step function, $k$ is the Boltzmann constant and $\Phi_{\chi\chi}$ the time correlation function of the anisotropic susceptibility
\begin{equation}
\Phi_{\chi\chi}=\langle\chi_{xy}(t)\chi_{xy}(0)\rangle
\label{CorrelFunction}
\end{equation}
$\chi_{xy}(t)$ being the off-diagonal element of the susceptibility tensor (i.e. the collective electronic polarizability).
In order to fit the OKE data we simulated the measured signal using the following expression:
\begin{equation}
S(t)\propto\int_{-\infty}^{+\infty} \left[ \gamma\delta(t-t')+R_n(t-t') \right] G(t') dt'
\label{signalfit}
\end{equation}
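As a concrete illustration of eq. \ref{signalfit}, the following sketch builds a toy measured signal from an assumed Gaussian pulse pair and an invented nuclear response; all parameter values are arbitrary and the grid bookkeeping is deliberately simplified.
\begin{verbatim}
import numpy as np

dt = 0.002                                 # ps, time step (assumed)
t = np.arange(-2.0, 20.0, dt)              # ps

I_ex = np.exp(-(t / 0.018)**2)             # ~18 fs pump intensity (toy)
I_pr = I_ex.copy()                         # identical probe pulse

# Instrumental function G(t): intensity cross-correlation of the pulses.
G = np.correlate(I_pr, I_ex, mode='same') * dt

# Toy nuclear response: a damped intermolecular oscillation plus a slow
# relaxational tail (purely illustrative, not a model of water).
R_n = np.heaviside(t, 0.0) * (np.sin(2.0 * np.pi * 1.5 * t)
                              * np.exp(-t / 0.3) + 0.2 * np.exp(-t / 2.0))
gamma_el = 1.0                             # instantaneous electronic term

# Measured signal: S = (gamma*delta + R_n) convolved with G.
S = gamma_el * G + np.convolve(R_n, G, mode='same') * dt
\end{verbatim}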
Taking the Fourier transform of eq. \ref{signalfit}, we get
\begin{equation}
\widetilde{S}(\omega)=\left[ \gamma + \widetilde{R}_n(\omega)\right] \widetilde{G}(\omega)
\label{signalfitFT}
\end{equation}
Since, for the non-resonant OKE, $\gamma$ is real, we get from \ref{signalfitFT} the important result that the imaginary part of $\widetilde{S}(\omega)$ is unaffected by the instantaneous response and that, for the nuclear part,
\begin{equation}
Im\left[ \widetilde{R}_n(\omega)\right] =Im\left[ \frac{\widetilde{S}(\omega)}{\widetilde{G}(\omega)}\right]
\label{imromega}
\end{equation}
allowing the extraction of $\widetilde{R}_n(\omega)$ from the OKE signal once the instrumental function is known.
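Continuing the sketch above, the deconvolution of eq. \ref{imromega} is a pointwise ratio in the frequency domain; note that the time origin must be fixed carefully in a real analysis, since any offset introduces a spurious phase.
\begin{verbatim}
import numpy as np

# Reuses S, G, t and dt from the previous sketch; shift the t = 0 sample
# to the first index before transforming (crudely, via np.roll).
i0 = np.argmin(np.abs(t))
S_w = np.fft.rfft(np.roll(S, -i0))
G_w = np.fft.rfft(np.roll(G, -i0))
nu = np.fft.rfftfreq(t.size, d=dt)         # THz, since t is in ps
Im_Rn = (S_w / G_w).imag                   # the electronic term drops out
\end{verbatim}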
\subsection{Instrumental function measurement}
\label{InstrFunct}
An important experimental issue concerns the measurement of the actual instrumental function $G(t)$, which is far from trivial. Eq.\ref{signalfit} shows clearly that the instrumental function can be measured directly if the sample has negligible or very fast nuclear response (i.e. if $R_n(t)\propto\delta(t)$).
In many cases, a fused silica plate has been used for this purpose, because of its weak and fast nuclear response. Actually, the OKE response of silica is quite complex and difficult to determine, so it turns out to be not particularly suitable when a precise determination of the real $G(t)$ is required.
Some previous studies utilized the second harmonic cross-correlation of the pump and probe pulses to measure the instrumental function, see Kinoshita et al.~\cite{kinoshita_95}. We found that any small modification and adjustment of the experimental set-up, inevitable when replacing the water sample with the reference material chosen for measuring $G(t)$, causes severe alterations of the latter. For instance, i) the insertion of a second harmonic crystal in place of the sample modifies the pulse compression status, ii) the spatial overlap of the beams requires some re-alignment, iii) the translation stage has to be re-positioned in order to achieve the temporal superposition of pump and probe pulses. Indeed, even small changes of the experimental conditions critically affect the instrumental function.
With an alternative procedure, in some other cases the instrumental function was obtained by fitting the instantaneous electronic contribution to the HD-OKE signal with an analytic peak function (Gaussian, hyperbolic secant, etc..).
OKE experiments in water are typically characterized by very weak and fast signals, so that the precise determination of the instrumental function is extremely important in order to extract the water response. The experimental methods summarized above are not accurate enough for this purpose.
We measured the instrumental function following a different procedure that grants the proper level of accuracy required for water investigations. As reference sample we chose a plate of calcium fluoride (CaF$_2$); this is a cubic ionic crystal with only one Raman active band in the probed frequency range, the optical phonon at 322~cm$^{-1}$ of T$_{2g}$ symmetry. Its nuclear OKE response is then simple and well known. The calcium fluoride plate was dipped inside a water vial, identical to that used for water measurements, supported by the same sample holder. The measurement of the instrumental function was done by just replacing the water vial with the water-CaF$_2$ vial, leaving the rest of the set-up unchanged. We took care that the faces of the CaF$_2$ plate were perpendicular to the line bisecting the angle formed by pump and probe beams. The thickness of the CaF$_2$ plate, 3 mm, was enough to fully contain the probe and pump overlap area, avoiding any spurious signal contribution by the outer water. We took a reference measurement for each set of water data.
This procedure, differently from the other approaches, allows us to accurately preserve the experimental conditions adopted in the water and in the reference measurements.
\begin{figure}[htb]
\includegraphics[scale=0.5]{OKE-CaF2.pdf}
\caption{A typical HD-OKE signal of the reference sample and the extracted instrumental function. In the figure we show the HD-OKE signal of CaF$_2$ (circles) and the instrumental function (red continuous line) obtained with the fitting procedure described in the text. The nuclear response, $R_n$, is taken as the time-derivative of a single damped oscillator and the instrumental function, $G$, is simulated as the sum of Gaussian, Lorentzian, and hyperbolic secant functions. The comparison with the fit of the instantaneous electronic part performed using just a single hyperbolic secant (blue dashed line) is reported too.}
\label{CaF2}
\end{figure}
Fig.~\ref{CaF2} reports a typical HD-OKE signal obtained in the CaF$_2$ reference sample: the signal shows a first peak, due to the electronic response, characterized by a very fast rise and fall, and a second oscillating contribution due to the nuclear response. This signal is the convolution of the instrumental function with the OKE response, see eq.\ref{signalfit}; for CaF$_2$ the nuclear part is the time derivative of a single damped harmonic oscillator (DHO). The simplicity of this nuclear response function allows the reliable extraction of the instrumental function, $G(t)$, by means of an iterative fitting procedure (i.e. least-squares fitting of the HD-OKE data of the reference sample with the simulated signal according to eq.\ref{signalfit}). We found that our instrumental function cannot be reproduced by a single hyperbolic secant function, as often reported in the literature; a good fit requires the sum of several functions, namely a combination of Gaussian, Lorentzian and/or hyperbolic secant functions. In Fig.~\ref{CaF2} we report the instrumental function obtained by the iterative fitting procedure (red continuous line) and the one obtained by fitting the electronic peak with a simple hyperbolic secant function (blue dashed line).
Apparently these two instrumental functions are very similar, but the small differences cannot be neglected for an accurate investigation of the fast OKE response of water. This is most evident when the data are Fourier transformed to the frequency domain.
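A minimal sketch of the iterative procedure is given below; the model functions and parameter names are our own illustrative choices, and the least-squares adjustment itself (e.g. with scipy.optimize.least\_squares) is omitted.
\begin{verbatim}
import numpy as np

def G_model(t, a, w, t0):
    """Instrumental function as a sum of Gaussian, Lorentzian and
    hyperbolic-secant peaks: amplitudes a[0:3], widths w[0:3], center t0."""
    return (a[0] * np.exp(-((t - t0) / w[0])**2)
            + a[1] / (1.0 + ((t - t0) / w[1])**2)
            + a[2] / np.cosh((t - t0) / w[2]))

def R_caf2(t, omega, gamma_ph, amp):
    """CaF2 nuclear response: time derivative of a single DHO correlator
    (the 322 cm^-1 T2g phonon); omega, gamma_ph in rad/ps (sketch only)."""
    Om = np.sqrt(omega**2 - 0.25 * gamma_ph**2)
    q = np.exp(-0.5 * gamma_ph * t) * np.cos(Om * t)   # DHO correlator
    return -amp * np.heaviside(t, 0.0) * np.gradient(q, t)

def simulated_signal(t, pG, pR, gamma_el):
    dt = t[1] - t[0]
    G = G_model(t, *pG)
    return gamma_el * G + np.convolve(R_caf2(t, *pR), G, mode='same') * dt

# Least-squares adjustment of pG, pR and gamma_el against the measured
# CaF2 trace then yields G(t), as in the iterative fit described above.
\end{verbatim}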
\begin{figure}[htb]
\includegraphics[scale=0.5]{OKE-CaF2-freq.pdf}
\caption{Dependence of the reference sample spectrum on the choice of the instrumental function $G(t)$. We obtained the response function by deconvolution of the HD-OKE signal $S(t)$ according to eq. \ref{imromega}.
The figure compares the results obtained for the CaF$_2$ sample by adopting the iterative fitting procedure described in the text (red line), and by fitting the electronic peak with a hyperbolic secant function (blue line). Only the knowledge of the ``real'' instrumental function allows measuring the correct response.}
\label{CaF2-freq}
\end{figure}
Fig.~\ref{CaF2-freq} shows the comparison between the frequency response calculated using the instrumental function obtained by the iterative fitting procedure (red line) and the one obtained by simply fitting the electronic peak with a hyperbolic secant function (blue line). We clearly see that only an accurate measurement of the real time domain instrumental function yields the correct response.
As a further test of the method, we measured the OKE signal of a carbon tetrachloride (CCl$_{4}$) liquid sample. We inserted a scrap of CaF$_2$ slab in the CCl$_{4}$ cell for measuring the instrumental function. The resulting CCl$_{4}$ frequency response is shown in fig.~\ref{CCl4-freq} (black open circles). The agreement of our data with the depolarized Raman scattering spectrum (red continuous line), corrected for the Bose factor, is good over the entire spectral range, with the frequencies and amplitudes of the Raman bands very well reproduced. This confirms the high accuracy of the employed method.
\begin{figure}[htb]
\includegraphics[scale=0.5]{OKE-CCl4-freq.pdf}
\caption{Frequency response of CCl$_{4}$ obtained from the OKE data using the instrumental function extracted from a CaF$_2$ slab inserted in the CCl$_{4}$ cell. Our data (black open circles) reproduce very well the depolarized Raman scattering spectrum (red continuous line), both in frequency and amplitude, corrected for the Bose factor.}
\label{CCl4-freq}
\end{figure}
The role of the instrumental function in HD-OKE experiments has always been considered a critical issue for correct data analysis, but to our knowledge it has never been addressed in such a quantitative way. In the present study, the unprecedented signal-to-noise ratio and the accurate determination of the instrumental function ensure a correct data analysis, from which a reliable OKE response can be extracted, even in the case of weak and fast-decaying signals.
\subsection{HD-OKE water data}
\begin{figure*}[t]
\includegraphics[scale=0.9]{OKE-data.pdf}
\caption{Log-linear (left panel) and log-log (right panel) plots of the HD-OKE data on liquid and supercooled water. From bottom to top the following temperatures are reported: 353, 333, 313, 293, 278, 273, 268, 263, 258, 253, and $247~K$. The oscillations due to the inter-molecular vibrational modes, clearly visible at short times (left panel), smoothly merge at longer times into the structural relaxation decay (right panel). By lowering the temperature, the vibrational dynamics becomes more defined and the monotonic decay becomes longer and strongly non-exponential.}
\label{OKE-data}
\end{figure*}
In Fig.~\ref{OKE-data} we report all the HD-OKE data collected on water at different temperatures, from the liquid phase to the supercooled one. The left panel shows the short-time behaviour in a log-linear plot; in the right panel the data are reported in a log-log plot covering the whole measured time scale. Water shows a complex relaxation pattern, strongly dependent on temperature. With decreasing temperature, the fast oscillating dynamics becomes more structured and the slow relaxation decay becomes increasingly long.
At short times we have fast oscillations due to inter-molecular vibrational modes. These correspond to the two broad bands centered at about $50~cm^{-1}$ and $200~cm^{-1}$, generally referred to in the literature~\cite{walrafen_86,skaf_05,desantis_04,padro_04} as the ``bending'' and ``stretching'' modes of the hydrogen-bond network, respectively.
At long times the signal shows a monotonic decay; in the first OKE investigations it was interpreted as a bi-exponential relaxation due to single-molecule orientational dynamics\cite{winkler_00}. Further experiments, extending the temperature range into the supercooled phase, proved that the slow decay follows a stretched exponential function, typical of structural relaxation phenomena, with a critical slowing down of the relaxation times\citep{torre_04}.
\section{Data analysis and OKE response models}
As we briefly summarized in Sec.\ref{sec:OKEsignal}, the time-resolved HD-OKE experiment measures the time correlation function of the off-diagonal susceptibility elements. These are collective polarizability tensors of the liquid whose definition is quite complex. There have been many studies concerning the basic problem of defining the optical observables (i.e. the susceptibility tensor) starting from the molecular features and dynamics, see for example ref.~\cite{hellwarth_70,hellwarth_77,berne_76,balucani_94,torre_08} and references therein. The general theories that rigorously define this connection necessarily involve a huge number of physical variables, and they turn out to be impractical for a comparison with the OKE experimental data. The interpretation of the OKE response has therefore typically been carried out using phenomenological models\cite{torre_93,ricci_95,torre_95,torre_96,torre_98b,bartolini_99,bartolini_08} and/or computer simulations\cite{paolantoni_02,ryu_04,tao_06}. Recently, mode-coupling theories, dynamic models at the mesoscopic scale based on the memory function approach, have also been used to interpret the OKE results~\cite{torre_98,torre_99,torre_00,prevosto_02b,prevosto_02,ricci_02,pratesi_03,ricci_04,bartolini_08}.
The definition of the OKE response function in liquid water is even more complex than in other liquids, due to the almost isotropic molecular polarizability and to the hydrogen-bonded network, which make the OKE observable dominated by the collective susceptibility and dynamics.
Numerical simulations of the OKE signal in liquid water\cite{skaf_05,sonoda_05,lupi_12} show a complex interplay
between intrinsic molecular terms and interaction-induced contributions.
Two main issues need to be addressed in order to define the OKE response. First, one has to pinpoint the physical parameters relevant for the experimentally probed dynamics and the equations of motion that they follow. In order to have an operative model, both the liquid modes and their equations of motion must be relatively simple; in other words, they should be defined with a coarse-graining approach, where the fine molecular features are averaged out. Second, the connection between the optical susceptibility and the modes of the liquid must be defined.
A few phenomenological models have been utilized to analyse the whole time-dependent OKE response in liquid water\cite{palese_96,winkler_02,ratajska_06}. Likewise, the mode-coupling theory has been used to simulate the water response\cite{torre_04,bartolini_08,taschin_13}.
In the following sections we compare the results of three different theoretical models in the analysis of our new OKE data, which extend into the supercooled water phase; namely, the Multi-mode Brownian Oscillator (MBO) model\cite{tanimura_93,mukamel_95}, the Kubo Discrete Random Jump model\cite{kubo_62,kubo_69}, and the Schematic Mode-Coupling model~\cite{goetze_92,goetze_00b,goetze_04,goetze_09}. The first two models have already been applied to the analysis of the OKE data of water\cite{palese_96,winkler_02}, but only in the stable liquid phase; the third one has very recently been applied with success to water in the liquid and supercooled phases by the authors of the present paper~\citep{taschin_13}. Including the HD-OKE results for the supercooled phase of water provides a more stringent test of the first two theories as well.
\subsection{Multi-mode Brownian Oscillator Model}
A relatively simple Multi-mode Brownian Oscillator (MBO) model was utilized by Palese et al. \cite{palese_96} to describe the liquid water dynamics. The model aims at describing the whole relaxation behaviour of the liquid without an a priori imposed time-scale separation between the fast and the slow dynamics.
The susceptibility tensor is taken as a second order expansion on the nuclear $Q$-modes\cite{tanimura_93}, $\chi(t)\simeq a_1Q(t)+a_2Q^2(t)$.
The $Q$ variables must be interpreted as local normal modes of the liquid obtained from a coarse-graining treatment of the molecular and intermolecular coordinates. The equation of motion of these MBO modes is that of a Damped Harmonic Oscillator (DHO), $\ddot{Q}(t)+\gamma\dot{Q}(t)+\omega^2 Q(t)=0$. The non-resonant nuclear third-order response function, in particular the OKE response, from a single DHO can be written as\cite{tanimura_93,mukamel_95,palese_96}:
\begin{widetext}
\begin{equation}
R^{MBO}(\Omega,t)=\theta(t)\frac{e^{-\gamma t/2}}{\Omega}\sin(\Omega t)\left\{a_1^2
+\frac{a_2^2}{\Omega}\left[\coth(\frac{i \hbar}{2kT}\varphi)e^{-\varphi t} - \coth(\frac{i \hbar}{2kT} \bar{\varphi})e^{-\bar{\varphi} t}\right] \right\}
\label{RespMBO}
\end{equation}
\end{widetext}
where $\Omega=\sqrt{\omega^2-\frac{\gamma^2}{4}}$, $\varphi=(\frac{\gamma}{2}+\textit{i}\Omega)$, and $\bar{\varphi}=(\frac{\gamma}{2}-\textit{i}\Omega)$.
The quadratic $a_2$ term in this equation expresses the non-linear coupling between the susceptibility and the $Q$-mode; when $a_2=0$ the simple DHO response function is recovered.
The MBO model used by Palese et al. considers a continuous ensemble of $Q$-modes characterized by different frequencies $\omega$; these modes are uncoupled (i.e. defined by independent DHO equations) and all characterized by identical damping coefficients, $\gamma$. Consequently, some modes are under-damped and others over-damped, depending on their $\omega$ value. Each $Q$-mode has a homogeneous broadening, expressed by $\gamma$, which is assumed temperature dependent and related to the liquid viscosity. The collection of $Q$-modes is shaped by an inhomogeneous broadening function, $S(\Omega)$, which fixes the weight of each mode in the distribution. The inhomogeneous broadening is due to the interactions of the $Q$-modes with the thermal bath and is taken in the form\cite{palese_96}:
\begin{equation}
S(\Omega)=\sum_{n}\frac{\Omega^2 A_n \Gamma_n}{2\pi((\Omega^2-\Omega_n^2)^2 +\Omega^2 \Gamma_n^2)}
\label{SOmega}
\end{equation}
where the sum over $n$ accounts for different dynamics, like librations, translations and bendings, which contribute to the frequency distribution of the harmonic oscillators.
The MBO model simulates the OKE nuclear response function with the oscillator ensemble according to the following integral equation:
\begin{equation}
R_n(t) \propto \int_{\Omega_c}^{\infty} S(\Omega)R^{MBO}(\Omega, t)d\Omega
\label{RespMBOtot}
\end{equation}
The integration range in Eq.~\ref{RespMBOtot} is bounded from below by the cut-off frequency $\Omega_c$, corresponding to the physical restriction that no oscillator can have an oscillation period longer than the structural rearrangement time of the bath. This cut-off is proportional to the homogeneous damping
\begin{equation}
\Omega_c = b\,\gamma(T),
\label{OmegaC}
\end{equation}
where $b$ is a temperature independent coefficient.
The cut-off frequency in the integration yields a signal that at long times relaxes as a single exponential with time constant $\tau \propto 1/\gamma$. The MBO model therefore cannot account for the stretched exponential decay at very long times; this is a first serious limitation when fitting the OKE data in the supercooled phase.
Moreover, since the structural relaxation time, $\tau$, is proportional to the viscosity, the homogeneous width $\gamma$ is inversely proportional to the viscosity; the temperature dependence of $\gamma$ is thus fixed by the viscosity. In summary, in the MBO model the slow relaxation dynamics is accounted for by the superposition of low-frequency over-damped oscillators in eq.\ref{RespMBOtot}, critically dependent on the cut-off frequency $\Omega_c$, while the fast vibrational part results from the inhomogeneous distribution of under-damped oscillators. The integral equation \ref{RespMBOtot} provides a smooth merging of the two oscillator ensembles.
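To make the structure of eqs. \ref{RespMBO}-\ref{OmegaC} explicit, here is a sketch of the MBO response that keeps, for clarity, only the linear $a_1^2$ coupling; units, grids and band parameters are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

def S_inhom(Om, bands):
    """Inhomogeneous weight S(Omega); `bands` lists (A_n, Omega_n, Gamma_n)
    tuples, e.g. for the ~60 and ~180 cm^-1 water bands."""
    out = np.zeros_like(Om)
    for A, On, Gn in bands:
        out += Om**2 * A * Gn / (2.0 * np.pi
                                 * ((Om**2 - On**2)**2 + Om**2 * Gn**2))
    return out

def R_n_mbo(t, gamma, b, bands, Om_max=150.0, n=4000):
    """MBO nuclear response: a1^2-only oscillators
    exp(-gamma t/2) sin(Omega t)/Omega weighted by S(Omega), integrated
    above the cut-off Omega_c = b * gamma."""
    Om = np.linspace(b * gamma, Om_max, n)      # rad/ps (assumed units)
    w = S_inhom(Om, bands) * (Om[1] - Om[0])    # quadrature weights
    modes = np.exp(-0.5 * gamma * t) * np.sin(np.outer(Om, t)) / Om[:, None]
    return np.heaviside(t, 0.0) * (w @ modes)
\end{verbatim}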
\begin{figure*}[t]
\includegraphics[scale=0.7]{MBOfit.pdf}
\caption{Comparison of the experimental data at $247~K$, $273~K$, and $353~K$ with the MBO fits obtained with cut-off proportionality factors $b=0.32$ and $b=0.2$.}
\label{figMuka1}
\end{figure*}
In the liquid phase we confirm the results already reported in ref. \cite{palese_96}: the model is able to reproduce the OKE signal over the whole temporal range, with a proper temperature behaviour of all the fitting parameters, but it fails more and more as the temperature decreases, see Fig.~\ref{figMuka1}. In particular, it is not able to accurately reproduce the fast dynamics responsible for the complex structure of the vibrational bands, nor the stretched exponential character of the slow relaxation.
\subsubsection{MBO fitting details and results}
The fitting function is obtained by numerical convolution of the OKE response function, see Eqs.~\ref{signalfit} and \ref{RespMBOtot}, with the instrumental function obtained as explained in Section~\ref{InstrFunct}. The best fit is achieved by the repeated variation, in a numerical loop, of the fitting parameters until the nonlinear least-squares minimum is reached.
The fitting function is built on an inhomogeneous broadening featuring two water bands, at $180~cm^{-1}$ and $60~cm^{-1}$, usually assigned to the stretching and bending of the hydrogen bond, respectively. Palese et al. introduced other higher-frequency bands in order to simulate their OKE data; according to our analysis, frequencies higher than about $400~cm^{-1}$ are not necessary once the instrumental response function is properly taken into account.
In our MBO analysis, we used nine free fitting parameters: the parameters of the inhomogeneous distribution $A_1$, $\Omega_1$, $\Gamma_1$, $A_2$, $\Omega_2$, $\Gamma_2$, the width of the homogeneous distribution $\gamma$, and the amplitudes $a_1^2$ and $a_2^2$ appearing in Eq.~\ref{RespMBO}.
The cut-off frequency $\Omega_c$ in Eq. \ref{RespMBOtot} turns out to be a very critical parameter in the fit, so we performed several fitting runs with different values of this coefficient. If the value of the $b$ parameter in Eq. \ref{OmegaC} is locked in the range $0.1-0.15$, the fits are apparently fine but the inhomogeneous distribution parameters turn out to be unphysical at high temperatures. In particular, we observed a narrowing of the modes with rising temperature. In fact, a value of $b$ in such a low range yields a large homogeneous width, necessary to fit the long-time part of the data, with a consequent narrowing of the inhomogeneous modes required to fit the oscillating part. With larger values of $b$ ($0.25-0.3$), the model fails to fit the long-time part of the data at the lowest temperatures, the simulated decay going to zero too fast. With intermediate values of $b$ ($0.15-0.2$), the fit fails both at short and long times; moreover, also in this case the inhomogeneous distribution has an unphysical temperature dependence.
Finally, leaving $b$ as a fully free fitting parameter leads to an apparently better reproduction of the data over the whole temporal range. However, the temperature trend of the $\gamma$ parameter is then definitely unphysical: it remains constant at all temperatures, implying a temperature-independent liquid viscosity.
In Fig.~\ref{figMuka1} we report the HD-OKE data at three temperatures in the liquid and supercooled phases (247 K, 273 K, and 353 K) and the corresponding curves calculated with $b=0.32$ and $b=0.2$. As noted above, the agreement at high temperature is good, but it worsens at low temperatures.
\begin{figure*}[t]
\includegraphics[scale=0.7]{MBOParAwi02.pdf}
\caption{Temperature dependence of the MBO fitting parameters for the fits with $b=0.2$ (left panels) and $b=0.32$ (right panels).}
\label{figMuka2}
\end{figure*}
The temperature dependence of the MBO fitting parameters for these two fitting series are shown in Fig.~\ref{figMuka2}.
In Fig.~\ref{figMuka3} we report the imaginary part of the Fourier transform of the single oscillator response function, $\tilde{R}^{MBO}(\nu)$, and the inhomogeneous broadening function $S(\nu)$. These are calculated adopting for the fitting parameters the values reported in the left panel of Fig.~\ref{figMuka2}, corresponding to $b=0.2$. The temperature dependence of the homogeneous broadening, reflected in the single oscillator response (red line), is the expected one, while the inhomogeneous distribution (black line) narrows as temperature increases, a definitely non-physical behaviour.
\begin{figure}[htb]
\includegraphics[scale=0.5]{MBOAwi02fft.pdf}
\caption{Fourier transform of the single oscillator response function (red line) and inhomogeneous distribution function (black line), calculated using the parameters from the fitting series with $b=0.2$. The opposite temperature behaviours of the inhomogeneous and homogeneous distributions are evident in the spectral representation.}
\label{figMuka3}
\end{figure}
\subsection{Kubo's Discrete Random Jump Model}
Winkler et al.\cite{winkler_02} used a model based on the Kubo's Discrete Random Jump (KDRJ)~\cite{kubo_62,kubo_69} to fit the OKE data of liquid water. Their approach implies an a priori separation between the fast vibrational dynamics, described by the KDRJ model, and the slow relaxation, which is simulated by the time derivative of a stretched exponential function.
The KDRJ model describes the dynamics of the liquid in terms of stochastic oscillators $Q(t)$: $\ddot{Q}(t)-\frac{\dot{\omega}}{\omega}\dot{Q}(t)+\omega^2 Q(t)=0$; the frequency $\omega(t)$ is a stochastic variable randomly perturbed by $N$ independent two-state jump Markov processes (random-telegraph processes), $\omega(t)=\Omega+\sum_{n=1}^N\omega_n(t)$. Each stochastic process is considered stationary, i.e. $\langle\omega_n(t)\rangle=0$ and $\langle \omega(t)\rangle=\Omega$, and Markovian, with $\langle\omega_n(t)\omega_m(0)\rangle=\delta_{nm}\frac{\Delta^2}{N}\exp(-\gamma\vert t \vert)$. In the latter expression, $\gamma$ is the rate of the random frequency modulation and $\Delta^2$ is the amplitude of the modulation. The total stochastic process is assumed Gaussian.
In the approach of Winkler et al.\cite{winkler_02}, the susceptibility tensor is linearly connected to the nuclear $Q$-modes, $\chi(t)\propto Q(t)$; thus, the OKE response function can be obtained from the time derivative of the $Q$-mode correlation functions. The contribution of a single $Q$-mode to the response function is expressed as:
\begin{widetext}
\begin{equation}
R^{KDRJ}(\Omega,t)= \theta(t)\bigg[\frac{\gamma}{2}N\frac{a^2-1}{a}\sinh\left(\frac{\gamma t}{2a}\right)\cos(\Omega t)+ \psi(t)\Omega \sin(\Omega t)\bigg] \psi(t)^{N-1} \exp\left(-N\frac{\gamma t}{2}\right)
\label{Respkubo}
\end{equation}
\end{widetext}
with
\begin{equation}
\psi(t)=\left[\cosh\left(\frac{\gamma t}{2a}\right)+a \sinh\left(\frac{\gamma t}{2a}\right)\right]
\end{equation}
where
\begin{equation}
a=\left(1-4\frac{\Delta^2}{\gamma^2}\right)^{-1/2}
\label{psia}
\end{equation}
The Fourier transform of the KDRJ response function, $Im [\tilde{R}^{KDRJ}(\nu)]$, provides the spectral representation of the involved dynamics. The resulting spectral profile is strongly dependent on the $\Delta/\gamma$ ratio. For $\Delta/\gamma\gg 1$, the slow modulation limit, the band consists of $N+1$ lines with an overall Gaussian envelope whose full width at half maximum is equal to $\Delta$. The lines are spectrally separated by the quantity $2\Delta/\sqrt{N}$, and each line is homogeneously broadened with width $\gamma$ due to the finite lifetime of the level itself. For $\Delta/\gamma\ll 1$, the motional narrowing limit, the multiplet structure collapses into a single resonance. In this case the frequency jumps occur on a time scale faster than the average vibrational period $2\pi/\Omega$. It is clear that, depending on the average frequency $\Omega$, on the $\Delta/\gamma$ ratio, and on the number of stochastic processes $N$, we can obtain complex line shapes, which describe the structured vibrational bands of liquids.
In their data analysis, Winkler et al. introduced an extra interaction between the Kubo oscillator and a further thermal bath. This interaction produces an inhomogeneous broadening defined by\cite{winkler_02}:
\begin{equation}
S_i(\Omega)=\exp\left[\frac{-4\ln(2)(\Omega-\Omega_i)^2}{\Gamma_i^2}\right]
\label{SOmegaK}
\end{equation}
and the oscillator response function becomes:
\begin{equation}
R_i^{KDRJ}(t) \propto \int_{0}^{\infty} \left[S_i(\Omega)R^{KDRJ}(\Omega,t)\right] d\Omega
\end{equation}
If more Kubo oscillators are involved in the dynamics, the total nuclear response becomes
\begin{equation}
R_n(t)=\sum_i R_i^{KDRJ}(t)+\theta(t)A\,t^{\beta-1}\exp\left[-\left(\frac{t}{\tau_s}\right)^{\beta}\right]
\label{RespKDRJtot}
\end{equation}
where $\tau_s$ is the structural relaxation time and $\beta$ the stretching factor. The stretched exponential decay of OKE data has been shown to appear both in glass-forming liquids\cite{torre_98} and in supercooled water\cite{torre_04}.
Eq.\ref{RespKDRJtot} implies that the vibrational dynamics, described by the $Q$-modes, is uncoupled from the structural relaxation, described by the stretched exponential decay. If the time/energy scale of the structural relaxation could be considered well separated from that of the other dynamics, the decoupling hypothesis would be properly founded. In water, both structural relaxation and vibrational dynamics take place on very similar time/energy scales\cite{torre_04,taschin_13}; the same is true for the H-bond dynamics\cite{fecko_03}. Any decoupling approximation thus appears a rather unrealistic hypothesis. Apart from these fundamental criticisms, we tested the ability of the model to fit our OKE data in supercooled water.
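For reference, eqs. \ref{Respkubo}-\ref{RespKDRJtot} translate directly into code. The sketch below is ours: it evaluates the response for $t>0$ only (the $t^{\beta-1}$ term diverges at the origin) and uses complex arithmetic so that both modulation regimes of $a$ are handled.
\begin{verbatim}
import numpy as np

def R_kdrj(t, Omega, Delta, gamma, N=3):
    """Single Kubo oscillator response; a complex `a` covers the case
    gamma < 2*Delta, and the final response is real."""
    a = (1.0 - 4.0 * Delta**2 / gamma**2 + 0j)**-0.5
    x = gamma * t / (2.0 * a)
    psi = np.cosh(x) + a * np.sinh(x)
    term = (0.5 * gamma * N * (a**2 - 1.0) / a
            * np.sinh(x) * np.cos(Omega * t)
            + psi * Omega * np.sin(Omega * t))
    return (term * psi**(N - 1) * np.exp(-0.5 * N * gamma * t)).real

def R_n_total(t, oscillators, A, tau_s, beta):
    """Sum of KDRJ oscillators plus the derivative-of-stretched-exponential
    term; `oscillators` lists (Omega, Delta, gamma) tuples, t > 0."""
    R = sum(R_kdrj(t, *p) for p in oscillators)
    return R + A * t**(beta - 1.0) * np.exp(-(t / tau_s)**beta)
\end{verbatim}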
\subsubsection{KDRJ fitting details and results}
The fitting function was obtained as the convolution of the response function with the instrumental function. The OKE nuclear response function is simulated, following the analysis of Winkler et al., using two KDRJ oscillators with $N=3$. The free fitting parameters were: four parameters for each oscillator, $\Delta_i$, $\gamma_i$, $\Omega_i$, and $\Gamma_i$, with the constraint $\gamma_1=\gamma_2$; two amplitudes $a_i$; and the stretched exponential parameters $A$, $\tau_s$, and $\beta$.
\begin{figure}[htb]
\includegraphics[scale=0.6]{KuboFit.pdf}
\caption{Comparison between the experimental data and the KDRJ model fits at $247~K$, $273~K$, and $353~K$.}
\label{figKubo1}
\end{figure}
We report in Fig. \ref{figKubo1} the fit-data comparison for the three temperatures $247~K$, $273~K$, and $353~K$. The long-time part of the signal is clearly very well described by the adopted stretched exponential function. The short-time oscillating part is fairly well reproduced at high temperatures, as already found in ref.~\cite{winkler_02}. In the deeply supercooled phase this part is not perfectly reproduced (see the oscillations around $1~ps$), but the model is able to account for the growing structuring of the vibrational bands at low temperatures.
\begin{figure}[htb]
\includegraphics[scale=0.45]{KuboPar.pdf}
\caption{Temperature behaviour of the best fit parameters for the two KDRJ oscillators.}
\label{figKubo2}
\end{figure}
In Fig.~\ref{figKubo2}, we report the values of the fitting parameters for all the temperatures.
\begin{figure}[htb]
\includegraphics[scale=0.5]{kubooscillatori.pdf}
\caption{Temperature behaviour of the spectral shape of the KDRJ oscillator centred around $180~cm^{-1}$ (the stretching band), neglecting the inhomogeneous broadening. The spectral peaks could be ascribed to different water clusters.}
\label{figKubo3}
\end{figure}
It is worth looking at the spectral shape of the KDRJ oscillator when $S_i(\Omega)=\delta(\Omega-\Omega_i)$, i.e. neglecting the inhomogeneous broadening due to the interaction with the thermal bath. In Fig.\ref{figKubo3} we show the imaginary part of the Fourier transform of the single KDRJ oscillator response function, $Im[\tilde{R}^{KDRJ}(\nu)]$, at different temperatures. For each temperature the $\Omega$ parameter is fixed to the highest value obtained from the fitting procedure, see Fig.\ref{figKubo2}. The spectrum of this KDRJ oscillator represents the homogeneous vibrational components describing the water stretching band. Clearly, $N+1=4$ resonances are present.
Winkler et al. proposed an intriguing interpretation of these resonances for liquid water: they would correspond to the H-bond stretching frequencies of different molecular aggregates, from dimer units to tetrahedral pentamer units. These clusters inter-convert through the breaking and making of H-bonds; this very fast process would result in the KDRJ frequency jumps. The frequency and damping of the KDRJ oscillator would be related to the H-bond stretching/bending vibrations and to their lifetime, respectively. The application of this picture to the OKE data analysis suggests that liquid water consists of a mixture of four different clusters, having comparable and weakly temperature-dependent concentrations. The most recent experimental investigations\cite{nilsson_12,taschin_13}, simulation studies\cite{overduin_12,kesselring_12} and theoretical models\cite{holten_12,holten_13,tanaka_13} do not support this scenario: the emerging picture is that water presents a bimodal local structuring (i.e. the formation of two main molecular arrangements); water molecules form local structures that are either tetrahedrally coordinated, the low-density form, or close-packed, the high-density form. Moreover, the relative populations of these two alternative local structures are strongly temperature dependent.
\subsection{Schematic Mode-Coupling Model}
The Mode-Coupling Theories (MCT)\cite{goetze_09} are a generalization of the Mori-Zwanzig approach. The liquid dynamics is described by the memory-function equations, which define the equations of motion of the correlation functions of physical observables. The retardation effects are taken into account by the memory functions $K(t)$, which in the MCT are defined on the basis of the correlators themselves. These theories represent a generalized hydrodynamic approach to the liquid dynamics, where the physical observables are intrinsically mesoscopic.
In the schematic mode-coupling (SMC) model~\cite{goetze_92,goetze_00b,goetze_04,goetze_09} the main variable is the density, $\rho(t)$; the other observables, $Q_i(t)$, are linked to the density.
The time evolution of the correlation functions of these physical observables is given by the memory-function equations. They are formulated as~\cite{goetze_92,goetze_00b,goetze_04,goetze_09}:
\begin{equation}
\ddot{\Phi}_m(t)+\eta_m\dot{\Phi}_m(t)+{\Omega_m}^2\Phi_m(t)+
\int K(t-t')\dot{\Phi}_m(t')dt'=0
\label{mastereq}
\end{equation}
with the memory function written as
\begin{equation}
K(t)= v_1 \Phi_{m}(t)+v_2 \Phi^2_{m}(t)
\label{masterememo}
\end{equation}
The SMC model defines the memory by a series expansion (up to the second term) of the \textit{master correlator} $\Phi_{m}$ itself, thus providing a closed form for the integro-differential equation \ref{mastereq}. The SMC model identifies the master correlator with the density correlator, $\Phi_m\propto\langle\rho(t)\rho(0)\rangle$; the quadratic term in eq. \ref{masterememo} corresponds to the minimum order of the series expansion capable of reproducing the slowing down of the structural relaxation.
The dynamics of any other observable, $Q_i$, linked to the time dependent density (e.g. to the local inter-molecular dynamics) can be described by a similar differential equation~\cite{bosse_87b}:
\begin{equation}
\ddot{\Phi}_i(t)+\eta_i\dot{\Phi}_i(t)+{\Omega_i}^2\Phi_i(t)+
\int m_i(t-t')\dot{\Phi}_i(t')dt'=0
\label{slaveeq}
\end{equation}
In \ref{slaveeq} the memory is given by
\begin{equation}
m_i(t)= v_i^s\Phi_m(t)\Phi_i(t)
\label{slavememo}
\end{equation}
$\Phi_i(t)\propto\langle Q_i(t)Q_i(0)\rangle$ being the \textit{slave correlator}. The coupling between the slave and master dynamics is assured by the product of the slave and master correlators in the memory kernel \ref{slavememo}.
Equations \ref{mastereq}, \ref{masterememo}, \ref{slaveeq}, and \ref{slavememo} are a closed set that can be solved numerically, as analytic solutions exist only in a restricted number of cases\cite{goetze_09}.
MCT is essentially a hydrodynamic model: it is thus hard to attribute a precise microscopic (i.e. molecular-level) interpretation to the physical quantities involved; the $Q_i$ variables can be interpreted as key parameters influencing the liquid susceptibility. Their dynamics, described by the $\Phi_i$ correlators, allows the calculation of the OKE nuclear response function. The experimental response can be expressed as the time derivative of the sum of these slave correlators:
\begin{equation}
R_n(t)\propto -\theta(t)\frac{\partial}{\partial t}\sum_i a_i \Phi_i(t).
\label{mct-response}
\end{equation}
In other words, equations \ref{slaveeq}, \ref{slavememo} and \ref{mct-response} correspond to decomposing the electronic susceptibility correlator, $\Phi_{\chi\chi}$, into the sum of the $\Phi_i(t)$ correlators. Each of these correlators describes an ``average collective mode'', whose dynamics is governed by the SMC equations. The vibrational and relaxation properties, and the coupling of different observables, are built into the SMC equations by definition. In this respect, the SMC equations represent a robust physical model capable of describing complex dynamics, including damped vibrations and structural relaxation as well as their coupling. Differently from other approaches, SMC does not require any decoupling or dynamic separation between the fast/vibrational dynamics and the slow/relaxation phenomena.
\subsubsection{SMC fitting details and results}
We solved the SMC equations numerically, taking the frequencies, friction and coupling coefficients as parameters to be adjusted in order to reproduce the HD-OKE response. We adopted a step-by-step second-order Runge-Kutta algorithm to solve numerically the integro-differential equations \ref{mastereq} and \ref{slaveeq}. Once the time dependence of the master correlator is known, it can be used to calculate those of the slave correlators and then the OKE signal. The parameters of the model are: the master equation parameters $\eta_m$, $\Omega_m$, $v_1$ and $v_2$; the slave equation parameters $\eta_i$, $\Omega_i$, $v_i^s$ with $i=1,2,3$; and the three amplitudes $a_i$ in eq. \ref{mct-response}. Of course, the result of the fit depends on the number of slave correlators included: we considered the cases corresponding to one, two, and three correlators. We performed a preliminary series of fits to obtain a qualitative estimate of the temperature dependence of the parameters. On that basis we chose, in agreement with similar analyses reported in the literature \citep{alba_95,wuttke_00,goetze_00b,krakoviack_02,wiebel_02,goetze_04,ricci_04}, to force some of the parameters either to assume fixed values or to follow pre-established temperature trends.
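The heart of the numerical problem is the memory integral. A minimal fixed-step sketch of the solver for Eq.\ref{mastereq} is given below; the second-order Runge-Kutta refinement and the slave equations (same structure, with kernel $v^s_i\Phi_m\Phi_i$) are left out for brevity.
\begin{verbatim}
import numpy as np

def solve_master(Omega, eta, v1, v2, dt=1e-3, n=5000):
    """Integrate the master equation with kernel K = v1*Phi + v2*Phi^2,
    Phi(0) = 1, dPhi/dt(0) = 0. The O(n^2) memory sum is the price of the
    convolution; production codes use block/decimation algorithms."""
    phi = np.empty(n)
    dphi = np.empty(n)
    phi[0], dphi[0] = 1.0, 0.0
    for i in range(n - 1):
        K = v1 * phi[i::-1] + v2 * phi[i::-1]**2   # K(Phi(t_i - t_j))
        mem = np.dot(K, dphi[:i + 1]) * dt         # memory integral
        ddphi = -eta * dphi[i] - Omega**2 * phi[i] - mem
        dphi[i + 1] = dphi[i] + dt * ddphi         # semi-implicit Euler
        phi[i + 1] = phi[i] + dt * dphi[i + 1]
    return phi

# The OKE response then follows from the slave correlators as
# R_n(t) proportional to -d/dt sum_i a_i Phi_i(t), cf. eq. (mct-response).
\end{verbatim}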
\begin{figure}[htb]
\includegraphics[scale=0.5]{FigDATI-MCT.pdf}
\caption{SMC fits (red lines) of the heterodyne-detected optical Kerr effect data (circles) at different temperatures in a log-log plot. The SMC model reproduces very well the complex vibrational dynamics taking place on the sub-picosecond time scale as well as the slow relaxation decays. Also at intermediate delay times, where the vibrational dynamics merges into the relaxation processes, the SMC equations correctly describe the decay curve.}
\label{SMC}
\end{figure}
We found that the OKE data can be described over the whole temperature range with at most three slave correlators. In particular, the weight of the highest-frequency contribution decreases monotonically as the temperature increases and becomes negligible at the two highest temperatures, where only two slave correlators are sufficient to reproduce the data.
In Fig.~\ref{SMC} we show some of the measured HD-OKE signals with the fits obtained using the SMC model. The model reproduces correctly the experimental data over the whole time range at all temperatures. Most remarkably, and differently from other fitting models, this result is achieved without imposing any decoupling of vibrational and relaxation dynamics.
\begin{figure}[htb]
\includegraphics[scale=0.4]{SMC-par.pdf}
\caption{Temperature behaviour of the fitting parameters of the SMC model, the slave frequencies $\Omega_i$, the friction parameters $\eta_m$ and $\eta_i$ and the three vertices $v^s_i$. Circles, squares, and triangles refer to $\Phi_i$ slave correlators, with $i=1$, $2$, and $3$, respectively; diamonds represent the master correlator.}
\label{MCTpar}
\end{figure}
The best values of the frequency $\Omega_m$ and the vertex $v_1$ of the master oscillator were almost constant over the whole temperature range; they were therefore locked to $66~\mathrm{cm}^{-1}$ and $0.33$, respectively. The second vertex $v_2$, instead, turned out to increase almost linearly with decreasing temperature: it was forced to obey the linear dependence $v_2=6-0.014\,T$. Finally, we left free the friction $\eta_m$ and the remaining parameters of the slave oscillators. In Fig.\ref{MCTpar} we show the temperature dependence of the slave frequencies $\Omega_i$, of the friction parameters $\eta_m$ and $\eta_i$, and of the three vertices $v^s_i$.
\begin{figure}[htb]
\includegraphics[scale=0.55]{SMC-FFT.pdf}
\caption{The Fourier transform of the SMC fit response function, $Im[\tilde{R}_n(\nu)]$, is reported (red line). The three correlators $Im[\tilde{\Phi}_{1,2,3}(\nu)]$ are also shown (magenta, blue, and orange areas). The simulation of HD-OKE data based on the SMC model requires two modes (blue and orange shaded areas) to fit the high frequency band. The characteristics of these two modes are clearly different in terms of spectral shape and temperature dependence.}
\label{MCT-FFT}
\end{figure}
In Fig.\ref{MCT-FFT} we report the imaginary part of the Fourier transform of the SMC simulated OKE responses obtained by the best fit of the experimental data at two temperatures. The contributions of the three slave correlators are reported in the figure as magenta, blue, and orange lines. The simulation of HD-OKE data by the SMC model requires two vibrational modes to fit the intermolecular stretching band of water (blue and orange lines). The characteristics of these two modes are clearly different in terms of spectral shape and temperature dependence. As discussed in a previous paper\cite{taschin_13}, these two modes can be associated with two fluctuating water species with different local structures: a low-density form characterized by a tetrahedral network, and a high-density form characterized by closely packed aggregates with lower coordination and large network distortions. A sketch of this post-processing step is given below.
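As a minimal illustration of how such spectra can be obtained from the fitted correlators (the function name, the padding factor and the sign convention are our own illustrative choices, not those of the fitting code):
\begin{verbatim}
import numpy as np

def imag_spectrum(phis, amps, dt):
    # Sketch: R(t) = -d/dt sum_i a_i Phi_i(t) for t > 0, and the imaginary
    # part of int_0^inf R(t) exp(+i 2 pi nu t) dt via a zero-padded FFT;
    # overall normalization and sign convention are illustrative.
    resp = -np.gradient(sum(a * p for a, p in zip(amps, phis)), dt)
    npad = 8 * len(resp)
    spec = np.conj(np.fft.rfft(resp, n=npad)) * dt
    return np.fft.rfftfreq(npad, dt), spec.imag
\end{verbatim}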
\subsection{Final considerations}
Two points of general relevance for the OKE investigation of the dynamical properties of molecular liquids emerge from the above discussion: i) the analysis of the time domain OKE response and, most of all, of its spectral representation, is critically dependent on the accurate determination of the instrumental function. Only an absolutely faithful determination of the latter can provide a reproducible response function, amenable to a reliable detailed analysis; ii) extending the investigation to low temperatures is an essential requirement: different models can provide equivalent results at high temperature, while diverging in their heuristic power when confronted with low temperature experiments. In fact, only at low temperature can non-exponential behaviors of the OKE signal decay show up, and a marked structuring of the oscillating part grow in.
Besides these general aspects, the main goal of our work has been that of analyzing the ability of different theoretical models to reproduce femtosecond HD-OKE data of very high quality and accuracy. The three models that we considered derive from significantly different approaches. In particular, the KDRJ model adopted by Winkler et al.\cite{winkler_02} differs significantly from the others as it assumes that the oscillatory and diffusive dynamics of liquid water can be separated a priori. In fact, the long time relaxation is described as an exponential decay and is subtracted from the HD-OKE time domain data, thus isolating the vibrational component. For the short time dynamics, the authors adopt an essentially molecular picture, based on a Kubo treatment of the linewidth, which involves three intermolecular vibrational frequencies as stochastic variables. Ref.\cite{winkler_02} takes into account only room temperature data: we show here that extending the analysis to low temperatures, well below the thermodynamic melting point, provides a much more stringent test of the theory. In fact, at low temperature not only does the long time relaxation have to be described as a stretched exponential decay, but the agreement for the oscillatory part of the OKE response, very good at room temperature, becomes definitely less satisfactory in the supercooled regime. In any case, the KDRJ approach accounts fairly well for the growing structuring of the oscillatory pattern at low temperatures.
The other two models considered get rid of the imposed separation of the diffusive contribution from the overall dynamics, a separation that appears hardly justified in view of the similar time scales of the structural relaxation and of the intermolecular vibrations of liquid water. Similarly to KDRJ, the Brownian oscillator (MBO) approach, employed for water by Palese et al.\cite{palese_96}, is based on an almost microscopic picture, whose dynamical variables consist of a collection of averaged local intermolecular modes. Under-damped oscillators of relatively high frequency account for the oscillatory part of the response, while the structural relaxation contributions originate from the superposition of over-damped oscillators described by eqs.\ref{SOmega} and \ref{RespMBOtot}. We found that the most critical parameter is the cut-off frequency $\Omega_c$ in eq.\ref{RespMBOtot}: it is this low frequency limit that inhibits a good fit of the non-exponential decay present in the low temperature OKE data. Nevertheless, this limit is imposed by the physical restriction that the oscillator period cannot be longer than the structural rearrangement time.
The mode coupling (SMC) treatment is based on a continuum picture of the liquid, and describes its dynamics on the basis of time correlation functions of physical observables. The key points are the non-linear form of the master memory function, eq.\ref{masterememo}, in the equation of motion of the density correlator, eq.\ref{mastereq}, and the inclusion of slave correlators coupled to the density correlator by the slave memory kernels, eq.\ref{slavememo}. We found that the SMC model provides the most flexible set of equations and allows a very good fit of the OKE data at all temperatures over the entire experimental time window. This approach is essentially hydrodynamic, hence no immediate link can be made between these correlators and the inter- and intra-molecular modes typical of a molecular-scale description of the dynamics. Only in particular conditions can a link with specific molecular features be outlined\cite{goetze_09}. In this sense, it appears most suitable for the investigation of pre-transitional and critical phenomena. The most interesting feature of the SMC analysis of the experimental data is that, using a rigorous physical approach and avoiding questionable assumptions, it allows disentangling dynamical contributions characterized by peculiar temperature (and, possibly, pressure) dependences.
\section*{Acknowledgments}
This work was supported by Regione Toscana, prog. POR-CRO-FSE-UNIFI-26, by Ente Cassa di Risparmio Firenze, prog. 2012-0584, and by MIUR, prog. PRIN-2010ERFKXL-004. We acknowledge M. De Pas, A. Montori and M. Giuntini for their continuous assistance with the electronic set-ups, and R. Ballerini and A. Hajeb for the mechanical realizations.
\section{Introduction}
Bismuth selenide and bismuth telluride have recently attracted considerable attention as prototypical topological insulators. The electronic band structure has a negative band gap at $\Gamma$, resulting in an odd number of closed surface state Fermi contours around the centre of the surface Brillouin zone, i.e. in a particularly simple manifestation of topologically protected surface states \cite{Noh:2008,Zhang:2009,Xia:2009,Hsieh:2009c}.
Bi$_2$Se$_3$ has a layered crystal structure, made up from Se-Bi-Se-Bi-Se quintuple layers (QLs), separated by van der Waals gaps (see Figure \ref{fig1}). The (111) surface of the material is the surface parallel to these QLs and can be prepared easily by cleaving the crystal with scotch tape. It therefore appears likely that this cleaving process takes place between two QLs and the surface is thus terminated by an intact QL.
However, this termination has recently been questioned by a low energy ion scattering investigation which revealed that a surface obtained by cleaving at room temperature, or left for some time at low temperature, develops a strong enrichment of Bi, consistent with a termination by a bismuth bilayer on top of the last QL of Bi$_2$Se$_3$ \cite{He:2013}. Such a change of the surface termination (of Bi$_2$Se$_3$ or Bi$_2$Te$_3$) should lead to a drastic modification of the surface electronic structure \cite{Hirahara:2011,He:2013,Miao:2013} compared to the single Dirac cone usually observed \cite{Xia:2009,Hsieh:2009c}. Angle-resolved photoemission (ARPES) spectra for Bi$_2$Se$_3$ cleaved at room temperature do not show this more complicated electronic structure \cite{Hatch:2011}, but it can be created on both Bi$_2$Te$_3$ and Bi$_2$Se$_3$ by depositing a bilayer of bismuth on purpose \cite{Hirahara:2011,Miao:2013}.
\begin{figure}[h!]
\includegraphics[width=.45\textwidth]{figure1}%
\caption{Left: Side view of a bulk-terminated Bi$_2$Se$_3$(111) crystal with indication of the terminations tested and the notation for the interlayer distances. Right: One possible bilayer-terminated surface (different stacking possibilities are not shown). \label{fig1}}
\end{figure}
Even for the bulk terminated by a QL, details of the structural relaxations are crucial for the electronic structure. It was observed early on that ARPES spectra from Bi$_2$Se$_3$ change with time after cleaving \cite{Hsieh:2009c}. The change manifests itself as a shift of all the bands to higher binding energy and the appearance of new two-dimensional states on the surface \cite{Bianchi:2010b,Bianchi:2011}. An initial interpretation of this was a structural relaxation of the van der Waals gaps below the surface \cite{Noh:2008,Hsieh:2009c}, and it was shown theoretically that an increased van der Waals gap could indeed give rise to two-dimensional electronic states that are similar to those observed by ARPES \cite{Menshchikova:2011,Vergniory:2012}. In related layered systems with van der Waals gaps, such a surface relaxation can in fact reproduce observed splittings of ARPES band dispersions \cite{Hoesch:2009}. Intercalation of atoms into Bi$_2$Se$_3$ to increase the van der Waals gap spacing on purpose, however, did not lead to changes in the electronic structure \cite{Bianchi:2012b}, and an alternative interpretation of the phenomenon is the formation of two-dimensional electron gases near the surface caused by an adsorbate-induced band bending \cite{Bianchi:2010b,Bianchi:2011,Bahramy:2012,King:2011}.
A detailed structural determination of the Bi$_2$Se$_3$(111) surface cleaved at room temperature is therefore called for and presented here. We use two complementary and powerful structural techniques, low-energy electron diffraction (LEED) and surface X-ray diffraction (SXRD).
The Bi$_2$Se$_3$ crystals were grown by standard methods described elsewhere \cite{Bianchi:2010b}. The bulk structure was determined by X-ray diffraction at room temperature. To this end, a fine powder was filed from the crystal rod and diffraction experiments were performed on a STOE powder diffractometer using Cu K$_{\alpha1}$ radiation in transmission geometry. The bulk structure parameters were analysed by Rietveld refinement and found to be in good agreement with the literature \cite{Nakajima:1963}.
These structural parameters were used as starting and reference points for the surface structure determination.
LEED and SXRD experiments were performed in ultra-high vacuum (UHV) chambers with a base pressure of $\approx 10^{-10}$ Torr. The samples were cleaved at room temperature in a loadlock with a somewhat poorer vacuum. X-ray photoemission spectroscopy performed in the LEED chamber did not show any detectable contaminations. SXRD data were taken at beamline I07 of the Diamond Light Source, using 20~keV X-rays and a UHV chamber mounted on a large '2+3' diffractometer \cite{Vlieg:1998}. Scattered X-rays were collected using a two-dimensional detector (Pilatus), enabling fast data acquisition. The specular reflectivity (00 rod) was collected using a conventional $\Theta-2\Theta$ scan, whilst all non-specular data were recorded using a fixed X-ray incidence angle of 1$^{\circ}$. Note that data were acquired over a time span of the order of hours, whereas Ref. \cite{He:2013} reports an increased Bi concentration near the surface, interpreted as a bilayer formation, immediately after cleaving the sample at room temperature.
Full dynamic LEED \textit{I(V)} model calculations were performed using a modified version of the Symmetrised Automated Tensor LEED (SATLEED) computer package \cite{michel1,VanHove:1986}. The potential and the electron scattering phase-shifts for the Bi$_{2}$Se$_{3}$ (111) surface were calculated using the optimised muffin-tin potential method \cite{Rundgren:2003}, an approach recently used to successfully determine complex metal oxide surfaces \cite{Nascimento:2007,Pentcheva:2008,Nascimento:2009}. Specific phase-shift sets were calculated for selenium and for bismuth atoms, depending on their surface and bulk positions. The \textit{I(V)} model calculations converged when using 12 phase shifts ($l_{max}=11$), and 13 phase shifts were used in the final calculations. Convergence for a lower number of phase shifts than in e.g. Ref. \cite{Fukui:2012} is probably caused by two factors: one is the different method of phase-shift calculation and the other is the lower maximum electron energy in the experiment, which was found sufficient due to the higher temperature. Debye temperatures were obtained from Ref. \cite{Shoemake:1969}. The real and imaginary parts of the optical potential were set to $V_{0} = 10.0$~eV and $V_{0i}=-5.0$~eV, respectively.
SXRD crystal truncation rod intensities were extracted by numerically integrating the background-subtracted spot in a well defined region of interest on the detector image. The structure factors were calculated by applying correction factors to account for the polarisation of the X-ray beam, the rod interception, and the area of the sample contributing to the scattered intensity, and then taking the square root of the corrected intensity \cite{Schleputz:2011}. Subsequent analysis of the data was undertaken using the ANAROD code \cite{Vlieg:2000}.
The structure determination was performed by a quantitative comparison between the experimental and theoretical \textit{I(V)} curves and crystal truncation rod intensities. The agreement between experimental data and model calculations was quantified using the Pendry reliability factor ($R_{P}$) \cite{Pendry:1980} for LEED and $\chi^2$ for SXRD; a minimal sketch of the $R_P$ evaluation is given below. In the first step of the structure analysis, six trial models were tried: the five possible bulk terminations (\textit{Se1, Bi1, Se2, Bi2,} and \textit{Se-Se}, see Fig. \ref{fig1}) as well as a bismuth bilayer atop the truncated bulk crystal. The resulting agreement is shown in Table \ref{rptable1}. After that, an optimisation procedure was used to adjust the structural parameters as well as the inner potential (for LEED) to find the optimum structure, namely the one that leads to the lowest $R_{p}$ or $\chi^2$. For the LEED analysis of the \textit{Se1} structure, the surface Debye temperatures of the first four layers were refined, too, but this only gave rise to a very small change of $R_P$. The final values of $R_{p}$ and $\chi^2$ for all structural models are also listed in Table \ref{rptable1}. Regarding the Bi bilayer atop model \cite{He:2013}, all nine stacking possibilities were tested and the structural parameters of the bilayer were refined. The best values of $R_{P}$ and $\chi^2$ are given in Table \ref{rptable1}. The final comparison between measured and calculated diffraction data is shown in Figures \ref{fig2} and \ref{fig3}, both for the best fit obtained with the intact QL (\textit{Se1}) termination as well as for the best agreement that can be reached with an optimised Bi bilayer termination.
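As an illustration, the single-beam Pendry comparison can be sketched in a few lines of Python (a minimal illustration of the definition in Ref. \cite{Pendry:1980}, not the SATLEED implementation; the function name, energy grid and numerical guard are our own choices):
\begin{verbatim}
import numpy as np

def pendry_r(E, I_exp, I_th, V0i=5.0):
    # Y(E) = L / (1 + L^2 V0i^2), with L = I'/I the logarithmic derivative
    # and V0i the magnitude of the imaginary optical potential (eV).
    def Y(I):
        L = np.gradient(I, E) / np.clip(I, 1e-12, None)
        return L / (1.0 + (L * V0i) ** 2)
    Ye, Yt = Y(I_exp), Y(I_th)
    return np.trapz((Ye - Yt) ** 2, E) / np.trapz(Ye ** 2 + Yt ** 2, E)
\end{verbatim}
For several beams, numerators and denominators are summed over the beams before taking the ratio.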
\begin{figure}[h!]
\includegraphics[width=.5\textwidth]{figure2}%
\caption{LEED experimental and theoretical \textit{I(V)} curves for the best structural model. The inset shows the LEED pattern at 161 eV. \label{fig2}}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=.45\textwidth]{figure3}%
\caption{SXRD data from four different crystal truncation rods (CTRs) and fit to two different terminations. \label{fig3}}
\end{figure}
\begin{table
\caption{LEED Pendry R-factor and SXRD $\chi^2$ for different trial models and for non-optimised (truncated bulk) as well as optimised parameters. \label{rptable1}}
\begin{ruledtabular}
\begin{tabular}{c c c c c}
& \multicolumn{2}{c}{LEED $R_{P}$} & \multicolumn{2}{c}{SXRD $\chi^2$} \\
\hline
Model & Bulk & Optimised & Bulk & Optimised \\
\hline
\textit{Se1} & 0.74 & \textbf{0.25} &3.15 & \textbf{2.44} \\
\textit{Bi1} & 0.83 & 0.61 & 9.50 & 4.68 \\
\textit{Se2} & 0.91 & 0.70 & 30.32 & 10.20 \\
\textit{Bi2} & 0.80 & 0.65 & 30.31 & 7.02 \\
\textit{Se-Se} & 0.98 & 0.80 & 9.46 & 7.05 \\
\textit{Bi atop \cite{He:2013}} & 1.00 & 0.84 & 11.88 & 6.31 \\
\end{tabular}
\end{ruledtabular}
\end{table}
For both LEED and SXRD, the termination with an intact QL, i.e. the \textit{Se1} model, gives the best agreement between experimental and simulated diffraction data. The detailed structural relaxations for this termination are given in Table \ref{leedatomic}. The agreement between the two structural techniques is excellent, apart from the third interlayer distance $d_{Se2-Bi}$ where LEED finds a slightly larger expansion than SXRD, and the fourth interlayer distance $d_{Bi-Se1}$ where LEED points towards a small contraction of the interlayer spacing whereas SXRD shows an expansion. Both techniques are consistent in finding a small contraction in the first interlayer spacing $d_{Se1-Bi}$ and a small expansion of the van der Waals gap distance $d_{Se1-Se1}$.
\begin{table
\caption{Interlayer distances found from LEED and SXRD (see Fig. \ref{fig1}) and the corresponding bulk distances. All values are given in \AA ngstroms. The uncertainties for the bulk values are smaller than 0.01 \AA. \label{leedatomic}}
\begin{ruledtabular}
\begin{tabular}{c c c c}
Interlayer Distances& Bulk Value & LEED & SXRD \\%& DFT \\
\hline
$d_{Se1-Bi}$ & 1.62 & 1.56 $\pm$ 0.03 & 1.51 $\pm$ 0.05 \\%1.58 & \\
$d_{Bi-Se2}$ & 1.95 & 1.96 $\pm$ 0.03 & 1.94 $\pm$ 0.06 \\%1.93 & \\
$d_{Se2-Bi}$ & 1.95 & 2.01 $\pm$ 0.04 & 1.91 $\pm$ 0.05 \\%1.92 & \\
$d_{Bi-Se1}$ & 1.62 & 1.53 $\pm$ 0.05 & 1.72 $\pm$ 0.04 \\%1.61 & \\
$d_{Se1-Se1}$ & 2.42 & 2.51 $\pm$ 0.08 & 2.50 $\pm$ 0.06 \\%2.31 & \\
\end{tabular}
\end{ruledtabular}
\end{table}
From the results of this structural determination, we can draw the following conclusions: Both techniques unambiguously show that, at room temperature, the Bi$_2$Se$_3$(111) surface is terminated by an intact QL rather than covered by a Bi bilayer. An intact QL termination also suggests that this termination prevails at lower temperature, since less thermal activation energy is available for a major structural rearrangement. The observed termination is at variance with the recent low-energy ion scattering result reported in Ref. \cite{He:2013} but consistent with other experimental evidence. Notably, the surface electronic structure for a Bi bilayer on Bi$_2$Se$_3$(111) is quite different \cite{Miao:2013} from the simple single Dirac cone observed for the surface terminated by a van der Waals gap. While only low-temperature ARPES data for an on-purpose prepared bilayer of Bi on Bi$_2$Se$_3$(111) are available, it is unlikely that a room temperature measurement such as in Ref. \cite{Hatch:2011} would miss the electronic structure change and the additional bands caused by the bilayer. Finally, it is interesting to note that a recent room temperature SXRD structural determination of thin Bi$_2$Te$_3$ films on Si has also shown the films to have an intact QL termination towards vacuum \cite{Liu:2013b}.
The details of the surface relaxations are also interesting. The contraction of the first interlayer spacing $d_{Se1-Bi}$ is of the order of 5\%, and such a large contraction is unusual for close-packed surfaces. On close-packed simple metal surfaces, small expansions of the first interlayer spacing are more frequently found than contractions \cite{Hofmann:1996e}. On Bi$_2$Te$_3$(111), a small contraction of 1\% was reported \cite{Fukui:2012}. More importantly, neither LEED nor SXRD finds a significant expansion of the first van der Waals gap spacing. Both point towards a small expansion of the order of 4\%, far smaller than the expansions of 20 - 40 \% required to explain the observed two-dimensional electronic states in the conduction band as caused by an interlayer expansion \cite{Menshchikova:2011,Vergniory:2012}. This can be taken as additional evidence that band bending, and not surface relaxation, is the cause of the appearance of new two-dimensional electronic states near the surface of Bi$_2$Se$_3$ \cite{Bianchi:2010b,Bianchi:2011,Bahramy:2012,King:2011}.
In summary, LEED and SXRD reveal that the room temperature structure of Bi$_2$Se$_3$(111) is that of a surface terminated by an intact QL, i.e. as expected for a surface cleaved in a van der Waals gap. Both techniques agree on a small expansion of the first van der Waals gap below the surface, but the size of the expansion is far smaller than what would be required to bring about a dramatic change of the surface electronic structure.
\begin{acknowledgments}
We gratefully acknowledge financial support by the VILLUM foundation, the Danish National Research Foundation, the Carlsberg Foundation,
CNPq, CAPES, FAPEMIG, and the Laborat\'orio Nacional de Luz S\'incrotron (LNLS), as well as travel support from the Diamond Light Source Ltd. under proposal SI7522. \end{acknowledgments}
\section{Introduction}\label{intro}
We consider quasilinear elliptic partial differential equations in a possibly anisotropic medium. From the mathematical viewpoint, the anisotropy is responsible for a much richer geometric structure than the usual Euclidean geometry. However, the main interest lies in concrete applications, since anisotropic media naturally arise in several real world phenomena.
In fact, anisotropic energies are widely used in computer vision (see for instance \cite{BaFe,EsOs,PeMa,We,YiZhSi}) and in continuum mechanics, in particular in presence of materials with distinct behavior with respect to different directions,
typically due to the crystalline microstructure of the medium
(see for instance \cite{BeNoPa,BeNoRi,BrRiSo,CaHo,Gu,Ta} and the references therein).
Our main results are a Hopf Lemma at the boundary, as well as local and global regularity estimates for positive solutions.
Let us remark that, in a forthcoming paper, as a direct application of the results discussed above, we shall develop a moving plane procedure in the general context of these anisotropic and possibly singular/degenerate elliptic equations, in order to prove monotonicity and symmetry results in this Finsler geometry setting for positive solutions, both on bounded and unbounded domains, such as the whole space $\R^n$ or half spaces.
For $n\geq 2$, let $\Omega \subset \R^n$ be a smooth bounded domain.
Let us consider the following \emph{Wulff type functional}:
\begin{equation}\label{wulff}
I(u)=\int_\Om\left[B(H(\n u))-F(u)\right]\di x\,,
\end{equation}
whose weak form of the Euler-Lagrange equation is given by
\begin{equation}\label{eq debole}
\int_\Om B'(H(\n u))\la\n H(\n u),\n \psi\ra =\int_\Om f(u)\psi, \, \forall \psi \in C^{1}_{c} (\Omega),
\end{equation}
(where $f=F'$)
as well as its strong form
\begin{equation}\label{eq forte}
-\Div\left(B'(H(\n u))\n H(\n u))\right)=f(u).
\end{equation}
We assume the following set of hypotheses on $B$, $H$ and $f$:
\begin{itemize}
\item[(i)] $B\in C^{3,\beta}_{loc}((0,+\infty))\cap C^1([0,+\infty))$, with $\beta\in (0,1)$
\item[(ii)] $B(0)=B'(0)=0,\quad B(t),B'(t),B''(t)>0\,\,\forall t\in(0,+\infty)$
\item[(iii)] there exist $p>1,k\in [0,1],\gamma>0,\Gamma>0$ such that:
\begin{equation}\label{propr B}
\begin{array}{c}
\gamma (k+t)^{p-2}t\leq B'(t)\leq \Gamma (k+t)^{p-2}t\\
\gamma (k+t)^{p-2}\leq B''(t)\leq \Gamma (k+t)^{p-2}
\end{array}
\end{equation}
for any $t>0$
\item[(iv)] $H\in C^{3,\beta}_{loc}(\R^n\backslash \{0\})$ is even and such that $H(\xi)>0\,\,\forall\xi\in\R^n\backslash\{0\}$
\item[(v)] $H(t\xi)=tH(\xi)\quad\forall\xi\in\R^n\backslash\{0\},\forall t>0$
\item[(vi)] $H$ is \emph{uniformly elliptic}, that means
the set $B_1^H:=\{\xi\in\R^n:\,H(\xi)<1\}$ is uniformly convex, i.e.
$$
\exists \lambda>0:\quad \la D^2 H(\xi)v,v\ra\geq\lambda |v|^2\quad\forall\xi\in\partial B_1^H,\,\forall v\in\n H(\xi)^\perp\,.
$$
\item[(vii)] $f$ is a positive continuous function on $[0,\infty)$ which is locally Lipschitz continuous on $(0,\infty)$
\item[(viii)] there exists $g\in C^0([0,+\infty))$ non-decreasing on $(0,\delta)$, $\delta>0$,
satisfying $g(0)=0$, $f+g\geq 0$ and either $g=0$ in $[0,d]$, $d>0$, or
$$
\int_0^\delta\frac{1}{L^{-1}(G(s))}ds=+\infty,
$$
where $G(s)=\displaystyle\int_0^s g(t)dt$ and $L(s)=sB'(s)-B(s)$.
\end{itemize}
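A simple admissible pair, given here just for illustration, is
$$
f(s)=1+s^{q},\quad q\geq 1,\qquad g\equiv 0\,,
$$
for which $(vii)$ holds trivially and $(viii)$ is satisfied through the first alternative, since $g=0$ in $[0,d]$ for every $d>0$.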
The assumptions $(iv)-(v)-(vi)$ ensure that $H$ is a Finsler norm.
For $H(\xi)=|\xi|$ we get the usual Euclidean norm and, if we also take $B(t)=\frac{t^p}{p}$,
the operator at the left-hand side of \eqref{eq forte} becomes the usual $p$-Laplacian operator.
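Indeed, in this model case $B'(t)=t^{p-1}$ and $\n H(\xi)=\frac{\xi}{|\xi|}$, so that
$$
B'(H(\n u))\,\n H(\n u)=|\n u|^{p-1}\,\frac{\n u}{|\n u|}=|\n u|^{p-2}\,\n u
$$
and \eqref{eq forte} reduces to $-\Delta_p u=f(u)$.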
Let us observe that, under the above hypotheses, the natural space for the existence of solutions is
$W^{1,p} (\Om) \cap L^{\infty} (\Om)$.
However, better regularity holds in general.
If $k>0$ in hypothesis $(iii)$, then the operator is uniformly elliptic and, by standard elliptic regularity, solutions are classical and \eqref{eq forte} is satisfied.
On the other hand, for $k=0$, the operator becomes degenerate or singular, and solutions are not classical.
In fact, in \cite{CoFaVa} the authors show that to equation \eqref{eq forte}
it is possible to apply the results in \cite{DB, To} to ensure that the solutions belong to
$C^{1,\alpha}(\Omega)\cap C^2(\Omega\backslash Z)$ for some $0 < \alpha < 1$,
where $Z$ denotes the \emph{critical set}, i.e. the set where $\n u$ vanishes.
Moreover, if $\Omega$ is smooth, we can apply the regularity results in \cite{Lie}
to deduce that the solutions are in fact $C^1$ up to the boundary.
Thus, in order to cover the general case, throughout the paper we shall consider solutions belonging to $C^1 (\overline{\Om})$, which means that the equation shall always be understood in the weak sense \eqref{eq debole}.
Anyhow, we will see that, as a consequence of our regularity results,
the critical set $Z$ is negligible and the strong equation \eqref{eq forte} is satisfied almost everywhere.
Useful tools to obtain regularity and
qualitative properties (comparison principles, Harnack inequality, monotonicity and symmetries, etc.)
of equations involving the $p$-Laplace operator were first introduced in \cite{DS1,DS2}.
These techniques were then widely developed to study several types of
more general equations and systems (for instance with lower order terms or singular data),
both in bounded and unbounded domains
(see for instance \cite{FaMoRiSc, LePoRi, MeRiSc, MoRiSc, RiSc, Sc1, Sc2} and the references quoted there).
Using this framework, we prove local regularity estimates (Section~\ref{localreg})
for the solutions of our anisotropic quasilinear elliptic equations,
namely a weighted integral Hessian estimate as well as the integrability of the inverse of the gradient.
For this kind of equations we also prove a Hopf type Lemma (Section \ref{sect hopf}).
Thanks to this result, the local regularity estimates are then extended to the global case (Section~\ref{globalreg}).
\section{Notation and some geometrical tools}\label{notation}
For $a,b\in\R^n$ we denote by $a\otimes b$ the matrix whose entries
are $(a\otimes b)_{ij}=a_i b_j$. We remark that, for $v,w\in\R^n$,
there holds:
\begin{equation}\label{tensori}
\la a\otimes b\, v,w\ra=\la b,v\ra \la a,w\ra\,.
\end{equation}
Given an $n\times n$ matrix $A$, we set: $\displaystyle
|A|:=\sqrt{\sum_{i,j=1}^n |A_{ij}|^2}$.
For $x_0\in\R^n$ and $r>0$ we set $\br=\{x\in\R^n:|x-x_0|<r\}$.\\
We briefly recall some basic properties of Finsler geometry,
which is the main tool to study anisotropic problems.
We recall that Riemannian geometry is a particular case of the Finsler one
and, in fact, also in this more general framework it is possible to define
lengths of curves, geodesics, curvatures, etc.
For our purposes, we focus the attention on Finsler norms not depending on the position,
that is, invariant under translations.
\begin{definition}\label{finsler norm}\rm
A function $H:\R^n\to [0,+\infty)$ is said to be a \emph{Finsler norm} if it is continuous,
even, convex and it satisfies:
\begin{equation}\label{finsler 1}
H(\lambda\xi)=|\lambda|H(\xi),\quad\forall\,\lambda\in\R,\quad\forall\,\xi\in\R^n
\end{equation}
and
\begin{equation}\label{finsler 2}
\exists\,c>0\,:\quad H(\xi)\geq c|\xi|\quad\forall\,\xi\in\R^n\,.
\end{equation}
\end{definition}
The \emph{dual norm} $\ho:\R^n\to [0,+\infty)$ is defined as:
$$
\ho(x)=\sup\{\la \xi,x\ra: H(\xi)\leq 1\}.
$$
It is easy to prove that $\ho$ is also a Finsler norm and it has the same regularity
properties as $H$. In particular $\ho$ satisfies \eqref{finsler 2} with $c^{-1}$ in place of $c$.
Moreover it follows that $(\ho)^\circ=H$.
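A standard model example of Definition \ref{finsler norm} is $H(\xi)=\left(\sum_{i=1}^n|\xi_i|^q\right)^{1/q}$ with $1<q<\infty$, for which H\"older's inequality gives
$$
\ho(x)=\Big(\sum_{i=1}^n|x_i|^{q'}\Big)^{1/q'},\qquad \frac{1}{q}+\frac{1}{q'}=1\,,
$$
so that $H$ and $\ho$ coincide only in the Euclidean case $q=2$ (note, however, that for $q\neq 2$ this norm need not fulfil all the smoothness and uniform ellipticity assumptions of Section \ref{intro}).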
For $r>0$ and $\overline x\in\R^n$ we define:
$$
\bh_r(\overline x)=\{x\in\R^n: H(x-\overline x)\leq r\}
$$
and
$$
\bho_r(\overline x)=\{x\in\R^n: \ho(x-\overline x)\leq r\}.
$$
For simplicity, when $\overline x=0$, we set: $\bh_r=\bh_r(0),\,\bho_r=\bho_r(0)$.
In the literature $\bh_r$ and $\bho_r$ are also called ``Wulff shape'' and
``Frank diagram'' respectively.
We remark that, for any $x\neq 0$, there holds:
\begin{equation}\label{propr finsler}
H(\n \ho(x))=1=\ho(\n H(x)).
\end{equation}
For more details on Finsler geometry see for instance \cite{BaChSh, BePa}.
\section{Local regularity estimates}\label{localreg}
The aim of this section is to present some integral regularity estimates for the Hessian and for the inverse of the gradient of any (local) solution of \eqref{eq debole}. First of all, we shall need the following lemma about some structural bounds for the principal part of our divergence form operator.
\begin{lemma}\label{stime fondamentali}
There exist $\bar C_1,\bar C_2>0$ such that:
\begin{equation}\label{propr BH 1}
\la\left[
B''(H(\xi))\n H(\xi)\otimes\n H(\xi)+
B'(H(\xi))D^2 H(\xi)
\right]v, v\ra\geq
\bar C_1(k+|\xi|)^{p-2}|v|^2
\end{equation}
and
\begin{equation}\label{propr BH 2}
\left|
B''(H(\xi))\n H(\xi)\otimes \n H(\xi)+B'(H(\xi))D^2 H(\xi)
\right|\leq
\bar C_2(k+|\xi|)^{p-2}
\end{equation}
for any $\xi\in \R^n \backslash\{0\}$ and $v\in\R^n$.
\end{lemma}
\proof
See formulas (3.2) and (3.3) in \cite{CoFaVa}.
\endproof
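In the model case $B(t)=\frac{t^p}{p}$, $H(\xi)=|\xi|$ (so that $k=0$), the matrix appearing in \eqref{propr BH 1}--\eqref{propr BH 2} can be computed explicitly: since $\n H(\xi)=\frac{\xi}{|\xi|}$ and $D^2H(\xi)=\frac{1}{|\xi|}\left(\mathrm{Id}-\frac{\xi\otimes\xi}{|\xi|^2}\right)$, one finds
$$
B''(H(\xi))\,\n H(\xi)\otimes\n H(\xi)+B'(H(\xi))\,D^2 H(\xi)
=|\xi|^{p-2}\left(\mathrm{Id}+(p-2)\,\frac{\xi\otimes\xi}{|\xi|^2}\right),
$$
whose eigenvalues are $|\xi|^{p-2}$, with multiplicity $n-1$, and $(p-1)|\xi|^{p-2}$; hence \eqref{propr BH 1} and \eqref{propr BH 2} hold, for instance, with $\bar C_1=\min\{1,p-1\}$ and $\bar C_2=n\max\{1,p-1\}$.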
Next, we shall be interested in the \textbf{linearized equation} of \eqref{eq debole} at any fixed solution $u$, which we can write as follows.
Set $Z=\{x\in\Om:\n u(x)=0\}$ and, for $\vf\in C^{\infty}_c(\Om\backslash Z)$, taking $\psi=\vf_i$ in \eqref{eq debole} and integrating by parts, we get:
\begin{equation*}
\int_\Om B''(H(\n u))\la\n H(\n u),\n u_i\ra\la\n H(\n u),\n \vf\ra+ B'(H(\n u))\la D^2H(\n u)\n u_i,\n\vf\ra=
\int_\Om f'(u)\,u_i\,\vf,
\end{equation*}
which, taking in account \eqref{tensori}, can also be written as:
\begin{equation}\label{linearizzato}
\int_\Om \la
\left[B''(H(\n u))\n H(\n u)\otimes \n H(\n u)+ B'(H(\n u))D^2H(\n u)\right]\n u_i,\n\vf\ra=
\int_\Om f'(u)\,u_i\,\vf.
\end{equation}
Let us remark that we could make sense of \eqref{linearizzato} under several regularity hypotheses on the solution as well as on the test functions. However, even if we will not pursue such generality, let us point out that the right space for the linearization and a full spectral theory for this equation, in the singular/degenerate case when $k=0$, has been introduced in \cite{DS1},\cite{DS2} and completed in \cite{CES4}.
\begin{remark}
In the sequel, with a slight abuse of notation, we will
denote by $\n u_i$ (and $u_{ij}$ respectively) the second derivatives
of $u$ outside $Z$ (thought of as extended to $0$ on $Z$).
At the end of the section we will then recover sufficient regularity
to ensure that these derivatives actually coincide with the distributional
second derivatives of $u$ in the whole of $\Omega$.
\end{remark}
We are now ready to prove one of our main regularity results, namely a local weighted integral estimate for the Hessian.
\begin{proposition}[Local Hessian estimate]\label{stima hessiano locale}\rm
Let $u\in C^1(\overline{\Omega})$ be a solution to \eqref{eq debole}.
Fix $x_0\in \Omega $ and $r>0$ such that $B_{2r}(x_0)\subset\Omega$. For $\beta\in [0,1)$ and $\gamma<n-2$ ($\gamma=0$ if $n=2$),
there holds:
\begin{equation}\label{eq stima hessiano locale componente}
\sup\limits_{y\in \Omega}\, \int_{\br}\frac{(k+|\n u|)^{p-2-\beta} | u_{ij}| ^2}{|x-y|^\gamma}\di x \leq C
\end{equation}
and
\begin{equation}\label{eq stima hessiano locale}
\sup\limits_{y\in \Omega}\, \int_{\br}\frac{(k+|\n u|)^{p-2-\beta} | D^2 u | ^2}{|x-y|^\gamma}\di x \leq C\,,
\end{equation}
where $C= C(x_0,r,\beta, \gamma,p,n,\|u\|_{W^{1,\infty}},f)$.
\end{proposition}
\proof
Let $G_\e:\R\to\R$ be defined as:
$$
G_\e(s)=\left\{
\begin{array}{ll}
s & \hbox{ if } |s| \geq 2 \e, \\
2\left[s- \e \frac{s}{|s|}\right] & \hbox{ if } \e< |s|< 2\e, \\
0 & \hbox{ if } |s|\leq \e,
\end{array}
\right.
$$
and let $\psi$ be a cut-off
function such that
\begin{equation}\label{psi}
\psi\in C^\infty_c(B_{2r}(x_0))\quad \psi\equiv 1\ \mbox{ in } \ B_r(x_0)\qquad \mbox{and }\quad |\n \psi|\leq\frac{2}{r},
\end{equation}
with $2r <$ dist$(x_0,\partial \Omega)$.
Fix $\beta\in [0,1)$
and $\gamma<n-2$ (or $\gamma=0$ if $n=2$) and set:
\begin{equation}\label{test}
\varphi (x)=T_\e(u_i(x)) K_\delta(|x-y|) \psi^2(x)
\quad \mbox{ where } \quad T_\e(t)=\frac{G_\e(t)}{|t|^\beta}
\quad\hbox{and}\quad
K_\delta(t)=\frac{G_\delta(t)}{|t|^{\gamma+1}}\,.
\end{equation}
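Note that, pointwise for $s\neq 0$ and $t>0$, $T_\e(s)\to s|s|^{-\beta}$ as $\e\to 0$ and $K_\delta(t)\to t^{-\gamma}$ as $\delta\to 0$: the two truncations only serve to make $\vf$ an admissible test function, supported in $\{|u_i|\geq\e\}\cap\,\mbox{supp}\,\psi\subset\Om\setminus Z$ and away from the singularity at $x=y$.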
Substituting $\vf$ in \eqref{linearizzato}, we get:
\begin{eqnarray}
\nonumber && \int_\Om B''(H(\n u))\la\n H(\n u),\n u_i\ra^2 T'_\e(u_i) K_\delta(|x-y|)\psi^2\\
\nonumber &+&\int_\Om B''(H(\n u))\la\n H(\n u),\n u_i\ra \la\n H(\n u),\n K_\delta(|x-y|)\ra T_\e(u_i)\psi^2\\
\nonumber &+&\int_\Om B''(H(\n u))\la\n H(\n u),\n u_i\ra \la \n H(\n u),\n \psi\ra T_\e(u_i)K_\delta(|x-y|)2\psi\\
\nonumber &+&\int_\Om B'(H(\n u))\la D^2H(\n u)\n u_i,\n u_i\ra T'_\e(u_i)K_\delta(|x-y|)\psi^2\\
\nonumber &+&\int_\Om B'(H(\n u))\la D^2H(\n u)\n u_i,\n K_\delta(|x-y|)\ra T_\e(u_i)\psi^2\\
\nonumber &+&\int_\Om B'(H(\n u))\la D^2H(\n u)\n u_i,\n \psi\ra T_\e(u_i)K_\delta(|x-y|)2\psi\\
\label{stima hessiano 1} &=&\int_\Om f'(u)\,u_i\,T_\e(u_i)K_\delta(|x-y|)\psi^2\,.
\end{eqnarray}
Recalling \eqref{tensori}, we have:
$$
\la \n H(\xi),\n v\ra^2=\la \n H(\xi)\otimes \n H(\xi)v,v\ra
\quad
\forall \xi\in\R^n,\,\forall v\in\R^n
$$
and
$$
\la \n H(\xi),v\ra
\la \n H(\xi),w\ra=
\la \n H(\xi)\otimes \n H(\xi) v,w\ra
\quad
\forall \xi\in\R^n,\,\forall v\in\R^n,\forall w\in\R^n.
$$
Hence by \eqref{propr BH 1} and \eqref{propr BH 2} we have:
\begin{equation}\label{stima hessiano 2}
B''(H(\n u))\la\n H(\n u),\n u_i\ra^2+
B'(H(\n u))\la D^2H(\n u)\n u_i,\n u_i\ra\geq
\bar C_1(k+|\n u|)^{p-2}|\n u_i|^2
\end{equation}
and
\begin{equation}\label{stima hessiano 2 bis}
\left| B''(H(\n u))\n H(\n u)\otimes \n H(\n u)+
B'(H(\n u)) D^2H(\n u)\right|\leq
\bar C_2(k+|\n u|)^{p-2}.
\end{equation}
By \eqref{stima hessiano 2} and the Cauchy--Schwarz inequality we get:
\begin{eqnarray}
\nonumber && \bar C_1\int_\Om(k+|\n u|)^{p-2}|\n u_i|^2T'_\e(u_i) K_\delta(|x-y|)\psi^2\\
\nonumber &\leq& \int_\Om
\left|\la \left[B''(H(\n u))\n H(\n u)\otimes \n H(\n u)+B'(H(\n u)) D^2H(\n u)\right]\n u_i,\n K_\delta(|x-y|)\ra\right|
|T_\e(u_i)|\psi^2\\
\nonumber &+& \int_\Om
\left|\la \left[B''(H(\n u))\n H(\n u)\otimes \n H(\n u)+B'(H(\n u)) D^2H(\n u)\right]\n u_i,\n \psi\ra\right||T_\e(u_i)|K_\delta(|x-y|) 2\psi\\
\nonumber &+&\int_\Om |f'(u)||u_i||T_\e(u_i)|K_\delta(|x-y|)\psi^2\\
\nonumber &\leq& \int_\Om
\left|B''(H(\n u))\n H(\n u)\otimes \n H(\n u)+B'(H(\n u)) D^2H(\n u)\right||\n u_i||\n K_\delta(|x-y|)|
|T_\e(u_i)|\psi^2\\
\nonumber &+& \int_\Om
\left|B''(H(\n u))\n H(\n u)\otimes \n H(\n u)+B'(H(\n u)) D^2H(\n u)\right||\n u_i||\n \psi||T_\e(u_i)|K_\delta(|x-y|) 2\psi\\
\label{stima hessiano 3} &+&\int_\Om |f'(u)||u_i||T_\e(u_i)|K_\delta(|x-y|)\psi^2.
\end{eqnarray}
Taking into account \eqref{stima hessiano 2 bis}, by \eqref{stima hessiano 3} we infer:
\begin{eqnarray}\label{stima hessiano 4}
\nonumber && \bar C_1\int_\Om(k+|\n u|)^{p-2}|\n u_i|^2T'_\e(u_i) K_\delta(|x-y|)\psi^2\\
\nonumber &\leq& \bar C_2\int_\Om (k+|\n u|)^{p-2}|\n u_i||\n K_\delta(|x-y|)||T_\e(u_i)|\psi^2\\
\nonumber &+& \bar C_2\int_\Om (k+|\n u|)^{p-2}|\n u_i||\n \psi||T_\e(u_i)|K_\delta(|x-y|) 2\psi\\
&+&\int_\Om |f'(u)||u_i||T_\e(u_i)|K_\delta(|x-y|)\psi^2.
\end{eqnarray}
Since $\Om$ is bounded, we have that $\int_\Omega |x-y|^{-s} \di x$ is uniformly bounded for any $s<n$. In particular, since $\gamma<n-2$, this is true both for $s =\gamma$ and $s=\gamma+1$.
Therefore, for fixed $\e>0$, we can send $\delta$ to $0$ in \eqref{stima hessiano 4} and, recalling the definition of $K_\delta$,
by dominated convergence we get:
\begin{eqnarray}\label{stima hessiano 5}
\nonumber && \int_\Om \frac{(k+|\n u|)^{p-2}|\n u_i|^2T'_\e(u_i) \psi^2}{|x-y|^\gamma}\\
\nonumber &\leq& C\int_\Om \frac{(k+|\n u|)^{p-2}|\n u_i||T_\e(u_i)|\psi^2}{|x-y|^{\gamma+1}}\\
\nonumber &+& C \int_\Om \frac{(k+|\n u|)^{p-2}|\n u_i||\n \psi||T_\e(u_i) |\psi}{|x-y|^\gamma}\\
&+&C \int_\Om \frac{|f'(u)||u_i||T_\e(u_i)| \psi^2}{|x-y|^\gamma}.
\end{eqnarray}
We remark that, if $n=2$ (so that $\gamma=0$), the first term in the sum at the right-hand side of \eqref{stima hessiano 5} is not present,
since in this case $K_\delta$ tends to the constant function $1$ and the contribution coming from $\n K_\delta$ vanishes in the limit. If instead $n\geq 3$, recalling that $G_\e$ is an odd function and that $\gamma<n-2$,
for $0<\theta<1$ we have:
\begin{equation}\label{stima I1}
\int_\Om \frac{(k+|\n u|)^{p-2}|\n u_i||T_\e(u_i)|\psi^2}{|x-y|^{\gamma+1}}
\leq
\theta \int_\Omega\frac{(k+|\n u|)^{p-2}|\n u_i|^2}{|x-y|^\gamma |u_i|^\beta}
\frac{G_\e(u_i)}{u_i} \psi^2+ C.
\end{equation}
Since $|\n\psi|\leq\frac{2}{r}$, we have:
\begin{equation}\label{stima I2}
\int_\Om \frac{(k+|\n u|)^{p-2}|\n u_i||\n \psi||T_\e(u_i) |\psi}{|x-y|^\gamma}
\leq \theta \int_\Omega\frac{(k+|\n u|)^{p-2}|\n u_i|^2 }{|x-y|^\gamma |u_i|^\beta }
\frac{G_\e(u_i)}{u_i}\psi^2+C.
\end{equation}
Recalling that $\beta\in[0,1)$ and that $|u_i||T_\e(u_i)|\leq |u_i|^{2-\beta}$,
since $f$ is locally Lipschitz and $u$ is bounded we get:
\begin{equation}\label{stima I3}
\int_\Om \frac{|f'(u)||u_i||T_\e(u_i) | \psi^2}{|x-y|^\gamma}
\leq C\int_{\Omega} \frac{1}{|x-y|^\gamma}\di x\leq C.
\end{equation}
For $s>0$ we have
$$
T'_\e(s)=\frac{1}{|s|^\beta}\left[G'_\e(s)-\beta\frac{G_\e(s)}{s}\right]
$$
and by \eqref{stima hessiano 5}, \eqref{stima I1}, \eqref{stima I2} and \eqref{stima I3} we get:
\begin{equation}\label{eq stima hess 6}
\int_\Omega \frac{(k+|\n u|)^{p-2}|\n u_i|^2}{|u_i|^\beta |x-y|^\gamma}
\left(
G'_\e(u_i)-(\beta+\theta)\frac{G_\e(u_i)}{u_i}\right)\psi^2\leq C.
\end{equation}
We now choose $\theta$ small enough so that $\beta+\theta<1$,
which makes $G'_\e(u_i)-(\beta+\theta)\frac{G_\e(u_i)}{u_i}$ non-negative.
By definition of $G_\e$ it follows that, for any $s >0$,
$G'_\e(s)-(\beta+\theta)\frac{G_\e(s)}{s}$ tends to $1-(\beta+\theta)$ as
$\e$ goes to $0$, and hence by Fatou's Lemma we get:
\begin{equation}\label{eq stima hess 7}
\int_{\Omega\setminus\{u_i=0\}} \frac{(k+|\n u|)^{p-2}|\n u_i|^2}{|u_i|^\beta |x-y|^\gamma}\psi^2\leq C
\end{equation}
and, since $(k+|\n u|)^\beta\geq |\n u|^\beta\geq |u_i|^\beta$, we have:
\begin{equation}\label{eq stima hess 8}
\int_{\Omega\setminus\{u_i=0\}} \frac{(k+|\n u|)^{p-2-\beta}|\n u_i|^2}{|x-y|^\gamma}\psi^2\leq C
\end{equation}
whence, recalling that $u_{ij} = 0$ almost everywhere on $\{u_i=0\}\cap\left(\Omega\setminus Z\right)$, we see that:
\begin{equation}\label{eq stima hess 9}
\int_{\Omega\setminus Z} \frac{(k+|\n u|)^{p-2-\beta}|\n u_i|^2}{|x-y|^\gamma}\psi^2\leq C\,,
\end{equation}
where $C$ depends on $x_0,r, n,p,\beta,\gamma, f, H, \|u\|_{W^{1,\infty}}$
but it does not depend on $y$.
By the properties of $\psi$ it follows:
$$
\sup\limits_{y\in \Omega}\,\, \int_{B_r(x_0)\setminus Z}\frac{(k+|\n u|)^{p-2-\beta} |D^2 u|^2}{|x-y|^\gamma}\leq C
$$
and by standard arguments (see for instance \cite{DS1},\cite{Sc1},\cite{Sc2}) we get the thesis.
In fact, inspired by Stampacchia's Theorem, we first
extend the (generalized) second derivatives of $u$ to be zero over the critical set $Z$ and we can actually write that
\begin{equation}\label{stima hess piena}
\sup\limits_{y\in \Omega}\,\, \int_{B_r(x_0)}\frac{(k+|\n u|)^{p-2-\beta} |D^2 u|^2}{|x-y|^\gamma}\leq C\,.
\end{equation}
Such an estimate then allows us to conclude that the extended generalized derivatives are the effective distributional derivatives (see e.g. \cite{DS1} for details).
\endproof
At this point, before proving the integrability of the inverse of the gradient, we shall need another structural estimate.
\begin{lemma}\label{stima BH}
There exists $C>0$ such that, for any $\xi\in\R^n$:
\begin{equation}\label{eq stima BH}
|B'(H(\xi))|\leq C \left(k+|\xi|\right)^{p-1}.
\end{equation}
\end{lemma}
\proof
Since $H$ is a norm equivalent to the Euclidean one, there exist $\lambda_1,\lambda_2>0$ such that:
\begin{equation}\label{norma equivalente}
\lambda_1|\xi|\leq H(\xi)\leq \lambda_2 |\xi|\quad\quad\forall \xi\in\R^n.
\end{equation}
Hence by \eqref{propr B} we infer:
\begin{equation}\label{stima BH 1}
B'(H(\xi))\leq \Gamma(k+H(\xi))^{p-2}H(\xi)\leq \Gamma(k+H(\xi))^{p-2}\lambda_2|\xi|.
\end{equation}
Moreover we have:
\begin{equation}\label{stima BH 2}
\left\{
\begin{array}{ll}
(k+H(\xi))^{p-2}\leq (k+\lambda_2|\xi|)^{p-2} & \hbox{ if } p\geq 2\\
(k+H(\xi))^{p-2}\leq (k+\lambda_1|\xi|)^{p-2} & \hbox{ if } p<2.
\end{array}
\right.
\end{equation}
Let us consider first the case $p\geq 2$.
For $t>0$, if $\lambda_2\leq 1$, we have:
\begin{equation}\label{stima BH 3}
k+\lambda_2 t\leq k+t,
\end{equation}
while, if $\lambda_2>1$, we have:
\begin{equation}\label{stima BH 4}
k+\lambda_2 t=\lambda_2\left(\frac{k}{\lambda_2}+t\right)\leq \lambda_2(k+t).
\end{equation}
By \eqref{stima BH 1}, the first equation in \eqref{stima BH 2}, \eqref{stima BH 3}
and \eqref{stima BH 4}, we get:
\begin{equation}\label{stima BH 5}
B'(H(\xi))\leq \Gamma(k+\lambda_2|\xi|)^{p-2}\lambda_2|\xi|\leq \Gamma(k+\lambda_2|\xi|)^{p-1}
\leq \Gamma\max\{1,\lambda_2\}^{p-1}(k+|\xi|)^{p-1}.
\end{equation}
Let us now consider the case $p<2$.
For $t>0$, if $\lambda_1\geq 1$, we have:
\begin{equation}\label{stima BH 6}
k+\lambda_1 t\leq \lambda_1(k+t),
\end{equation}
while, if $\lambda_1<1$, we have:
\begin{equation}\label{stima BH 7}
k+\lambda_1 t\leq k+t.
\end{equation}
Hence, since $\lambda_1|\xi|\leq k+\lambda_1|\xi|$ and $p-1>0$, by \eqref{stima BH 1}, the second equation in \eqref{stima BH 2}, \eqref{stima BH 6}
and \eqref{stima BH 7}, we get:
\begin{equation}\label{stima BH 8}
\begin{split}
B'(H(\xi)) &\leq \frac{\Gamma\lambda_2}{\lambda_1}(k+\lambda_1|\xi|)^{p-2}\lambda_1|\xi|\\
&\leq \frac{\Gamma\lambda_2}{\lambda_1}(k+\lambda_1|\xi|)^{p-1}\leq
\frac{\Gamma\lambda_2}{\lambda_1}\max\{1,\lambda_1\}^{p-1}(k+|\xi|)^{p-1}.
\end{split}
\end{equation}
\endproof
We are now in a position to state and prove our second main regularity result, which deals with the local integrability of the inverse of the weight.
\begin{proposition}[Local estimate of the weight]\label{inverso peso locale}\rm
Let $u\in C^{1}(\overline{\Om})$ be a solution to \eqref{eq debole}.
Fix $t\in [0,p-1)$ and $\gamma<n-2$ ($\gamma=0$ if $n=2$).
Then, for any $\Omega'\subset\subset\Omega $ there exists $C$ such that
\begin{equation}\label{stima peso locale}
\sup\limits_{y\in \Omega}\,\, \int_{\Omega'}\frac{1}{(k+|\n u|)^t |x-y|^\gamma} \di x \leq C\,,
\end{equation}
where $C=C(\Omega',t,\gamma,n,p,\|u\|_{W^{1,\infty}}, f)$.
\end{proposition}
\proof
We first prove inequality \eqref{stima peso locale}
on balls, then the thesis will follow by a covering argument.
For $x_0\in \Omega $ we choose $r>0$ such that $B_{2r}(x_0)$
is contained in $\Omega $.
Let $\psi$ and $K_\delta$ be defined as in Proposition \ref{stima hessiano locale}.
For $t=p-2+\beta<p-1$ and $\e > 0$, we consider in \eqref{eq debole} the test function:
$$
\vf=\frac{1}{(k+|\n u|)^t+\e}K_\delta(|x-y|)\psi^2
$$
and, noticing that $f(u(x)) \geq C(x_0)>0$ for any $x \in \brr$, we
get:
\begin{eqnarray}\label{peso 1}
&& C(x_0)\int_{\brr}\frac{1}{(k+|\n u|)^t+\e}K_\delta(|x-y|)\psi^2\\
\nonumber &\leq&
-t\int_{\brr}\frac{B'(H(\n u))(k+|\n u|)^{t-1}}{[(k+|\n u|)^t+\e]^2}
\left\la\n H(\n u),\frac{\n u}{|\n u|}D^2 u\right\ra K_\delta(|x-y|)\psi^2 \\
\nonumber &+& \int_{\brr}\frac{B'(H(\n u)) \psi^2}{(k+|\n u|)^t+\e}
\la \n H(\n u),\n K_\delta(|x-y|)\ra\\
\nonumber &+& \int_{\brr}\frac{B'(H(\n u)) 2\psi K_\delta(|x-y|)}{(k+|\n u|)^t+\e}
\la \n H(\n u),\n \psi\ra.
\end{eqnarray}
Recalling that $H$ is (positively) $1$-homogeneous, we have that $\n H$ is
$0$-homogeneous and hence we have:
$$
\n H(\xi)=\n H\left(|\xi|\frac{\xi}{|\xi|}\right)=\n H\left(\frac{\xi}{|\xi|}\right)\quad\forall \xi\in\R^n\setminus\{0\}.
$$
Since $\n H$ is continuous on the unit sphere, we infer that there exists $M>0$ such that:
\begin{equation}\label{bound grad H}
|\n H(\xi)|\leq M\quad\forall\xi\in\R^n\setminus\{0\}.
\end{equation}
By \eqref{peso 1} and \eqref{bound grad H} and Lemma \ref{stima BH} we argue:
\begin{eqnarray}\label{peso 2}
&& \int_{\brr}\frac{1}{(k+|\n u|)^t+\e}K_\delta(|x-y|)\psi^2\\
\nonumber &\leq&
C\int_{\brr}\frac{(k+|\n u|)^{p-1}\cdot (k+|\n u|)^{t-1}}{[(k+|\n u|)^t+\e]^2}
|D^2 u||K_\delta(|x-y|)|\psi^2 \\
\nonumber &+& C\int_{\brr}\frac{(k+|\n u|)^{p-1}}{(k+|\n u|)^t+\e}
|\n K_\delta(|x-y|)|\psi^2\\
\nonumber &+& C\int_{\brr}\frac{(k+|\n u|)^{p-1}}{(k+|\n u|)^t+\e}
|K_\delta(|x-y|)||\n \psi|\psi.
\end{eqnarray}
Sending $\delta$ to $0$, by dominated convergence we get:
\begin{eqnarray}\label{peso 3}
&& \int_{\brr}\frac{1}{(k+|\n u|)^t+\e}\frac{1}{|x-y|^\gamma}\psi^2\\
\nonumber &\leq&
C\int_{\brr}\frac{(k+|\n u|)^{p-1}\cdot (k+|\n u|)^{t-1}}{[(k+|\n u|)^t+\e]^2}
|D^2 u|\frac{1}{|x-y|^\gamma}\psi^2 \\
\nonumber &+& C\int_{\brr}\frac{(k+|\n u|)^{p-1}}{(k+|\n u|)^t+\e}
\frac{1}{|x-y|^{\gamma+1}}\psi^2\\
\nonumber &+& C\int_{\brr}\frac{(k+|\n u|)^{p-1}}{(k+|\n u|)^t+\e}
\frac{1}{|x-y|^\gamma}|\n \psi|\psi.
\end{eqnarray}
Recalling that $t=p-2+\beta$, using Proposition \ref{stima hessiano locale},
for $0 <\theta <1$ we get:
\begin{eqnarray}\label{stima pezzo 1}
&& \int_{\brr}\frac{(k+|\n u|)^{p-1}\cdot (k+|\n u|)^{t-1}}{[(k+|\n u|)^t+\e]^2}
|D^2 u|\frac{1}{|x-y|^\gamma}\psi^2 \\
\nonumber &\leq& \theta\int_{\brr}
\frac{(k+|\n u|)^{3t}}{[(k+|\n u|)^t+\e]^4 |x-y|^\gamma}\psi^2
+
\frac{1}{4\theta}\int_{\brr}
\frac{(k+|\n u|)^{p-2-\beta}| D^2 u |^2}{|x-y|^\gamma}
\psi^2\\
\nonumber &\leq& \theta\int_{\brr}
\frac{1}{[(k+|\n u|)^t+\e]|x-y|^\gamma}\psi^2
+\frac C\theta.
\end{eqnarray}
Since $t<p-1$, $\frac{(k+|\n u|)^{p-1}}{(k+|\n u|)^t+\e}$ is bounded
and hence we have:
\begin{equation}\label{stima pezzo 2}
\int_{\brr}\frac{(k+|\n u|)^{p-1}}{(k+|\n u|)^t+\e}
\frac{1}{|x-y|^{\gamma+1}}\psi^2\leq
C\int_{B_{2r}(x_0)}\frac{1}{|x-y|^{\gamma+1}}\psi^2\leq C.
\end{equation}
Moreover by properties of $\psi$ it follows:
\begin{equation}\label{stima pezzo 3}
\int_{\brr}\frac{(k+|\n u|)^{p-1}}{(k+|\n u|)^t+\e}
\frac{1}{|x-y|^\gamma}|\n \psi|\psi
\leq
\frac{C}{r}\int_{B_{2r}(x_0)}\frac{1}{|x-y|^\gamma}\psi^2\leq C.
\end{equation}
Choosing $\theta$ small enough, by \eqref{peso 3}, \eqref{stima pezzo 1},
\eqref{stima pezzo 2} and \eqref{stima pezzo 3} we get:
\begin{equation}\label{peso 4}
\int_{\brr} \frac{1}{(k+|\n u|)^t+\e} \frac{1}{|x-y|^\gamma}\psi^2\leq C
\end{equation}
and, sending $\e$ to $0$, by Fatou's Lemma we get the thesis.
\endproof
\section{Hopf Lemma}\label{sect hopf}
In this section we shall prove a Hopf Lemma for solutions
of our anisotropic quasilinear elliptic equation.
We begin with a couple of structural bounds from below for our principal part
and then we state a weak comparison principle for a solution and a super-solution of an equation related to ours.
\begin{lemma}\label{lemma disug basso}
For $p>1$ there exists $C>0$ such that, for any $x\in\R^n\backslash\{0\}$ and $y\in\R^n$:
\begin{equation}\label{disug basso 1}
\sum_{i,j=1}^{n}\frac{\partial}{\partial
x_i}\left(B'(H(x))H_j(x)\right)y_iy_j\geq C|x|^{p-2}|y|^2.
\end{equation}
and
\begin{equation}\label{disug basso 2}
\la B'(H(x))\n H(x)-B'(H(y))\n H(y),x-y\ra\geq
C\left(|x|+|y|\right)^{p-2}|x-y|^2.
\end{equation}
\end{lemma}
\proof First we remark that
$$
\frac{\partial}{\partial x_i}\left(B'(H(x))H_j(x)\right)=
B''(H(x))H_i(x)H_j(x)+B'(H(x))H_{ji}(x)
$$
and hence \eqref{disug basso 1} immediately follows by \eqref{propr BH 1}. Moreover, assuming $|y|\geq |x|$, using \eqref{disug basso 1}, we
have:
\begin{equation}\label{disug basso 2a}
B'(H(x))H_j(x)-B'(H(y))H_j(y)=
\int_0^1\sum_{i=1}^n\frac{\partial}{\partial
x_i}\left.\left(B'(H(x))H_j(x)\right)\right|_{y+t(x-y)}(x_i-y_i)dt
\end{equation}
and hence
\begin{eqnarray}\label{disug basso 2b}
&&\la B'(H(x))\n H(x)-B'(H(y))\n H(y),x-y\ra=\\
\nonumber &=&\int_0^1
\sum_{i,j=1}^n
\frac{\partial}{\partial x_i}\left.\left(B'(H(x))H_j(x)\right)\right|_{y+t(x-y)}(x_i-y_i)(x_j-y_j)dt\geq\\
\nonumber &\geq&C|x-y|^2\int_0^1|y+t(x-y)|^{p-2}dt.
\end{eqnarray}
For $t\in [0,1]$ it holds $|y+t(x-y)|\leq |x|+|y|$ and hence, if
$1<p<2$, then \eqref{disug basso 2} immediately follows. If instead $p\geq 2$, we need to prove that
\begin{equation}\label{disug basso integrale}
\int_0^1|y+t(x-y)|^{p-2}dt\geq C (|x|+|y|)^{p-2}.
\end{equation}
We recall that we are assuming $|y|\geq |x|$ and hence, if $|x-y|\leq \frac{|y|}{2}$, then for $0<t<1$ it holds
$$
|y+t(x-y)|\geq|y|-|x-y|\geq\frac{|y|}{2}\geq \frac{|x|+|y|}{4},
$$
from which \eqref{disug basso integrale} follows. If instead
$|x-y|>\frac{|y|}{2}>0$, we set $t_0=\frac{|y|}{|x-y|}$ so that
$t_0\in (0,2)$ and we have:
\begin{eqnarray}\label{disug basso intermedia}
|y+t(x-y)| &\geq& ||y|-t|x-y||=|t-t_0||x-y|\geq\\
\nonumber &\geq& |t_0-t|\frac{|y|}{2}\geq |t_0-t|\frac{|x|+|y|}{4}
\end{eqnarray}
Recalling that $p\geq 2$, we have $\int_0^1|t_0-t|^{p-2}dt\geq C$ and
hence \eqref{disug basso intermedia} implies \eqref{disug basso
integrale}.
\endproof
Thanks to the above lemma we can now deal with the following weak comparison principle.
\begin{lemma}\label{lemma principio massimo}
Let $u,v\in C^1(\overline\Om)$ satisfy:
\begin{equation}\label{eq per princ max}
\left\{
\begin{array}{ll}
-\hbox{div}\left(B'(H(\n u))\n H(\n u)\right)+g(u)\geq 0 & \hbox{ in } \Om\\
\\
-\hbox{div}\left(B'(H(\n v))\n H(\n v)\right)+g(v)=0 & \hbox{ in } \Om\\
\\
v\leq u & \hbox{ on } \partial\Om\,,
\end{array}
\right.
\end{equation}
where $g$ satisfies the assumption $(viii)$ given in Section \ref{intro}.
Then there holds:
$$
v\leq u \hbox{ in }\Omega.
$$
\end{lemma}
\proof By the weak formulation of \eqref{eq per princ max} we get:
\begin{equation}\label{diseq princ max}
\io \la B'(H(\n v))\n H(\n v)-B'(H(\n u))\n H(\n u) ,\n\psi\ra \leq
\io (-g(v)+g(u))\psi.
\end{equation}
Taking $\psi=(v-u)^+$ as test function in \eqref{diseq princ max},
using \eqref{disug basso 2} and recalling that $g$ is
non-decreasing, we infer:
\begin{equation}\label{diseq princ max intermedia}
\io \left(|\n u|+|\n v|\right)^{p-2}|\n (v-u)^+|^2\leq
\io(-g(v)+g(u))(v-u)^+\leq 0.
\end{equation}
If $p\geq 2$, it follows that $\n (v-u)^+=0$ a.e. in $\Om$, hence
$(v-u)^+$ is constant and, since $(v-u)^+=0$ on the boundary, we infer
that $(v-u)^+=0$. If $1<p<2$, since $u,v\in C^1(\overline\Om)$,
\eqref{diseq princ max intermedia} implies:
$$
\io|\n (v-u)^+|^2\leq 0
$$
and hence as above we get again the thesis.
\endproof
\begin{remark}\label{remark principio massimo}
If $u$ satisfies \eqref{eq forte}, since $f+g\geq 0$ by assumption $(viii)$, we infer that
$u$ also satisfies the first inequality in \eqref{eq per princ max}.
\end{remark}
At this point, in order to prove a Hopf Lemma at the boundary for any positive solution,
exactly as in the classical semilinear case, we shall need to construct a radial barrier from below defined in an annulus.
\begin{lemma}\label{esistenza radiale}
For $R>0$ and $\overline x\in\R^n$ we consider the annulus
$$
A_R(\overline x)=\bho_R(\overline
x)\backslash\overline{\bho_{\frac{R}{2}}(\overline x)}.
$$
Let $g$ satisfy the assumption $(viii)$ in Section \ref{intro}. Then
for every $m>0$ there exists a non-negative function $v\in
C^1(\overline {A_R})$ satisfying:
\begin{equation}\label{propr v}
\left\{
\begin{array}{ll}
-\Div\left(B'(H(\n v))\n H(\n v))\right)+g(v)=0 & \hbox{ in } A_R\\
v=0 & \hbox{ on } \partial\bho_R\\
v=m & \hbox{ on } \partial\bho_{\frac{R}{2}}\\
\frac{\partial v}{\partial \nu}>0 & \hbox{ on } \partial\bho_R\,,
\end{array}
\right.
\end{equation}
where $\nu$ denotes the inner unit normal to $\bho_R$.
\end{lemma}
\proof We look for radial solutions. The word ``\emph{radial}''
has to be understood in the Finsler framework: from now on we will say that $v$ is radial if there exists $\overline x$ such
that $v$ is constant on the Finsler spheres $\partial\bho_R(\overline x)$ for any $R>0$. In
the sequel for simplicity we assume $\overline x=0$ and we set
$A_R=A_R(0)$. If we assume that $v$ is radial (with $\overline
x=0$), then there exists $w:[0,+\infty)\to\R$ such that:
\begin{equation}\label{def vw}
v(x)=w(\ho(x)).
\end{equation}
Using \eqref{def vw} in \eqref{eq debole} (where we take test
functions of the form $\psi(x)=\vf(\ho(x))$ with $\vf:\R\to\R$), we
get:
\begin{eqnarray}\label{eq debole radiale 1}
\nonumber &&\ia B'\left(H(w'(\ho(x))\n\ho(x))\right)
\left\la \n H\left(w'(\ho(x))\n\ho(x)\right),\vf'(\ho(x))\n\ho(x)\right\ra\di x+\\
&&+\ia g(w(\ho(x)))\vf(\ho(x))\di x=0.
\end{eqnarray}
Using the positive $1$-homogeneity of $H$ (and hence the
$0$-homogeneity of $\n H$), we infer:
\begin{eqnarray}\label{eq debole radiale 2}
\nonumber &&\ia B'\left(|w'(\ho(x))|H(\n\ho(x))\right)\hbox{sign}\left(w'(\ho(x))\right) \vf'(\ho(x))
\left\la \n H\left(\n\ho(x)\right),\n\ho(x)\right\ra\di x+\\
&&+\ia g(w(\ho(x)))\vf(\ho(x))\di x=0.
\end{eqnarray}
Recalling that $\la\n H(\xi),\xi\ra=H(\xi)$ and using \eqref{propr finsler}, \eqref{eq debole radiale 2} becomes:
\begin{equation}\label{eq debole radiale 3}
\ia B'\left(|w'(\ho(x))|\right)\hbox{sign}\left(w'(\ho(x))\right)\vf'(\ho(x))\di x+ \ia
g(w(\ho(x)))\vf(\ho(x))\di x=0.
\end{equation}
We now consider on $A_R$ a system of Finsler polar coordinates in
the following sense. Since $\bho_1$ is convex, we can find a map
$s:U\subset\R^{n-1}\to\R^n$ such that $\overline{s(U)}=\partial\bho_1$.
Therefore for $\theta=(\theta_1,\cdots,\theta_{n-1})\in U$ we have
$\ho(s(\theta))=1$. For $\rho>0$ we have $\ho(\rho s(\theta))=\rho$
and hence $\rho s(\theta)$ belongs to $\partial\bho_\rho$. This allows us to
consider the transformation $S:\R^n\to\R^n$ defined as:
$$
S(\rho,\theta)=\rho s(\theta)=(\rho s^1(\theta),\cdots,\rho s^n(\theta))
$$
and the change of variables:
$$
x=S(\rho,\theta)
$$
with $\rho\in \left[\frac{R}{2},R\right]$ and $\theta\in U$. Denoting
by $DS$ the Jacobian matrix of $S$, we set
$J(\rho,\theta)=|\hbox{det}DS(\rho,\theta)|$ and we have that there
exists a suitable function $\Gamma(\theta)>0$ such that:
\begin{equation}\label{jacobiano}
J(\rho,\theta)=\rho^{n-1}\Gamma(\theta).
\end{equation}
Using this change of variables, \eqref{eq debole radiale 3} becomes:
\begin{eqnarray}\label{eq debole radiale 4}
&&\int_U\Gamma(\theta)\di \theta_1\cdots\di \theta_{n-1}
\int_{\frac{R}{2}}^R B'\left(|w'(\rho)|\right)\hbox{sign}\left(w'(\rho)\right)\vf'(\rho)\rho^{n-1}\di \rho+\\
\nonumber &+&
\int_U\Gamma(\theta)\di \theta_1\cdots\di \theta_{n-1}
\int_{\frac{R}{2}}^R g(w(\rho))\vf(\rho)\rho^{n-1}\di \rho=0
\end{eqnarray}
and hence:
\begin{equation}\label{eq debole radiale 5}
\int_{\frac{R}{2}}^R
B'\left(|w'(\rho)|\right)\hbox{sign}\left(w'(\rho)\right)\vf'(\rho)\rho^{n-1}\di \rho+
\int_{\frac{R}{2}}^R g(w(\rho))\vf(\rho)\rho^{n-1}\di \rho=0,
\end{equation}
which is the weak formulation of:
\begin{equation}\label{eq debole radiale 6}
-\left(B'\left(|w'(\rho)|\right)\hbox{sign}\left(w'(\rho)\right)\rho^{n-1}\right)'+
g(w(\rho))\rho^{n-1}=0.
\end{equation}
Since we are interested in Finsler radial solutions in the annulus
$A_R(\overline x)$, we can choose $\rho=\ho(x-\overline x)$ and, for
$\rho\in \left[0,\frac{R}{2}\right]$, we consider
$q(\rho):=(R-\rho)^{n-1}$. In this way $q$ satisfies the
assumptions required to apply Proposition 4.2.1 and Proposition 4.2.2 in \cite{PuSe},
which state that the problem:
\begin{equation}\label{eq debole radiale 7}
\left\{
\begin{array}{ll}
\left(\Phi\left(w'(\rho)\right)q(\rho)\right)' - g(w(\rho))q(\rho)=0 & \hbox{ in } (0,T)\\
w(0)=0,\quad w(T)=m>0,\quad w'(0)>0
\end{array}
\right.
\end{equation}
with $\Phi(t) = B'(|t|)\hbox{sign}(t)$, $q$ as defined above, $f=g$ and $T=R/2$ in our case, admits a $C^1$ solution satisfying $w'>0$.
\endproof
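The two-point problem \eqref{eq debole radiale 7} can also be explored numerically by a standard shooting method. The following Python sketch is not the argument of \cite{PuSe}: it assumes the model case $\Phi(t)=|t|^{p-2}t$ (that is, $B(t)=t^p/p$), the illustrative nonlinearity $g(w)=w$, and sample values of $p$, $n$, $R$ and $m$.
\begin{verbatim}
# Shooting sketch for (Phi(w') q)' - g(w) q = 0, w(0)=0, w(T)=m, w'(0)>0,
# in the model case Phi(t) = |t|^(p-2) t, q(rho) = (R-rho)^(n-1), g(w) = w.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

p, n, R, m = 3.0, 3, 1.0, 1.0
T = R / 2.0
q = lambda r: (R - r) ** (n - 1)
g = lambda w: w
phi_inv = lambda s: np.sign(s) * np.abs(s) ** (1.0 / (p - 1.0))

def rhs(r, y):
    # y[0] = w(r), y[1] = Phi(w'(r)) q(r) (the "flux" variable)
    w, z = y
    return [phi_inv(z / q(r)), g(w) * q(r)]

def endpoint(alpha):
    # integrate from w(0)=0, w'(0)=alpha and return w(T) - m
    z0 = np.sign(alpha) * np.abs(alpha) ** (p - 1.0) * q(0.0)
    sol = solve_ivp(rhs, (0.0, T), [0.0, z0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1] - m

alpha = brentq(endpoint, 1e-3, 10.0)  # bracket chosen for these sample values
print("w'(0) =", alpha)
\end{verbatim}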
We close this section with the following theorem, which states that
the Hopf Lemma also holds for our equation. Let us point out that this can be seen as an extension to the anisotropic setting of a classical Hopf Lemma for quasilinear equations by Vazquez \cite{Vaz}.
\begin{theorem}[Hopf Lemma]\label{hopf}
Let $u$ be a $C^1(\overline\Om)$ solution of \eqref{eq debole} satisfying $u>0$ in
$\Om$ and $u=0$ on $\partial\Om$. Let $y\in\partial\Om$ be such that
$\Om$ satisfies the interior sphere condition\footnote{ We remark
that the ``\emph{interior sphere condition}'' in the Euclidean sense
is equivalent to that in the Finsler geometry framework; that is,
if $\Om$ satisfies this condition with classical Euclidean balls,
then it satisfies the same condition when we consider the Finsler
balls $\bh$ or $\bho$. } at $y$ and let $\nu$ denote the inner unit
normal to $\partial\Om$ at $y$. Then there holds:
$$
\frac{\partial u}{\partial \nu}(y)>0.
$$
\end{theorem}
\proof Let $R>0$ be small enough such that there exists $\overline
x\in\Om$ such that $\bho_R(\overline x)\subset\Om$ and
$y\in\partial\bho_R(\overline x)$. Let $v$ be the function given by
Lemma \ref{esistenza radiale}, which clearly satisfies the second
equality in \eqref{eq per princ max}.
Then by Lemma \ref{lemma principio massimo} and Remark \ref{remark principio massimo}
we have $u\geq v$ in $A_R(\overline x)$ and hence
$\displaystyle\frac{\partial u}{\partial
\nu}(y)\geq\displaystyle\frac{\partial v}{\partial \nu}(y)>0$.
\endproof
\section{Global regularity estimates}\label{globalreg}
In this section, thanks to the Hopf Lemma we just proved, we show how the local Hessian regularity result of Section~\ref{localreg} extends to the whole of $\Omega$.
\begin{proposition}[Global Hessian estimate]\label{stima hessiano globale}
Let $u\in C^{1}(\overline{\Om})$ be a solution to \eqref{eq debole}.
Then for $\beta\in [0,1)$ and $\gamma< n-2$ ($\gamma=0$ if $n=2$),
it holds
\begin{equation}\label{eq stima hessiano globale componente}
\sup\limits_{y\in \Omega}\, \io\frac{(k+|\n u|)^{p-2-\beta}|u_{ij}|^2}{|x-y|^\gamma}\di x \leq C
\end{equation}
and
\begin{equation}\label{eq stima hessiano globale}
\sup\limits_{y\in \Omega}\, \io\frac{(k+|\n u|)^{p-2-\beta}|D^2 u|^2}{|x-y|^\gamma}\di x \leq C\,,
\end{equation}
where $C= C(\beta,\gamma,p,n,\|u\|_{W^{1,\infty}},f)$ and $k\geq 0$ is given in Section \ref{intro}.
\end{proposition}
\proof
Recall that, since $\Om$ is smooth, the boundary $\partial \Om$ satisfies the
interior sphere condition at each point. By Theorem \ref{hopf} we then have that $\n u$ does not vanish on $\partial \Om$.
Therefore, by compactness of $\overline{\Om}$ and applying Theorem \ref{stima hessiano locale}, the conclusion follows.
\endproof
Arguing as in the proof of Theorem \ref{stima hessiano locale},
using once again Hopf's Lemma to ensure that $\n u$ does not vanish on the boundary of $\Omega$,
we can also extend the local result in Proposition \ref{inverso peso locale} to the following global result.
\begin{proposition}[Global estimate of the weight]\label{inverso peso globale}
Let $u\in C^{1}(\overline{\Om})$ be a solution to \eqref{eq debole}.
Fix $t\in [0,p-1)$ and $\gamma<n-2$ ($\gamma=0$ if $n=2$).
Then there exists $C>0$ such that
\begin{equation}\label{stima peso locale}
\sup\limits_{y\in \Omega}\,\, \int_{\Omega}\frac{\di x }{(k+|\n u|)^t |x-y|^\gamma}\leq C\,,
\end{equation}
where $C=C(\Omega,t,\gamma,n,p,\|u\|_{W^{1,\infty}}, f)$. Moreover, $|Z| = 0$, where $Z$ is the critical set of $u$.
\end{proposition}
We finish this section with a discussion about Sobolev regularity for our positive solutions, as a corollary of the above global results.
\begin{theorem}\label{teo reg}
Let $u$ be a $C^{1}(\overline\Omega)$ solution
to \eqref{eq debole}.
There holds:
\begin{enumerate}
\item if $p\in (1,3)$, then $u\in W^{2,2}(\Om)$;
\item if $p\in [3,+\infty)$, then $u\in W^{2,q}(\Om)$ with $q<\frac{p-1}{p-2}$.
\end{enumerate}
\end{theorem}
\proof
For any $p>1$, taking $\gamma=0$ in \eqref{eq stima hessiano globale} and \eqref{stima peso locale} respectively, we immediately get:
\begin{equation}\label{eq reg 01}
\io(k+|\n u|)^{p-2-\beta}|D^2 u|^2\di x \leq C\,,
\end{equation}
as well as
\begin{equation}\label{eq reg 02}
\int_{\Omega}\frac{1}{(k+|\n u|)^t} \di x \leq C\,,
\end{equation}
for any $t < p-1$.
Now, for $1<p<3$, choosing $\beta < 1$ such that $p-2-\beta < 0$ and
recalling that $\n u$ is bounded, from \eqref{eq reg 01} we immediately get:
\begin{equation*}
\int_\Om |u_{ij}|^2 \di x \leq \sup_\Om (k+|\n u|)^{\beta+2 - p} \int_\Om (k+|\n u|)^{p-2-\beta} |u_{ij}|^2 \di x \leq C,
\end{equation*}
which proves statement $(1)$. On the other hand, for $p \geq 3$, by \eqref{eq reg 01} and \eqref{eq reg 02} we have:
\begin{equation}\label{eq reg 1}
\begin{split}
&\int_\Om |u_{ij}|^q \di x =
\int_\Om |u_{ij}|^q (k+|\n u|)^{(p-2-\beta)\frac{q}{2}}\cdot\frac{1}{(k+|\n u|)^{(p-2-\beta)\frac{q}{2}}} \di x\\
&\leq
\left(\int_\Om|u_{ij}|^2(k+|\n u|)^{p-2-\beta} \di x \right)^{\frac{q}{2}}
\left(\int_\Om \frac{1}{(k+|\n u|)^{(p-2-\beta)\frac{q}{2-q}}} \di x\right)^{\frac{2-q}{2}} \leq C,
\end{split}
\end{equation}
where we used H\"older's inequality with exponents
$\frac{2}{q}$ and $\frac{2}{2-q}$. Notice that, in order to apply \eqref{eq reg 02}, we need $(p-2-\beta) > 0$, which is true for $p \geq 3$, as well as
$(p-2-\beta)\frac{q}{2-q}<p-1$, and this is ensured by taking
$q<\frac{p-1}{p-2}$ and recalling that $\beta<1$. This proves statement $(2)$.
Finally, recalling that $|Z|=0$ and closely following the steps
in the proof of Proposition 2.2 in \cite{DS1}, we infer
that the generalized derivatives of $u_i$ coincide with the classical ones
almost everywhere in $\Om$, and the conclusion follows.
\endproof
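As a sanity check of the exponent bookkeeping in the proof above, observe that $(p-2-\beta)\frac{q}{2-q}<p-1$ is equivalent to $q<\frac{2(p-1)}{2p-3-\beta}$, and that this bound tends to $\frac{p-1}{p-2}$ as $\beta\to1^{-}$; the following symbolic computation (a Python/{\sc sympy} sketch) verifies this elementary algebra.
\begin{verbatim}
# Check: (p-2-beta) q/(2-q) < p-1  <=>  q < 2(p-1)/(2p-3-beta),
# and the bound tends to (p-1)/(p-2) as beta -> 1.
import sympy as sp

p, q, beta = sp.symbols('p q beta', positive=True)

bound = 2 * (p - 1) / (2 * p - 3 - beta)
lhs = (p - 2 - beta) * q / (2 - q)

assert sp.simplify(lhs.subs(q, bound) - (p - 1)) == 0
assert sp.simplify(sp.limit(bound, beta, 1) - (p - 1) / (p - 2)) == 0
\end{verbatim}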
\begin{remark}
In case we cannot apply Hopf's Lemma, the same regularity
results stated in Theorem \ref{teo reg} can be locally obtained
using the local estimates \eqref{eq stima hessiano locale componente},
\eqref{eq stima hessiano locale} and \eqref{stima peso locale}.
\end{remark}
\section{Introduction}
\label{parintroduction}
Type Ib supernovae (SNe) are observationally defined by the absence of
hydrogen (H) and presence of helium (He) in their spectra at early
phases. They are thought to come from massive stars (M $\ge$ 8
M$_{\odot}$) which have lost their H envelope, but retained part of
their He envelope before exploding. SNe Ic show neither H nor He
features in their early spectra, and arise from massive stars which
have also lost most of their He envelope. A group of SNe that show H
features at early phases, but do not follow the typical light curve
seen for the H-rich Type II SNe, and instead evolve photometrically
and spectroscopically into SNe Ib are known as Type IIb SNe. These
are also thought to come from massive star progenitors which have lost
most (but not all) of their H envelope before exploding. SNe IIb, Ib
and Ic are often collectively termed ``stripped-envelope SNe''
\citep{clocchiatti97}.
We emphasize that the apparent lack of H and/or He lines in an early
spectrum does not preclude the presence of these elements in the
ejecta. \cite{branch02} analysed a large sample of SN Ib spectra, and
suggested that H is often present also in SNe Ib, though this element
is very difficult to detect after maximum. They also found that He is
often present outside the photosphere (detached), in particular at
late phases. Similar studies of SNe Ic \citep{branch06,elmhamdi06}
suggest that traces of H and He may also be present in some SNe Ic.
The presence of He and/or H in some stripped-envelope SNe could be an
indication that the progenitor has been stripped continuously (which
can leave thin layers of H or He on the progenitor) as opposed to
strong episodic mass loss, which would likely completely strip the
progenitor. Furthermore, the small observational differences between
He rich (Ib) and He poor (Ic) SNe may suggest a similar origin, with
progenitors of SNe Ic simply being more strongly stripped than those
of SNe Ib.
An important open issue on SNe Ib/c is whether they come from
relatively high mass Wolf-Rayet (WR) stars (M $> 20-25$ M$_{\odot}$),
that have been stripped of H and part of their He envelope by
radiatively-driven winds, or from lower mass progenitors (M $>$ 11
M$_{\odot}$) which have been stripped of their envelope by a binary
companion \citep{Podsiadlowski92}.
Several statistical studies have been performed on the environments
of stripped-envelope SNe in order to characterise the Ib and Ic
progenitor populations. Many have focused on metallicity as a key
parameter driving mass loss in single stars and hence determining the
relative numbers of Types II, Ib and Ic SNe. Metallicity is also
important in binary models \citep{Eldridge08}, and influences the
number of WR stars produced as well as the predicted relative rates of Type Ibc SNe.
Rotation rates are also thought to be dependent on initial stellar
metallicity \citep{Georgy09}.
Initial studies of the global properties of host galaxies suggested
that the number of SNe Ic relative to the number of
SNe Ib increases with metallicity \citep{prieto08,Boissier09,Anderson09}.
However more recent measurements of the nebular
oxygen abundances at the sites of SNe Ib and Ic now
suggest there is little difference between the samples
\citep{Modjaz11,Anderson10,Leloudas11}.
From these results there is no unambiguous evidence that
Ibc SNe are produced in more metal-rich regions than Type II SNe, or
that Ib and Ic SNe originate in different metallicity regimes.
The relatively high frequency of SNe Ib/c (30 per cent of all core-collapse SNe by volume)
and the lack of progenitor detections \citep{Crockett09,smartt09b}
supports the idea that at least some
of the SNe Ib/c progenitors are massive stars in binary systems
\citep{Podsiadlowski92,fryer07,Eldridge08}. On the other hand, the
extreme kinetic energies inferred for some stripped-envelope SNe,
namely those associated with GRBs, are indicative of a high-mass
progenitor which does not necessarily require binarity to lose its envelope.
Excluding these extreme cases, for most stripped-envelope SNe
discriminating between the progenitor scenarios is more difficult. In
principle one may argue that SNe which eject more material probably
come from more massive stars, while SNe with low mass ejecta may come
from less massive progenitors in binary systems. On the other hand, if
fall-back occurs (where a portion of the SN ejecta falls back on to
the newly born compact remnant at the centre) even very massive stars
may eject only a small amount of material. A detailed analysis of the
elemental abundances in the ejecta is then a useful diagnostic. For
example, oxygen and carbon are expected to be more abundant in ejecta
of a SN from a massive progenitor, due to the larger progenitor core
mass. However, from an observational point of view, the number of
stripped-envelope SNe with detailed ejecta characterization is still
small. Recent discoveries have also pointed out the largely unexplored
diversity in He-rich SNe, for example the wide range of ejected mass
and energy as typified by SN 2007Y \citep{stritzinger09} and SN 2008D
\citep{mazzali08}, respectively, or the case of SN 1999dn, for years
considered a prototype of SNe Ib, but recently proposed to be a highly
energetic SN from a massive progenitor \citep{benetti11}.
In this framework, every new nearby SN Ib discovered represents an
opportunity to increase our understanding of these events. SN 2009jf,
a nearby SN which was discovered early after the explosion, is a
perfect target to enlarge the sample of well-studied SNe Ib.
In this paper we present the full set of data we collected for SN
2009jf, from the ultraviolet (UV) to optical and near infrared
(NIR). The structure of the paper is as follows: in the next section
we describe the set of data collected and its calibration. In
Section~\ref{parphot} we present the photometric data and in
Sections~\ref{parspe} and \ref{spenebular} the optical and NIR spectra
of SN 2009jf. In Section~\ref{sec:host}, we discuss the properties
of the host galaxy NGC 7479 (see Fig. \ref{figseqstar}) and provide an
analysis of the progenitor using archival pre-explosion data. In
Section~\ref{parametribolo}, we discuss the main physical parameters
of the progenitor of SN 2009jf at the moment of the explosion through
an analysis of the bolometric light curve. In the final section, we
summarise our results and discuss our conclusions. We note that
\cite{Sahu11} have also presented a spectro-photometric study of SN
2009jf in the optical, with which we will compare our results.
\section{Discovery and follow-up}
\label{parobs}
\begin{figure}
\includegraphics[width=8.5cm,height=8.5cm]{valenti_fig1_col.eps}
\caption{The field of view ($9'\times9'$) of SN 2009jf. This $R$-band Calar
Alto image (18 November 2009) was taken 34 days after the $B$-band maximum.
The local sequence stars have been numbered to calibrate the photometry
(magnitudes reported in Tables \ref{tabseqstar1} and \ref{tabseqstar2}).}
\label{figseqstar}
\end{figure}
SN 2009jf was discovered \citep{li2009} on 2009 September 27.33 (UT
dates are used throughout this paper) with the Katzman Automatic
Imaging Telescope (KAIT) during the Lick Observatory Supernova Search
\citep{filippenko01}. The supernova is located at coordinates
$\alpha~$ = $23^h 04^m 52^s.98$ and $\delta$ =$ +12\degr 19' 59''.5$
(equinox J2000), which is $53''.8$ W and $36''.5$ N of the centre of
the host galaxy NGC 7479. The host is a barred spiral galaxy, with an
intriguing jet-like radio continuum feature. The alignment of this
jet, which is in the opposite orientation to the optical arms, has
been suggested to be consistent with NGC 7479 having recently
undergone a minor merger \citep{laine08}. SN 2009jf was not visible in
a KAIT unfiltered image taken 4 days before discovery (September
23.32) \citep[$>$19.2 mag,][]{li2009} and was classified on September
29.1 as a young Type Ib SN similar to SN 1999ex
\citep{Kasliwal09,sahu09}. \cite{Itagaki09} reported the detection of
a source close to the position of the SN in several images obtained over the
past few decades. A rough estimate of the absolute magnitude of the
source in the pre-discovery images ($-14.5$ mag) led \cite{Itagaki09}
to initially suggest a Luminous Blue Variable (LBV) as the progenitor
of SN 2009jf. However, we have undertaken a more thorough analysis of
the archival images, and the source is more likely a cluster close to
the position where the SN occurred (see Section~\ref{sec:host}). SN
1990U, which was a SN Type Ic, also exploded in this galaxy
\citep[]{Pennypacker90,filippenko90b}.
Being discovered well before maximum, and in a nearby host galaxy, SN
2009jf was targeted for an intensive spectro-photometric
follow-up campaign by the European Large Programme (ELP) SN
Collaboration\footnote{http://graspa.oapd.inaf.it/index.php$?$option=com\_content\&view=article\&id=68\&Itemid=93},
together with the Millenium Center for Supernova Science (MCSS).
Our photometric and spectroscopic monitoring campaign for SN 2009jf
began on 2009 October 1st, just 7 days after explosion (see
Section~\ref{parphot}). We observed the SN every $\sim2-3$ days in
Sloan and Bessell filters, with slightly more relaxed coverage (one
observation every $\sim4-5$ days) in the NIR bands. From the
beginning of December, $\sim$2.5 months after explosion, the SN was no
longer visible from the Southern Hemisphere. From then on, it was
observed from the Northern Hemisphere with a more relaxed cadence (one
observation every week) until it disappeared behind the sun at
$\sim$105 days after explosion. The SN was recovered as soon as it was
visible again in June 2010 with observations that extended until
October to cover the nebular phase.
We used several of the facilities available to the ELP collaboration,
and also the five PROMPT\footnote{Panchromatic Robotic Optical
Monitoring and Polarimetry Telescopes.} \citep{Reichart05}
telescopes used by the MCSS project. The \emph{Swift} telescope also
observed SN 2009jf at UV wavelengths, and the publicly available data
from this has been included in our analysis. However, due to the
strong contamination from the close-by cluster the \emph{Swift} $uvm2$
and $uvw2$ filter data are not usable (see Appendix~\ref{ap0}) and
thus not reported.
NGC 7479 is one of the most beautiful nearby face-on galaxies, and a popular
target for amateur astronomers. Some of the images obtained by
amateurs have been useful in constraining the explosion epoch, and
these have been added to our dataset.
In particular, we obtained images of NGC 7479 taken on September 23,
24, 26 and 27, providing excellent coverage close to the explosion
epoch\footnote{http://eder.csillagaszat.hu/deepsky/350D/sn2009jf/sn2009jf$\_$eder$\_$en.htm}.
The $UBVRI$ data ranging from $\sim$1 to $\sim$380 days after
explosion are reported in Table~\ref{tablandolt}, while the $ugriz$
data are reported in Table~\ref{tabsloan}. All data calibrated to the
Landolt system are in the Vega system, while the data calibrated to
Sloan are in the AB system \citep{smith02}.
Spectroscopic monitoring started on 2009 October 1st, 7 days after
explosion and continued during the photospheric phase until the
beginning of the seasonal gap at $\sim$105 d after explosion. More
spectra were collected in the nebular phase, when the SN became
visible again. In total we collected 20 optical and 4 infrared
spectra of SN 2009jf (see Section \ref{parspe}). Details of our data
reduction methods are reported in Appendix \ref{ap0}.
\subsection{Archival observations}
To search for a progenitor in pre-explosion data \citep[see ][ for a
review]{smartt09b}, we queried all suitable publicly available image
archives of which we are aware.
The most useful images for constraining the pre-explosion environment
and progenitor of SN 2009jf are from the Wide-Field and Planetary
Camera 2 (WFPC2) on-board the {\it Hubble Space Telescope (HST)}. The
site of SN 2009jf was observed for a total of 2000 s in the $F569W$
filter and 1800 s in the $F814W$ filter on 1995 October 16.
Pre-explosion data were also available from many ground-based observatories,
but these images, which have a typical seeing of $\sim$1
arcsec, are not of sufficient quality to resolve the complex region
where the SN exploded (see Section \ref{parprogenitor}). The same is
true for the {\it Spitzer} IRAC and {\it XMM-Newton} OM images.
However, several of these deep pre-explosion ground based images were
used as templates for application of the template subtraction
technique for our photometry (see Appendix~\ref{ap0}).
In particular we used the following images: $U$, $B$ and
$I$ images from William Herschel Telescope (WHT) (1990 August 17, 17
and 18 respectively), a $V$-band image from the ESO New Technology
Telescope (NTT) (1992 December 2), and an $R$-band image from the
Nordic Optical Telescope (NOT) (2009 September 13).
To identify a potential progenitor in {\it HST} data we obtained deep images
of SN 2009jf and its environs on 2009 October 24 and 25 with the
ESO-VLT (Very Large Telescope) and NaCo (Nasmyth Adaptive Optics
System and Near-Infrared Imager and Spectrograph). We used the $K_\mathrm{S}$
filter with the S54 camera\footnote{pixel scale of 0.054\arcsec over a
56\arcsec $\times$ 56\arcsec\ field of view.}. A late time
observation was obtained on 2010 September 16, but the SN had faded in
the NIR to the point where it was not significantly brighter than the
nearby cluster, and so this data was of no use in locating the
progenitor.
\section{Photometry}
\label{parphot}
To constrain the explosion epoch we used the pre-discovery images of
NGC 7479 taken by I. Eder on September 23rd--27th. The
first observation on September 23rd was before the first non-detection
of SN 2009jf at the KAIT telescope \citep{li2009}. Using these images
taken on September 23 as a template, we performed image subtraction on
the images taken over the following days (24th, 26th and 27th) with
the same telescope. The SN is marginally detected in the $B$, $V$ and $R$ bands
on September 26th, while nothing is detected
in the same bands on September 24th (see Fig.~\ref{figexp}). The
detections on September 26th and the non-detections on September 24th,
making the reasonable assumption that the object was rapidly
brightening before maximum, constrain the explosion epoch to
September 25th (JD=2455099.5) $\pm 1$ day, which is one of the
best constrained explosion epochs for a stripped-envelope SN
(not associated with a GRB).
\begin{figure*}
\includegraphics[width=16cm,height=8cm]{valenti_fig2.eps}
\caption{$BVR$ images of SN 2009jf obtained on 24th and 26th September
2009 (upper panels) together with difference images obtained by subtracting
images in the same filters obtained on 23rd September 2009 (lower panels).
The marginal detections on September 26th and the non-detections
on September 24th constrain the explosion epoch to 2009
September 25th (JD=2455099.5) $\pm 1$ which is one of
the best constrained explosion epochs for a stripped-envelope SN.
North is up and East to the left.}
\label{figexp}
\end{figure*}
\begin{figure*}
\includegraphics[width=17cm,height=9cm]{valenti_fig3.eps}
\caption{The light curves of SN 2009jf in the $uvw1-UuBgVrRiIzJHK$ bands. In the
left panel we show the data from explosion to two months after $B$ maximum
(JD=$2455119.4\pm1.0$, 2009 Oct. 14). In the right panel, we show the full light
curves in all bands. The data of \protect\cite{Sahu11} are overplotted with
smaller symbols for comparison.}
\label{figLC}
\end{figure*}
All the collected photometric data are shown in Fig.~\ref{figLC}.
The data of \cite{Sahu11} are also overplotted for comparison.
As shown in Fig.~\ref{figLC} our photometry is consistent with that of
\cite{Sahu11} at early phases, but at later epochs their photometry
appears to systematically overestimate the SN flux as compared to our
template-subtraction photometry (see Appendix~\ref{ap0}).
SN 2009jf is a clear case where proper template subtraction has to be
used to avoid the contamination
from the bright background. There is also a shift between our $U$-band
photometry and that of \cite{Sahu11}, with our photometry being
fainter by $\sim0.2$ mag. At late phases the differences are even
larger, again probably due to the overestimation of the SN magnitude
when using PSF-fitting techniques as opposed to template subtraction
photometry.
SN 2009jf reached its maximum luminosity
on 2009 October 14th at $15.56 \pm 0.02$ mag in the $B$-band. It
peaked 2.4 days earlier in the $U$ band and 2.1, 4.6, and 5.3 days
later in the $V$, $R$, and $I$ bands respectively
(Table~\ref{tabparam}). The rise times to maximum were $\sim$17.5,
19.9, 22.0, 24.5 and 25.2 days respectively in the $UBVRI$ bands. This
is one of the longest rise times ever observed for a classical SN
Ib/c. Using a distance modulus of 32.65 $\pm$ 0.1 mag and a Galactic
reddening of $E(B-V)$=0.112 mag \citep[][]{schlegel98} and internal
$E(B-V)$=0.05 mag (see Section~\ref{sec:host}), SN 2009jf reached a
maximum absolute magnitude in the $R$-band of $-18.24$ mag, brighter
than SNe 2008D \citep[$-17.26$ mag,][]{mazzali08} and 1999ex
\citep[$-17.78$ mag,][]{stritzinger02} and comparable with the massive
Type Ic core-collapse SN 2004aw \citep[$-18.22$
mag,][]{taubenberger06}. After maximum the light curves of SN
2009jf are not very different from those of other SNe Ib/c, although
with a slower than normally observed decline both soon after maximum
and after the inflection point at $\sim$30 days past maximum. The
resulting light curve peak for SN 2009jf is broad, suggesting a
massive ejecta and/or a small expansion velocity which keeps the
ejecta optically thick for a long time.
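As a back-of-the-envelope check of the quoted peak absolute magnitude, the following sketch combines the numbers above; the total-to-selective extinction ratio $R_R\approx2.6$ is our assumed, approximate $R$-band value, while all other inputs are taken from the text.
\begin{verbatim}
# Check of the peak absolute R magnitude quoted above. The extinction
# coefficient R_R ~ 2.6 is an assumed, approximate R-band value.
m_R = 14.83           # apparent R magnitude at maximum
mu = 32.65            # adopted distance modulus
ebv = 0.112 + 0.05    # Galactic + host reddening E(B-V)
R_R = 2.6             # assumed R-band total-to-selective extinction ratio

M_R = m_R - mu - R_R * ebv
print("M_R = %.2f mag" % M_R)   # ~ -18.24 mag, as quoted above
\end{verbatim}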
The late-time luminosity decline rates were computed via weighted linear
least-squares fits to the observations
and are reported in Table~\ref{tabparam} together with other key
parameters measured from the light curves.
The late-time slopes are in all bands steeper than those
expected if the energy source is $^{56}$Co~$\rightarrow$~$^{56}$Fe decay with
complete trapping of the $\gamma$-rays [0.98 mag
(100~d)$^{-1}$], as is normally observed in stripped-envelope
core-collapse supernovae.
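A minimal sketch of the weighted linear least-squares fit used for these decline rates is given below; the data points are placeholders, not actual measurements of SN 2009jf.
\begin{verbatim}
# Weighted linear least-squares fit of the late-time decline rate.
# The data points below are placeholders, not SN 2009jf measurements.
import numpy as np

phase = np.array([60.0, 70.0, 80.0, 90.0, 100.0])  # days past maximum
mag = np.array([16.60, 16.76, 16.92, 17.08, 17.24])
err = np.array([0.03, 0.03, 0.04, 0.04, 0.05])

# with w = 1/sigma, polyfit minimises sum((y - y_fit)^2 / sigma^2)
slope, intercept = np.polyfit(phase, mag, 1, w=1.0 / err)
print("decline rate: %.4f mag/d = %.2f mag/(100 d)" % (slope, 100 * slope))
# complete trapping of 56Co -> 56Fe decay would give 0.98 mag/(100 d)
\end{verbatim}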
The colour evolution of SN 2009jf (see Fig.~\ref{figcol}) resembles
that of other stripped-envelope SNe. The SN becomes redder with time
as the ejecta expands and the temperature decreases. The $B-V$ colour
is 0.3 mag at +15 d from explosion and reaches a maximum $\sim$ +40 d
after explosion ($B-V$ $\sim$ 1 mag). In the first 2 weeks after
explosion some stripped-envelope SNe display a bluer $B-V$ colour
($B-V$ $\sim 0$ mag)
(e.g. SNe 1993J, 2008D). Other stripped-envelope SNe (e.g. SNe 1999ex,
2008ax), are blue immediately after explosion, show a $B-V$ colour
$\sim$ 0.5-1 mag at one week and then become blue again ($B-V$ $\sim$
0.3 mag) at two weeks after explosion. SN 2009jf shows a behaviour
similar to this second group of objects, but less extreme with $B-V$
$\sim 0$ mag in the first two photometric points, $B-V$ $\sim 0.5$ mag
at one week, and $B-V$ $\sim 0.3$ mag two weeks from explosion. The
blue colour of some SNe in the first days after explosion has been
interpreted as an evidence for shock break-out
\citep{stritzinger02,Chevalier10}. With the caveat of the large
uncertainty in these first measurements, this may also be the case for
SN 2009jf.
\begin{figure}
\includegraphics[width=8.5cm,height=7cm]{valenti_fig4.eps}
\caption{The $B-V$ and $V-R$ colour curves for a sample of stripped-envelope core-collapse supernovae.}
\label{figcol}
\end{figure}
\begin{table*}
\centering
\begin{minipage}{180mm}
\caption{Main parameters for light curves of SN 2009jf. The $UBVRI$ magnitudes are in the Vega system, the $ugriz$ magnitudes in the AB system.}
\label{tabparam}
\begin{tabular}{cccccc}
\hline
Parameter & $U$ & $B$ & $V$ & $R$ & $I$ \\
\hline
Date of max (JD $-$2,400,000) & $55117.0 \pm 2.0$ &$55119.4\pm1.0$ &$55121.5\pm1.0$ &$55124.0\pm1.0$& $55124.7\pm1.0$\\
Apparent magnitude at max & $15.70\pm 0.05$ &$15.56\pm0.02$ &$15.03 \pm 0.01$&$14.83\pm0.01$ & $14.59 \pm 0.01$\\
Absolute magnitude at max & $-17.73\pm 0.26$ &$-17.75\pm0.22$ &$-18.12\pm 0.18$&$-18.24\pm0.15$& $-18.37\pm 0.14$\\
Late-time decline $\gamma$ (mag d$^{-1}$)& $-$ & $0.0088\pm0.0014$ &$0.0136\pm 0.0007$& $0.0161\pm0.0002$ &$0.0171\pm0.0011$\\
Phase range & $-$ & 55-105 & 63-105 & 60-105 & 58-105 \\
\hline
\end{tabular}
\begin{tabular}{cccccc}
\hline
Parameter & $u$ & $g$ & $r$ & $i$ & $z$ \\
\hline
Date of max (JD $-$2,400,000) & $55117.0 \pm 2.0$ & $55120.6\pm 1.0 $ & $55123.7\pm1.0$ & $55125.0 \pm 1.0$&$ 55125.5 \pm 1.0$ \\
Apparent magnitude at max & $16.56\pm 0.04$ & $15.29\pm 0.01 $ & $14.96\pm0.02$ & $14.97 \pm 0.02$ &$ 14.95 \pm 0.03$ \\
Absolute magnitude at max & $-16.88\pm0.26$ & $-17.97\pm 0.21 $ & $-18.13\pm0.17$ & $-18.02\pm 0.14$ &$-17.94 \pm 0.12$ \\
Late-time decline $\gamma$ (mag d$^{-1}$)&$-$ & $0.015\pm0.002$ & $0.020\pm0.003$&$0.021\pm0.002$&$0.016\pm0.001$ \\
Phase Range & $-$ & 49-73 & 55-73 & 55-73 & 48-73 \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
\section{Photospheric spectra}
\label{parspe}
\begin{table*}
\centering
\begin{minipage}{140mm}
\caption{Journal of spectroscopic observations}
\label{tabseqspec}
\begin{tabular}{@{}cccccc@{}}
\hline
Date & JD & Phase
\footnote{Relative to $B$-band maximum light (JD = 2,455,119.4).}
& Range & Resolution FWHM
\footnote{FWHM of night-sky emission lines.}
& Equipment \footnote{
$~$A1.82 $=$ Asiago Ekar 1.82~m telescope;
$~$ NOT $=$ Nordic Optical Telescope;
$~$ TNG $=$ Telescopio Nazionale Galileo;
$~$ NTT $=$ ESO New Technology Telescope;
$~$ CA $=$ Calar Alto 2.2m Telescope;
$~$ WHT $=$ William Herschel Telescope;
$~$ VLT $=$ESO Very Large Telescope}
\\
& 2,400,000 & (days) & (\AA) & (\AA) & \\
\hline
2009 Oct 02 & 55106.62 & $-13$ & 3200-10100 & 10 & TNG+DOLORES+LRB/LRR \\
2009 Oct 04 & 55108.63 & $-11$ & 3600-9300 & 10 & VLT+FORS2+300V/300I \\
2009 Oct 06 & 55111.40 & $-8$ & 3200-8750 & 10 & CA+CAFOS+b200 \\
2009 Oct 07 & 55112.37 & $-7$ & 6150-10100 & 10 & CA+CAFOS+r200 \\
2009 Oct 12 & 55116.57 & $-3$ & 3200-10100 & 10 & TNG+DOLORES+LRB/LRR \\
2009 Oct 13 & 55118.46 & $-1$ & 3200-9200 & 15 & NOT+ALFOSC+gr4 \\
2009 Oct 16 & 55121.48 & $+2$ & 3550-7750 & 23 & A1.82+AFOSC+gr4 \\
2009 Oct 18 & 55123.42 & $+4$ & 3550-10000 & 23/35 & A1.82+AFOSC+gr4/gr2 \\
2009 Oct 19 & 55124.61 & $+5$ & 3400-9650 & 14 & NTT+EFOSC+gr11/16 \\
2009 Oct 27 & 55132.60 & $+13$ & 3600-9300 & 10 & VLT+FORS2+300V/300I \\
2009 Nov 04 & 55140.39 & $+21$ & 3200-9200 & 15 & NOT+ALFOSC+gr4 \\
2009 Nov 13 & 55149.39 & $+30$ & 3200-9200 & 17 & NOT+ALFOSC+gr4 \\
2009 Nov 19 & 55155.35 & $+36$ & 3800-10000 & 12 & CA+CAFOS+g200 \\
2009 Dec 05 & 55171.34 & $+52$ & 3200-9200 & 15 & NOT+ALFOSC+gr4 \\
2009 Dec 27 & 55193.35 & $+74$ & 3800-10000 & 12 & CA+CAFOS+g200 \\
2010 Jan 07 & 55149.39 & $+85$ & 3200-9200 & 15 & NOT+ALFOSC+gr4 \\
2010 Jun 19 & 55366.88 & $+247$ & 3200-9200 & 18 & NTT+EFOSC+gr11/16 \\
2010 Jul 08 & 55385.66 & $+266$ & 3200-9200 & 3 & WHT+ISIS+R300B/R316R \\
2010 Oct 04 & 55473.57 & $+354$ & 3600-9200 & 25 & NTT+EFOSC+gr13 \\
2010 Oct 11 & 55480.51 & $+361$ & 3600-9300 & 10 & VLT+FORS2+300V/300I \\
\hline
2009 Oct 04 & 55109.51 & $-10$ & 8700-24700 & 18/35 & TNG+NICS+IJ/HK \\
2009 Oct 21 & 55125.58 & $+6$ & 9400-24000 & 23/33 & NTT+SOFI+GB/GR \\
2009 Nov 23 & 55158.58 & $+39$ & 8700-24700 & 18/36 & TNG+NICS+IJ/HK \\
2009 Dec 05 & 55171.36 & $+52$ & 9300-24700 & 23/33 & NTT+SOFI+GB/GR \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
A subset of the optical photospheric spectra of SN 2009jf is shown in
Fig.~\ref{fig:specevol}; the instrumental configurations used at each
epoch are listed in Table~\ref{tabseqspec}, while details of data
reduction are in Appendix~\ref{ap0}. The four NIR spectra
are shown in Fig.~\ref{fig:speinfrared} together with spectra of other
stripped-envelope SNe. The first spectrum of SN 2009jf was observed 13
days before maximum (one week after explosion). At early phases, SN
2009jf shows a blue continuum with the typical features of a
stripped-envelope SN already visible, namely Fe\,{\sc ii}{}
$\lambda\lambda${}4924,5018,5169, Ca\,{\sc ii}{} $\lambda\lambda${}8498,8542,8662, Ca\,{\sc ii}{}
$\lambda\lambda${}3934,3968 and O\,{\sc i}{} $\lambda${}7770. The lines are partially blended,
but not as much as for broad-lined Ic (BLIc) SNe or the energetic Ib
SN 2008D \citep{mazzali08}.
With time, the temperature decreases, the spectra of SN 2009jf become
redder and lines which form deeper in the ejecta (at lower velocity)
become more prominent. The identification of the features at $\sim$
5800 \AA{} and $\sim$ 6200 \AA{} is more complicated as these regions
are usually densely populated with lines from several different
elements such as Na, He, H, Si, C and Ne.
\begin{figure}
\includegraphics[width=8.5cm,height=9.5cm]{valenti_fig5.eps}
\caption{A subset of the collected photospheric spectra of SN 2009jf. The spectra are in the observer frame.}
\label{fig:specevol}
\end{figure}
The possible presence of H and He is very important to understand the
progenitor star evolution. In more massive progenitors (M $>$ 25-30 M$_{\odot}${})
the ratio between He (or H) and the other elements of the ejecta should
be smaller than in less massive progenitors (11 M$_{\odot}${} $<$ M $<$ 25 M$_{\odot}${} ).
This will be discussed in the following section.
\begin{figure}
\includegraphics[width=9cm,height=10cm]{valenti_fig6.eps}
\caption{The infrared spectra of SN 2009jf are shown at the SN rest wavelength together
with those of SN 2007gr (Ic) and SN 1999ex (Ib). He\,{\sc i}{} $\lambda${}2.058 $\micron$ is marked
in the spectra of SN 2009jf.}
\label{fig:speinfrared}
\end{figure}
\subsection{He identification}
\label{sec:He}
The He lines, with the exception of the He\,{\sc i}{} line at 2.058
$\micron$, are in spectral regions which are densely populated by
other features. The identification of He lines is further complicated
by the fact that they often form in non-LTE conditions, giving
uncertain line strength ratios \citep{lucy91}.
For some recent detailed studies of SNe Ib/c the detection or
non-detection of the He\,{\sc i}{} line at 2.058 $\micron$ was the clearest
way to distinguish SNe Ib from SNe Ic
\citep{hamuy02a,valenti08b,stritzinger09,modjaz09}. For SN 2009jf,
He\,{\sc i}{} at 2.058 $\micron$ is clearly detected in our NIR spectra at 6, 39
and 52 days after maximum and marginally detected also at
10 days before $B$-band maximum (see Fig.~\ref{fig:speinfrared}).
Thus He is definitely present in SN 2009jf, confirming the SN classification
as a SN Ib. However, while for most SNe Ib the He lines increase in intensity
from the explosion until two weeks after maximum, in SN 2009jf the He lines
are weaker than in other SNe Ib, with the lines at $\lambda${}6678 and $\lambda${}7065 almost
disappearing two weeks after $B$-band maximum. In
Fig.~\ref{fig:Helium}, we show the spectral regions where He lines
should be visible. The He\,{\sc i}{} $\lambda${}5876 and $\lambda${}6678 are clearly seen
in Fig.~\ref{fig:Helium} (minimum of the narrow P-Cygni absorption is
indicated with a dashed line). On the red side of the He line at
$\lambda${}5876, a broader absorption becomes more intense over time. This
feature was identified by \cite{Sahu11} as He at much lower velocity
than at early phases. However, no sign of such absorption is visible
for the He lines at $\lambda${}6678, $\lambda${}7065 and 2.058 $\micron$. In order to better
constrain the presence of He, we used the {\sc synow}\footnote{
http://www.nhn.ou.edu/$\sim$parrent/synow.html} spectrum synthesis
code to model two spectra of SN 2009jf.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{valenti_fig7.eps}}
\caption{The spectral regions where the He lines (at $\lambda$ 5876, $\lambda$ 6678 and $\lambda$ 7065) should be visible (spectra in the rest frame).
The absorption at $\sim$ 5700 \AA{} visible at late phases is unlikely to be He, as instead suggested by \protect\cite{Sahu11},
since all other He lines at that velocity are missing both in the optical and in the near infrared.}
\label{fig:Helium}
\end{figure}
\subsubsection{-11 d from $B$-band maximum}
We have modelled the merged optical (VLT+FORS2) and near infrared
(TNG+NICS) spectra taken 11 and 10 days before $B$-band maximum
respectively. The spectrum was reproduced including lines from Fe\,{\sc ii}{},
O\,{\sc i}{}, Ca\,{\sc ii}{}, Mg\,{\sc ii}{}, Sc\,{\sc ii}{}, Ti\,{\sc ii}{}, Si\,{\sc ii}{}, Ne\,{\sc i}{} and He\,{\sc i}{}
(upper panel Fig.~\ref{fig:synow}). We used a photospheric velocity
of 13500 km\,s$^{-1}${} and line optical depths that vary as ${e^{-v/v_{e}}}$
with $v_{e}$ = 2000 km\,s$^{-1}${}. The features at 6100 \AA{} were well
reproduced by a combination of Si\,{\sc ii}{} and Ne\,{\sc i}{}, while He\,{\sc i}{}
(undetached\footnote{lines that form at the photospheric velocity}) is
able to reproduce both the feature at $\sim$5600 \AA{} and the one at
$\sim$6500 \AA{}. The presence of \CII{} cannot be ruled out, but
while \CII{} $\lambda${}7235 would improve the fit of the absorption at
$\sim$ 7000 \AA{}, the \CII{} $\lambda${}6580 line would degrade the fit
at $\sim$ 6400 \AA{}. At this epoch, the He\,{\sc i}{} at 2.058 $\micron$ is
only marginally visible in the NIR spectrum (see Fig.~\ref{fig:speinfrared}). For
this reason, we also considered the possibility that the absorption at
$\sim$ 5600 \AA{} is Na\,{\sc i}~D{} instead of He\,{\sc i}{}. However, with Na
instead of He, the fit also fails to reproduce the feature at
$\sim$6500 \AA{}, which is likely He\,{\sc i}{} ($\lambda${}6678) (see inset in
upper panel in Fig.~\ref{fig:synow}).
Helium is thus visible at 2.058 $\micron$ (see Fig. \ref{fig:speinfrared}),
though not very strong. We note that early NIR spectra
are also available for another SN Ib (SN 2008D), in which the He\,{\sc i}{}
at 2.058 $\micron$ is likewise marginally visible at high velocity
at early phases \citep{modjaz09}.
We hence consider the He identification quite robust.
\begin{figure}
\includegraphics[width=9cm,height=10cm]{valenti_fig8.eps}
\caption{{\sc synow} fit of the spectra of SN 2009jf at -11 days and +6 days from $B$-band maximum light.
Before maximum the He lines are present in the {\sc synow} fit, while in the inset, using Na instead of He,
the line at $\sim$6500 \AA{}, which is likely He\,{\sc i}{} $\lambda${}6678, is not reproduced.
After maximum most of the line at 5700 \AA{} is reproduced by Na\,{\sc i}~D, and only detached lines of He are identified.
The inset shows a {\sc synow} spectrum with undetached He. The spectra are in the rest frame.}
\label{fig:synow}
\end{figure}
\subsubsection{+5 d from $B$-band maximum}
The NTT+EFOSC (+5 d) and NTT+SOFI (+6 d) spectra and {\sc synow} fit
are shown in the lower panel of Fig.~\ref{fig:synow}. We used the
same ions as for the pre-maximum spectrum, plus Na\,{\sc i}{} and \CI{} at a
velocity of 8200 km\,s$^{-1}$. \CI{} is needed to fit some of the features
between 10000 and 16000 \AA{}, while Na\,{\sc i}~D{} is needed to reproduce
part of the spectrum at $\sim$ 5600 \AA{}. He\,{\sc i}{} is mainly
responsible for the blue narrow component of this feature with a
detached velocity of 12000 km\,s$^{-1}${}. The detached He is also responsible
for the line at $\sim$ 6500 \AA{}. We investigated the possible
presence of undetached He\,{\sc i}{}, but in this case the He velocity is too
low to correctly reproduce the absorption at $\sim$ 6500 \AA{}
(see lower inset panel in Fig.~\ref{fig:synow}). Also, as we will see
in the next section, the velocity at which the He\,{\sc i}{}
feature at 2.058 $\micron$ forms (measured from the minimum of the
P-Cygni absorption) is consistent with detached He. The
velocity of He\,{\sc i}{} at 2.058 $\micron$ remains constant from +6 to +52
days after $B$-band maximum.
Summarising the optical and infrared spectral analysis, we can
conclude that the layer where the He lines form is located at a
velocity between 12000 and 16000 km\,s$^{-1}${}. It appears undetached at early
phases, but becomes detached at $1-2$ weeks after maximum. This is
not surprising, as similar behaviour has already been observed in
other SNe Ib \citep{branch02}. Regarding the presence of H, while a
small amount cannot be completely ruled out, in both spectra there is
no clear improvement in the fit when H is included. Moreover, there is
no evidence (as observed for some SNe Ib) of H$\beta${} features in the
early spectrum. Hence SN 2009jf is likely a H-poor stripped-envelope
SN, with a relatively small amount of He (mainly detached). This is
confirmed by comparing the spectra of SN 2009jf with other SNe Ib/c
(see Fig.~\ref{fig:comparison}). Across a range of phases, SN 2009jf
is almost identical to the He-poor SN 2007gr
\citep{valenti08a,hunter09}, with differences only in the region of
the most prominent He features. For comparison, some spectra of the
Ib SN 1999dn \citep{benetti11} are shown with more prominent He
features. Apart from the Ib classification, we could also regard SN
2009jf as a SN Ic that exploded with a small amount of He left in a
high velocity shell at 12000-16000 km\,s$^{-1}${}.
\begin{figure}
\includegraphics[width=9cm,height=9cm]{valenti_fig9.eps}
\caption{Three spectra of SN 2009jf are compared with those of SNe 2007gr and 1999dn.
At all epochs the spectra of SN 2009jf are almost identical to those of SN 2007gr,
with the sole exception of the He lines. SN 1999dn shows strong He
absorption features, in contrast to both SNe 2009jf and 2007gr. All spectra are at rest frame.}
\label{fig:comparison}
\end{figure}
\subsection{Photospheric velocity}
\label{parvelosity}
In the previous section we claimed the presence of a detached layer of
He at $\sim$ 12000 km\,s$^{-1}${}. This is also evident in
Fig.~\ref{fig:velocevol} (upper panel), where the velocity evolution
for different ions is plotted. The isolated He\,{\sc i}{} line at $\sim$ 2
$\micron$ confirms the presence of detached He, while there is no sign
in the infrared spectra of He at low velocity.
Different lines form in different regions of the ejecta, in part due
to the differing physical conditions (temperature and density),
but also due to the stratification of elements
within the ejecta. The Ca\,{\sc ii}{} lines form in the outer parts of the
ejecta (as can be seen from their higher velocities), while the O and
Fe lines form in the inner part.
good estimate of the photospheric velocity. Comparing with other
stripped-envelope SNe, SN 2009jf has quite a high photospheric
velocity, slightly lower than SNe 1999ex and 2008D but higher than SNe
2007Y and 2008ax. In particular at late phases (70 days after
explosion), SN 2009jf still has an optically thick photosphere at 7000
km\,s$^{-1}${}. The evolution of the photospheric velocity of SN 2009jf
resembles that of SN 2007gr, but with velocities which are $\sim$1000
km\,s$^{-1}${} greater at almost all epochs. The similarity in the spectra
(see previous section) and in the photospheric velocity
evolution suggests that SN 2009jf had a progenitor similar to that of
SN 2007gr, albeit slightly more massive and with more He left at the
time of the explosion.
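The line velocities discussed above and shown in Fig.~\ref{fig:velocevol} are obtained from the blueshift of the P-Cygni absorption minimum; a minimal sketch of the conversion, with an illustrative (not measured) wavelength for the minimum, is given below.
\begin{verbatim}
# Line velocity from the blueshifted P-Cygni absorption minimum,
# v = c (lambda_rest - lambda_min) / lambda_rest (classical Doppler).
C_KMS = 2.99792458e5  # speed of light in km/s

def line_velocity(lam_min, lam_rest):
    return C_KMS * (lam_rest - lam_min) / lam_rest

# e.g. He I 5876 with its minimum near 5640 A (illustrative value):
print("%.0f km/s" % line_velocity(5640.0, 5875.6))  # ~12000 km/s, detached He
\end{verbatim}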
\begin{figure}
\includegraphics[width=8.5cm,height=11cm]{valenti_fig10.eps}
\caption{Upper panel: Line velocities of SN 2009jf. The Na\,{\sc i}~D{} and He\,{\sc i}{} minima
have been measured by simultaneously fitting two gaussians to the absorption feature at $\sim$ 5600 \AA{}.
Lower panel: The photospheric velocity evolution of SN 2009jf in comparison with those of a set of stripped-envelope SNe. All velocities were measured in {\sc iraf} with a Gaussian fit to the minimum of the line. In both panels the solid curve shows the power-law fit to the photospheric velocities of a sample of SNe Ib from \protect\cite{branch02}.}
\label{fig:velocevol}
\end{figure}
\section{Nebular spectra}
\label{spenebular}
After SN 2009jf became visible again in June 2010, we obtained spectra
at 247, 266, 354 and 361 days after $B$-band maximum (see
Fig.~\ref{fig:nebularspectra}). At these late epochs the SN ejecta
are optically thin, allowing us to observe the innermost parts of the
ejecta where lines are mainly in emission. Several studies have been
performed on nebular spectra of stripped-envelope SNe to investigate
the geometry of the explosion
\citep{mazzali05,maeda08,modjaz08b,taubenberger09} as the nebular
lines approximately represent a one-dimensional line-of-sight
projection of the three-dimensional distribution of elements.
The most prominent emission lines are the [O\,{\sc i}] $\lambda\lambda${}6300,6364
doublet, Mg\,{\sc i}] $\lambda${}4571 and [Ca\,{\sc ii}] $\lambda\lambda${}7291,7323. Several features
of Fe\,{\sc ii}\/ are also visible (but highly blended) from 4500 to 5500
\AA\/. These are prominent only in the brightest stripped-envelope
SNe \citep[e.g. in SN 1998bw,][]{mazzali01c}. The [O\,{\sc i}] doublet is
best suited to probe the explosion geometry, as the [Ca\,{\sc ii}] feature
can be partially contaminated by [O\,{\sc ii}] $\lambda\lambda${}7320,7330, and the
Mg\,{\sc i}] line is less intense and contaminated by Fe lines (at $\sim$
100 days after explosion). Oxygen also has the advantage of being
the most abundant element in strongly stripped core-collapse
SNe. A drawback is that it is a doublet with a line ratio
sensitive to the temperature and density where the lines form.
It has been predicted and observed that the ratio of the oxygen lines
$\lambda${}6300/$\lambda${}6364 increases with time from a ratio of 1:1 to a ratio
of 3:1 in H-rich SNe \citep{chugai92,leibundgut91,spyromiglio91}. For
stripped-envelope SNe, \cite{taubenberger09} suggested that the
conditions during the nebular phase in the oxygen layer always
give a ratio of 3:1.
\cite{Milisavljevic10}, on the other hand, suggested that the [O\,{\sc i}]
line ratio may be close to 1:1 in several of these SNe, in order to
explain the fact that SNe Ib show a double peak for the O feature more
often than SNe Ic (without invoking a highly asymmetric
explosion). Recently, \cite{hunter09} and \cite{taubenberger11}
demonstrated that for SN 2007gr and SN 2008ax magnesium and oxygen had
a similar distribution within the ejecta if the ratio of the oxygen
lines at $\lambda${}6300/$\lambda${}6364 was fixed at 3:1. This similar
distribution is expected from nucleosynthesis models
\citep{maeda06a}. Furthermore, \cite{maurer10} suggested that the
apparent double peak at the position of the oxygen doublet is due to a
high-velocity H$\alpha${} absorption which, when superimposed on the oxygen
lines, gives the appearance of a double peak. All SNe in the sample
analysed by \cite{maurer10} are SNe IIb.
SN 2009jf also shows an oxygen feature with a complex structure that
resembles a double-peaked profile. However, since SN 2009jf does not show
H features either at early or nebular phases, we consider it unlikely that
the [O\,{\sc i}] profile of SN 2009jf is due to contamination from
high-velocity H. This is also supported by the comparisons with Mg\,{\sc i}]
at $\lambda${}4571 and [Ca\,{\sc ii}] at $\lambda\lambda${}7291,7323 on day +361: if we take into account
the doublet nature of the [O\,{\sc i}] and artificially add a second
component (redshifted and scaled to 1/3 of the flux) to the Mg\,{\sc i}] and
[Ca\,{\sc ii}] lines \footnote{ As the [Ca\,{\sc ii}] feature is a close doublet,
it has been considered here as a single line.},
then the profiles of [O\,{\sc i}], modified-Mg\,{\sc i}] and modified-[Ca\,{\sc ii}]
all have similar profiles as shown in the bottom panel ($d$) of
Fig.~\ref{fig:nebularspectra}. The similar profiles suggest that
mixing is important in the progenitor of SN 2009jf since oxygen,
magnesium and calcium have similar distributions within the
ejecta, and that the oxygen profile is not due to H$\alpha${}
contamination.
If we assume that the mass of an element at a particular velocity is
simply proportional to the emitted flux at that same velocity, the O
profile (like those of Mg and Ca) is consistent with an
asymmetric explosion, with a large part of the material ejected away
from the observer. \cite{Sahu11} also discussed the nature of the
oxygen profile, and suggested large-scale clumping or a unipolar jet
as the explanation. Here we propose an alternative geometry, which
reproduces the oxygen profile with four different components (see
upper panel of Fig.~\ref{fig:nebularspectra2}) coming from different
parts of the ejecta. Taking into account that the [O\,{\sc i}] line is a
doublet (two lines separated by 64 \AA{} and with a ratio $\sim$ 3:1), there
are two clear narrow lines at 6285 \AA{} and 6349 \AA{} in the oxygen
profile that may be interpreted as an off-centre dense core that is
blueshifted by $\sim$ 700 km\,s$^{-1}${}. The ratio of these two lines seems
slightly lower than 3:1, suggesting a high density in this blob.
Blueward and redward by $\sim$ 50 \AA{} with respect to the off-center
dense core, we identify further blobs\footnote{the two components of
the blobs have ratios of $\sim$ 3:1} of oxygen-rich material that
may be interpreted as clumps.
In this scenario the rest of the oxygen (the broader component of the O line)
is uniformly distributed in the ejecta\footnote{This is reproduced with
two Gaussians (ratio $\sim$ 3:1) close to zero velocity.}.
A schematic reconstruction of the geometry is shown in
Fig.~\ref{fig:nebularspectra2} (lower panel).
Within our simplifying assumption of mass proportional to flux, the
dense off-center core would contain $\sim$ 20$\%$ of the oxygen mass,
the clumps $\sim$ 10$\%$, while the rest of the oxygen mass
would be distributed uniformly. We performed this analysis on the
spectra at both +266 days and +361 days after $B$-band maximum,
obtaining similar results, as there is no significant evolution in the
nebular spectra over this time period.
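A minimal sketch of the four-component decomposition just described is given below: each physical component contributes a pair of Gaussians separated by 64~\AA{} with a fixed 3:1 flux ratio, mimicking the [O\,{\sc i}] doublet; amplitudes and widths are illustrative values, not our fitted parameters.
\begin{verbatim}
# Sketch of the [O I] 6300,6364 decomposition: every component is a
# Gaussian pair separated by 64 A with flux ratio 3:1. Illustrative only.
import numpy as np

def oi_doublet(lam, centre, amp, sigma):
    g = lambda mu, a: a * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)
    return g(centre, amp) + g(centre + 64.0, amp / 3.0)

lam = np.linspace(6150.0, 6500.0, 1000)
model = (oi_doublet(lam, 6285.0, 1.0, 6.0)         # off-centre dense core
         + oi_doublet(lam, 6235.0, 0.3, 6.0)       # blue clump (~ -50 A)
         + oi_doublet(lam, 6335.0, 0.3, 6.0)       # red clump (~ +50 A)
         + oi_doublet(lam, 6300.0, 0.8, 40.0))     # broad, unshifted part

# blueshift of the dense core relative to 6300 A:
print("%.0f km/s" % ((6285.0 - 6300.0) / 6300.0 * 2.998e5))  # ~ -700 km/s
\end{verbatim}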
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[width=9cm,height=10cm]{valenti_fig11.eps}}
\caption{\textbf{Upper panel}: Nebular spectra (at rest frame) of a sample of stripped-envelope
SNe at 1 year from $B$-band maximum. \textbf{Bottom panel}: Evolution of the main nebular
features visible in the nebular spectra of SN 2009jf: (a) [O\,{\sc i}], (b) Mg\,{\sc i}] and (c) [Ca\,{\sc ii}].
In panel (d) we show at 361 days after $B$-band maximum, the comparison
between the [O\,{\sc i}] profile and profiles of the [Ca\,{\sc ii}] and Mg\,{\sc i}] modified
with a second component to reproduce the [O\,{\sc i}] doublet (see text).}
\label{fig:nebularspectra}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[width=8cm,height=14cm]{valenti_fig12.eps}}
\caption{Oxygen line profile of SN 2009jf at +266 and +361 days from $B$-band
maximum (upper and middle panels). If we assume the oxygen line flux to trace
the oxygen mass, SN 2009jf could be an asymmetric explosion, with an
off-axis core ($\sim$ 20$\%$ of the oxygen flux/mass) surrounded by clumps.
The remaining oxygen ($\sim$ 70$\%$) is distributed uniformly throughout the ejecta.
In the lower panel a schematic reconstruction of the geometry is shown.}
\label{fig:nebularspectra2}
\end{figure}
A caveat on the above discussion should be added. While the
assumption that the oxygen mass distribution is proportional to the
oxygen line flux is reasonable, the random directions of material ejection
in asymmetric explosions should produce stripped-envelope SNe with
observables (e.g. line profiles) isotropically distributed.
Inspecting the sample of nebular spectra of \cite{taubenberger09}
(and including some recent discoveries), a number of SNe have been found
showing blue-shifted, off-center line cores, while SNe with red-shifted
off-center line cores have not been detected so far.
This investigation is complicated by the fact that the oxygen profile may
show blue-shifted line peaks also at early phases ($<$ 200 days, when the
ejecta are not fully transparent) or at very late phases due to dust
formation.
However, the amount of blue-shift of the oxygen line peaks in the
above-mentioned spectra does not show any time evolution, leading us to rule out
dust formation or still-opaque ejecta as likely explanations.
Nevertheless the lack of detections of SN spectra with red-shifted oxygen
profiles is puzzling and an obvious explanation cannot be provided.
\section{Host galaxy properties}
\label{sec:host}
The host galaxy of SN 2009jf, NGC 7479 is a face-on spiral galaxy in
the Pegasus constellation. It is relatively nearby ($\mu=32.65$
mag)\footnote{Using a radial velocity corrected for in-fall onto Virgo
of 2443 km\,s$^{-1}${} and a Hubble constant of 72 km\,s$^{-1}${} Mpc$^{-1}$.}, and
quite asymmetric, with strong star formation along most of the
luminous western arm \citep{laine98}. SN 2009jf occurred in this arm,
at the location of an extended star forming region (see
Fig.~\ref{figha}). NGC 7479 also displays some intriguing properties
at radio wavelengths, with a radio continuum in the reverse direction
to the optical arms \citep{laine05}, which has been suggested to be
the result of a minor merger.
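The adopted distance modulus follows from the Virgo-infall-corrected recession velocity under a pure Hubble-flow assumption, as the short computation below shows.
\begin{verbatim}
# mu = 5 log10(d / Mpc) + 25, with d = v / H0 (pure Hubble flow).
import numpy as np

v, H0 = 2443.0, 72.0              # km/s and km/s/Mpc, as quoted above
mu = 5.0 * np.log10(v / H0) + 25.0
print("mu = %.2f mag" % mu)       # 32.65 mag
\end{verbatim}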
\begin{figure}
\center
\includegraphics[width=6cm,height=5cm]{valenti_fig13.eps}
\caption{Continuum-subtracted H$\alpha$ image of NGC7479 with the SN position indicated with a circle.
The radius of the circle is 10 times the uncertainty in transformation used to determine the SN position.}
\label{figha}
\end{figure}
The early spectra and the colour evolution of SN 2009jf are quite
blue, suggesting that the light from SN 2009jf is absorbed and
reddened by a relatively small amount of interstellar dust both in the
Milky Way and in the host galaxy NGC 7479. While the Milky Way
component is easily removed using maps of the Galactic dust
distribution and a standard extinction law \citep[$E(B-V)$ = 0.112
mag, ][]{schlegel98}, evaluating the extinction in NGC 7479 is more
difficult. In principle, assuming an average dust-to-gas ratio, the
reddening can be estimated by measuring the gas column density
through the interstellar Na~{\sc i}~D absorption lines. It is common practice
to derive the host galaxy extinction from its relation with the
equivalent width (EW) of the Na~{\sc i}~D line. Mostly using a sample
of SNe Ia for which the reddening has been calculated using the Lira
relation \citep{phillips99}, \cite{turatto03} found that SNe appear to
split into two groups, with quite different EW(Na~{\sc i}~D) versus
reddening relations. For SN 2009jf, we used the early spectra with
high signal-to-noise ratio in order to separate the contribution to
the Na~{\sc i}~D absorption of the Milky Way from that of host system.
Two small Na~{\sc i}~D absorptions are visible in several spectra at
5893 \AA{} (Galactic) and at 5941 \AA{} (host system) (see
Fig.~\ref{fignaid}). The EW measured for each absorption varies slightly
among the spectra (0.4-0.9 \AA{} for the Galactic absorption
and 0.2-0.4 \AA{} for the one in the host system). This is probably
due to the different signal-to-noise ratios in the spectra, the
spectral resolution and/or contamination from other lines. On the
other hand, the ratio of the two absorptions is almost constant, with
the Galactic one being twice as strong as that in the host system.
Assuming a similar dust-to-gas ratio in NGC 7479 and in the Milky Way, and
adopting the Galactic reddening in the direction of SN 2009jf reported by
\cite{schlegel98}, we obtain $E(B-V)_{7479} \sim$ 0.05 mag.
Given the uncertainties in both the method and the EW measurements,
we adopt a host-galaxy reddening of $E(B-V)_{7479}$= 0.05 $\pm$
0.05 mag.
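In practice, this host-galaxy reddening estimate amounts to scaling the Galactic value by the measured Na~{\sc i}~D equivalent-width ratio, under the assumption of equal dust-to-gas ratios:
\begin{verbatim}
# Host reddening from the Na I D EW ratio, assuming equal dust-to-gas
# ratios in NGC 7479 and the Milky Way.
ebv_mw = 0.112     # Galactic E(B-V) (Schlegel et al. 1998)
ew_ratio = 0.5     # EW(host) / EW(Galactic), roughly constant here

ebv_host = ebv_mw * ew_ratio
print("E(B-V)_7479 ~ %.2f mag" % ebv_host)  # ~0.06; adopted 0.05 +/- 0.05
\end{verbatim}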
\begin{figure}
\center
\includegraphics[width=6cm,height=6cm]{valenti_fig14.eps}
\caption{Low resolution spectra of SN 2009jf in the region of the Galactic
and host-galaxy Na~{\sc i}~D absorptions.}
\label{fignaid}
\end{figure}
\begin{table}
\caption{Main parameters for SN 2009jf and its host galaxy}
\label{tabsummary}
\begin{tabular}{ll}
\hline
Parent galaxy & NGC 7479 \\
Galaxy type & SBbc$^a$ \\
RA (2000) & $23^h 04^m 52^s.98$ \\
Dec (2000) & $+12\degr 19' 59.5''$ \\
Recession velocity & 2443 [km\,s$^{-1}$] $^a$ \\
Distance modulus ($H_0 = 72 $) & $32.65 \pm 0.10$ mag \\
$E(B-V)_{7479}$ & 0.0 - 0.1 mag \\
$E(B-V)_{MW}$ & 0.112 mag$^b$ \\
Offset from nucleus & $53''.8$ W , $36".5$ N \\
Explosion epoch (JD) & $2455099.5\pm 1.0$ (Sep 25, 2009) \\
\hline
\end{tabular}\\
$^a$LEDA, velocity corrected for Local Group infall onto the Virgo cluster.\\
$^b$\protect\cite{schlegel98}.
\end{table}
\subsection{Metallicity}
NGC 7479 has a bright $B$-band absolute magnitude of
$M_{B}$ = $-21.64$\footnote{HyperLEDA; http://leda.univ-lyon1.fr/}. The
calibration of \cite{Boissier09} then suggests a super-solar oxygen
abundance of 12 + log(O/H) = 9.09 dex at the characteristic radius of
0.4 R$_{25}$\footnote{The R25 radius is the radius of the 25 mag
arcsec$^{-2}$ $B$-band isophote.}. A direct estimate for the host
galaxy metallicity can be obtained from the line fluxes of nebular
emission lines in the vicinity of the SN. Many different
metallicity diagnostics can be found in the literature \citep[for
example ][]{pettini04,kewley02,mcgaugh91,Pilyugin05} using various
emission-line ratios and calibrations. Different metallicity
calibrations also give systematically different results
\citep{smartt09a,ellison05,modjaz08a}. In this paper, we follow
\citeauthor{modjaz08a} and use a range of methods to determine the
metallicity.
\begin{figure}
\includegraphics[width=9cm,height=6cm]{valenti_fig15.eps}
\caption{Host-galaxy emission lines in the late-time spectrum of SN 2009jf
at +266 days from $B$-band maximum.}
\label{fig:fig_neb_spec}
\end{figure}
The strengths of the host-galaxy emission lines were measured in the
deep spectrum of SN 2009jf obtained on 2010 July 7th with the WHT and
ISIS (see Fig.~\ref{fig:fig_neb_spec}). The measured fluxes are
listed in Table~\ref{tab_neb_lines}.
\begin{table}
\center
\caption{Nebular emission lines seen in the spectrum of 2010 July 7th.
The spectrum has been corrected for host and Galactic extinction, and
the fluxes of all emission lines of interest were measured subtracting the continuum
and fitting a Gaussian to the line using {\sc iraf}.}
\label{tab_neb_lines}
\begin{tabular}{llr}
\hline
Species & Wavelength & Flux ($10^{-17}$)\\
& (\AA) & (erg\,s$^{-1}$\,cm$^{-2}$) \\
\hline
$[$O\,{\sc ii}{}$]$ & 3726,29 & 183 \\
H$\beta$ & 4861 & 58 \\
$[$O\,{\sc iii}{}$]$ & 4959 & 45 \\
$[$O\,{\sc iii}{}$]$ & 5007 & 125 \\
$[$N\,{\sc ii}{}$]$ & 6548 & 20 \\
H$\alpha$ & 6563 & 273 \\
$[$N\,{\sc ii}$]$ & 6583 & 68 \\
$[$S\,{\sc ii}$]$ & 6717 & 31 \\
$[$S\,{\sc ii}$]$ & 6731 & 25 \\
\hline
\end{tabular}\\
\end{table}
The N2 index calibration of \cite{pettini04} gives 12+log(O/H) = 8.56
dex, while the O3N2 index by the same authors gives a value of 8.43
dex. \cite{Pilyugin11} use the NS calibration to give relations for
the oxygen abundance 12+log(O/H), and the nitrogen abundance
12+log(N/H). With the measured fluxes near the site of the SN we
derive 8.36 dex for the former and 7.49 dex for the
latter. \cite{Kobulnicky04} give an approximation for the average of
the KD02 and M91 relations, which gives 12+log(O/H) = 8.67 dex.
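For concreteness, both index values follow directly from the fluxes in
Table~\ref{tab_neb_lines}; assuming the standard linear calibration
coefficients of \cite{pettini04}, one has
\[
12+\log(\mathrm{O/H}) = 8.90 + 0.57\,
\log\frac{F_{\mathrm{[N\,II]}\lambda6583}}{F_{\mathrm{H\alpha}}}
= 8.90 + 0.57\,\log\frac{68}{273} \simeq 8.56,
\]
\[
12+\log(\mathrm{O/H}) = 8.73 - 0.32\,
\log\frac{F_{\mathrm{[O\,III]}\lambda5007}/F_{\mathrm{H\beta}}}
{F_{\mathrm{[N\,II]}\lambda6583}/F_{\mathrm{H\alpha}}}
= 8.73 - 0.32\,\log\frac{125/58}{68/273} \simeq 8.43.
\]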
The mean of these four values for the metallicity is 8.51 dex, which
lies between the solar value \citep[8.72 dex, ][]{allende2001} and that
of the Large Magellanic Cloud \citep[8.35 dex, ][]{hunter07}, and is
considerably smaller than the value obtained from the $B$-band absolute
magnitude of NGC 7479.
\subsection{The progenitor of SN 2009jf}
\label{parprogenitor}
To search for possible evidence of the progenitor, the post-explosion
NaCo $K_\mathrm{S}$-band image from October 2009 was aligned with the stacked
pre-explosion WFPC2 $F814W$ image. Twelve sources common to both images
were identified and their positions were measured accurately with the
{\sc iraf phot} task. Using the resulting list of matched pixel
coordinates, we derived a geometrical transformation\footnote{As a
small number of common sources were used for the alignment, we
restricted the transformation to rotation, scaling and translation
only.} between the two images, using {\sc iraf geomap}. The rms
error in the transformation was found to be 0.573 WF pixels, or 57
mas.
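(For reference, the WF chips of WFPC2 have a pixel scale of
$\approx 0''.0996$, so $0.573$ pixels $\times\ 99.6$ mas pixel$^{-1}
\simeq 57$ mas.)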
The position of the SN was measured in the NaCo image using the three
different centering algorithms in {\sc iraf phot}, all of which agreed
to within $\sim$ 5 mas. The mean of the three positions was taken as
the SN position, which was then transformed to the pixel coordinates of
the WFPC2 $F814W$-filter image\footnote{The derived pixel position of
the SN in the coordinates of u2z00103t.c0f.fits are
471.55,134.53.}. The SN is in a crowded region, dominated by two
bright complexes (A and B) one of which appears elongated (see
Fig.~\ref{fig_F569}). We also aligned the $F569W$ image using
the same procedure\footnote{The $F569W$ image was found
to be offset from the $F814W$ image by ($-0.05,-0.43$) WF pixels in
$x$ and $y$ respectively; this offset has been corrected for in all of
the following.}.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{valenti_fig16.eps}}
\caption{The $HST$+WFPC2 $F569W$ image of the location of SN 2009jf.
Scale and orientation are indicated. The position of the SN,
as determined from the post-explosion NaCo image, is marked with a white
circle. The radius of the circle corresponds to the 57 mas uncertainty in the
SN coordinates and the geometric transformation. All sources detected by
{\sc hstphot} within a distance of 31 pixels of the SN location (corresponding
to a 500 pc radius at the distance of NGC 7479) at the 3$\sigma$ level in both
filters are indicated with white diamonds. The source located close to the SN
in the $F569W$ filter is indicated with a white square. The regions used for
aperture photometry of the complexes as discussed in the text are indicated
with dashed circles, and labelled accordingly. }
\label{fig_F569}
\end{figure*}
While the SN is not coincident with either of the two complexes A and
B, an association is still plausible. The SN is $\sim$5 pixels
(0.5\arcsec) from the brightest pixels in A and B. At the distance of
NGC 7479, and scaling by $4/\pi$ to account for projection,
this corresponds to $\sim 100$ parsec. If we assume a velocity of 100
km\,s$^{-1}${} for the progenitor, it could traverse this distance in $\sim$1
Myr. As this is a factor of ten less than the lifespan of a single
25~M$_{\odot}${} star (Fig.~\ref{fig:cluster}), the progenitor could well
be associated with either region A or B (or indeed be unrelated to both).
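The numbers above follow from simple arithmetic (a sketch, using the
distance modulus in Table~\ref{tabsummary}): the distance is
$d = 10^{(32.65+5)/5}\,\mathrm{pc} \simeq 34$ Mpc, so
\[
s \simeq \frac{4}{\pi} \times \frac{0.5''}{206265''} \times
34~\mathrm{Mpc} \simeq 100~\mathrm{pc},
\]
and since $1~\mathrm{km\,s^{-1}} \simeq 1~\mathrm{pc\,Myr^{-1}}$, a star
moving at 100 km\,s$^{-1}${} covers $\sim$100 pc in $\sim$1 Myr.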
The {\sc hstphot} package \citep{Dolphin00} was used to produce
photometry of all sources detected in the region of the SN\footnote{We
used the pre-processing programs that accompany {\sc hstphot} to
remove cosmic rays and hot pixels ({\sc mask, crmask, hotpixels}),
to combine the individual exposures in each filter ({\sc coadd}) and
to measure the sky background ({\sc getsky}).}. {\sc hstphot} was
then run separately for the coadded F569W and F814W filter images with
a detection threshold of 3$\sigma$.
A colour-magnitude diagram of all sources detected by {\sc hstphot} in
both filters, within $\sim$500 pc of SN 2009jf is shown in
Fig.~\ref{fig:local_pop}.
As can be seen, there are no evolved red sources detected in both filters,
and indeed the population appears quite blue, which is indicative of a young and
massive stellar population.
While the population of detected sources appears blue, the limiting
magnitudes of the images likely prevent us from detecting most of the evolved red
supergiant population, or the main sequence below about 20-30 M$_{\odot}${}.
As such it is difficult to provide an estimate of an age for the
surrounding stellar population. It is likely that most objects
brighter than $-8.5$ mag are unresolved, compact clusters. Furthermore the
region is similar to the 50-300 pc sized star-forming complexes that are
common in late-type spirals \citep{bastian05}. Such complexes
typically contain compact star clusters of $<10$ pc diameter, which are
likely to host coeval populations themselves. However the whole star
forming complex may have a significant age spread
\citep[see discussion in ][]{Crockett08}.
The closest source detected by {\sc hstphot} is
66 mas (10 pc) away from the nominal SN position in the $F569W$ filter
($m_{F569W}$ = 23.75). With an absolute magnitude of $M_{F569W}$
= $-9.25$, this is likely to be a compact cluster. The host region of
SN 2009jf is quite similar to that of SN 2007gr \citep{Crockett08},
with the SN being close to (but not exactly coincident with) a compact
cluster and contained within a larger star-forming complex.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{valenti_fig17.eps}}
\caption{All sources detected by {\sc hstphot} above the 3$\sigma$
level in both the $F569W$ and $F814W$ filters. Magnitudes have been
corrected for the distance of NGC 7479 and for Milky Way extinction,
but not for extinction in the host. Furthermore, we have made no
attempt to separate out sources which are poorly fit by a PSF, and
are hence likely unresolved clusters. Indeed, any source with an
absolute magnitude brighter than $-8.5$ mag is likely a cluster rather than a
single star.}
\label{fig:local_pop}
\end{figure}
We can also attempt to estimate the age of the population by comparing
the observed colours of the region to models. If we can determine the
age of regions A and B, then we can infer the most massive stars
that would still be extant at that time. We have performed simple
aperture photometry on the $F569W$ and $F814W$ filter WFPC2 images on the
regions indicated A and B in Fig.~\ref{fig_F569}. Using the revised
zeropoints of Dolphin (2009), and correcting for Milky Way extinction,
we find the colours of regions A and B to be M$_{F569W-F814W}$= 0.36
and 0.52 mag respectively. In a large aperture encompassing both
regions (also indicated in Fig.~\ref{fig_F569} with a dashed circle),
we find a colour of M$_{F569W-F814W}$=0.33 mag, which is reassuring as
this is a similar colour to region A, which contributes most of the
flux. We have no constraint on the internal extinction in the
complexes, so we will regard these colours as upper limits on the true
colours, which are likely bluer.
We have used the Padova stellar population
models\footnote{http://stev.oapd.inaf.it/cmd}
\citep{girardi02,marigo08} to create a table of integrated magnitudes
for a single stellar population, at a metallicity appropriate to the
site of SN 2009jf (Z=0.012), and in the WFPC2 filter system. The
Padova model colours are shown as a function of age in Fig.~\ref{fig:cluster},
together with the lifetimes of massive stars as found from the STARS
evolutionary code \citep{Stancliffe09}. The degeneracy in the
M$_{F569W-F814W}$ colour versus age plot precludes us from determining
the age for the clusters. Region A is consistent with an unreddened
population of stars with a maximum progenitor mass somewhere between
8 and 25 M$_{\odot}$.
Complex B is redder, which implies an older age, but the
degeneracy between age and extinction prevents a useful
determination. For example, a moderate amount of internal
extinction would make the age range consistent with an
8-25 M$_{\odot}$\ population.
Unfortunately the compact cluster which is closest to the progenitor
position is detected only in the $F569W$ filter, and so we cannot
constrain its age, beyond the fact that it is blue, and hence
presumably among the younger objects in the region.
As for the progenitor of SN2007gr and its possible host cluster,
future observations of the region with HST in the $U$ and
$B$ bands could age date the cluster.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{valenti_fig18.eps}}
\caption{Padova integrated model $F569W$-$F814W$ colours of a stellar population (on the left y-axis)
are plotted against the age of the population for a range of extinctions. The observed colours of
regions A and B, together with the total colour of the entire region (A+B), are indicated with arrows;
these values have been corrected for Milky Way extinction only. The range of population ages
(approximately 10 - 100 Myr) which are consistent with the cluster colours are indicated by the
shaded region. Unfortunately there is a strong degeneracy in the $F569W$-$F814W$ colour of the models.
On the right y-axis, the ZAMS masses of single stars from the STARS code are plotted against their lifespans.
A population age of 10 Myr is consistent with a progenitor mass of 25 M$_{\odot}${}, while for
an older population the progenitor must be less massive, likely $\sim$ 8 M$_{\odot}${}.}
\label{fig:cluster}
\end{figure}
We also present an H$\alpha$ image of NGC 7479, which was obtained on
13 September 1996 with the Prime Focus Cone Unit + CCD on the Isaac
Newton Telescope. This deep image consisted of an 1800 s exposure
taken with the H626 filter (H$\alpha${}) and another 1800 s exposure taken
with the H712 filter to allow for removal of continuum light. The
{\sc hotpants} package (A. Becker) was used to subtract the continuum
image from the H$\alpha${} image.
The H$\alpha$ flux does not appear to be from a point source, but
is rather spread out over several pixels at the southern end of the
complexes. The SN position in the H$\alpha$ image was determined
by alignment to a Liverpool Telescope $r'$-band image, and is found
to be on the edge of this region of H$\alpha$ flux (as shown in Figure
\ref{figha}).
In conclusion, the environment of the SN clearly displays signs of
recent star formation, with a strong H$\alpha${} flux and integrated colours
which are consistent with a young, massive stellar population. A
colour-magnitude diagram of sources within a radius of 500 pc of the
SN indicates a young population, with no detections of the red
supergiants which may be expected in a slightly more evolved
population. The progenitor is close to, but not coincident with, two
regions which we have termed A and B. Unfortunately we cannot
distinguish between a high mass ($\sim$25 M$_{\odot}$) or low mass ($\sim$10
M$_{\odot}$) progenitor on the basis of the age of the clusters due to the
degeneracy in the age-colour relation. However, on the basis of the
H$\alpha$ flux and the surrounding blue stellar population, together
with the characteristics of the SN, we consider the high mass channel
more likely. Future observations after the SN has faded may help to
better address this issue.
\section{Bolometric light curve of SN 2009jf}
\label{parametribolo}
\begin{figure}
\includegraphics[width=9cm,height=7cm]{valenti_fig19.eps}
\caption{The bolometric light curve of SN 2009jf compared with those of
other core-collapse SNe. References for each SN:
- SN 2009jf: this work;
- SN 1998bw: $E(B-V) =$ 0.06 mag, $\mu =$ 32.76 mag, phot. data:
\protect\cite{patat01};
- SN 1994I: $E(B-V) =$ 0.04 mag, $\mu =$ 29.60 mag
\protect\citep{sauer06}, phot. data: \protect\cite{rich96};
- SN 1993J: $E(B-V) =$ 0.079 mag \protect\citep{barbon95}, $\mu =$ 27.8
mag \protect\citep{Freedman94}, phot data: \protect\cite{barbon95};
- SN 2008D: $E(B-V) = $ 0.65 mag, $\mu =$ 32.29 mag \protect\citep{mazzali08},
phot. data: \protect\cite{mazzali08,modjaz09};
- SN 2007gr: $E(B-V) =$ 0.092 mag, $\mu =$ 29.84 mag, phot. data:
\protect\cite{valenti08b,hunter09};
- SN 2008ax: $E(B-V) =$ 0.40 mag \protect\citep{taubenberger11}, $\mu =$
29.92 mag \protect\citep{pastorello08a}, phot. data:
\protect\cite{pastorello08a,taubenberger11};
- SN 2007Y: $E(B-V) = $ 0.112 mag, $\mu=$ 31.13 mag, phot. data:
\protect\cite{stritzinger09}.
}
\label{figbolo}
\end{figure}
The bolometric light curve of a SN is a powerful tool to validate
model predictions. It is well known that, starting from $\sim$ 1 week
past explosion, the light curve of a non-interacting stripped-envelope
SN is mainly determined by the amount of nickel synthesised during the
explosion, together with the amount of ejected material and the
kinetic energy \citep{arnett80,arnett82}. As a first-order
approximation to the true bolometric luminosity, we integrated the SN
emission in the spectral window accessible from the ground, from the UV
atmospheric cut-off to the NIR ($JHK$)\footnote{Photometric data in
the Sloan filters were transformed from the AB to VEGA system
\protect\citep{Holberg06}. }. All magnitudes were then converted to
fluxes\footnote{The zeropoints for the conversion have been computed
by integrating the spectrum of Vega with the Landolt and Sloan
filters \protect\citep{buser78,gunn98}.} and integrated from $U$ to
$K$ using Simpson's Rule. The resulting bolometric light curve is
shown in Fig.~\ref{figbolo} together with the bolometric light curves
of other stripped envelope SNe. SN 2009jf is one of the brightest SNe Ib
observed to date. At 20.5 days after explosion, it reached a maximum
luminosity of $\log_{10}(L/\mathrm{erg\,s^{-1}}) = 42.62 \pm 0.05$, in
between the Type Ic GRB-SN 1998bw and SN 2008ax. The light curve of
SN 2009jf evolves quite slowly, only reaching the radioactive tail
around 50 days after explosion. The slope of the tail is then 0.0133
mag day$^{-1}$, steeper than the 0.0098 mag day$^{-1}$ expected for
fully trapped $^{56}$Co decay.
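The construction just described can be summarised in a few lines of
code. The sketch below is illustrative only: the zeropoints and
effective wavelengths are representative textbook values rather than the
ones actually used (which were derived by integrating the Vega spectrum
through the Landolt and Sloan filter curves), and the function name and
input magnitudes are placeholders.
\begin{verbatim}
import numpy as np
from scipy.integrate import simpson

# Representative effective wavelengths (Angstrom) and Vega zeropoint
# fluxes (erg s^-1 cm^-2 A^-1); approximate values, for illustration.
lam_eff = np.array([3600., 4400., 5500., 6400., 7900.,
                    12350., 16620., 21590.])          # U B V R I J H K
f_zero  = np.array([4.18e-9, 6.32e-9, 3.63e-9, 2.18e-9, 1.13e-9,
                    3.13e-10, 1.13e-10, 4.28e-11])

def pseudo_bolometric(mags, mu=32.65):
    """U-to-K pseudo-bolometric luminosity (erg/s) from
    extinction-corrected Vega magnitudes, one per band."""
    f_lam = f_zero * 10.0**(-0.4 * np.asarray(mags))  # flux densities
    flux  = simpson(f_lam, x=lam_eff)                 # Simpson's rule
    d_cm  = 10.0**((mu + 5.0) / 5.0) * 3.086e18       # pc -> cm
    return 4.0 * np.pi * d_cm**2 * flux

# Placeholder magnitudes near maximum light:
print(pseudo_bolometric([16.1, 16.2, 15.7, 15.4,
                         15.3, 15.0, 14.8, 14.6]))
\end{verbatim}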
\cite{valenti08a} developed a toy-model in order to compute a
first-order estimate of the main physical parameters that shape the
bolometric light curve \citep[see also][]{chatzopopoulos09}:
$M_{\mathrm{56Ni}}$, $M_{\mathrm{ej}}$ and $E_{\mathrm{k}}$. The toy-model is based on very
simple approximations \citep{arnett82,cappellaro97,clocchiatti97},
dividing the light curve into an optically thick (photospheric) and an
optically thin (nebular) phase. Adopting a photospheric velocity at
maximum of 11000 km\,s$^{-1}${} and an optical opacity of $k_{opt}$ = 0.06 cm$^2$\,g$^{-1}$, we
obtained the following parameters: $M_{\mathrm{56Ni}}$ = 0.23 $\pm$ 0.02
M$_{\odot}${}, $M_{\mathrm{ej}} = 5-7$ M$_{\odot}${} and $E_{\mathrm{k}} = 5.5-10.6 \times 10^{51}$ erg.
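As a rough consistency check (a sketch assuming the standard Arnett form
of the effective diffusion time, with integration constant
$\beta \simeq 13.8$), these parameters reproduce the observed rise time:
\[
\tau_m = \left(\frac{2\,k_{opt}\,M_{\mathrm{ej}}}{\beta\,c\,v_{\rm ph}}\right)^{1/2}
\simeq \left(\frac{2 \times 0.06 \times 1.2\times10^{34}}
{13.8 \times 3\times10^{10} \times 1.1\times10^{9}}\right)^{1/2} \mathrm{s}
\simeq 1.8\times10^{6}~\mathrm{s} \simeq 20~\mathrm{days}
\]
(in cgs units, for $M_{\mathrm{ej}} = 6$ M$_{\odot}$), close to the
observed 20.5-day rise.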
In Appendix~\ref{ap1} we report in more detail on the toy model as
applied to the case of SN 2009jf and other SNe Ib, along with some
caveats. The ejected mass we find for SN 2009jf is comparable with
those obtained for other massive SNe Ib (SN 2008D,
\citealt{mazzali08}, \citealt{tanaka09}; SN 1999dn,
\citealt{benetti11}) and for some massive and energetic SNe Ic. The
kinetic energy obtained is quite high. This is not surprising however,
since our toy-model also seems to overestimate the kinetic energy for
other SNe Ib. The true kinetic energy of SN 2009jf is likely close to
the lower edge of our range. Nevertheless the values we find for the
kinetic energy and ejected mass are consistent with those of
\cite{Sahu11}. The ejected nickel mass of \citeauthor{Sahu11} (0.17
$\pm$ 0.03 M$_{\odot}$) is slightly smaller than our value, most likely
because they did not include the infrared contribution in their
bolometric light curve\footnote{Their over-estimate of the radioactive
tail magnitude does not change their nickel mass estimate as this is
mainly based on the SN magnitude at maximum light.}.
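To illustrate why the peak magnitude dominates the nickel-mass estimate,
one can apply Arnett's rule, equating the peak luminosity to the
instantaneous radioactive input at the rise time $t_r \simeq 20.5$ d (a
sketch with the standard decay parameters
$\epsilon_{\rm Ni} = 3.9\times10^{10}$ and
$\epsilon_{\rm Co} = 6.8\times10^{9}$ erg\,s$^{-1}$\,g$^{-1}$,
$\tau_{\rm Ni} = 8.8$ d, $\tau_{\rm Co} = 111.3$ d):
\[
M_{\mathrm{56Ni}} \simeq
\frac{L_{\rm peak}}
{\epsilon_{\rm Ni}\,e^{-t_r/\tau_{\rm Ni}} +
 \epsilon_{\rm Co}\left(e^{-t_r/\tau_{\rm Co}} - e^{-t_r/\tau_{\rm Ni}}\right)}
\simeq \frac{4.2\times10^{42}~\mathrm{erg\,s^{-1}}}
{8.8\times10^{9}~\mathrm{erg\,s^{-1}\,g^{-1}}}
\simeq 0.24~\mathrm{M}_{\odot},
\]
in good agreement with the toy-model value quoted above.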
\section{Discussion}
In the last 5 years, good data-sets have been published for several
SNe Ib (SN 2007Y, \citealt{stritzinger09}; SN 2008D,
\citealt{Soderberg08,mazzali08,modjaz09,malesani09}; SN 2008ax,
\citealt{pastorello08a,Chornock10,taubenberger11}; SN 2009jf,
\citealt{Sahu11}, this work). These data allow us for the first time
to make comparisons between the different stripped-envelope SNe
subtypes. The broad light curves and the evidence that some SNe Ib
are quite luminous and energetic suggest that these explosions do
not always release the canonical 10$^{51}$ erg, as often assumed.
In fact, SN 2007Y, one of the faintest and least massive He-rich SNe
\citep{stritzinger09}, ejected $\sim$ 2 M$_{\odot}${} of material once the
He (1.5 M$_{\odot}$) and residual H (0.1 M$_{\odot}$) layers \citep{maurer10} are
taken into account\footnote{\protect\cite{stritzinger09} found that
excluding He and H, the ejected mass of SN 2007Y was $\sim$ 0.45
M$_{\odot}$.}. In this context, SN 2009jf is one of the most energetic
and massive SNe Ib known to date.
From Fig.~\ref{figbolo}, it is clear that, while some He-poor SNe
(e.g.\ SNe 1998bw, 2007gr) show an asymmetric peak (with a faster rise
than decay), all the He-rich SNe show a slow rise to maximum. In
Fig.~\ref{figrise} we show the rise times for a sample of
stripped-envelope SNe. In the sample of published SNe Ib we have tried
to include only SNe with reasonable information on the explosion epoch
(early discovery or good pre-discovery limit) and good data coverage.
SN 1983N has also been included because of the early $V$-band
discovery, even though the light curve coverage is not ideal.
Only two SNe Ic with published data have been monitored soon after
the explosion, although for some broad-lined SNe Ic the explosion
epoch is well constrained by the associated X-ray flash or Gamma Ray
Burst. We have excluded stripped-envelope SNe for which there is
evidence of circumstellar interaction, as interaction will increase
the luminosity of the supernova. We also excluded the new class of
ultra-bright SNe \citep{quimby09,pastorello10} that share some
similarities with SNe Ic \citep{pastorello10}, for which a flux
contribution to the light curve due to interaction has also been
proposed \citep{Blinnikov10}. These SNe show rise times of up to
$\sim$100 days, and may have an origin different to \emph{normal} Ic
SNe.
Even though the longer rise times for He-rich SNe are clearly apparent, we must
consider the possibility that this is a selection effect. SNe Ib are quite rare, and
slowly-evolving/rising SNe Ib are easier to discover at an earlier phase than fast
rising SNe Ib. However, our findings are confirmed by \cite{Drout10} who recently
presented an extended set of stripped-envelope light curves in which there are
no fast rising SNe Ib, and SN 1994I is the only fast evolving stripped-envelope SN.
\begin{figure}
\includegraphics[width=9cm,height=7cm]{valenti_fig20.eps}
\caption{The rise times of a sample of stripped-envelope
SNe. References for each SN:
-1998bw: \protect\cite[from GRB 980425, ][]{GRB980425};
-1994I: \protect\cite[modelling, ][]{sauer06};
-2003jd: \protect\cite[pre-explosion limit, ][]{valenti08a};
-1999ex: \protect\cite[early discovery, ][]{stritzinger02};
-2002ap: \protect\cite[modelling, ][]{mazzali02a};
-1997ef: \protect\cite[modelling, ][]{mazzali00a};
-2006aj: \protect\cite[from GRB060218, ][]{cusumano06};
-1993J: \protect\cite[early discovery, ][]{lewis94};
-2007gr: \protect\cite[good pre-explosion limit, ][]{valenti08b};
-2008D: \protect\cite[from swift detection, ][]{berger08};
-2008ax: \protect\cite[pre-discovery limit, ][]{pastorello08a};
-1996cb: \protect\cite[1 day from early discovery, ][]{Qiu99};
-1983N: \protect\cite[1 day from early discovery, ][]{Richtler83};
-2007Y: \protect\cite[1 day from early discovery, ][]{stritzinger09};
-2009bb: \protect\cite[good pre-explosion limit, ][]{pignata11}.}
\label{figrise}
\end{figure}
Recently, several fast evolving stripped-envelope SNe have been
discovered (or recovered in archival data), but unfortunately none of
them has been well covered in the premaximum phase (e.g. SN 2008ha,
\citealt{valenti09}, \citealt{foley09}; SN 2005E \citealt{Perets10};
SN 2002bj, \citealt{Poznanski10}; SN 2005cz, \citealt{Kawabata10}).
It is also highly debated whether they are core-collapse SNe,
thermonuclear explosions in a \emph{non-canonical} Ia scenario, or
the products of entirely new explosion channels.
If these fast evolving stripped-envelope SNe are indeed not
core-collapse explosions, we can conclude that, with the exception of
SN 1994I, all stripped-envelope SNe ejected at least 2 M$_{\odot}${} of
material.
The large ejected mass (3-7 M$_{\odot}$) and large oxygen
mass (the latter reported by \citealt{Sahu11}) of SN 2009jf
could be explained by the explosion of a 5-9 M$_{\odot}${} CO star (assuming
a 1.5 M$_{\odot}$ neutron star remnant). The WC population in the
LMC has pre-SN masses between 6-18 M$_{\odot}${} \citep{Crowther02}.
A 5-9 M$_{\odot}$ CO star could originate in a single massive progenitor
of $M_{ZAMS}$ $>$ 35 M$_{\odot}${} \citep{Georgy09,Eldridge04}
with radiatively driven mass loss, suggesting that a massive WR star
(with a low residual He mass) is plausible. Equally, a lower mass star
which has been stripped of its envelope through binary transfer is
also possible: the CO core mass of a 20 M$_{\odot}$ star is about 5 M$_{\odot}$
in the STARS models \citep{Eldridge04}. The analysis of the
progenitor environment cannot distinguish between these two, as the
age of the stellar population in the vicinity of the SN progenitor is
not constrained well enough to determine a star formation age.
The fact that the SN is very close to a compact star cluster suggests
that an age determination of that cluster could provide further
constraints, if UV data are obtained once the SN has faded.
The detection of relatively weak He at high velocity and the spectral
similarity with the He poor SN 2007gr suggest the presence of only a
small amount of He in the ejecta of SN 2009jf. However, this
conclusion needs to be confirmed by detailed spectral modelling of SN
2009jf.
The profile of [O\,{\sc i}] $\lambda\lambda${}6300,6364 is suggestive of an asymmetric
explosion with an off-axis dense core with clumps. Similar
conclusions can be drawn for the magnesium and calcium distribution by
comparing their line profiles to that of oxygen. While a similar
profile is expected for Mg\,{\sc i}{}] and O\,{\sc i}{} as they come from similar
regions in the ejecta, [Ca\,{\sc ii}] $\lambda\lambda${}7291,7324 should form in the
inner part of the ejecta where silicon is more abundant, or in the
outer layer of He \citep{fransson89}. The similar profile for the
three lines could indicate that mixing is an important factor in the
explosion of SN 2009jf.
\section{Summary}
In this paper we have presented optical and infrared photometry and
spectroscopy of SN 2009jf spanning from $\sim$ 20 days before $B$-band
maximum to one year after maximum.
We have shown that SN 2009jf is a slowly evolving, massive and
energetic stripped-envelope SN which retained only a small part of its
He layer at the moment of explosion. The SN exploded in a young
stellar environment, and the progenitor is likely a massive star $>
25-30$ M$_{\odot}${} as suggested by \cite{Sahu11}. Furthermore, the
similarity with the SN Ic 2007gr suggests a similar progenitor for at
least some SNe Ib and Ic. The nebular spectra of SN 2009jf are
consistent with an asymmetric explosion with an off-center dense core.
We have also shown that He-rich SNe appear to have longer rise times
than other stripped-envelope SNe. However, this should be treated as a
preliminary result, and needs to be verified with a larger sample of
stripped-envelope SNe, while carefully accounting for all possible
systematic biases.
\section*{Acknowledgements}
We thank the anonymous referee for helpful suggestions.
S.V. is grateful to H. Wang for hospitality at UCLA.
G.P. acknowledges support by the Proyecto FONDECYT 11090421. S.B.,
E.C., M.T.B., F.B., P.A.M. and M.T. are partially supported by the
PRIN-INAF 2009 with the project \emph{Supernovae Variety and
Nucleosynthesis Yields}. G.P., M.H. and J.M. acknowledge support
from the Millennium Center for Supernova Science through grant
P06-045-F funded by ``Programa Bicentenario de Ciencia y Tecnolog\'ia de CONICYT'',
``Programa Iniciativa Cient\'ifica Milenio de MIDEPLAN'',
from Centro de Astrof\'isica FONDAP 15010003 and by Fondecyt through grant
1060808 from the Center of Excellence in Astrophysics and Associated
Technologies (PFB 06). S.T. acknowledges support by the TRR 33 ``The
Dark Universe'' of the German Research Foundation.
S.M. acknowledges support from the Academy of Finland (project 8120503).
E.K. acknowledges financial support from the Finnish Academy of Science
and Letters (Vilho, Yrj\"{o} and Kalle V\"{a}is\"{a}l\"{a} Foundation).
This paper is based on observations made with the following
facilities: ESO Telescopes at the La Silla and Paranal Observatories
under programme IDs 184.D-1151, 085.D-0750 and 386.D-0126, the Italian
National Telescope Galileo (La Palma), the 1.82~m Copernico telescope
of the Asiago Observatory (Italy), the William Herschel (La Palma),
Liverpool Telescope (La Palma), Nordic Optical Telescope (La Palma),
AlbaNova Telescope (Sweden), Prompt telescopes (Chile), Calar Alto
(Spain).
We thank Genoveva Micheva for help with observations.
We are grateful to the staff at all of the telescopes for their assistance.
This paper makes use of data obtained from the Isaac Newton Group
Archive which is maintained as part of the CASU Astronomical Data
Centre at the Institute of Astronomy, Cambridge. Based on
observations made with the NASA/ESA Hubble Space Telescope, obtained
from the data archive at the Space Telescope Science Institute. STScI is
operated by the Association of Universities for Research in Astronomy,
Inc., under NASA contract NAS 5-26555.
We thank K. Itagaki for providing us his images of SN 2009jf.
This manuscript made use of information contained in the Bright
Supernova web pages (maintained by the priceless work of D. Bishop),
as part of the Rochester Academy of Sciences
(http://www.RochesterAstronomy.org/snimages).
This publication makes use of data products from the Two Micron All
Sky Survey, which is a joint project of the University of
Massachusetts and the Infrared Processing and Analysis
Center/California Institute of Technology, funded by the National
Aeronautics and Space Administration and the National Science
Foundation.
\section{Introduction}
The discovery of a SM-like Higgs boson in Run~I of the Large Hadron
Collider (LHC)~\cite{ATLASdiscovery,CMSdiscovery} marks a milestone in
the exploration of electroweak symmetry breaking (EWSB). Within
experimental and theoretical uncertainties, the properties of the new
particle are compatible with the Higgs boson of the Standard Model
(SM)~\cite{ATLAS-CMS-comb}. Looking beyond the SM, also the light
$\ensuremath{{\cal CP}}$-even Higgs boson of the Minimal Supersymmetric Standard Model
(MSSM)~\cite{mssm} is a perfect candidate, as it possesses
SM Higgs-like properties over a significant part of the model parameter
space with only small deviations
from the SM in the Higgs production and decay rates~\cite{Mh125}.
Here we will review~\cite{hifi2} that also the {\em heavy} $\ensuremath{{\cal CP}}$-even
Higgs boson of
the MSSM is a viable candidate to explain the observed signal at 125~GeV
(the ``heavy Higgs case'', which has been discussed in
\citeres{Mh125,Hagiwara:2012mga,Benbrik:2012rm,Drees:2012fb,Han:2013mga,hifi,hifi2}).
At lowest order, the Higgs sector of the MSSM can be fully specified in
terms of the
$W$~and $Z$~boson masses, $M_W$ and $M_Z$, the $\ensuremath{{\cal CP}}$-odd Higgs boson
mass, $M_A$, and $\tan \beta \equiv v_2/v_1$, the ratio of the two neutral Higgs
vacuum expectation values. However, higher-order corrections are crucial for a
precise prediction of the MSSM Higgs boson properties and introduce dependences on other model parameters, see e.g.\
\citeres{mhiggsAWB,habilSH,PomssmRep} for reviews.
In the heavy Higgs case all five MSSM Higgs bosons
are relatively light, and in particular the lightest $\ensuremath{{\cal CP}}$-even
Higgs boson has a mass (substantially) smaller than $125\,\, \mathrm{GeV}$
with suppressed couplings to gauge bosons. We review whether
the heavy Higgs case in the MSSM can still provide a good theoretical
description of the current experimental data, and
which parts of the parameter space of the MSSM are favored.
We also discuss the newly defined benchmark scenarios in which this
possibility is realized, in agreement with all current Higgs constraints.
\section{Theoretical basis}
\label{sec:theory}
In the supersymmetric extension of the SM, an even number of Higgs
multiplets consisting of pairs of Higgs doublets with opposite
hypercharge is required to avoid anomalies due to the supersymmetric
Higgsino partners. Consequently the MSSM employs two Higgs doublets,
denoted by $H_1$ and $H_2$, with hypercharges $-1$ and $+1$,
respectively. After minimizing the scalar potential, the neutral
components of $H_1$ and $H_2$ acquire vacuum expectation values (vevs),
$v_1$ and $v_2$. Without loss of generality, one can assume that the
vevs are real and non-negative, yielding
\begin{align}
v^2\equiv v_1^2+v_2^2\simeq(246~{\rm GeV})^2\,, \quad
\tan \beta\equiv v_2/v_1\,.
\end{align}
The two-doublet Higgs sector gives rise to five physical Higgs states.
Neglecting $\ensuremath{{\cal CP}}$-violating phases, the mass eigenstates correspond to the
neutral $\ensuremath{{\cal CP}}$-even Higgs bosons $h$, $H$ (with $M_h<M_H$), the $\ensuremath{{\cal CP}}$-odd
$A$, and the charged Higgs pair $H^\pm$.
At lowest order, the MSSM Higgs sector is fully described by
$M_Z$ and two MSSM parameters, conveniently chosen as $M_A$, and $\tan \beta$.
Higher order corrections to the Higgs masses are known to be sizable
and must be included, in order to be consistent with the observed Higgs
signal at $125 \,\, \mathrm{GeV}$~\cite{ATLAS-CMS-comb}.
In order to shift the mass of $h$ up to $125 \,\, \mathrm{GeV}$,
large radiative corrections are necessary, which
require a large splitting in the stop sector and/or heavy stops.
The stop (sbottom) sector is governed by the soft SUSY-breaking mass
parameters $M_{\tilde{t}_L}$ and $M_{\tilde{t}_R}$ ($M_{\tilde{b}_L}$ and $M_{\tilde{b}_R}$), where SU(2) gauge
invariance requires $M_{\tilde{t}_L}=M_{\tilde{b}_L}$,
the trilinear coupling $A_t$ ($A_b$) and the Higgsino mass parameter $\mu$.
The ``heavy Higgs case'', i.e.\ the case in which the heavy $\ensuremath{{\cal CP}}$-even
Higgs boson gives rise to the signal observed at $125 \,\, \mathrm{GeV}$, can
{\em only} be realized in the \emph{alignment without decoupling limit}.
In the so-called \textit{Higgs basis} (see \citere{hifi2} for details
and citations), the scalar Higgs potential in terms of the Higgs basis
fields $\mathcal{H}_1$ and~$\mathcal{H}_2$, can be expressed as
\begin{align}
{\cal V} = \ldots + \tfrac{1}{2} Z_1 ({\cal H}_1^\dagger{\cal H}_1)^2 + \ldots +
\left[
Z_5 ({\cal H}_1^\dagger{\cal H}_2)^2 +
Z_6 ({\cal H}_1^\dagger{\cal H}_1)({\cal H}_1^\dagger{\cal H}_2) + {\rm h.c.} \right]
+ \ldots\,,
\end{align}
where the most important terms of the scalar potential are highlighted
above. The quartic couplings $Z_1$, $Z_5$ and $Z_6$ are linear
combinations of the quartic couplings that appear in the MSSM Higgs
potential expressed in terms of $H_1$ and $H_2$.
The $Z_i$ are $\mathcal{O}(1)$ parameters.
The mass matrix of the neutral $\ensuremath{{\cal CP}}$-even Higgs bosons is then given by
\begin{align}
{\cal M}^2 = \left( \begin{array}{cc} Z_1 v^2 & Z_6 v^2 \\
Z_6 v^2 & M_A^2 + Z_5 v^2 \end{array} \right)\,.
\label{HiBa-massmatrix}
\end{align}
The \emph{alignment without decoupling limit} is reached for
$|Z_6|\ll 1$. In this case $h$ is SM-like if $M_A^2+(Z_5-Z_1)v^2>0$ and
$H$ is SM-like if $M_A^2+(Z_5-Z_1)v^2<0$: the ``heavy Higgs case''.
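This can be read off directly from Eq.~(\ref{HiBa-massmatrix}): for
$Z_6 = 0$ the mass matrix is diagonal, and the state residing in
${\cal H}_1$, which carries the full vev and hence has SM-like
couplings, has squared mass $Z_1 v^2$, while the other state has
$M_A^2 + Z_5 v^2$. The SM-like state is therefore the heavier of the
two, i.e.\ $H$, precisely when
\begin{align}
Z_1 v^2 > M_A^2 + Z_5 v^2
\quad \Leftrightarrow \quad
M_A^2 + (Z_5 - Z_1)\, v^2 < 0\,.
\end{align}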
The possibility of alignment without decoupling has been analyzed in detail in
Refs.~\cite{Gunion:2002zf,Craig:2013hca,Carena:2013ooa,Haber:2013mia,Carena:2014nza,Dev:2014yca,Bernon:2015qea,Bernon:2015wef} (see also the
``$\tau$-phobic'' benchmark scenario in \citere{Carena:2013ytb}). It was
pointed out that exact alignment via $|Z_6| \ll 1$ can only happen through an
accidental cancellation of the tree-level terms with contributions arising
at the one-loop level (or higher).
\section{Parameter scan and observables}
The results shown below have been obtained by scanning the MSSM
parameter space. To achieve a good sampling of the full MSSM parameter
space with \order{10^7} points, we restrict ourselves to the eight
MSSM parameters, called the pMSSM\,8,
\begin{align}
\tan \beta, \quad M_A, \quad M_{\tilde{q}_3}, \quad A_f, \quad \mu, \quad M_{\tilde{\ell}_3}, \quad M_{\tilde{\ell}_{1,2}}, \quad M_2\,,
\label{Eq:fitparameters}
\end{align}
most relevant for the phenomenology of the Higgs sector. Here $\mu$ denotes
the Higgs mixing parameter, $M_{\tilde{\ell}_3}$ ($M_{\tilde{\ell}_{1,2}}$) denotes the diagonal soft
SUSY-breaking parameter for scalar leptons in the third (second and
first) generation, and $M_2$ denotes the SU(2) gaugino soft
SUSY-breaking parameter.
The scan assumes furthermore that the third generation squark and
slepton parameters are universal. That is, we take
$M_{\tilde{q}_3} := M_{\tilde{t}_L} (= M_{\tilde{b}_L}) = M_{\tilde{t}_R} = M_{\tilde{b}_R}$,
$M_{\tilde{\ell}_3} := M_{\tilde{\tau}_L} = M_{\tilde{\tau}_R} = M_{\tilde \nu_\tau}$
and $A_f := A_t = A_b = A_\tau$.
The remaining MSSM parameters are fixed,
\begin{align}
M_{\tilde{q}_L} = M_{\tilde{q}_R}~(q = c, s, u, d) \; &= \; 1500 \,\, \mathrm{GeV}, \\
M_3 = m_{\tilde{g}} &= 1500 \,\, \mathrm{GeV}\,.
\end{align}
The high values for the squark and gluino mass parameters, which have a
minor impact on the Higgs sector, are chosen in order to be in agreement
with the limits from direct SUSY searches. The U(1) gaugino mass
parameter is fixed via the usual GUT relation.
The pMSSM\,8\ parameter space is scanned with uniformly distributed random
values in the eight input parameters
over the parameter ranges given in \refta{tab:param}.
\begin{table}[h!]
\centering
\begin{tabular}{|r|cc|}
\hline
Parameter & Minimum & Maximum \\
\hline
$M_A$ [GeV] & 90 & 200 \\
$\tan \beta$ \phantom{[GeV]} &1 & 20 \\
$M_{\tilde{q}_3}$ [GeV] & 200 & 1500 \\
$M_{\tilde{\ell}_3}$ [GeV] & 200 & 1000 \\
$M_{\tilde{\ell}_{1,2}}$ [GeV] & 200 & 1000 \\
$\mu$ [GeV] & $-5000$ & 5000 \\
$A_f$ [GeV] & $-3\,M_{\tilde{q}_3}$ & $3\,M_{\tilde{q}_3}$\\
$M_2$ [GeV] & 200 & 500 \\
\hline
\end{tabular}
\caption{Ranges used for the free parameters in the pMSSM\,8\ scan.}
\label{tab:param}
\end{table}
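Such a flat scan is straightforward to implement. The following is a
minimal sketch (illustrative only; the actual analysis feeds each point
into {\tt FeynHiggs} and the other codes described below, and uses
\order{10^7} points), with the $A_f$ range tied to the sampled
$M_{\tilde{q}_3}$ as in \refta{tab:param}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def draw_pmssm8_point():
    """One pMSSM-8 point, drawn uniformly over the scan ranges."""
    Mq3 = rng.uniform(200.0, 1500.0)             # GeV
    return {
        'MA':   rng.uniform(90.0, 200.0),        # GeV
        'tanb': rng.uniform(1.0, 20.0),
        'Mq3':  Mq3,
        'Ml3':  rng.uniform(200.0, 1000.0),      # GeV
        'Ml12': rng.uniform(200.0, 1000.0),      # GeV
        'mu':   rng.uniform(-5000.0, 5000.0),    # GeV
        'Af':   rng.uniform(-3.0*Mq3, 3.0*Mq3),  # range depends on Mq3
        'M2':   rng.uniform(200.0, 500.0),       # GeV
    }

points = [draw_pmssm8_point() for _ in range(10**4)]
\end{verbatim}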
We calculate the SUSY particle spectrum and the MSSM Higgs masses using
{\tt FeynHiggs}\ (version 2.11.2)%
\footnote{
Recent updates in the Higgs boson mass calculations~\cite{FHwww} lead to a
downward shift in $M_h$, in particular for large values of $X_t/M_S$. These
changes lie within the estimated uncertainties and should not have a
drastic impact on our analysis.}%
~\cite{FHwww,feynhiggs,mhiggsAEC,mhcmssmlong},
and estimate the remaining theoretical uncertainty (e.g.~from unknown higher-order corrections) in the Higgs mass calculation
to be $3\,\, \mathrm{GeV}$~\cite{mhiggsAEC}.
Following \citeres{Benbrik:2012rm,hifi}, we demand that all points
fulfill a $\matr{Z}$-matrix criterion,
$\left||Z_{21}^{\mathrm{2L}}| - |Z_{21}^{\mathrm{1L}}|\right|/
|Z_{21}^{\mathrm{1L}}|<0.25$
in order to ensure a reliable and stable perturbative behavior in the
calculation of propagator-type contributions in the MSSM Higgs sector.
The $\matr{Z}$-matrix definition and details can be found in
\citere{mhcmssmlong}.
The observables included in the fit are the Higgs-boson mass, the Higgs
signal rates (evaluated with
{\tt HiggsSignals}~\cite{higgssignals}),
Higgs exclusion bounds from LEP, Tevatron and the LHC (evaluated with
{\tt HiggsBounds}~\cite{higgsbounds}),
SUSY exclusion bounds from the LEP and the LHC (the latter evaluated with
{\tt CheckMate}~\cite{checkmate}),
and several low-energy observables (LEOs): \ensuremath{{\rm BR}(B \to X_s \gamma)}, \ensuremath{{\rm BR}(B_s \to \mu^+\mu^-)}\ and \ensuremath{{\rm BR}(B_u \to \tau \nu_\tau)}\ (evaluated with
{\tt SuperIso}~\cite{superiso}),
\ensuremath{(g-2)_\mu} (evaluated with {\tt SuperIso} and {\tt FeynHiggs}), and
$M_W$ (with an evaluation based on \citere{mw}). The total $\chi^2$ is
evaluated as (see \citere{hifi2} for more details),
\begin{align}
\chi_H^2 &= \frac{(M_{H}-\hat{M}_H)^2}{\sigma_{\hat{M}_H}^2}
+ \chi^2_\text{HS}
+\sum_{i=1}^{n_{\mathrm{LEO}}} \frac{(O_i-\hat{O}_i)^2}{\sigma_i^2}
- 2\ln\mathcal{L}_\text{limits}~,
\label{eq:totchi2}
\end{align}
where experimental measurements are denoted with a hat.
\section{Results for the ``heavy Higgs case''}
Based on the above described $\chi^2$ evaluation the best-fit point,
shown as a star below, and the preferred parameter regions are
derived. Points with $\Delta\chi_H^2 < 2.30~(5.99)$ are highlighted in
red (yellow), corresponding to points in a two-dimensional ${68}\%$
($95\%$) C.L.~region in the Gaussian limit. The best fit point has
a $\chi^2/$dof of $73.7/85$, corresponding to a $p$-value of $0.87$,
i.e.\ the heavy Higgs case presents an excellent fit to the experimental
data~\cite{hifi2}.
In \reffi{fig:Hrates_corr}~\cite{hifi2} we review the correlations for
the heavy Higgs signal rates,
\begin{align}
R_{XX}^{P(H)} = \frac{\sum_{P(H)} \sigma(P(H)) \times {\rm BR}(H\to XX)}{\sum_{P(H)} \sigma_\mathrm{SM}(P(H)) \times {\rm BR}_\mathrm{SM}(H\to XX)}.
\label{Eq:Rvalues}
\end{align}
Here $XX= VV, \gamma\gamma, bb, \tau\tau$ (with $V=W^\pm,Z$) denotes the
final state from the Higgs decay and $P(H)$
denotes the Higgs production mode. It can be seen that the heavy Higgs
case can reproduce the SM case ($R_{XX}^{P(H)} = 1$), but also allows
for some spread, in particular in $R_{\tau\tau}^{H}$.
\begin{figure}[htb!]
\includegraphics[width=0.46\columnwidth]{HH_tot_Rhgaga_RhVV}\hspace{0.5cm}
\includegraphics[width=0.46\columnwidth]{HH_tot_Rhgaga_RVhbb}\\
\includegraphics[width=0.46\columnwidth]{HH_tot_RhVV_Rhtautau}\hspace{0.5cm}
\includegraphics[width=0.46\columnwidth]{HH_tot_RVhbb_Rhtautau}
\caption{Correlations between signal rates for the
heavy Higgs case. The best-fit point is shown as a black star;
points with $\Delta \chi_H^2 < 2.3$ are shown in \emph{red},
and points with $\Delta \chi_H^2 < 5.99$ in \emph{yellow}. }
\label{fig:Hrates_corr}
\end{figure}
\begin{figure}[htb!]
\includegraphics[width=0.46\columnwidth]{HH_tot_TB_MA0}\hspace{0.5cm}
\includegraphics[width=0.46\columnwidth]{HH_tot_XtMS_mstop1}
\caption{$M_A$-$\tan \beta$ plane (left) and $X_t/M_S$-$m_{\tilde{t}_1}$ plane (right)
in the heavy Higgs case. The color coding is as in Fig.~1.}
\label{fig:MAtb}
\end{figure}
The MSSM parameter space for the heavy Higgs scenario is shown in
\reffi{fig:MAtb}. The left plot indicates the preferred regions in the
$M_A$-$\tan \beta$ plane, where one can see that
$140 \,\, \mathrm{GeV} \lsim M_A \lsim 185 \,\, \mathrm{GeV}$ must be fulfilled, while $\tan \beta$
ranges between $\sim 6$ and $\sim 11$. The right plot shows the
preferred regions in the $X_t/M_S$-$m_{\tilde{t}_1}$ plane. Here the heavy Higgs
case makes a clear prediction with $300 \,\, \mathrm{GeV} \lsim$ $m_{\tilde{t}_1} \lsim 650 \,\, \mathrm{GeV}$
and $X_t/M_S \sim -1.5$. Some properties of the light $\ensuremath{{\cal CP}}$-even
Higgs boson are shown in \reffi{fig:h}. The left plot shows the light
Higgs boson coupling to massive gauge bosons relative to the SM
value. One can see that the coupling squared is suppressed by a factor
of 1000 or more, rendering its discovery via $e^+e^- \to Z^* \to Zh$
at LEP impossible~\cite{LEPHiggsSM,LEPHiggsMSSM}. The right plot gives
the ${\rm BR}(H \to hh)$ for $M_h \lsim M_H/2$. Here it is shown that the
BR does not exceed 20\%, and thus does not distort the coupling
measurements of the heavy Higgs at $\sim 125 \,\, \mathrm{GeV}$ too
much~\cite{ATLAS-CMS-comb}.
\begin{figure}[htb!]
\includegraphics[width=0.46\columnwidth]{HH_tot_Mh1_ghVV2}\hspace{0.5cm}
\includegraphics[width=0.46\columnwidth]{HH_tot_BRHH_hh_Mh}
\caption{$g_{hVV}^2$ (relative to the SM value)
(left) and ${\rm BR}(H \to hh)$ as a function of $M_h$ (right)
in the heavy Higgs case. The color coding is as in Fig.~1.}
\label{fig:h}
\end{figure}
\section{Updated benchmark scenarios}
In \citere{hifi2} an updated set of benchmarks for the heavy Higgs
case was presented, superseding the experimentally excluded
low-$M_H$\ scenario~\cite{Carena:2013ytb}. The parameters of the three
new benchmark scenarios are given in \refta{tab:benchmarks}.
The low-$M_H^{\rm alt-}$\ (low-$M_H^{\rm alt+}$) scenario is defined in the $\mu$-$\tan \beta$
plane with $M_{H^\pm} < (>) m_t$, while the low-$M_H^{\rm alt\,v}$\ scenario has a
fixed $\mu$ in the $M_{H^\pm}$-$\tan \beta$ plane.
\begin{table}[h!]
\centering
\begin{tabular}{lccc}
\hline
Benchmark scenario & $M_{H^\pm}~[\mathrm{GeV}]$ & $\mu~[\mathrm{GeV}]$ & $\tan \beta$ \\
\hline
low-$M_H^{\rm alt-}$ & $155$ & $3800$ -- $6500$ & $4$ -- $9$ \\
low-$M_H^{\rm alt+}$ & $185$ & $4800$ -- $7000$ & $4$ -- $9$ \\
low-$M_H^{\rm alt\,v}$ & $140$ -- $220$ & $6000$ & $4$ -- $9$ \\
\hline
fixed parameters: &\multicolumn{3}{l}{$m_t = 173.2\,\, \mathrm{GeV}$, \quad $A_t = A_\tau = A_b = -70\,\, \mathrm{GeV}$,\quad $M_2 = 300 \,\, \mathrm{GeV}$,}\\
&\multicolumn{3}{l}{$M_{\tilde{q}_L} = M_{\tilde{q}_R} = 1500 \,\, \mathrm{GeV}$~($q = c, s, u, d$),\quad $m_{\tilde{g}} = 1500 \,\, \mathrm{GeV}$,} \\
&\multicolumn{3}{l}{$M_{\tilde{q}_3} = 750\,\, \mathrm{GeV}$,\quad$M_{\tilde{\ell}_{1,2}} = 250 \,\, \mathrm{GeV}$,\quad $M_{\tilde{\ell}_3} = 500 \,\, \mathrm{GeV}$} \\
\hline
\end{tabular}
\caption{Parameters of the updated low-$M_H$\ benchmark scenarios, see
\citere{hifi2} for more details.
The lower row gives the fixed parameters that are common to all three
benchmark scenarios,
and $M_1 = \tfrac{5}{3} \tfrac{s_\mathrm{w}^2}{c_\mathrm{w}^2} M_2$.}
\label{tab:benchmarks}
\end{table}
The experimentally allowed parameter space in the three benchmark
scenarios is shown in \reffi{fig:lowMH}.%
\footnote{In the evaluation of these plots the two-loop corrections to
the $M_A$-$M_{H^\pm}$ mass relation had been omitted. Taking them into
account will lead to a slight shift of the $M_h$ contour lines.}%
~The red,
orange and blue regions are disfavoured at the \CL{95\%} by LEP light
Higgs $h$ searches~\cite{LEPHiggsMSSM}, LHC $H/A\to \tau^+\tau^-$
searches~\cite{Khachatryan:2014wca,CMS:2015mca} and LHC $t\to
H^+b\to(\tau\nu)b$ searches~\cite{Aad:2014kga,Khachatryan:2015qxa},
respectively. The green area indicates parameter regions that are
compatible with the Higgs signal (at $\sim$ \CL{95\%}, see \citere{hifi2} for
details); unphysical regions are displayed in gray.
Contour lines indicate the Higgs masses $M_h$ and $M_H$ (in
GeV).
\begin{figure}[ht!]
\centering
\includegraphics[width=0.45\columnwidth]{HH3mutanb_MHMh}\hfill
\includegraphics[width=0.45\columnwidth]{HH1mutanb_MHMh}
\includegraphics[width=0.45\columnwidth]{HH2Mhptanb_MHMh}
\caption{The low-$M_H^{\rm alt-}$\ and low-$M_H^{\rm alt+}$\ benchmark scenarios in
the $\mu$-$\tan \beta$ plane with $M_{H^\pm} = 155\,\, \mathrm{GeV}$ (\emph{upper left}),
and with $M_{H^\pm} = 185\,\, \mathrm{GeV}$ (\emph{upper right}),
and the low-$M_H^{\rm alt\,v}$\ benchmark scenario in the $M_{H^\pm}$-$\tan \beta$ plane
with $\mu = 6000 \,\, \mathrm{GeV}$ in the lower row. For the color coding and
line styles see text.
}
\label{fig:lowMH}
\end{figure}
While being ``squeezed'' by different searches, \reffi{fig:lowMH}
shows that the heavy Higgs case remains a valid option with the
interesting feature of a light $\ensuremath{{\cal CP}}$-even Higgs {\em below} $125 \,\, \mathrm{GeV}$. We
hope that the new benchmark scenarios facilitate the search for these
light Higgs bosons as well as for the heavier, not yet discovered Higgs bosons
in Run~II.
\newpage
\section{Conclusions}
We have briefly reviewed the case that the Higgs boson observed at
$\sim 125 \,\, \mathrm{GeV}$ is the heavy $\ensuremath{{\cal CP}}$-even Higgs boson in the MSSM, as
recently analyzed in \citere{hifi2}. The analysis uses an
eight-dimensional MSSM parameter scan to find the regions in the
parameter space that fit best the experimental data. It was found that
the rates of the heavy $\ensuremath{{\cal CP}}$-even Higgs boson are close to the SM rates,
but can still differ by 20\% or more while yielding a good fit. Parameters
such as $M_A$, $\tan \beta$ or $m_{\tilde{t}_1}$ are confined to relatively small
intervals, making clear predictions for Higgs and SUSY searches. The light
$\ensuremath{{\cal CP}}$-even Higgs boson escaped the LEP searches via a tiny coupling to
SM gauge bosons, and the decay $H \to hh$ is sufficiently suppressed
not to impact too strongly the heavy Higgs boson rates. Three new
benchmark scenarios have been reviewed that have been defined to
facilitate the experimental searches at the LHC Run~II.
\subsection*{Acknowledgements}
I thank
P.~Bechtle,
H.~Haber,
O.~St{\aa}l,
T.~Stefaniak,
G.~Weiglein and
L.~Zeune, with whom the results shown here have been derived.
I~furthermore thank S.~Pa\ss ehr for helpful discussions.
I~thank the organizers of C$H^{\mbox{}^\pm}$\hspace{-2.5mm}arged 2016
for the invitation and the, as always, pleasant and productive
atmosphere, as well as for financial support.
The work of S.H.\ is supported in part by CICYT
(grant FPA 2013-40715-P) and by the Spanish MICINN's Consolider-Ingenio
2010 Program under grant MultiDark CSD2009-00064.
\section{Introduction}\label{intro}
Intense theoretical effort is currently devoted to the understanding of the Casimir
effect for real experimental setups. This involves the impact of temperature,
finite conductivity, and engineered materials,
and may identify routes to \emph{design} the final Casimir pressure.
Almost all analyses rely on the Lifshitz formula \cite{Lifshitz56, Klimchitskaya09}
where the physical properties of the material are encoded in the scattering
amplitudes (i.e., reflection coefficients in planar geometries).
Their evaluation at imaginary frequencies obscures, however, how the material
objects modify the modes of the electromagnetic field.
A `sum over modes' approach is nevertheless possible, even if the
eigenfrequencies
$\omega_{m}$ are complex (due to material absorption, for example).
For two objects at distance $L$ the Casimir energy at zero temperature can
be written as \cite{Intravaia08}
\begin{equation}
\label{eq:Casimir-Diss}
E = \frac{\hbar}{2}
\sideset{}{'}\sum_{p,\mathbf{k}} {\rm Re}\,\Big[\sum_{m}
\big(\omega_{m}-
\frac{2\imath\omega_{m}}{\pi}
\ln\frac{\omega_{m}}{\Lambda}
\big)\Big]^{L}_{\infty}
, \qquad
\im{\Big[\sideset{}{'}\sum_{p,\mathbf{k},m}
\omega_{m}\Big]_{\infty}^{L}}=0
\end{equation}
where the prime indicates that purely imaginary eigenfrequencies are weighted
with $1/2$. Eq.(\ref{eq:Casimir-Diss}) generalizes Casimir's formula for the
vacuum energy between two perfect reflectors \cite{Casimir48} and is valid for generic
(causal) mirrors with arbitrary thickness. Note that one does not simply take
real parts of the complex eigenfrequencies, as suggested some time
ago\cite{Langbein70} (see also Ref.\refcite{Sernelius06}).
The logarithmic correction in Eq.(\ref{eq:Casimir-Diss}) is consistent with
the `system+bath' paradigm that describes the thermodynamics
of quantum dissipative systems\cite{Weiss08}.
In this context, the frequency scale $\Lambda$ is interpreted as the cutoff
frequency of the bath spectral density. The Casimir energy does not depend
on this constant because of the sum rule in \eqref{eq:Casimir-Diss}.
The sum-over-modes approach
provides an `anatomic view' of the Casimir effect where contributions from
different modes are clearly identified. This is useful to understand
unusual behaviours and may suggest new ways to
tailor the Casimir force\cite{Intravaia05,Intravaia07,Intravaia09}.
In the following, we illustrate Eq.\eqref{eq:Casimir-Diss} with the help of
a few examples.
\section{Dissipative Plasmons at short distance}
One of the most interesting contributions to the Casimir force originates from
surface modes bound to the vacuum/medium interface\cite{Barton79}. These
modes have a dispersion relation that splits into two branches,
$\omega = \Omega_\pm( k )$, as the two surfaces are brought close together.
Substituting these frequencies in Eq.\eqref{eq:Casimir-Diss}, we get a
plasmonic contribution to the Casimir energy ($A$: surface area)
\begin{equation}
E_{\rm pl} = \frac{\hbar A }{2}
\int\!\frac{ k {\rm d}k }{ 2\pi }
{\rm Re}\,\Big[\sum_{i=\pm}
\big(\Omega_{i}(k) -
\frac{2\imath\Omega_{i}(k)}{\pi}
\ln\frac{\Omega_{i}(k)}{\Lambda}
\big)\Big]^{L}_{\infty}
\label{smalldiss1}
\end{equation}
Consider the case of two metals
at a distance smaller than the plasma wavelength
$\lambda_{\rm pl} = 2\pi c/\omega_{\rm pl}$. We are then in the
quasi-electrostatic regime, and the surface plasmon modes are given
by\cite{Economou69} (red and blue points in Fig.\ref{BranchCut})
\begin{equation}
\Omega_{\pm} =
\sqrt{\omega^2_{\pm}-\frac{\gamma^2}{4}}
- \imath\frac{\gamma}{2}
,\qquad
\omega^2_{\pm} = \frac{ \omega^2_{\rm pl} }{ 2 }
\left(1\pm e^{ -kL}\right)
\end{equation}
where $\gamma$ is the damping rate in a Drude description of the metal.
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{Paths}
\caption{(Left) Complex eigenfrequencies in the parallel plate geometry, for a fixed
wavevector $k$ (not to scale).
Red and blue points: dissipative surface plasmons.
Red line: bulk continuum of eddy currents. Black crosses: propagating
modes in the cavity between the plates.
(Right)
A counter-clockwise path around the eddy current continuum is equivalent
to a clockwise path around the whole complex plane, encircling all other
modes.}
\label{ChangePath}
\label{BranchCut}
\end{figure}
One can easily check that the sum rule
in Eq.\eqref{eq:Casimir-Diss} is automatically satisfied.
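Indeed, ${\rm Im}\,\Omega_{\pm} = -\gamma/2$ independently of $k$ and of
the distance $L$, so the imaginary parts cancel identically in the
difference $[\,\cdot\,]^{L}_{\infty}$ entering Eq.\eqref{eq:Casimir-Diss}.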
To leading order in $\gamma\ll \omega_{\rm pl}$ (good conductors) Eq.\eqref{smalldiss1} yields
\begin{equation}
E_{\rm pl} \approx
-\frac{\pi^2 \hbar c A}{720 L^3}
\frac{3}{2}\left(\alpha \frac{L}{\lambda_{\rm pl}}
-
\frac{15\zeta(3)}{\pi^4}
\frac{ \gamma L}{ c }
\right)
,\qquad
\alpha=1.193\ldots
\label{eq:short-distance-expansion}
\end{equation}
where $\zeta(3) \approx 1.202$ is the Riemann zeta function evaluated at 3.
This corresponds exactly to the total Casimir force calculated in
Ref.\refcite{Henkel04}, including the dissipative correction.
In fact, in this short distance limit, the Casimir energy
is completely dominated by the plasmonic
contribution\cite{Kampen68,Gerlach71,Henkel04}.
Eq.\eqref{smalldiss1} is valid
also beyond the good conductor limit, however, and could be used, e.g., to
analyze semiconductors where surface plasmons appear in a different
frequency range and can have much stronger damping.
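A simple rearrangement of Eq.\eqref{eq:short-distance-expansion}
quantifies this (using $\lambda_{\rm pl} = 2\pi c/\omega_{\rm pl}$): the
ratio of the dissipative term to the leading one is
\begin{equation}
\frac{15\zeta(3)}{\pi^{4}}\,\frac{\gamma L}{c}
\,\Big/\, \alpha\frac{L}{\lambda_{\rm pl}}
= \frac{30\,\zeta(3)}{\pi^{3}\alpha}\,\frac{\gamma}{\omega_{\rm pl}}
\approx \frac{\gamma}{\omega_{\rm pl}},
\end{equation}
independent of $L$: small for good conductors, but potentially sizable
for strongly damped surface plasmons.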
\section{Eddy currents}
As a second example, consider the contribution from eddy current modes.
They are connected with low-frequency currents that satisfy a diffusion
equation in the conducting metal\cite{Jackson75} and are completely
absent within the lossless description of the so-called plasma
model\cite{Klimchitskaya09}.
We have analyzed these
modes recently\cite{Intravaia09} and constructed their quantum
thermodynamics from the `system+bath' paradigm. They behave like free Brownian
particles, since the eigenfrequencies of bulk eddy currents are purely
imaginary $\omega_{m}= - \imath \xi_{m}$ ($\xi_{m}>0$). From
Eq.\eqref{eq:Casimir-Diss}, we get the Casimir energy
\begin{equation}
\label{eq:Eddy}
E_{\rm eddy} = -\sum_{p,\mathbf{k}} \,\Big[\sum_{m}
\frac{\hbar\xi_{m}}{2\pi}
\ln\frac{\xi_{m}}{\Lambda}
\Big]^{L}_{\infty}
\end{equation}
For these modes alone,
the sum rule [Eq.\eqref{eq:Casimir-Diss}] is not satisfied, and the
eddy current contribution to the Casimir energy depends on the cutoff
$\Lambda$. This is also well-known from quantum Brownian motion where
bath modes up to $\Lambda$ are entangled to the particle.
Mathematically, eddy currents form a mode continuum that can be identified
in the complex frequency plane from the branch cut of the root
$k_{m}=\sqrt{\epsilon(\omega)\omega^{2}/c^{2} - k^{2}}$
which describes the propagation of the electromagnetic field inside the
medium. For a Drude metal, the cut is located between
$\omega_{m} = -\imath\xi_{0}(\mathbf{k})
\approx -\imath D k^{2}$ (for $k \ll \omega_{\rm pl} / c$)
and $\omega_{m} = -\imath\gamma$ (see Fig. \ref{BranchCut}),
where $D=\gamma(\lambda_{\rm pl}/2\pi)^2$ is the electromagnetic
diffusion constant.
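To attach numbers to these scales (an illustrative estimate with typical
Drude parameters for gold, $\hbar\omega_{\rm pl} \approx 9\,$eV and
$\hbar\gamma \approx 35\,$meV, rather than values from a specific
experiment):
\begin{equation}
D = \gamma \Big(\frac{c}{\omega_{\rm pl}}\Big)^{2}
\approx 5.3\times10^{13}\,{\rm s}^{-1}\times(22\,{\rm nm})^{2}
\approx 2.5\times10^{-2}\,{\rm m}^{2}/{\rm s},
\end{equation}
and the cut extends up to $\xi = \gamma \approx
5.3\times10^{13}\,{\rm s}^{-1}$; note that $\xi_{0} \approx D k^{2}$
stays below $\gamma$ precisely for $k \ll \omega_{\rm pl}/c \approx
(22\,{\rm nm})^{-1}$.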
We get the $L$-dependent change in the mode density along the branch cut
by applying the logarithmic argument theorem to the Green function of
the electromagnetic field. Using the contour sketched in Fig.\ref{BranchCut}(left),
it is possible to show that Eq.\eqref{eq:Eddy} can be written as
\begin{equation}
\label{eq:zero-final}
E_{\rm eddy} = \int_{0}^{\infty}\!\frac{{\rm d}\xi}{\pi}
\sum_{p,\mathbf{k}}\
\partial_{\xi}\Big(
\frac{\hbar\xi}{2\pi}\ln \frac{\xi}{\Lambda}
\Big)
\im{ \ln \left[1-r_{p}^{2}(-\imath \xi-0^{+})e^{-2\kappa L}\right]},
\end{equation}
with $\kappa=\sqrt{\xi^{2}+k^{2}}$ and $r_{p}$ the reflection coefficient of
the mirrors in polarization $p={\rm TE,TM}$. This gives rise to a repulsive Casimir
force (Fig.~1 of Ref.~\refcite{Intravaia09}),
provided $\Lambda$ is sufficiently large, e.g., $\Lambda \ge \gamma$.
The structure of Eq.\eqref{eq:zero-final}
allows for an immediate translation to the high-temperature (classical) limit.
Replacing the zero-point energy with the classical free energy per mode,
$k_{B}T \ln (\hbar \xi / k_B T)$, we get
\begin{equation}
\label{eq:high-temp}
\mathcal{F}_{\rm eddy}\approx
- \int_{0}^{\infty}\frac{d\xi}{\pi}\sum_{p,\mathbf{k}}\
\frac{k_{B}T}{\xi} \im{ \ln \left[1-r_{p}^{2}(-\imath \xi-0^{+})e^{-2\kappa L}\right]},
\end{equation}
(A more rigorous proof follows from the representation for the free energy
given in Ref.~\refcite{Intravaia09}.)
Eq. \eqref{eq:high-temp} is thus the result of the logarithmic argument theorem
applied to the high-temperature limit of the free energy. Now the contour around
the eddy current continuum can also be interpreted as a contour encircling
the whole complex plane, i.e., the surface plasmon and propagating modes
[Fig. \ref{ChangePath}(right)].
This is particularly interesting in the TE-polarization because there are no surface
plasmons, and
the residue at $\omega = 0$ vanishes [$r_{\rm TE}^{2}(\omega \to 0) = 0$].
This means that eddy currents and propagating modes give, up to a sign, the
same Casimir energy at high temperature (or large distance). Since propagating
modes are only slightly affected by dissipation in the metal (i.e., they behave
similarly in the Drude and plasma models), we find the simple relation
\begin{equation}
\label{eq:high-energy-diff}
\mathcal{F}^{\rm TE}_{\rm eddy} \approx -
\mathcal{F}^{\rm TE}_{\rm C}( {\rm pl.m.} )
, \qquad \gamma/\omega_{\rm pl}\ll 1
\end{equation}
where $\mathcal{F}^{\rm TE}_{\rm C}( {\rm pl.m.} )$ is the Casimir free
energy at high temperature calculated within the plasma
model\cite{Klimchitskaya09}. In the Drude model, the two contributions are
present and cancel each other when they are both in the high-temperature
regime (which happens at different distances, see Fig.4 of
Ref.\refcite{Intravaia09}).
A different scenario occurs in the TM-polarization. The residue at $\omega = 0$
does not vanish and corresponds exactly to the high-temperature limit of the plasma model.\cite{Klimchitskaya09}
Indeed, we have checked that eddy currents give only a very
small contribution.
\section{Conclusions}
Using a mode-summation approach,
we have isolated and analyzed the contribution
of two classes of modes to the Casimir effect, allowing for complex
eigenfrequencies of the electromagnetic field. A previous result for the
short-distance limit between good conductors\cite{Henkel04}
has been generalized to any conductivity and distance by considering
coupled surface plasmonic modes (for the lossless case,
see Refs.\refcite{Intravaia05, Intravaia07}).
We also considered eddy currents, which are overdamped or diffusive
modes in the bulk of a Drude metal, and
showed that they contribute a repulsive Casimir interaction, in agreement
with Ref.\refcite{Intravaia09}. At high temperature and for a good conductor,
we found in a simple way that
their free energy in the TE-polarization differs only slightly from the Casimir
free energy within a dissipationless description (the plasma model), but is of
the opposite sign.
In this way, eddy currents nearly cancel out the attractive Casimir interaction
from propagating modes. This explains the
strong difference between the Drude and plasma models for the temperature
correction of the electromagnetic Casimir effect\cite{Klimchitskaya09}.
\smallskip
We thank H. Haakh for a critical reading
and acknowledge financial support by the European Science Foundation
within the activity `New Trends and Applications of the Casimir
Effect' (www.casimir-network.com). F.I.\ acknowledges financial support by the Alexander von Humboldt Foundation.
\section{Introduction}
Thermionic electron emission cathodes based on porous, polycrystalline W combined with mixtures of metal oxides (typically $\mathrm{BaO-CaO-Al_2O_3}$) marked a significant evolutionary step in the history of thermionic cathodes, as these dispenser cathodes produce high-current-density emission with long lifetime due to their dynamically stable, low-work-function surfaces.\cite{Kirkwood2018} There are some widely-used mixture ratios for the metal oxides in dispenser cathodes. The most common mix, $\mathrm{BaO:CaO:Al_2O_3=5:3:2}$ (the B-type cathode), produces emitted current densities of several $\mathrm{A/cm^2}$. There are other variations, including the $4:1:1$ cathode (S-type), which is resistant to surface poisoning and can usually be operated at a temperature $30\units{^\circ C}$ lower than other types.\cite{Cronin1981,Vlahos2009,Gilmour2011,Jacobs2017} More recent dispenser cathodes include the M-type and scandate cathodes, which have a lower effective work function than the B- and S-type cathodes.\cite{Zhou2018,Liu2019,Wang2019} The B-, S- and M-type cathodes have constituted the majority of commercial thermionic cathodes for the past 50 years and are used as the electron sources in numerous vacuum electronic devices (VEDs) such as communication devices, ion thrusters, thermionic energy converters, and free electron lasers. These applications, taken together, influence multiple facets of our modern life, ranging from defense, satellite communications, radar, and scientific research, to industrial-scale food production and manufacture of heat-harvesting renewable energy technology.\cite{Barker2005,Booske2008}
Numerous experimental and computational studies have shown that the microstructure of real W-based cathodes is complex. The tungsten bodies are polycrystalline and porous, and the cathode surfaces are spatially heterogeneous, with the presence of machining marks from the cathode manufacturing process also contributing to the heterogeneity and causing local field enhancement effects.\cite{Gilmour2011,Jones1979,Jensen2003,Wan2012} One of the results of the complex microstructure is that W-based cathodes are spatially heterogeneous with a distribution of grain sizes and many types of exposed surfaces. These surfaces might have varied crystal facets and metal oxide coatings, each with an associated work function value, leading to highly non-uniform emission.\cite{Vlahos2009,Jacobs2017,Zhou2018,Forman1976Surface-studies,Haas1983,Norman1987Surface-structu,Vlahos2010} The non-uniform nature of thermionic electron emission from polycrystalline W has been observed experimentally by using thermionic electron emission microscopy (ThEEM).\cite{Norman1987Surface-structu,Haas1967,Mroz2019a,Tuck1979,Wan2012Scandium-oxide-,Vaughn2009,Vaughn2010,Kordesch2013,Wan2013,Ren2017,Mroz2018} In a representative ThEEM image, at a particular temperature, certain grains of the W surface are bright while others remain dark, indicating that some grains are more emissive than others, due to factors such as lower work function, surface topography, etc.
Emitted-current-density-versus-temperature, or $J-T$ (Miram), curves and emitted-current-density-versus-voltage, or $J-V$ ($I-V$), curves are commonly used to evaluate cathode performance. Both the $J-T$ and $J-V$ curves of a cathode can be divided into three regions: the temperature-limited (TL) region, the full-space-charge-limited (FSCL) region, and the TL-FSCL transition region. The TL region is in the low-temperature end of a $J-T$ curve or the high-voltage end of a $J-V$ curve. Its behavior can be well described with the Richardson–Laue–Dushman equation\cite{Richardson1922,Dushman1930a} with Schottky barrier lowering\cite{Schottky1923}. The FSCL region is in the high-temperature end of a $J-T$ curve or the low-voltage end of a $J-V$ curve. Its behavior can be predicted by the Child–Langmuir law\cite{Child1911,Langmuir1923The-Effect-of-S} and Langmuir and Fry's studies\cite{Langmuir1923The-Effect-of-S,Fry1921}, including provision for two-dimensional edge-correction effects\cite{Luginsland1996,Lau2001,Umstattd2001,Luginsland2002Beyond-the-Chil,Quan2009a,Sitek2021,Sitek2021a}. Experimental observations on real thermionic cathodes show that the TL-FSCL region is usually smooth, sometimes referred to as the "roll-off". Despite this seemingly simple observed behavior, it has remained an ongoing challenge to develop a physics-based emission model which is able to accurately predict the behavior of both $J-T$ and $J-V$ curves from polycrystalline cathodes over the entire operational domain of temperature and anode-cathode voltage, and it is especially challenging to capture the smooth transition between the TL and FSCL regions for real cathodes. Thermionic cathodes are typically operated on the FSCL side near the TL-FSCL transition region, so that changes in cathode temperature over time do not cause large variations in the emitted current and the emission is stable over the predicted lifetime of the device.
Some empirical descriptions of the smooth TL-FSCL transition region have been developed, including the empirical Longo–Vaughan equation\cite{Longo1980,Vaughan1986A-synthesis-of-}, a continuous Gaussian distribution of work function\cite{Gilmour1994}, the work function distribution mathematical treatment of emission data\cite{Tonnerre1983}, and the practical work function distribution function (PWFD)\cite{Cattelino1997}. However, all of these models are based on empirical equations or difficult-to-justify a priori assumptions, for example, the assumption that different work function patches do not interact. Furthermore, these empirical descriptions are not able to reveal the fundamental origin of the smooth behavior of the TL-FSCL transition, thus limiting their usefulness for modeling cathode behavior under different operating conditions.
A number of previous works have studied the influence of a heterogeneous cathode surface on the resulting thermionic emission, and have sought to connect the smooth TL-FSCL transition to the spatial distribution of work function values. The theory of the anomalous Schottky effect\cite{Hansen1966} studied the contribution of the patch field effect (electrostatic potential nonuniformity on the cathode surface based on local work function values) and the Schottky barrier lowering effect to the smoothness of the TL-FSCL transition in $J-V$ curves. Studies on space charge effects\cite{Sitek2021,Sitek2021a,Chernin2020,Jassem2021} reveal the contribution of 3-D space charge fields to the smooth transition in $J-T$ curves. However, the TL-FSCL transition behaviors predicted from these two separate sets of studies are sharper than experimental observations, indicating that some physical effects are missing. There has been no physics-based emission model which can predict the TL-FSCL transition in agreement with experimental results, although Longo and Vaughan speculated\cite{Longo1980,Vaughan1986A-synthesis-of-} that sharper Miram curve knees might be associated with more uniform work function surfaces, or "better" cathodes. Our recent work\cite{Chen2021} developed a physics-based model that included the effects of nonuniform thermionic emission, 3-D space charge, patch fields, and Schottky barrier lowering. This work gives a mathematical method to calculate the emitted current from a cathode with a spatially heterogeneous work function distribution in a parallel diode, and is able to predict a smooth and gradual TL-FSCL transition comparable with experimental observations by using a checkerboard work function distribution. These findings were encouraging, and indicated our model may be successful in predicting the emission of a real cathode, including the smooth TL-FSCL transition, by applying a two-dimensional work function map obtained from the same real cathode.
In this work, we construct a two-dimensional work function map by incorporating the grain orientation via electron backscatter diffraction (EBSD) and the facet-orientation-specific work function values from density functional theory (DFT) calculations. We use this work function map in conjunction with the nonuniform emission model developed in our previous work\cite{Chen2021} to predict both the $J–T$ (Miram) and $J–V$ ($I–V$) curves, including the TL-FSCL transition. Overall, we find semi-quantitative agreement of our predicted results with experimental measurements. This is the first time a physics-based thermionic emission model incorporating heterogeneous surface effects from a work function distribution on a real commercial thermionic cathode has been used to successfully model the experimental emission over a wide domain of temperature and applied voltage.
\section{Methods}
\subsection{Cathode sample\label{sec:cathode_sample}}
The cathode analyzed in this work is a commercial S-type cathode made by 3M Technical Ceramics. The cathode was made of $80\%$ density W using standard manufacturing methods and impregnated with an oxide mixture of $\mathrm{BaO:CaO:Al_2O_3=4:1:1}$. The cathode was cylinder-shaped with a $2.77\units{mm}$ diameter and $0.97\units{mm}$ height, as measured after the emission test.
\subsection{Emission measurement}
The experimental results of emitted current were measured in a closely spaced diode testing vehicle (Fig. \ref{p2f1}). The heater and the anode fixtures were manufactured by L3-Harris. The anode-cathode distance in this setup for a $0.97\units{mm}$ high cathode was designed to be $d=1.06\units{mm}$. A molybdenum ring was placed around the cathode to shield the emission from the sides. The height of the molybdenum ring was $1.14\units{mm}$, which was $0.17\units{mm}$ higher than the cathode. The inner diameter (ID) $2.90\units{mm}$ was $0.13\units{mm}$ larger than the diameter of the cathode. The heater filament was powered by a Keithley 2200-20-5 programmable power supply, and was operated under constant current mode.
To make it possible to measure the temperature of the cathode surfaces using a pyrometer during operation, a triode design was used with a cylinder as the current collector, or "catcher". The temperature of the cathode surface was measured with a Cat. No. 8622 optical pyrometer made by Leeds \& Northrup Co., which is a $\lambda=0.65\units{\mu m}$ single-wavelength disappearing filament pyrometer. The electron emission cathode industry often simply uses the pyrometer reading to indicate the cathode temperature, reporting it as the brightness temperature. However, the true temperature of the cathode surface is needed to use our nonuniform emission model\cite{Chen2021}. We calibrated the temperature values using Planck's law. The radiation of the cathode received by the disappearing pyrometer at wavelength $\lambda$ is:
\begin{equation}\label{p2e1}
tr\epsilon\frac{2hc^2}{\lambda^5}\frac{1}{\eexp{hc/(\lambda k_\mathrm{B}T)}-1}=\frac{2hc^2}{\lambda^5}\frac{1}{\eexp{hc/(\lambda k_\mathrm{B}T\subm{b})}-1}
\end{equation}
where $T\subm{b}$ is the pyrometer reading (brightness temperature), $T$ is the calibrated "true" temperature of the cathode surface to be used in the emission model, $k_\mathrm{B}$ is the Boltzmann constant, $h$ is the Planck constant, and $c$ is the speed of light. The values of the transmissivity of the viewport $t=0.93$ and the reflectivity of the mirror $r=0.76$ in the optical path were as measured. In this study, we used the emissivity value of $\epsilon=0.52$ recommended for impregnated W cathodes.\cite{Cronin1981} The uncertainty of the measured temperature values was approximately $\pm20\units{^\circ C}$.
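Inverting Eq.~\eqref{p2e1} for the true temperature gives $T = hc/\{\lambda k_\mathrm{B}\ln[1 + tr\epsilon(e^{hc/(\lambda k_\mathrm{B}T\subm{b})}-1)]\}$. A minimal Python sketch of this calibration (the example brightness reading is hypothetical):
\begin{verbatim}
import math

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants
lam = 0.65e-6                  # pyrometer wavelength [m]
t, r, eps = 0.93, 0.76, 0.52   # viewport, mirror, emissivity (from the text)

def true_temperature(T_b):
    """Brightness temperature T_b [K] -> calibrated true temperature T [K]."""
    x = h * c / (lam * kB)     # hc/(lambda kB), in kelvin
    return x / math.log(1.0 + t * r * eps * (math.exp(x / T_b) - 1.0))

T_b = 1150.0 + 273.15          # hypothetical brightness reading of 1150 C
print(f"T_true = {true_temperature(T_b) - 273.15:.0f} C")  # ~1250 C
\end{verbatim}
As expected for $tr\epsilon<1$, the calibrated temperature lies above the brightness reading.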
The cathode was activated before the emission test, following the instructions recommended by the cathode manufacturer, 3M Technical Ceramics. The activation process includes four steps: (1) Slowly increase the cathode temperature to a brightness temperature of $1000\units{^\circ C}$, and then hold this temperature for 30 minutes. (2) Continue to increase the cathode temperature to a brightness temperature of $1175$--$1200\units{^\circ C}$ and hold for 1 hour. (3) Cool the cathode to a brightness temperature of $1100$--$1150\units{^\circ C}$ and hold for 2 hours. (4) Reduce the cathode temperature and measure the emitted current while cooling down the cathode. The pressure was kept below $5\EEunits{-6}{torr}$ during the activation process.
During the emission measurements, the grid was biased with a PVX-4110 high voltage pulse generator made by Directed Energy (DEI), which was powered by a DC high voltage power supply made by Glassman High Voltage, Inc. with Model No. PS/ER02R150-115, and controlled by a low voltage pulse generator Model 575 pulse/delay generator made by Berkeley Nucleonics Corp (BNC). The catcher was biased with a DC high voltage power supply made by Glassman High Voltage, Inc. with Model No. PS/EQ005R240-22 and was kept more positively biased than the grid. The voltages of the grid and the catcher were measured with a LeCroy 44Xs oscilloscope. The emitted current was measured with the same oscilloscope via a Model 4100C current monitor made by Pearson Electronics, Inc.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{fig1.pdf}
\caption{\label{p2f1}
Sketch of the closely spaced diode testing vehicle used to measure the thermionic emitted current. The rectangle filled with pink is the cathode. The purple rectangles around the cathode represent the molybdenum ring used to shield the side emission.
}
\end{figure}
\subsection{Microstructure characterization}
The cathode surface grain orientation was characterized using electron backscatter diffraction (EBSD) in a FEI Helios G4 Plasma FIB/FESEM/EBSD/EDS workstation after the emission test. The surfaces of commercial dispenser cathodes are usually rough due to the machining process of cutting the cathode pellets on a lathe. The machining typically produces micrometer-scale ridges, and these features can be seen as variations in emission properties.\cite{Jensen2006a} Confidence index (CI) values in the EBSD results were used to quantify the likelihood of correct grain orientation labeling.\cite{Field1997} CI standardization, one of the built-in clean-up algorithms in OIM Analysis™ by EDAX, a software package for EBSD analysis, was used to process the raw EBSD data. Pixels with low CI values after applying the CI standardization clean-up procedure were considered as areas where grain orientations were unable to be correctly labeled by EBSD\cite{Nowell2005} and the surface facet orientation could not be reliably determined. Visual inspection showed that the majority of pixels with CI lower than 0.1 were associated with nonemitting areas, such as rough valleys, depressions, grain boundaries, and pores.\cite{Wright2006} Thus, areas with $\mathrm{CI}<0.1$ were considered as no-emit areas, and areas with $\mathrm{CI}\geq0.1$ were considered as recognizable areas. We then used simulations described below to establish a two-dimensional work function map $\phi\subm{DFT}(x,y)$ for the $\mathrm{CI}\geq0.1$ areas.
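A minimal sketch of this masking step (Python; the array and file names are our assumptions, since the text does not specify the data format):
\begin{verbatim}
import numpy as np

# ci: 2-D array of per-pixel confidence-index values after CI standardization
ci = np.load("ci_map.npy")        # hypothetical exported EBSD data

recognizable = ci >= 0.1          # grain orientation considered reliable
no_emit = ~recognizable           # treated as non-emitting areas

print(f"unrecognized fraction: {no_emit.mean():.1%}")
\end{verbatim}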
\subsection{Density functional theory work function values}
Previous density functional theory (DFT) studies have calculated the work functions and surface stabilities of tungsten surfaces with Ba, O, and Ba-O adsorbates for eight different orientations: $(001)$, $(011)$, $(111)$, $(210)$, $(211)$, $(221)$, $(310)$, $(311)$.\cite{Jacobs2017,Zhou2018,Vlahos2010} Auger analysis indicates that the active state of impregnated cathodes can be reproduced by a near monolayer of stoichiometric Ba-O on the W surface.\cite{Haas1983} Therefore, only the DFT work function value for the most stable stoichiometric Ba-O adsorption is assigned to each orientation (Table \ref{p2t1}). For a high-index orientation $(hkl)$ other than the calculated eight orientations, the nearest neighbor algorithm is used to predict its work function.\cite{Chen2019} It is assumed that the $(hkl)$ orientation has the same work function as the one among the calculated eight orientations with the smallest misorientation relative to $(hkl)$ (Fig. \ref{p2f2}).
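A sketch of this nearest-neighbor assignment (Python; treating cubic symmetry by generating all permutations and sign changes of the reference plane normals is our implementation choice, not a detail given in the text):
\begin{verbatim}
import numpy as np
from itertools import permutations, product

# DFT work functions (eV) for Ba-O on W, from Table 1
phi_dft = {(0,0,1): 2.15, (0,1,1): 1.61, (1,1,1): 1.75, (2,1,0): 2.31,
           (2,1,1): 1.97, (2,2,1): 1.70, (3,1,0): 2.30, (3,1,1): 1.79}

def equivalents(hkl):
    """Unit normals of all cubic-symmetry equivalents of the plane {hkl}."""
    eqs = {tuple(s * p for s, p in zip(signs, perm))
           for perm in permutations(hkl)
           for signs in product((1, -1), repeat=3)}
    return [np.array(e, float) / np.linalg.norm(e) for e in eqs]

def nearest_group(hkl):
    """Assign (hkl) to the reference orientation whose plane normal has the
    smallest misorientation angle with that of (hkl)."""
    n = np.array(hkl, float) / np.linalg.norm(hkl)
    return min(phi_dft, key=lambda ref: min(
        np.arccos(np.clip(abs(n @ e), 0.0, 1.0)) for e in equivalents(ref)))

hkl = (5, 3, 1)   # example high-index orientation
print(nearest_group(hkl), phi_dft[nearest_group(hkl)])
\end{verbatim}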
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{fig2.pdf}
\caption{\label{p2f2}
Inverse polar figure (IPF) showing how a grain orientation is grouped into one of the eight orientation groups using the nearest neighbor algorithm. The colors indicate the work function values assigned to each orientation group, which are the work function values of the W surface with most stable stoichiometric Ba-O adsorption calculated by density functional theory (DFT).
}
\end{figure}
A few studies have estimated the uncertainty of DFT work function values by comparing DFT results with experimental results. De Waele \textit{et al.}\cite{DeWaele2016} compared the experimental work function values for different surface orientations for a number of metals with the values predicted by the Perdew-Burke-Ernzerhof parametrization of the generalized gradient approximation (PBE-GGA) method. They made a linear fit of DFT values $\phi\subm{DFT}$ against experimental values $\phi\subm{exp}$. The result was the equation $\phi\subm{exp}=\beta_1\phi\subm{DFT}+\beta_0$, where the values of the fitted coefficients were $\beta_1=0.99\pm0.02$ and $\beta_0=0.30\pm0.09\units{(eV)}$. Tran \textit{et al.}\cite{Tran2019} also compared their DFT results, $\phi\subm{DFT}$, with experimental values, $\phi\subm{exp}$, on single crystals. They made a single-parameter least-squares fit $\phi\subm{DFT}=\phi\subm{exp}-c$, where their result was $c=0.30\units{eV}$. Both results indicate that DFT work function predictions of metals using GGA-level functionals tend to underestimate the work function values by approximately $0.30\units{eV}$, on average, compared with experimental results, and that the error of the estimate is on the scale of tenths of an eV even after the linear fit. Due to this known underestimation, we consider the shift between experimental and calculated work function as a fitting parameter in our emission modeling (more details in Section \ref{sec:emission_modeling}).
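For concreteness, the linear correction of De Waele \textit{et al.} can be applied as follows (a trivial sketch; the example value is the W$(001)$ Ba-O work function from Table \ref{p2t1}):
\begin{verbatim}
# fitted coefficients from De Waele et al. (beta0 in eV)
beta1, beta0 = 0.99, 0.30

def phi_exp_estimate(phi_dft_eV):
    """Estimate the experimental work function from a DFT (PBE-GGA) value."""
    return beta1 * phi_dft_eV + beta0

print(phi_exp_estimate(2.15))   # 2.15 eV -> ~2.43 eV
\end{verbatim}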
\subsection{Emission modeling\label{sec:emission_modeling}}
It is prohibitively difficult to accurately measure, and thus to know, the actual anode-cathode distance $d$ at the operating temperatures in our test fixture. Therefore, to better compare the results of the emission model with the experimental results, in this work we obtained the effective anode-cathode distance $d$ by fitting the FSCL data points with the Child-Langmuir law with the finite temperature correction\cite{Langmuir1923The-Effect-of-S}:
\begin{equation}\label{p2e2}
J\subm{FSCL}=\frac{4\epsilon_0}{9}\sqrt{\frac{2e}{m}}\frac{(V-V\subm{m})^{3/2}}{(d-z\subm{m})^2}
\frac{9}{8\sqrt{\pi}}\eta^{-3/2}\left(\int_0^\eta\frac{\mathrm{d}\eta'}{\sqrt{\mathrm{erfcx}\,\sqrt{\eta'}-1+2\sqrt{\eta'/\pi}}}\right)^{2}
\end{equation}
where $\epsilon_0$ is the vacuum permittivity, $e$ is the elementary charge, $m$ is the electron mass, $V$ is the anode-cathode voltage, $d$ is the anode-cathode distance, $V\subm{m}$ and $z\subm{m}$ are the voltage and the position (measured from the cathode) of the voltage minimum, $\eta=e(V-V\subm{m})/(k_\mathrm{B}T)$ where $k_\mathrm{B}$ is the Boltzmann constant and $T$ is the temperature, and $\mathrm{erfcx}$ is the scaled complementary error function. Instead of using the as-designed value of the anode-cathode distance, we used the fitted value for the emission model, which we believe is a more accurate value for the high temperatures during emission measurements.
In the theory of the Child-Langmuir law with the finite temperature correction on a uniform cathode,\cite{Langmuir1923The-Effect-of-S} the voltage minimum satisfies the Richardson-Laue-Dushman equation $J=AT^2\eexp{-eV\subm{m}/(k_\mathrm{B}T)}$, while the position of the voltage minimum is $z\subm{m}=0$ at the TL-FSCL transition. When fitting $d$ using Eq. \ref{p2e2}, we made the same assumptions: $z\subm{m}=0$ and $J=AT^2\eexp{-eV\subm{m}/(k_\mathrm{B}T)}$, where $J$ is the emitted current density, for which we used the experimental results. Here, $A=4\pi mek_\mathrm{B}^2/h^3=120.173\units{A\,cm^{-2}\,K^{-2}}$ is the Richardson constant, where $h$ is Planck's constant.
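A sketch of Eq.~\eqref{p2e2} in Python (using SciPy's scaled complementary error function; the example operating point is our choice):
\begin{verbatim}
import numpy as np
from scipy.special import erfcx
from scipy.integrate import quad

eps0, e = 8.8541878128e-12, 1.602176634e-19   # SI constants
m, kB = 9.1093837015e-31, 1.380649e-23

def xi(eta):
    """Langmuir's dimensionless distance between voltage minimum and anode."""
    f = lambda s: 1.0 / np.sqrt(erfcx(np.sqrt(s)) - 1.0 + 2.0 * np.sqrt(s / np.pi))
    return quad(f, 0.0, eta, limit=200)[0]

def J_fscl(V, T, d, Vm=0.0, zm=0.0):
    """Eq. (2): FSCL current density [A/m^2] with the finite-temperature
    correction; Vm = 0 and zm = 0 at the TL-FSCL transition."""
    eta = e * (V - Vm) / (kB * T)
    child = (4 * eps0 / 9) * np.sqrt(2 * e / m) * (V - Vm)**1.5 / (d - zm)**2
    return child * (9 / (8 * np.sqrt(np.pi))) * eta**-1.5 * xi(eta)**2

print(f"{J_fscl(V=400.0, T=1400.0, d=1.132e-3) / 1e4:.2f} A/cm^2")  # ~1.5
\end{verbatim}
The correction factor approaches unity for $\eta\gg1$, recovering the familiar $V^{3/2}/d^2$ Child-Langmuir scaling.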
It is not practical to do EBSD on the whole surface of a cathode, so we characterized the grain orientation on a representative area of the cathode surface (more details in Section \ref{sec:sensitivity_analysis}) and used periodic boundary conditions on the edges of the work function map, considering that the nonuniform emission model\cite{Chen2021} was designed for spatially periodic work function maps. To account for the error in the DFT work function values, we added a constant shift $\Delta\phi$ to the DFT work function values, $\phi\subm{DFT}(x,y)$, to get a shifted work function map, $\phi(x,y)=\phi\subm{DFT}(x,y)+\Delta\phi$, for the $\mathrm{CI}\geq0.1$ areas.
The roughness of the thermionic cathode used in this study is mainly due to the machining and the grain structures. The range of field enhancement factor values expected from the roughness features of typical thermionic cathodes is usually small, with an estimated upper bound of $5$.\cite{Jensen2003,Jensen2006a,Miller2007,Miller2009} Even in the case that the applied electric field is $500\units{V/mm}$, the difference in the Schottky barrier lowering between a surface with a field enhancement factor of $\beta=5$ and a perfectly flat surface ($\beta=1$) is only $0.033\units{eV}$, which adds a negligible enhancement to the thermionic emission compared with the uncertainty of the DFT work function values. Therefore, for simplicity in our model, we assumed the cathode surface was perfectly flat and therefore neglected field enhancement effects.
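The quoted $0.033\units{eV}$ difference follows from the standard Schottky lowering formula $\Delta\phi=\sqrt{e\beta E/(4\pi\epsilon_0)}$; a quick check in Python:
\begin{verbatim}
import math

eps0, e = 8.8541878128e-12, 1.602176634e-19

def schottky_lowering_eV(E, beta=1.0):
    """Schottky barrier lowering [eV] for field E [V/m], enhancement beta."""
    return math.sqrt(e * beta * E / (4 * math.pi * eps0))

E = 500e3   # 500 V/mm
print(f"{schottky_lowering_eV(E, 5.0) - schottky_lowering_eV(E, 1.0):.3f} eV")
\end{verbatim}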
The grain orientations of areas with $\mathrm{CI}\geq0.1$ were considered recognizable, and a work function map $\phi(x,y)=\phi\subm{DFT}(x,y)+\Delta\phi$ was assigned to these areas. If we let the cathode Fermi level be zero, then the boundary condition of Poisson's equation for the $\mathrm{CI}\geq0.1$ areas is the local vacuum level $V(x,y,z=0)=-\phi(x,y)/e$.\cite{Chen2021}
As the majority of $\mathrm{CI}<0.1$ pixels were associated with nonemitting areas, such as rough valleys, depressions, grain boundaries, and pores,\cite{Wright2006} we obtain the boundary condition for the cathode surface $V(x,y,z=0)$ for the $\mathrm{CI}<0.1$ areas by solving the 2-D Laplace's equation $\nabla^2V(x,y)=0$, where the boundary values are those of the $\mathrm{CI}\geq0.1$ areas. In this way, we obtain the boundary condition over the whole cathode surface, for both the $\mathrm{CI}\geq0.1$ areas and the $\mathrm{CI}<0.1$ areas, which is used as the input of the nonuniform emission model\cite{Chen2021}. There is only one fitting parameter $\Delta\phi$ in this model.
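A minimal sketch of this interpolation step (Python; Jacobi relaxation with the periodic lateral boundaries used by the model, array names being our choices):
\begin{verbatim}
import numpy as np

def fill_low_ci(phi, known, n_iter=5000):
    """Solve the 2-D Laplace equation on the CI<0.1 pixels, holding the
    CI>=0.1 pixels fixed as Dirichlet data (periodic lateral boundaries).
    phi: work function map [eV]; known: boolean mask of CI>=0.1 pixels."""
    V = phi.copy()
    V[~known] = phi[known].mean()      # initial guess on unknown pixels
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
                      np.roll(V, 1, 1) + np.roll(V, -1, 1))
        V[~known] = avg[~known]        # relax only the unknown pixels
    return V
\end{verbatim}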
In the model, the potential energy for an electron present in the space within the diode is obtained by solving Poisson's equation, where the charge density is a nonlinear function of the potential energy in the space. The effect of Schottky barrier lowering is included when calculating the potential energy. The patch field effect is naturally included in the non-equipotential boundary condition at the cathode surface $V(x,y,z=0)$, and the 3-D Poisson's equation includes the 3-D space charge effect. Therefore, this nonuniform emission model includes the effects of 3-D space charge, patch fields, and Schottky barrier lowering, but neglects the lateral motion of electrons and quantum effects (e.g., electron tunneling). More information on the physics and specific calculation methodology of our nonuniform emission model can be found in Ref. \cite{Chen2021}.
\section{Results and Discussion}
\subsection{Spatial distribution of work function}
The spatial distribution of grain orientation was characterized using EBSD after the emission testing was concluded. Emission testing and grain orientation analysis were performed on the same cathode sample, with EBSD performed after emission testing to ensure that any microstructural evolution that may have occurred during the high temperature activation and emission testing processes was captured. Fig. \ref{p2f3}a shows the two-dimensional map of grain orientation of a representative portion of the cathode surface (more details in Section \ref{sec:sensitivity_analysis}). The percentage of each orientation group in the map is listed in Table \ref{p2t1}.
We measured the emitted current from a commercial S-type cathode made by 3M Technical Ceramics (Section \ref{sec:cathode_sample}) for various anode-cathode voltages and temperatures (Fig. \ref{p2f4}). The anode-cathode distance was obtained by fitting the $24$ data points above $1340\units{K}$ in Fig. \ref{p2f4}a using Eq. \ref{p2e2}, and the result was $d=1.132\units{mm}$. This is close to the designed value of $1.06\units{mm}$. We ascribe the discrepancy between the fitted distance and the designed value to several reasonable factors that include a likely small difference between the designed distance and the actual fabricated distance (at room temperature) as well as the effects of electron optics and thermal expansion. The constant work function shift was obtained by fitting all of the data points in Fig. \ref{p2f4}a with the nonuniform emission model, and the result was $\Delta\phi=0.176\units{eV}$, which indicates that DFT underestimated the work function values compared with the thermionic emission test results. This result is consistent with previous studies on the error of DFT work function values\cite{DeWaele2016,Tran2019} in both the sign and magnitude of the error (underestimation by DFT of about $0.3\units{eV}$). Fig. \ref{p2f3}b is the predicted work function map, obtained by applying shifted DFT work function values to the grain orientation map (Fig. \ref{p2f3}a).
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{fig3.pdf}
\caption{\label{p2f3}
(a) Electron backscatter diffraction (EBSD) inverse polar figure (IPF) of a commercial S-type cathode after clean-up. The areas where it is considered that grain orientations are unrecognized by EBSD are plotted in black. (b) Work function map by assigning the density functional theory (DFT) work function value [5] with a shift of $\Delta\phi=0.176\units{eV}$ to the grain orientation map (a) after grouping the orientations into one of the eight orientation groups.
}
\end{figure}
\begin{table}[hbtp]
\caption{\label{p2t1} List of the eight orientations with work function values predicted using density functional theory (DFT). $\phi\subm{DFT}$ is the DFT work function value for the most stable stoichiometric Ba-O adsorption for each orientation. $\phi\subm{DFT}+\Delta\phi$ is the shifted work function value where the shift is $\Delta\phi=0.176\units{eV}$. The "Percentage" column shows the percentage of each orientation group in Fig. \ref{p2f3}.}
\centering
\begin{tabular}{cccc}
\toprule
Orientation & $\phi\subm{DFT}\units{(eV)}$ & $\phi\subm{DFT}+\Delta\phi\units{(eV)}$ & Percentage \\
\midrule
$(001)$ & $2.15$ & $2.326$ & $6.3\%$
\\
$(011)$ & $1.61$ & $1.786$ & $5.5\%$ \\
$(111)$ & $1.75$ & $1.926$ & $2.3\%$ \\
$(210)$ & $2.31$ & $2.486$ & $8.9\%$ \\
$(211)$ & $1.97$ & $2.146$ & $14.0\%$ \\
$(221)$ & $1.70$ & $1.876$ & $12.7\%$ \\
$(310)$ & $2.30$ & $2.476$ & $19.3\%$ \\
$(311)$ & $1.79$ & $1.966$ & $13.8\%$ \\
Unrecognized & - & - & $17.1\%$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Emitted current density}
Fig. \ref{p2f4} shows the experimental emission data from the S-type cathode and the emitted current density predicted by applying the nonuniform emission model \cite{Chen2021} to the work function map (Fig. \ref{p2f3}b). The predicted TL-FSCL transition regions are as smooth as the experimental observations for both the $J-T$ and $J-V$ curves, resulting in semi-quantitative agreement between our model and experimental measurements.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.95]{fig4.pdf}
\caption{\label{p2f4}
Experimental data ($\times$ and $+$ symbols) of an S-type cathode compared with the emitted current density predicted with nonuniform emission model (lines) at different anode-cathode voltages. (a) $J-T$ curves for different anode-cathode voltages $V$. The measured $V$ values for the data points of red $\times$ symbols are between $400\units{V}$ and $404\units{V}$, and $300\units{V}\leq V\leq303\units{V}$ for yellow, $200\units{V}\leq V\leq202\units{V}$ for purple, $100\units{V}\leq V\leq101\units{V}$ for green. (b) $J-V$ curves at different temperatures.
}
\end{figure}
There are only two fitting parameters in our model: the anode-cathode distance $d=1.132\units{mm}$ and the constant shift on the DFT work function values $\Delta\phi=0.176\units{eV}$. The main effect of a different $d$ is to scale up and down the FSCL current, while the effect of a different $\Delta\phi$ is to scale up and down the TL current or shift the TL region to a lower or higher temperature in $J-T$ curves. Using fitted values for these two parameters helps to get a better fit for the TL and FSCL regions, enabling a better comparison of the TL-FSCL transition regions between predicted curves and experimental results. The exact values of both of these fitted parameters have negligible effects on the shape of the TL-FSCL transition region.
The smooth TL-FSCL transition in the predicted curves arises as a natural consequence of the nonuniform emission from the polycrystalline cathode with a nonuniform spatial distribution of work function. Previous studies\cite{Sitek2021,Sitek2021a,Chernin2020,Jassem2021} show that the 3-D space charge effect plays a significant role in making the transition region smooth for a nonuniform cathode. However, when using a work function map for a real cathode derived from DFT and EBSD, a model considering only the 3-D space charge effect predicts a TL-FSCL transition in a Miram curve sharper than experimental results.\cite{Chernin2020} The nonuniform emission model used in this work\cite{Chen2021} includes all of the effects of 3-D space charge, patch fields, and Schottky barrier lowering. This result shows that including all of these effects is required to predict the smooth TL-FSCL transition region in the $J-T$ and $J-V$ curves, not only for the checkerboard model cathode illustrated in Ref. \cite{Chen2021}, but also for a work function map of a real cathode.
The Richardson constant $A$ is an important factor in the Richardson-Laue-Dushman equation. Its theoretical value is $A=4\pi mek^2/h^3=120.173\units{A\,cm^{-2}\,K^{-2}}$. In multiple previous studies, the Richardson constant was experimentally obtained by fitting both the Richardson constant and the effective work function in the Richardson-Laue-Dushman equation to the experimental emission data under the assumption that the cathode is uniform and has a single work function value.\cite{Fomenko1966} It has been observed that, using this method, the experimental values of the Richardson constant differ from the theoretical value, sometimes by many orders of magnitude.\cite{Gilmour1994,Fomenko1966} However, the Richardson constant does not need to be fit in our model, and is assumed to be fixed to its theoretical value. The agreement between our experimental and predicted $J-T$ and $J-V$ curves indicates that the alteration of the Richardson constant is not needed here. Thus, a key strength of our present model is that knowledge of the fractions of different surface terminations present, their arrangement in 2D space on the surface, and their work functions are all that is required for the nonuniform emission model to provide a physically complete picture of the emission.
\subsection{Two-dimensional emission map}
Fig. \ref{p2f5} shows how the calculated emitted current density maps change as temperature increases and the emission changes from the temperature-limited (TL) region (Fig. \ref{p2f5}a), to the transition region (Fig. \ref{p2f5}b and \ref{p2f5}c), and finally to the full-space-charge-limited (FSCL) region (Fig. \ref{p2f5}d). To better illustrate the effects of the patch fields and space charge, we plotted schematic figures of equipotential curves and electric flux lines in the space in front of a low work function patch surrounded by high work function patches in TL, transition, and FSCL regions (Fig. \ref{p2f6}).
In the TL region (Fig. \ref{p2f5}a), the space charge effect is negligible, and therefore the low work function patches emit more than the high work function patches. This result that the emitted current density varies across different grains due to the difference in their work function values is consistent with experimental thermionic electron emission microscopy (ThEEM) images obtained in the TL region.\cite{Norman1987Surface-structu,Haas1967,Mroz2019a,Tuck1979,Wan2012Scandium-oxide-,Vaughn2009,Vaughn2010,Kordesch2013,Wan2013,Ren2017,Mroz2018}
As the schematic figures show, in the TL (Fig. \ref{p2f6}a) and the transition region (Fig. \ref{p2f6}b), the low work function patch faces a voltage minimum lower than its surface (the local vacuum level), especially at the patch edges, due to the patch field effects from its neighboring high work function patches. Therefore, the local emitted current density from the edge of low work function patches is smaller than the center of the patches (Fig. \ref{p2f5}a and \ref{p2f5}b). This result is different from the edge effect in a cathode surface with nonuniform emission but without patch field effects, where the edges of high-emitting patches emit more than the center of the patches due to the low space charge in front of their neighboring low-emitting patches.\cite{Umstattd2001,Luginsland2002Beyond-the-Chil,Sitek2021a}
In the transition and FSCL regions (Fig. \ref{p2f6}b, \ref{p2f6}c, and \ref{p2f6}d), the low work function patches tend to have more significant space charge effects than the high work function patches due to their higher local emitted current density, and develop a voltage minimum in front of their surfaces at a lower temperature. This space charge effect causes the emission from the low-emitting patches to continue increasing while the high-emitting patches start to emit less, so that the emitted current density becomes increasingly uniform due to the 3-D space charge effect as the temperature increases from the transition region to the FSCL region (Fig. \ref{p2f5}b, \ref{p2f5}c, and \ref{p2f5}d).
In our nonuniform emission model, even though electrons are restricted to motion along the cathode-anode direction with no lateral momentum, the model is able to predict the trend of the change in emission nonuniformity as temperature changes. Such a trend has also been observed in experiments\cite{Li2006a,Li2007} and is consistent with some previous computational studies\cite{Sitek2021,Sitek2021a,Chernin2020,Jassem2021}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{fig5.pdf}
\caption{\label{p2f5}
Emitted current density maps predicted using nonuniform emission model for a cathode with work function map as Fig. \ref{p2f3}b at anode-cathode voltage $V = 400\units{V}$ and distance $d=1.132\units{mm}$, at different temperatures: (a) TL region: temperature $T=1149\units{K}$, average emitted current density $J = 0.340 \units{A/cm^2}$, (b) transition region: $1250 \units{K}$, $J = 1.234 \units{A/cm^2}$, (c) transition region but with an average emitted current density close to the full-space-charge-limited (FSCL) value: $1411 \units{K}$, $J = 1.525 \units{A/cm^2}$, (d) FSCL region: $1521 \units{K}$, $J = 1.552 \units{A/cm^2}$.
}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{fig6.pdf}
\caption{\label{p2f6}
Schematic figures illustrating the effects of patch fields and 3-D space charge in different regions: (a) temperature-limited (TL) region, (b) transition region, (c) transition region with an average emitted current density close to full-space-charge-limited (FSCL) value, (d) full-space-charge-limited (FSCL) region. The anode (not shown in the figures) is far away on the top of each subfigure, and the cathode is on the bottom of each subfigure, with a low work function patch ($2 \units{eV}$) surrounded by high work function patches ($2.5 \units{eV}$). Dashed black curves are the equipotential curve of the electrostatic potential (unit: V). The red and green solid curves are the electric flux lines. The red ones are for the electric flux lines starting from the anode while the green ones for those starting from the cathode. The aspect ratio of different subfigures may be different, and may not be $1:1$, so the electric flux lines may appear not perpendicular to the equipotential curves.
}
\end{figure}
\subsection{Sensitivity analysis and sources of error\label{sec:sensitivity_analysis}}
It is computationally expensive to simulate large areas (for example, $0.1 \units{mm^2}$ or larger) with the nonuniform emission model and time-consuming to characterize the grain orientation of a large area where there are a large number of grains. The computational cost is significantly increased in beam optics simulations where millions of time steps are typically used.\cite{Petillo2002The-michelle-th} To determine the representativeness of the statistics of the surface facets and ascertain the relationship between the uncertainty of the predicted emitted current and the size of the work function map, we characterized a total of 9 EBSD maps on different regions of the S-type cathode, for a total examined area of $0.15 \units{mm^2}$, and calculated the resulting emitted current density as a function of the examined area of the cathode surface.
To evaluate the effect of the uncertainty in the work function values on the predicted emitted current density, we calculated the emitted current density from a work function map by applying $\phi(hkl)=\phi\subm{DFT} (hkl)+ \Delta\phi(hkl)$ to the grain orientation map in Fig. \ref{p2f3}a, where $(hkl)$ is one of the eight grain orientations assigned DFT work function values, and the work function shifts for the eight grain orientations $\Delta\phi(hkl)$ are assumed to be independent and identically distributed (i.i.d.) following the normal distribution $\Delta\phi(hkl)\sim N(0.176\units{eV},\sigma\subm{DFT}^2)$, where the standard deviation $\sigma\subm{DFT}$ represents a phenomenological error in our DFT values. We generated 2500 random work function maps for each $\sigma\subm{DFT}$ value and calculated the variability of their emitted current densities.
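A sketch of one such Monte Carlo draw (Python; the integer encoding of the orientation map is our assumption):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# DFT work functions (eV) of the eight orientation groups, in Table 1 order
phi0 = np.array([2.15, 1.61, 1.75, 2.31, 1.97, 1.70, 2.30, 1.79])

def random_workfunction_map(group_map, sigma_dft, mean_shift=0.176):
    """One realization: i.i.d. shifts dphi(hkl) ~ N(mean_shift, sigma_dft^2),
    one per orientation group, applied to an integer-coded (0-7) grain map."""
    shifts = rng.normal(mean_shift, sigma_dft, size=phi0.size)
    return (phi0 + shifts)[group_map]

# e.g. 2500 realizations at sigma_dft = 0.4 eV, each fed to the emission model
# maps = [random_workfunction_map(group_map, 0.4) for _ in range(2500)]
\end{verbatim}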
Fig. \ref{p2f7} shows the variability of the values of the emitted current density for different submap sizes (Fig. \ref{p2f7}a) and different uncertainties of work function values (Fig. \ref{p2f7}b) at a condition in the TL-FSCL transition region. Fig. \ref{p2f7}a shows how the prediction of the emitted current density becomes more precise as the size of the submap increases, which indicates that model users may determine the submap size to use according to their desired precision in the prediction. Fig. \ref{p2f7}b estimates the uncertainty in the predicted emitted current density as a function of the uncertainty in the work function values. Previous studies\cite{DeWaele2016,Tran2019} estimated that the error of the DFT work function values is on the scale of tenths of an eV. Our results show that even in the extreme case that the DFT work function values have an uncertainty of $0.4\units{eV}$, the median (the red line in the box in Fig. \ref{p2f7}b) is $1.29 \units{A/cm^2}$, close to $1.23\units{A/cm^2}$, which was the result for the baseline case ($\phi=\phi\subm{DFT}+ 0.176\units{eV}$, Fig. \ref{p2f4} and \ref{p2f5}b). In the $\sigma\subm{DFT}=0.4\units{eV}$ results, the first quartile (the lower edge of the box) is $Q_1=1.11\units{A/cm^2}$ while the third quartile (the upper edge) is $Q_3=1.41\units{A/cm^2}$, and the interquartile range is $\mathrm{IQR}=Q_3-Q_1=0.29\units{A/cm^2}$. Such a dispersion is smaller than that obtained using a $32\units{\mu m}\times32\units{\mu m}$ submap, which has $\mathrm{IQR}=0.37 \units{A/cm^2}$, indicating a robustly predicted average current density even for the higher end of DFT work function uncertainty values.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{fig7.pdf}
\caption{\label{p2f7}
Boxplots showing the variability of the emitted current density predicted (a) from the submaps of different characterized area sizes at temperature $T=1250 \units{K}$, anode-cathode voltage $V = 400 \units{V}$ and distance $d=1.132 \units{mm}$, (b) from a work function map by applying $\phi(hkl)=\phi\subm{DFT}(hkl)+ \Delta\phi(hkl)$ to the grain orientation map in Fig. \ref{p2f3}a, where the shift values are independent and identically distributed (i.i.d.) following the normal distribution $\Delta\phi(hkl)\sim N(0.176\units{eV},\sigma\subm{DFT}^2)$. In a boxplot, the red line in the box indicates the median value. The lower edge of the box is the first quartile ($Q_1$ or 25th percentile). The upper edge is the third quartile ($Q_3$ or 75th percentile). The interquartile range (IQR) is defined as $\mathrm{IQR}=Q_3-Q_1$. Data points larger than $Q_3+1.5\mathrm{IQR}$ or smaller than $Q_1-1.5\mathrm{IQR}$ are considered as outliers and plotted individually using the $+$ symbols. The whiskers extend to the most extreme data points not considered to be outliers.
}
\end{figure}
Other possible causes of the error in the predicted emitted current density include the measurement error in temperature values, the dependence of the work function value on temperature due to different stable arrangements of Ba-O surface species at different temperatures, and the dependence of the anode-cathode distance on temperature due to thermal expansion. The assumptions in the nonuniform emission model\cite{Chen2021} may also contribute to the error in the predicted emission, which include the assumption of a perfectly flat cathode surface in an infinite parallel diode, neglecting the lateral motion of the electrons, and neglecting the quantum effects. While it is beyond the scope of the present work to perform an in-depth investigation of the role of each of these sources of error, we find it very encouraging that the results in Fig. \ref{p2f4} illustrate that our model shows near quantitative agreement with experiment over a wide range of temperatures and anode-cathode voltages. This strong agreement with experiment suggests that while many sources of uncertainty in our model exist, they likely play a minor role in the resulting emission compared with the microstructural features of the cathode, including the fractions of each surface present, their size and spatial distribution, and the relative work functions of grains comprising the cathode surface.
\section{Conclusions and Outlook}
Our nonuniform emission model can predict two-dimensional maps of emitted current density, and therefore the average emitted current densities, at different temperatures and anode-cathode voltages based on a two-dimensional work function map derived from DFT calculations and microstructure characterization. Importantly, the predicted $J-T$ and $J-V$ curves are in semi-quantitative agreement with experimental results, including the nature of the TL-FSCL transition, which shows the same shape as experiments. There are only two fitting parameters in our model: the anode-cathode distance and a constant shift on the DFT work function values. The effect of these two fitting parameters on the shape of the TL-FSCL transition is negligible. Our model is the first of its kind to use a physics-based modeling method coupled with experimental characterization to reproduce experimental emission data, and illustrates that it is not necessary to use an empirical equation such as the Longo-Vaughan equation or to assume a continuous work function distribution. A key result of this work is that a smooth TL-FSCL transition region is a natural consequence of the physics of the nonuniform emission from a spatially nonuniform work function map when the effects of 3-D space charge, patch fields, and Schottky barrier lowering are included.
The present findings provide both a robust physics-based approach to predict the emitted current from any polycrystalline cathode for which the surface grain orientations and work functions are known, and a means to understand how the cathode microstructure and the underlying work functions couple to the expected emission behavior. The present "forward" model starts from cathode work function distributions and predicts $J-T$ and $J-V$ curves. In the future, it may be possible to create an "inverse" model where one starts from experimentally measured $J-T$ and/or $J-V$ curves and predicts an effective cathode work function arrangement and associated microstructure consistent with the measured emission. Such an approach may be an effective method to better understand the coupling of cathode microstructure with the measured emission of new cathodes, and would provide a powerful tool for understanding the expected emission behavior of new cathodes, as conducting an emission test on a new cathode is less time-consuming than a full suite of microstructure and work function studies, e.g., using EBSD characterization and DFT calculations. The results in this work can also be used as input for higher-level simulation codes like MICHELLE\cite{Petillo2002The-michelle-th} to improve the modeling of cathodes in electron gun fixtures, better informing device design and enabling deeper insight into the physical factors governing heterogeneous emission from thermionic cathodes.
Codes for the nonuniform emission model are available on GitHub (https://github.com/chen-dongzheng/nonuniform-emission).
\section*{Acknowledgments}
This work was funded by the Defense Advanced Research Projects Agency (DARPA) through the Innovative Vacuum Electronic Science and Technology (INVEST) program with Leidos, Inc. The authors would like to thank Daniel Busbaher from 3M Technical Ceramics for providing the cathodes from which the data was obtained. The authors gratefully acknowledge use of facilities and instrumentation supported by NSF through the University of Wisconsin Materials Research Science and Engineering Center (DMR-1720415).
\bibliographystyle{IEEEtran}
\section*{Introduction}
There are many mechanisms of primordial black hole (PBH) formation. A specific feature of such objects is that they can be formed in a very broad mass range even when a particular mechanism is chosen. Depending on their mass, PBHs could play different roles in cosmology and astrophysics. PBHs with masses in the range $1-1000 M_\odot$ could account both for (an appreciable part of) dark matter (DM) and for the gravitational wave event GW150914 \cite{GW_observ,GW}, while their contribution to DM is becoming more constrained \cite{abs, Carr, 0912.5297}. Mechanisms described in \cite{1, Dolgov, Carr, 0912.5297} may lead to the formation of supermassive black holes. In the works~\cite{3,32,33}, we explored the Hawking radiation of PBHs as an explanation of the reionization at redshift $z\sim 8$, which is supported by different observations \cite{Planck1, Planck2}. We found that for a delta-function-like mass distribution, the effect can be reached within a narrow mass interval around $5\times 10^{16}$ g, but only for $z\lesssim 4$.
This mass interval is close to that where PBHs can contribute noticeably to the density of DM, while an explanation of all DM could require specific adjusting of the PBH mass spectrum in this range \cite{Carr, 0912.5297}.
There is a set of mechanisms (see reviews \cite{Carr, 0912.5297,33,Khlopov}) leading to a variety of PBH mass spectra. In this paper we consider those spectra of PBH masses that could explain the early reionization without connection to a specific model.
We study and compare the contributions to the reionization and dark matter of the Universe for different mass distributions, such as delta functions and power-law distributions (falling, growing, and uniform). The appropriate PBH mass interval is $10^{15} \, \text{g}\lesssim M \lesssim 10^{18}~$g. Note that these PBHs could also explain the positron line from the Galactic center \cite{Gamma-line_1} due to effects of accretion \cite{Gamma-line_2} or Hawking evaporation \cite{Gamma-line_3}.
Constraints on the PBH density are usually applied only for delta-function mass distributions \cite{Carr, 0912.5297}.
In the mass range of our current interest, the constraint comes mainly from the observed diffuse gamma-ray background~(DGRB)~\cite{Carr, 0912.5297}. We reproduced it in figure~\ref{constrain} (left). The density value is given in terms of the ratio of the cosmological PBH density $\Omega_{\rm PBH}$ to $\Omega_{\rm CDM}\approx 0.26$.
At the high-mass tail, $M\gtrsim 10^{17}$~g, the constraint from so-called femtolensing \cite{femt} starts to prevail. We do not show it here\footnote{The lower edge of the mass interval where the femtolensing constraint comes into force is indicated as from $5\times 10^{16}$~g to $5\times 10^{17}$~g in different articles.} and, for most of the calculations, put the upper limit of the PBH mass distribution at $M_{\max}=M_{17}=10^{17}$ g, to weaken the impact of the femtolensing constraint (FC) or evade it. Special cases with $M_{\max}\lesssim M_{17}$ and $M_{\max}> M_{17}$ will be discussed as well.
For an extended mass distribution of PBHs, we get the upper limit on the PBH density by comparing the estimated Hawking gamma radiation of PBHs over the full mass range with the data from HEAO, COMPTEL and EGRET~\cite{20,5}, keeping in mind that $\Omega_{\rm PBH}\le \Omega_{\rm CDM}$\footnote{The spectrum of gamma radiation from a single PBH is approximated by the Planck black body formula multiplied by a polynomial in energy to fit the expected flux and reproduce the constraint of \cite{Carr, 0912.5297} in the given range, as shown in figure~\ref{constrain} (left).}.
Then we evaluate the contribution of Hawking radiation to the reionization of the Universe along with the contribution to the DM density provided by PBHs from the whole mass interval.
PBHs at the masses of interest have Hawking temperatures within the interval $10 \,\text{keV} \lesssim T_{\text{PBH}} \lesssim 10 \, \text{MeV}$ and so emit \cite{23} gamma rays $\gamma$, electrons and positrons $e^{\pm}$, neutrinos $\nu_{e,\mu,\tau}$ and gravitons $G$.
Ionization losses of $e^{\pm}$ provide the main contribution to the ionization effect of matter, which can be assumed to proceed homogeneously in space \cite{3}.
Gamma rays from PBHs are not as effective in this respect; they mostly serve to impose limits on the PBH density through their contribution to the DGRB\footnote{Nevertheless the gamma radiation can provide observational effects explaining unidentified point-like gamma-ray sources within the Galaxy \cite{PBH_PGRS, PBH_PGRS_2}.}.
\section*{Basic formulas}
In the calculation of the temperature $T$ and ionization degree $x_e$ of baryonic matter, we follow the work \cite{3}. Note that the approximation used there is not quite quantitatively correct (though mainly when $x_e\ll 1$, which is rather not of interest here) \cite{30,31}, but it nonetheless seems to be at least qualitatively acceptable \cite{31} for estimating the reionization effect and, which is our aim here, for demonstrating how the effect can be \textit{relatively} enhanced due to extended PBH mass spectra.
The main difference of the current calculations from those of \cite{3} is that we use a distribution of PBHs in mass. To take this into account, one takes Eq.(5) of \cite{3} and generalizes it for the case of an extended mass distribution:
\begin{equation}
\frac{d\dot{\Omega}_{\rm ev}}{dM}=\frac{\dot{M}}{M}\frac{d\Omega_{\rm PBH}(M)}{dM}
= \frac{1}{3}\left(\frac{M_U}{M}\right)^3\frac{d\Omega_{\rm PBH}(M)/dM}{t_U}.
\label{Omegaev}
\end{equation}
The presented value is the energy evaporation rate per unit volume, divided by the critical density ($\rho_{\rm crit}$). The value $\dot{M}$ is the energy evaporation rate of a single PBH, and $M_U\approx 5\times 10^{14}$ g is the mass of a PBH which is evaporated completely over the modern age of the Universe $t_U$.
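A small numerical illustration of these quantities (Python; the numerical age of the Universe is our assumed value):
\begin{verbatim}
M_U = 5e14      # g, mass evaporating completely within the age of the Universe
t_U = 4.35e17   # s, age of the Universe (~13.8 Gyr, assumed)

def relative_evaporation_rate(M):
    """|Mdot|/M [1/s] for PBH mass M [g], from M(t)^3 decreasing linearly."""
    return (M_U / M)**3 / (3.0 * t_U)

for M in (1e15, 5e16, 1e17):
    print(f"M = {M:.0e} g: Mdot/M = {relative_evaporation_rate(M):.1e} 1/s")
\end{verbatim}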
The distribution of the PBH cosmological density in $M$ can be connected with the conventional probability distribution $\frac{dw}{dM}$, normalized to unity, as
\begin{equation}
\frac{d\Omega_{\rm PBH}(M)}{dM}=\frac{M}{\bar{M}}\Omega_{\rm PBH}\frac{dw}{dM},
\end{equation}
where $\bar M\equiv \int_{M_{\min}}^{M_{\max}}M\frac{dw}{dM}dM$ is the mean mass, $\Omega_{\rm PBH}$ is the total density (of PBHs of all masses) to be found from DGRB and CDM density constraints.
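As an illustration of these definitions, the following sketch (Python; the mass window is the one quoted in the Introduction) evaluates the mean mass $\bar M$ for simple power-law spectra $dw/dM\propto M^{\alpha}$ of the type considered below:
\begin{verbatim}
from scipy.integrate import quad

M_min, M_max = 1e15, 1e17   # g, mass window from the Introduction

def mean_mass(alpha):
    """Mean mass for dw/dM ~ M^alpha, normalized to unity on [M_min, M_max]."""
    norm = 1.0 / quad(lambda M: M**alpha, M_min, M_max)[0]
    return norm * quad(lambda M: M * M**alpha, M_min, M_max)[0]

for alpha in (-2.0, 0.0, 2.0):   # falling, uniform, growing spectra
    print(f"alpha = {alpha:+.0f}: mean mass = {mean_mass(alpha):.2e} g")
\end{verbatim}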
Then we put the value \eqref{Omegaev} into
Eq.(21) of \cite{3}, generalizing it analogously to \eqref{Omegaev};
the result, in its turn, enters Eq.(27) through the replacement
$$
\dot\Omega_{\rm abs}\rightarrow \int_{M_{\min}}^{M_{\max}}\frac{d\dot\Omega_{\rm abs}^{(e-\text{ion})}}{dM}dM.
$$
This value has the sense of the energy absorbed by baryonic matter due to the ionization process (other processes of energy transfer from the evaporation products to the baryonic matter can be neglected \cite{3,33}). Note that it takes into account that the energy absorption process takes a finite time, so the electrons emitted by the PBHs also have time to lose energy due to scattering on the CMB and due to the redshift.
Finally, the solution of Eq.(27) gives us the temperature, and Eq.(28) of \cite{3} gives the ionization degree.
The $\gamma$-ray flux from PBHs is estimated as
\begin{gather}
F_{\gamma}^{\text{mod}}(E) = \frac{c}{4\pi} \, \rho_{\rm crit} \iint \frac{ \kappa_{\gamma}}{\left\langle E_{\gamma} \right\rangle } \, \frac{\dot{M}}{M} \, f_{\text{Pl}}(M,E_{\gamma0}=E_{\gamma}(z+1)) \frac{d\Omega_{\text{PBH}}}{dM} \, dM\, \frac{H^{-1}_{\rm mod}dz}{\sqrt{\Omega_{m} \left(z+1 \right)^{3} + \Omega_{\Lambda}}}.
\end{gather}
Here
$\kappa_{\gamma}$ is the energy fraction evaporated in the form of $\gamma$, $E_{\gamma0,\gamma}$ are their initial (as radiated by the PBH) and final (at the Earth) energies,
$\left\langle E_{\gamma} \right\rangle $ is their mean (final) energy,
$f_{\text{Pl}}$ is the initial photon spectrum, normalized to unity (a modified Planck form),
$\Omega_{\Lambda}=0.69$ and $\Omega_m=0.31$ are the modern dark energy and non-relativistic matter densities,
$H_{\rm mod}$ is the modern Hubble parameter.
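A sketch of this estimate for a delta-function mass spectrum (Python). Here a pure Planck photon spectrum stands in for the fitted modified Planck form of the text, and the numerical values of $H_{\rm mod}$, $\rho_{\rm crit}$, and the photon fraction $\kappa_{\gamma}$ are placeholder assumptions of ours:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Om_m, Om_L = 0.31, 0.69
H_mod = 2.2e-18          # s^-1 (assumed)
rho_crit = 8.6e-30       # g/cm^3 (assumed)
c_cm = 2.998e10          # cm/s
keV_per_g = 5.61e29      # c^2 mass-to-energy conversion
M_U, t_U = 5e14, 4.35e17 # g, s
kappa_g = 0.4            # photon energy fraction (placeholder)

def T_pbh(M):            # Hawking temperature [keV] for M in grams
    return 1.06e6 * (1e13 / M)

def f_pl(E, T):          # unit-normalized Planck photon spectrum [1/keV]
    x = E / T            # (the polynomial factor of the text is omitted)
    return x**2 / (2.404 * T * np.expm1(x))

def flux(E, M, Omega_pbh, z_max=100.0):
    """Photon flux [cm^-2 s^-1 keV^-1 sr^-1] at observed energy E [keV]."""
    T = T_pbh(M)
    pref = (c_cm / (4 * np.pi)) * rho_crit * Omega_pbh * kappa_g \
           * keV_per_g / (2.70 * T) * (M_U / M)**3 / (3 * t_U)
    g = lambda z: f_pl(E * (1 + z), T) / (H_mod * np.sqrt(Om_m * (1 + z)**3 + Om_L))
    return pref * quad(g, 0.0, z_max)[0]

print(flux(E=200.0, M=5e16, Omega_pbh=0.01))
\end{verbatim}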
We do not consider here the contribution to the DGRB from PBHs in the Galaxy. It mainly constrains the abundance of less massive PBHs, while suffering from extra uncertainties related to the PBH distribution in the Galaxy \cite{1604.05349, 0912.5297}. In fact, PBHs in the Galaxy may give a relatively big contribution in gamma rays when they are already at the last active evaporation stage. This requires their initial mass to be tuned around $M_U$, which is well below that needed here.
\section*{Mass spectra with two peaks}
We start the calculation with a single delta-function mass distribution to compare its result with those for other distributions.
The constraint on $\Omega_{\rm PBH}$ obtained from the data on gamma radiation, as mentioned above, is shown in figure~\ref{constrain} (left).
Figure~\ref{constrain} (right) shows the redshift at which PBHs ionize 80\% (blue line) and 100\% (red line) of matter, depending on their contribution to dark matter as the constraint (shown left) allows.
The x-axis corresponds to the PBH mass range $10^{16} \, \text{g}\lesssim M \lesssim 8 \times 10^{16} \,$g (in this range, maximal $\Omega_{\rm PBH}$ is uniquely defined by $M$).
As clearly seen, the problems of dark matter and reionization of the Universe cannot be solved simultaneously with a single delta-function PBH mass distribution.
For the largest contribution to reionization ($z\sim 4$), not more than 40--50\% of the dark matter can be in the form of PBHs.
On the contrary, if all of the dark matter consists of PBHs, reionization happens (due to PBH Hawking radiation) not earlier than $z\sim 3$.
The temperature of the baryonic matter and the degree of its ionization for the PBH mass value giving the best effect ($M\approx 5 \times 10^{16}$ g) are shown in figure~\ref{leftion} (left and right, respectively) as functions of the redshift.
\begin{figure}
\begin{center}
{\includegraphics[scale=0.3]{omegaM.pdf}}
\quad
{\includegraphics[scale=0.3]{omegaZC.pdf}}
\caption{ (left) Constraints on the PBH density for the mass range of interest, obtained for delta-function-like mass spectra. Within the given mass range, the constraint from the DGRB prevails. The shaded region is forbidden.
(right) Dependence of the redshifts at which the ionization of matter reaches 80\% (blue line) and 100\% (red line) on the PBH density, taken according to the upper limit shown on the left.}
\label{constrain}
\end{center}
\end{figure}
A delta-function mass spectrum approximates a continuum spectrum containing one sharp maximum. An additional maximum in the mass spectrum requires another delta-function to be involved. Let us consider such less trivial mass distributions.
In the case of two peaks in the mass spectrum, we took the first peak at the same position as in the case of the single delta-function mass spectrum (at $5 \times 10^{16}$ g) and added the second one at $7 \times 10^{16}$ g. The height of the second peak was raised step by step, while the first one was simultaneously lowered in such a manner that the DGRB remained saturated by the contributions from both peaks. We stopped when the reionization effect became maximal. In fact, taking the first peak at any relevant mass and adding the second one leads to an amplification of the ionization effect, so no strong fine tuning is involved here.
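This adjustment can be summarized by the following schematic loop (a sketch only; \texttt{gamma\_flux}, giving the DGRB flux of a unit-height peak at mass $M$, the reionization solver \texttt{z\_reion} and the limit \texttt{F\_DGRB} are assumed inputs standing for the calculations described above):
\begin{verbatim}
import numpy as np

M1, M2 = 5e16, 7e16      # peak positions, grams

def heights(x, gamma_flux, F_DGRB):
    # Split the DGRB budget: a fraction x to the second peak, 1-x to the
    # first; the flux is linear in the peak heights, so the DGRB constraint
    # stays saturated for any x.
    return (1 - x) * F_DGRB / gamma_flux(M1), x * F_DGRB / gamma_flux(M2)

def best_two_peaks(gamma_flux, z_reion, F_DGRB):
    xs = np.linspace(0.0, 1.0, 101)
    z = [z_reion(*heights(x, gamma_flux, F_DGRB)) for x in xs]
    i = int(np.argmax(z))   # stop where the reionization redshift is maximal
    return xs[i], heights(xs[i], gamma_flux, F_DGRB)
\end{verbatim}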
The pair of mass values ($5 \times 10^{16}$ g and $7 \times 10^{16}$ g) provides one of the strongest amplifications among those we examined.
The width of a peak could be an extra parameter, but we did not study it explicitly (though a uniform mass distribution with a finite width is considered below as a particular case) and refer to \cite{1701.07223} on this issue.
The evolution of the temperature and of the degree of ionization of matter in the case of the two peaks mentioned above is shown in figure~\ref{leftion}, together with the other cases.
As one can see from figure~\ref{leftion} (left), the temperature of the baryonic matter $T$ begins to grow at $z\simeq 50$. At this time, the heating rate of matter due to ionization losses becomes higher than the rate of the Universe expansion.
The dashed red line corresponds to the case when the interaction of the baryonic matter (free electrons) with the CMB photons is neglected. It shows that the interaction with the CMB becomes important when free electrons appear, which causes them to cool (at $z\sim10$).
Figure~\ref{leftion} (right) shows that in the considered double delta-function case, reionization can be reached by $z\sim 6$, while for the single delta function it was not earlier than $z\sim 4$. The given two delta functions provide a contribution to the dark matter equal to 10\%. This contribution can be enhanced by a simple addition of a third peak beyond the considered mass interval.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{temp_double-NEW.pdf}
\quad
\includegraphics[scale=0.3]{all-NEW.pdf}
\caption{The temperature (left) and the degree of ionization of matter (right) as functions of the redshift, for the cases when the PBH density is concentrated in one (black line) or two different mass values (red line), or distributed in mass as a power law (one of the best cases).
}
\label{leftion}
\end{center}
\end{figure}
\section*{Power-law mass distribution}
The mechanism of PBH formation due to the collapse of domain walls, which are supposed to
form as a result of phase transitions at the inflationary stage \cite{1}, may give a variety of PBH mass distributions depending on the initial parameters of the scalar field potential responsible for the phase transition. The form of the PBH mass spectrum strongly depends on the form of the potential.
Nonetheless, simple forms of the latter lead, as a rule, to a power-law-like form
\begin{equation}
\frac{dw}{dM}\propto M^{\alpha}
\end{equation}
with a negative exponent around $-3\lesssim\alpha\lesssim-1$ (which is also noted for other mechanisms in \cite{Carr}), and even with positive ones within a limited, most contributing mass range. One should also take into account a ``renormalization'' of the mass distribution connected with a possible successive coalescence evolution of PBH systems \cite{Dokuch}.
Here we consider the reionization effect from pure power-law mass spectra of PBHs with different typical exponents.
Different mass intervals are considered for each power-law distribution ($M_\text{min}<M<M_{17}$, where $10^{15}~\text{g}<M_\text{min}<M_{17}$).
So we have two basic varying parameters: $\alpha$ and $M_{\min}$.
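The normalization of such a truncated power law is elementary; a minimal helper (in our convention, with the logarithmic case $\alpha=-1$ treated separately) could read:
\begin{verbatim}
import numpy as np

def dw_dM(M, alpha, M_min, M_max):
    # power-law distribution M**alpha on [M_min, M_max], normalized to unity
    if np.isclose(alpha, -1.0):
        N = np.log(M_max / M_min)
    else:
        N = (M_max**(alpha + 1.0) - M_min**(alpha + 1.0)) / (alpha + 1.0)
    return np.where((M >= M_min) & (M <= M_max), M**alpha / N, 0.0)
\end{verbatim}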
As earlier, the gamma-ray flux is calculated for PBHs with the given extended power-law mass distribution, in order to constrain their maximal density from the observational data. For the obtained maximal density, the reionization effect is estimated. The evolution of the ionization degree for one of the best cases (with the highest ionization) of the power-law mass distributions is shown in figure
\ref{leftion} (right), in comparison with the best delta-function-like cases.
Changing $\alpha$ and $M_{\min}$, we get different ionization effects and contributions to DM.
The left panel of figure~\ref{power} shows the redshift at which 80\% of the matter is ionized due to PBHs, for different values of $M_{\min}$ and $\alpha$. The right panel shows, for the same values, the PBH contribution to the DM density, $\Omega_{\rm PBH}/\Omega_{\rm CDM}(M_{\min}<M<M_{17})$.
In the region $\alpha\sim 2\text{--}3$ (growing power-law spectra), the full value $\Omega_{\rm CDM}=0.26$ can be reached, because the high-mass tail starts to contribute strongly. In this case the PBH density distribution is constrained by the value $\Omega_{\rm CDM}=0.26$ rather than by the DGRB. It means that PBHs do not saturate the DGRB constraint, thereby losing ionization capability. To compensate for this loss, we replace the upper limit $M_{\max}=M_{17}$ in the interval $2\lesssim\alpha\lesssim 3$ (more precisely, in the whole blue region of figure~\ref{power} (right)) by another value $M_{\max}<M_{17}$, chosen to reach the maximal PBH gamma radiation while $\Omega_{\rm PBH}=\Omega_{\rm CDM}$.
It is obtained that $M_{\max}\approx (0.9-1)\times M_{17}$, and the redshift at which 80\% of the matter is ionized increases by 1--2 as compared to the case $M_{\max}= M_{17}$. This possibly widens the gap in the mass range before the FC sets in. Figure~\ref{power} includes these corrections.
%
As one can see, there is a large region where the reionization effect can be reached (better than in the case of a single delta-function-like mass spectrum).
At the same time, PBH mass spectra with $\alpha\gtrsim 2$ may provide all of the DM density.
Note that, generally, we have taken a milder constraint from the gamma background (a 2--3 sigma excess over the observational data was accepted in deriving the restriction), and up to $M=M_{17}$ the FC is ignored. The latter indulgence allowed us to reach the aforementioned possibility of obtaining the total DM density in the form of PBHs within the considered mass range. If one supposes that the FC takes effect at $M\approx 6\times 10^{16}$~g, then DM
remains unexplained, while the reionization effect is weakened insignificantly (the left plot in figure~\ref{power} changes just a little). Nevertheless, even in this case one can try to obtain the dark matter at the cost of some loss of the ionization effect. If we look again at the left plot in figure~\ref{power}, in the region $M_{\min}\sim (1.5-2)\times 10^{16}$~g and $\alpha$ around $-2$, reionization is reached at $z\sim 3-4$, which is not the worst. But for the given power-law spectrum, each logarithmic mass interval ($\Delta \lg M=1$) gives an approximately equal contribution to the density, as small as $\Omega_{\rm PBH}/\Omega_{\rm CDM}\sim 0.01-0.1$ (see the respective region in figure~\ref{power} (right)).
So, if we extrapolate the spectrum $M^{-2}$ to higher masses, we get a contribution $\Omega_{\rm PBH}/\Omega_{\rm CDM}\sim 0.03-0.1$ per
interval $\Delta \lg M=1$ and hence reach the total DM density in about 10 intervals.
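Indeed, for $dw/dM\propto M^{-2}$ the relation between $d\Omega_{\rm PBH}/dM$ and $dw/dM$ given above yields $d\Omega_{\rm PBH}/dM\propto M\,dw/dM\propto M^{-1}$, so that
$$
\Delta\Omega_{\rm PBH}\Big|_{[M,\,10M]}\propto\int_{M}^{10M}\frac{dM'}{M'}=\ln 10
$$
independently of $M$: every interval $\Delta\lg M=1$ contributes equally, and about ten such intervals at the level $\sim 0.1\,\Omega_{\rm CDM}$ each would indeed sum to the full CDM density.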
But this can be in tension with the FC, which covers $2.5-3.5$ orders of magnitude in mass and falls to $\Omega_{\rm PBH}/\Omega_{\rm CDM}\sim 10$\% at its minimum for a delta-function-like mass distribution (so in the worst case we should avoid contributions bigger than $10\%/(2.5-3.5)=3-4$\% in each $\Delta \lg M=1$ interval). However, we believe that a special analysis of the femtolensing effect for an extended PBH mass distribution is required to resolve the situation (when all PBHs are assumed to have different mass values, it could be more difficult to reach a statistically reliable result by analysing data on gamma-ray bursters, and the constraint could be weakened).
\begin{figure}
\begin{center}
{\includegraphics[scale=0.6]{contour_z-NEW-1.pdf}}
\qquad
{\includegraphics[scale=0.6]{contour_omega-NEW-1.pdf}}
\caption{The redshift (left) at which 80\% of the matter is ionized due to PBHs with a power-law mass spectrum at $M_{\min}\lesssim M \lesssim M_{17}$ (for $\alpha>2$ see comments in the text), and their contribution to the DM density (right), as functions of $M_{\min}$ and $\alpha$.}
\label{power}
\end{center}
\end{figure}
\section*{Conclusion}
In our work we considered different forms of the PBH mass distribution. It was found
that an extended (not single delta-function) PBH mass distribution gives a greater contribution to the reionization of matter. In particular, the ``simplest'' complication of the single delta-function mass spectrum by adding a second delta-function (adjusting simultaneously both peak heights to the DGRB constraint) allows the reionization effect to be enhanced noticeably.
A region of the values of $\alpha$ and $M_{\min}$ for the power-law mass spectrum ($dw/dM\propto M^{\alpha}$, $M_\text{min}<M<M_{17}$) is found where the reionization effect is maximal. Moreover, a part of that region with $\alpha>2$ can provide an essential contribution to DM while still evading the femtolensing constraint (FC), if the latter is off until $M\lesssim M_{17}$. If the FC takes effect at smaller $M$, then the possibility of explaining DM is mostly lost, while the reionization effect is weakened rather insignificantly over all of the considered $\alpha$--$M$ parameter space. But there is a benchmark region of that space, around $\alpha\sim-2$, where the DM explanation could be restored while avoiding the FC, which could be confirmed
by a specific analysis of femtolensing effects for an extended PBH mass distribution.
\section*{Acknowledgment}
This work was supported by the Russian Science Foundation and performed within the framework of the MEPhI Academic Excellence Project (contract \textnumero~02.a03.21.0005, 27.08.2013) and according to the Russian Government Program of Competitive Growth of Kazan Federal University.
The work of S.~G.~R. was also supported by the Ministry of Education and Science of the Russian Federation, Project \textnumero~3.4970.2014/BY.
\section{Introduction}
In this paper, we study the nonlinear operator equation
\begin{equation}\label{Op.eqn}
A(\hat f)=\hat g
\end{equation}
with infinite-dimensional separable real Hilbert spaces~$\mathcal H$ and~$\mathcal H'$ with the inner products~$\scalar{\cdot}{\cdot}{\mathcal H}$ and~$\scalar{\cdot}{\cdot}{\mathcal H'}$, respectively. Here,~$\mathcal H'$ is a space of functions from a Polish space~$X$ to a real separable Hilbert space~$Y$. We observe the noisy values of the function~$\hat g$ at the inputs~$x_i$:
\begin{equation}\label{Model}
y_i=\hat g(x_i)+\varepsilon_i
\end{equation}
for~$1\leq i \leq m$. Here,~$m$ is the number of observations, which is called the sample size. In contrast to the direct learning scheme, where we estimate the function~$\hat g$, here we aim to estimate the function~$\hat f$ directly from the observations.
A common approach to stably approximate the solution of equation~\eqref{Op.eqn} is the Tikhonov regularization scheme. Sometimes we have additional information about the true solution, e.g., the true solution may be differentiable. To incorporate this information, we employ Tikhonov regularization in Hilbert scales. This scheme consists of a functional which is a linear combination of a fidelity term, measuring the fitness to the data, and a penalty term in a stronger norm, forcing smoothness of the approximated solution. To define this scheme, we introduce a densely defined, unbounded, closed, linear, self-adjoint, strictly positive operator~$L : \mathcal{D}(L)\subset \mathcal H \to \mathcal H$ such that for some~$\ell_L>0$,
\begin{equation}\label{L.unbound}
\ell_L\norm{f}_{\mathcal H} \leq \norm{Lf}_{\mathcal H} \quad\forall f \in \mathcal{D}(L).
\end{equation}
Here, we observe that~$L^{-1}:\mathcal H\to\mathcal H$ is a bounded operator, as follows from~\eqref{L.unbound} and the strict positivity of the operator~$L$.
The Tikhonov functional for the considered nonlinear inverse problem with sample~$\mathbf{z}=\brac{(x_i,y_i)}_{i=1}^m$ is given by
$$\mathcal{E}_{\mathbf{z},\lambda}(f)=\brac{\frac{1}{m}\sum\limits_{i=1}^m\norm{A(f)(x_i)-y_i}_Y^2+\lambda\norm{L\paren{f-\bar f}}_{\mathcal H}^2},$$
where~$\bar f\in\mathcal{D}(A)\cap\mathcal{D}(L)$ is an initial guess. The regularization parameter~$\lambda > 0$ has to balance both terms appropriately. Then, the Tikhonov regularization scheme in Hilbert scales can be defined as
\begin{equation}\label{fzl}
f_{\mathbf{z},\la} = \operatornamewithlimits{argmin}\limits_{f\in\mathcal{D}(A)\cap\mathcal{D}(L)} \mathcal{E}_{\mathbf{z},\lambda}(f).
\end{equation}
For the continuous and weakly sequentially closed operator~$A$, there exists a global solution of the regularization scheme in~\eqref{fzl}. But it is not necessarily unique, since~$A$ is nonlinear (see~\cite[Section 4.1.1]{Schuster2012}).
We consider the Hilbert scales~$\mathcal H_a$ generated by the operator~$L$. Here the spaces~$\mathcal H_a := \mathcal{D}(L^a)$ are Hilbert spaces equipped with the inner product~$\inner{ f,g }_{\mathcal H_a}=\inner{ L^a f,L^a g}_\mathcal H,~~f, g \in\mathcal H_a$. For the Hilbert scales, we have the well-known interpolation inequality
\begin{equation}\label{interpolation}
\norm{f}_{\mathcal H_b}\leq\norm{f}_{\mathcal H_a}^{\frac{c-b}{c-a}}\norm{f}_{\mathcal H_c}^{\frac{b-a}{c-a}},\qquad f\in \mathcal H_c
\end{equation}
which holds for any~$a < b < c$.
The regularization schemes in Hilbert scales have been well studied and analysed under different assumptions in classical inverse problems~\cite{Bissantz2004,Engl,Hohage,Schuster2012}. In learning theory, general regularization in Hilbert scales was introduced for linear inverse problems, and rates of convergence were established~\cite{Rastogi2020b}. The authors of~\cite{Mucke2020} studied stochastic gradient descent in Hilbert scales and provided different examples of Hilbert scales in learning; further, they discussed error estimates for the stochastic gradient descent scheme for the direct learning problem. In the paper~\cite{Rastogi2020a}, rates of convergence were established for nonlinear statistical inverse learning problems in the RKHS setting. The authors considered some assumptions on the nonlinearity of the operator~$A$, such as Fr{\'e}chet differentiability of the operator, Lipschitz continuity of the Fr{\'e}chet derivative, and a link condition to transfer the smoothness in terms of the operator~$L$ to the covariance operator.
Here, we consider the nonlinear inverse learning problem in Hilbert scales satisfying conditional stability estimates characterized by general concave index functions. We use the Tikhonov regularization scheme to obtain a stable approximate solution in the RKHS framework. Werner and Hofmann~\cite{Werner2019a} illustrated the validity of such conditional stability estimates in different models and real-world situations. The authors showed that differentiability of~$A$ is not always necessary for this condition.
For regularization schemes in RKHSs, one generally describes the smoothness by a source condition in terms of the covariance operator, which implies the rates of convergence. The covariance operator depends on the considered kernel and the unknown probability measure. Therefore, the source condition cannot be verified practically. Moreover, a misspecified kernel affects the source condition and, consequently, the rates of convergence for the regularization schemes. Here, we consider the smoothness of the true solution in terms of the known operator~$L$, which can be checked in practice. We divide the smoothness into two cases: the regular case (i.e.,~$\hat f\in \mathcal{D}(L)$) and the oversmoothing case (i.e.,~$\hat f\notin \mathcal{D}(L)$). The oversmoothing case is very delicate. We require that the regularized solution in Hilbert scales~\eqref{fzl} belongs to~$\mathcal{D}(L)$, but the true solution does not belong to~$\mathcal{D}(L)$ in the oversmoothing case. The analysis is also tricky for nonlinear inverse problems, since the Tikhonov regularization in Hilbert scales does not have an explicit solution. The analysis starts with the step~$\mathcal{E}_{\mathbf{z},\lambda}(f_{\mathbf{z},\la})\leq \mathcal{E}_{\mathbf{z},\lambda}(\hat f)$. But~$\mathcal{E}_{\mathbf{z},\lambda}(\hat f)$ is not well-defined in the oversmoothing case (since~$\hat f\notin \mathcal{D}(L)$). We will utilize the concept of distance functions to overcome this problem.
The main results of our paper can be summarized as follows:
\begin{itemize}
\item[-] We discuss the rates of convergence for the Tikhonov regularization in Hilbert Scales under a conditional stability assumption for the inverse problem.
\item[-] We obtain the error estimates in the absence of the widely-considered source condition. We will use the concept of the distance functions for this.
\item[-] We establish the error bounds in both the regular case and the oversmoothing case for the appropriate benchmark smoothness.
\end{itemize}
The manuscript is organized as follows: In Section~\ref{Sec:Notation}, we present the basic definitions, notation, and assumptions required in our analysis. In Section~\ref{Sec:Analysis}, we state and prove our main results. Here, we discuss the rates of convergence for Tikhonov regularization in Hilbert scales in the probabilistic sense. In Section~\ref{Sec:Explicit.rates}, we present the explicit rates in terms of the sample size by bounding the distance functions. In the Appendix, we state the probabilistic estimates of the perturbation inequalities.
\section{Notation and Assumptions}\label{Sec:Notation}
Let the input space~$X$ be a Polish space and the output space~$(Y, \inner{ \cdot,\cdot}_Y)$ be a real separable Hilbert space. We consider the joint probability measure~$\rho$ on the sample space~$Z=X\times Y$. We denote the marginal distribution on~$X$ by~$\nu$ and the conditional distribution of~$y$ given~$x$ by~$\rho(y|x)$. Therefore, the measure~$\rho$ can be split as~$\rho(x, y) = \rho(y|x)\nu(x)$.
For the probability measure~$\rho$ on~$X\times Y$, we assume that
\begin{equation}\label{Y.leq.M.1}
\int_Z\norm{y}_Y^2~d\rho(x,y)<\infty.
\end{equation}
For the considered model~$y=\hat g(x)+\varepsilon$ with centred noise~$\varepsilon$, we find~$\int_Y y \,d\rho(y|x)= \hat g(x)$, provided that the conditional expectation w.r.t.~$\rho$ of~$y$ given~$x$ exists (a.s.). This holds true under condition~\eqref{Y.leq.M.1}. This fact, together with the operator equation~\eqref{Op.eqn}, motivates us to consider the following assumption.
\begin{assumption}[The true solution]\label{Ass:fp}
The conditional expectation w.r.t.~$\rho$ of~$y$ given~$x$ exists (a.s.), and there exists a unique~$\hat f \in \mathrm{int}(\mathcal{D}(A))\subset\mathcal H~$ such that
\begin{equation*}
\int_Y y d\rho(y|x)= A(\hat f)(x), \text{ for all } x\in X.
\end{equation*}
\end{assumption}
Here,~$\hat f$ is the true solution of equation~\eqref{Op.eqn}, which we aim at estimating. We want to mention that the function~$\hat f$ is also the minimizer of the expected risk considered in~\cite{Rastogi2020a}.
We consider a \emph{Bernstein-type assumption} for the noise~$\varepsilon=y-A(\hat f)(x)$:
\begin{assumption}[Noise condition]\label{Ass:noise}
There exist some constants~$M,\Sigma$ such that for almost all~$x\in X$,
\begin{equation*}
\int_Y\left(e^{\norm{\varepsilon}_Y/M}-\frac{\norm{\varepsilon}_Y}{M}-1\right)d\rho(y|x)\leq\frac{\Sigma^2}{2M^2}.
\end{equation*}
\end{assumption}
We want to utilize the properties of reproducing kernel Hilbert spaces (RKHSs) in our analysis. Therefore, we assume that~$\operatorname{Ran}(A)$ is contained in a vector-valued reproducing kernel Hilbert space (RKHSvv). The RKHSvv~$\mathcal H_K$ arises from an operator-valued positive semi-definite kernel~$K:X\times X\to \mathcal{L}(Y)$~\cite{Micchelli1}. Here,~$\mathcal{L}(Y)$ is the Banach space of bounded linear operators on~$Y$.
\begin{assumption}[Vector valued reproducing kernel Hilbert space~$\mathcal H'$] \label{Ass:kernel}
Suppose~$\mathcal H'$ is an RKHSvv of functions~$g:X\to Y$ corresponding to the kernel~$K:X\times X\to \mathcal{L}(Y)$ such that
\begin{enumerate}[(i)]
\item For all~$x\in X$,~$K_x:Y\to\mathcal H'$ is a Hilbert-Schmidt
operator, and
\[\kappa^2:=\sup_{x \in X} \norm{K_x}^2_{HS} = {\sup_{x \in
X}\operatorname{tr}(K_x^*K_x)}<\infty.\]
\item The real-valued function~$\varsigma:X\times X \to \mathbb R$, defined by~$\varsigma(x,t)=\inner{ K_tv,K_xw}_{\mathcal H'}$, is measurable~$\forall v,w\in Y$.
\end{enumerate}
\end{assumption}
This assumption implies that~$\mathcal H'\subset \mathscr{L}^2(X,\nu;Y)$. We denote the canonical injection map from~$\mathcal H'$ to~$\mathscr{L}^2(X,\nu;Y)$ by~$I_\nu$; the corresponding covariance operator is~$T_\nu:= I_\nu^{\ast}I_\nu$. From the above assumption, we see that the covariance operator is positive and trace class. The covariance operator plays a central role in our convergence analysis. We will need some regularity assumptions on the marginal probability measure~$\nu$, expressed in terms of the covariance operator, to achieve uniform convergence rates for the regularized solution~\eqref{fzl}.
The error estimates studied in our analysis are based on the smoothness of the true solution and the behaviour of the effective dimension. For regularization methods in reproducing kernel Hilbert spaces, the error estimates and the optimal parameter choice depend on the effective dimension~\cite{Caponnetto,Blanchard,Rastogi2020}. To achieve fast convergence rates, we introduce the effective dimension~$\mathcal{N}(\lambda)$~\cite{Zhang}:
$$\mathcal{N}(\lambda):=Tr\left((T_\nu+\lambda I)^{-1}T_\nu\right), \text{ for }\lambda>0.$$
The effective dimension is a continuous, decreasing function of~$\lambda$. It is finite, since the operator~$T_\nu$ is trace class, and we get
$$
\mathcal{N}(\lambda)\leq \norm{(T_\nu+\lambda I)^{-1}}_{\mathcal{L}(\mathcal H)}Tr\left(T_\nu\right) \leq \frac{\kappa^2}{\lambda}.
$$
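For intuition, the effective dimension of a diagonal trace-class covariance can be evaluated directly. A small numerical sketch (the polynomial eigenvalue decay~$\sigma_i=i^{-1/b}$ is an assumed example, chosen because it yields~$\mathcal{N}(\lambda)\sim\lambda^{-b}$, the behaviour considered below):
\begin{verbatim}
import numpy as np

def effective_dimension(sigma, lam):
    # N(lambda) = Tr((T + lam I)^{-1} T) for a diagonal covariance
    return np.sum(sigma / (sigma + lam))

b = 0.5
sigma = np.arange(1.0, 1e6 + 1.0) ** (-1.0 / b)  # assumed trace-class spectrum
for lam in (1e-1, 1e-2, 1e-3):
    print(lam, effective_dimension(sigma, lam), lam**(-b))
\end{verbatim}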
The different behaviours of the eigenvalues of the covariance operator lead to different decay rates of the effective dimension~\cite{Lu2020}. Under the different scenarios of the effective dimension, we will get the explicit convergence rates in the next section.
In order to establish the error estimates, we introduce the discrete operators for the samples. For the ordered set~$(\mathbf{x})_i=x_i$, we define the {\it sampling operator}
$$(S_\mathbf{x}(g))_i=g(x_i) \quad \text{for} \quad 1\leq i \leq m.$$
We define the inner product space~$Y^m$ with the inner product~$\inner{\mathbf{y},\mathbf{y}'}_{m}=\frac{1}{m}\sum_{i=1}^m\inner{y_i,y_i'}_{Y}$ for~$(\mathbf{y})_i=y_i$ and~$(\mathbf{y}')_i=y_i'$,~$1\leq i \leq m$.
Then, we get the expression of its adjoint~$S_\mathbf{x}^*$ as
$$S_\mathbf{x}^*\mathbf{y}=\frac{1}{m}\sum_{i=1}^m K_{x_i} y_i,~~~~\forall \mathbf{y}\in Y^m.$$
It can be easily checked that under Assumption~\ref{Ass:kernel},~$\norm{S_\mathbf{x}}_{\mathcal H'\to Y^m}\leq \kappa$.
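In the scalar case~$Y=\mathbb R$, these discrete objects reduce to plain matrix operations. The following sketch (the Gaussian kernel, the node points~$t_j$ and all values are assumptions for illustration) shows the action of~$S_\mathbf{x}$, of its adjoint, and of the empirical covariance~$T_\mathbf{x}=S_\mathbf{x}^*S_\mathbf{x}$ entering the quantity~$\Psi_\mathbf{x}$ below:
\begin{verbatim}
import numpy as np

def K(x, t, ell=0.5):
    # assumed scalar Gaussian kernel matrix K(x_i, t_j)
    return np.exp(-np.subtract.outer(x, t)**2 / (2.0 * ell**2))

def S_x(c, t, x):
    # sample g = sum_j c_j K(., t_j) at the inputs x_i
    return K(x, t) @ c

def S_x_star(y, x, t):
    # adjoint w.r.t. <.,.>_m: S_x^* y = (1/m) sum_i K(., x_i) y_i,
    # returned here through its values at the nodes t
    return K(t, x) @ y / len(x)

# T_x = S_x^* S_x, evaluated at the nodes t for a kernel expansion g
x = np.random.default_rng(0).uniform(size=20)
t = np.linspace(0.0, 1.0, 5)
print(S_x_star(S_x(np.ones(5), t, x), x, t))
\end{verbatim}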
We need to make some assumptions about the nonlinear structure of the operator~$A$. Following the work of Werner and Hofmann~\cite{Werner2019a}, we consider the following assumption on~$A$,~$\mathcal{D}(A)$, and~$\hat f$. To introduce this assumption, we define the closed balls~$B_\mu^u(\hat f)=\brac{f\in\mathcal H_u:\norm{f-\hat f}_u\leq \mu}$ in~$\mathcal H_u~(u \in \mathbb R)$ with center~$\hat f \in \mathcal H_u$ and radius~$\mu$~$(0 < \mu\leq 1)$, and their intersections with the domain of~$A$,~$\mathcal{D}_\mu^u(\hat f):=B_\mu^u(\hat f)\cap\mathcal{D}(A)$. For simplicity, we will denote~$B_\mu^0(\hat f)$ and~$\mathcal{D}_\mu^0(\hat f)$ by~$B_\mu(\hat f)$ and~$\mathcal{D}_\mu(\hat f)$.
\begin{assumption}\label{Ass:A}
\begin{enumerate}[(i)]
\item The domain~$\mathcal{D}(A)$ of~$A$ is a convex and closed subset of~$\mathcal H$.
\item The operator~$A : \mathcal{D}(A) \to \mathcal H'$ is weak-to-weak sequentially continuous\footnote{i.e.,~$f_n \rightharpoonup \hat{f}\in\mathcal H$ with~$f_n \in \mathcal{D}(A)$,~$n \in \mathbb N$, and~$\hat{f} \in \mathcal{D}(A)$ implies~$A(f_n)\rightharpoonup A(\hat{f}) \in \mathcal H'$.}.
\item The operator~$A$ is Lipschitz continuous with Lipschitz constant~$\ell_A < \infty$ in a sufficiently large ball~$B_d(\hat f)$,
\begin{equation*}\label{A.cont}
\norm{A(f)-A(\tilde{f})}_{\mathcal H'}\leq \ell_A \norm{f-\tilde{f}}_{\mathcal H} \qquad\forall f ,\tilde{f}\in B_d(\hat f) \cap \mathcal{D}(A) \subset \mathcal H,
\end{equation*}
\item There exist constants~$p\geq0$,~$s> 0$,~$\alpha> 0$,~$d > 0$,~$\theta\geq 0$ and~$Q \subset\mathcal{D}_d^\theta(\hat f)\cap \mathcal{D}(A)$ such that
\begin{equation*}
\norm{f-\hat f}_{\mathcal H_{-p}}\leq \alpha\norm{I_\nu\sbrac{A(f)-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}^s
\end{equation*}
holds for all~$f \in Q$, where the constant~$\alpha$ may depend on~$p$,~$s$, and~$Q$.
\end{enumerate}
\end{assumption}
Assumption~\ref{Ass:A}~(iv) is called a conditional stability estimate, which helps us to characterize the degree of ill-posedness of the inverse problem. Here, we note that the operator~$A$ need not be differentiable (see the examples in~\cite{Werner2019a}).
\section{Convergence analysis}\label{Sec:Analysis}
The assertions about the convergence of Tikhonov-regularized solution~$f_{\mathbf{z},\la}$ to the true solution~$\hat f$ are formulated in this section. First of all, we introduce some standard quantities required to establish the error estimates. We denote
\begin{align}
\Theta_{\mathbf{z}}:=&\norm{(T_\nu +\lambda I)^{-1/2}S_\mathbf{x}^*\bm{\varepsilon}}_{\mathcal H'}\qquad \text{for} \quad \bm{\varepsilon}=S_\mathbf{x} \sbrac{A(\hat f )}-\mathbf{y}, \label{theta.z} \\
\Psi_\mathbf{x}:=&\norm{(T_\nu +\lambda I)^{-1/2}(T_\nu-T_\mathbf{x})}_{\mathcal{L}_2(\mathcal H')}. \label{psi}
\end{align}
The probabilistic estimates of the above quantities are given in Appendix~\ref{Sec:prob.est}. We will use the following standard assumption on the sample size~$m$ and the regularization parameter~$\lambda$ for our probabilistic estimates:
\begin{equation}\label{l.la.condition}
\mathcal{N}(\lambda) \leq m\lambda \qquad \text{and}\qquad 0<\lambda\leq 1.
\end{equation}
Now, we introduce the concept of the distance function (also known as an `approximate source condition'), which can be used in the absence of a source condition for~$\hat f$~\cite{Baumeister1987,Smale2003}. It measures the violation of a benchmark smoothness by the true solution. It becomes very important in the `oversmoothing case'~$\hat f \notin\mathcal D(L)$ for regularization in Hilbert scales.
\begin{definition}[Approximate source condition]
For given~$q$, we define the distance function~$d : [0, \infty)\to[0, \infty)$ by
\begin{align}\label{Defi:dist}
d(R)=\inf\brac{\norm{f-\hat f}_{\mathcal H}:f-\bar f= L^{-q}v \text{ and }\norm{v}_{\mathcal H} \leq R},\quad R>0.
\end{align}
\end{definition}
Here,~$q$ defines the benchmark smoothness. Let~$\fp^R$ be the minimizing element of the above problem. We also denote the quantities~$d_A(R)=\norm{I_\nu\sbrac{A(\fp^R)-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}$ and~$d^{p}(R)=\norm{\fp^R-\hat f}_{\mathcal H_{-p}}$.
Note that when the true solution is of the form~$\hat f-\bar f=L^{-q}u$ with~$\norm{u}_{\mathcal H} \leq \bar{R}$, then the distance function satisfies~$d(\bar{R})=0$ and the minimizer is~$\hat f^{\bar{R}}=\hat f$.
The error analysis starts from the fact that~$f_{\mathbf{z},\la}$ is the minimizer of the Tikhonov functional~\eqref{fzl}. We get the deterministic expressions~\eqref{err_1.2},~\eqref{p_s.2} for the quantities~$\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}$ and~$\norm{L(f_{\mathbf{z},\la}-\hat f)}_{\mathcal H}$ after some rearrangement, using the Cauchy--Schwarz and Young inequalities. After simplification, and using the probabilistic estimates from Proposition~\ref{main.bound}, we get the error estimates in terms of the sample size~$m$, the regularization parameter~$\lambda$, and the distance function through~$R(\lambda)$. The distance function can be measured using the source condition for~$\hat f$ (see Section~\ref{Sec:Explicit.rates}). Consequently, we get the explicit dependency~$\lambda \to R(\lambda)$. The estimates depend on the effective dimension, which will be explicitly expressed in terms of~$\lambda$ by using different decay conditions on the effective dimension. Then, the bounds can be expressed explicitly in terms of~$\lambda$ and~$m$ for the given smoothness of the solution~$\hat f$. In Section~\ref{Sec:Explicit.rates}, the a-priori choice of the regularization parameter will be obtained by balancing the terms in the error bounds.
\begin{theorem}\label{err.upper.bound.p.1}
Let Assumptions~\ref{Ass:fp}--\ref{Ass:A} and condition~\eqref{l.la.condition} hold true. Let~$1\leq q \leq 2 + p$,~$q(s-1)\leq p+s$ and~$f_{\mathbf{z},\la},\fp^R\in Q$ (for sufficiently large sample size~$m$) for~$p$,~$s$,~$Q$,~$q$ defined in Assumption~\ref{Ass:A}~(iv) and~\eqref{Defi:dist}. Then, for all~$0<\eta<1$, the following bounds hold with confidence~$1-\eta$:
\begin{align*}
\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)} \leq \widetilde{C}\lambda^{\frac{1}{2}}&\brac{\sqrt{\frac{\mathcal{N}(\lambda)}{m\lambda}}+R(\lambda)^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{s(q-1)}{2(p+q)-2s(q-1)}}}\log\paren{\frac{4}{\eta}},\\
\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}\leq \widetilde{C}&\brac{\sqrt{\frac{\mathcal{N}(\lambda)}{m\lambda}}+R(\lambda)^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{s(q-1)}{2(p+q)-2s(q-1)}}}\log\paren{\frac{4}{\eta}},\\
\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H}\leq \widetilde{C}\lambda^{\frac{s}{2(p+1)}}&\brac{\sqrt{\frac{\mathcal{N}(\lambda)}{m\lambda}}+R(\lambda)^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{s(q-1)}{2(p+q)-2s(q-1)}}}^{\frac{(p+s)}{(p+1)}}\log\paren{\frac{4}{\eta}}.
\end{align*}
Here,~$R(\lambda)$ is the solution of the equation~$d_A(R) R^{-\frac{p+1}{(p+q)-s(q-1)}}= \lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}$ for~$d_A(R) \neq 0$ and~$R(\lambda)$ is a fixed constant for~$d_A(R) =0$.
\end{theorem}
\begin{proof}
By the definition of~$f_{\mathbf{z},\la}$ as the solution to the minimization problem in~\eqref{fzl}, we have
\begin{equation*}
\frac{1}{m}\sum\limits_{i=1}^m\norm{\sbrac{A(f_{\mathbf{z},\la})}(x_i)-y_i}_Y^2+\lambda \norm{L(f_{\mathbf{z},\la}-\bar f)}_{\mathcal H}^2\leq \frac{1}{m}\sum\limits_{i=1}^m\norm{\sbrac{A(\fp^R)}(x_i)-y_i}_Y^2+\lambda \norm{L(\fp^R-\bar f)}_{\mathcal H}^2.
\end{equation*}
We re-express the above inequality as follows,
\begin{equation*}\label{idea.1}
\norm{S_\mathbf{x} \sbrac{A(f_{\mathbf{z},\la})}-\mathbf{y}}_m^2+\lambda \norm{L(f_{\mathbf{z},\la}-\bar f)}_{\mathcal H}^2\leq \norm{S_\mathbf{x} \sbrac{A(\fp^R)}-\mathbf{y}}_m^2+\lambda \norm{L(\fp^R-\bar f)}_{\mathcal H}^2
\end{equation*}
which implies
\begin{align*}
&\norm{S_\mathbf{x}\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_m^2+2\inner{S_\mathbf{x}\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}, S_\mathbf{x} \sbrac{A(\fp^R )}-\mathbf{y}}_m+\lambda\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^2 \\ \nonumber
\leq& 2\lambda\inner{ L(f_{\mathbf{z},\la}-\fp^R),L(\bar f-\fp^R)}_{\mathcal H}.
\end{align*}
Then we have,
\begin{align}\label{B1.1}
&\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^2+\lambda\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^2 \\ \nonumber
\leq & 2\lambda\inner{ L(f_{\mathbf{z},\la}-\fp^R),L(\bar f-\fp^R)}_{\mathcal H}+2\inner{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)},I_\nu\sbrac{A(\fp^R)-A(\hat f)}}_{\mathcal H'}\\ \nonumber
&+2\inner{A(f_{\mathbf{z},\la})-A(\fp^R),(T_\nu-T_\mathbf{x})\sbrac{A(\fp^R)-A(\hat f)}+S_\mathbf{x}^*\bm{\varepsilon}}_{\mathcal H'} \\ \nonumber
&+\inner{A(f_{\mathbf{z},\la})-A(\fp^R),(T_\nu-T_\mathbf{x})\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathcal H'}.
\end{align}
Using the interpolation inequality~\eqref{interpolation}, the definition of the distance function~\eqref{Defi:dist}, and Assumption~\ref{Ass:A} for~$f_{\mathbf{z},\la} \in Q$, we obtain
\begin{align}\label{B_2.1}
\inner{ L(f_{\mathbf{z},\la}-\fp^R),L(\bar f-\fp^R)}_{\mathcal H}\leq & \norm{\fp^R-\bar f}_{\mathcal H_{q}}\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H_{2-q}}\\ \nonumber
\leq & R\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^{\frac{p-q+2}{p+1}}\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H_{-p}}^{\frac{q-1}{p+1}}.
\end{align}
We have
\begin{align*}
\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H_{-p}} \leq &\norm{f_{\mathbf{z},\la}-\hat f}_{\mathcal H_{-p}}+\norm{\hat f-\fp^R}_{\mathcal H_{-p}}\\
\leq &\alpha\norm{I_\nu\paren{A(f_{\mathbf{z},\la})-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}^s+\alpha\norm{I_\nu\paren{A(\hat f)-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^s\\
\leq & \alpha\norm{I_\nu\paren{A(f_{\mathbf{z},\la})-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}^s+2\alpha\norm{I_\nu\paren{A(\hat f)-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^s
\end{align*}
which implies
\begin{align*}
\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H_{-p}}^{\frac{2(q-1)}{(p+q)}} \leq & 2 \alpha^{\frac{2(q-1)}{(p+q)}}\norm{I_\nu\paren{A(f_{\mathbf{z},\la})-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}^{\frac{2s(q-1)}{(p+q)}}+ 4 \alpha^{\frac{2(q-1)}{(p+q)}}d_A(R)^{\frac{2s(q-1)}{(p+q)}},
\end{align*}
where~$d_A(R)=\norm{I_\nu\sbrac{A(\fp^R)-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}$.
Now we apply Young's inequality ($ab\leq \frac{a^u}{u}+\frac{b^v}{v}$ for~$\frac{1}{u}+\frac{1}{v}=1$) with~$a=\paren{\frac{u}{4}}^{\frac{1}{u}}\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^{\frac{2}{u}}$,~$b=\paren{\frac{4}{u}}^{\frac{1}{u}}R\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H_{-p}}^{\frac{q-1}{p+1}}$,~$u=\frac{2p+2}{p-q+2}$ and~$v = \frac{2p+2}{p+q}$ in~\eqref{B_2.1}, and this implies
\begin{align*}
\inner{ L(\fp^R-f_{\mathbf{z},\la}),L(\fp^R-\bar f)}_{\mathcal H}\leq &\frac{1}{4}\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^2+CR^{\frac{2p+2}{p+q}}\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H_{-p}}^{\frac{2(q-1)}{p+q}},
\end{align*}
where~$C=\frac{1}{v}\paren{\frac{4}{u}}^{\frac{v}{u}}$. Now, using Assumption~\ref{Ass:A}~(iv) we get,
\begin{align}\label{B2.1}
&\inner{ L(\fp^R-f_{\mathbf{z},\la}),L(\fp^R-\bar f)}_{\mathcal H}\\ \nonumber
\leq &\frac{1}{4}\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^2+C'^2 R^{\frac{2p+2}{p+q}}\norm{I_\nu\paren{A(f_{\mathbf{z},\la})-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}^{\frac{2s(q-1)}{(p+q)}}+2C'^2R^{\frac{2p+2}{p+q}}d_A(R)^{\frac{2s(q-1)}{(p+q)}},
\end{align}
where~$C'^2=2C\alpha^{\frac{2(q-1)}{(p+q)}}$.
To estimate the last two terms in~\eqref{B1.1} we consider the inequality
\begin{align*}
\inner{f,g}_{\mathcal H'}= & \lambda\inner{f,(T_\nu+\lambda I)^{-1}g}_{\mathcal H'}+\inner{f,T_\nu(T_\nu+\lambda I)^{-1}g}_{\mathcal H'}\\
\leq & \brac{\sqrt{\lambda}\norm{f}_{\mathcal H'}+\norm{I_\nu f}_{\mathscr{L}^2(X,\nu;Y)}}\norm{(T_\nu+\lambda I)^{-1/2}g}_{\mathcal H'}.
\end{align*}
By taking~$f=A(f_{\mathbf{z},\la})-A(\fp^R)$, and~$g=(T_\nu-T_\mathbf{x})\sbrac{A(f_{\mathbf{z},\la})-2 A(\hat f)+A(\fp^R)}+2S_\mathbf{x}^*\bm{\varepsilon}$ and using~\eqref{L.unbound},~Assumption~\ref{Ass:A}~(iii) we get,
\begin{align}\label{B3.1}
&\inner{A(f_{\mathbf{z},\la})-A(\fp^R),(T_\nu-T_\mathbf{x})\sbrac{A(f_{\mathbf{z},\la})-2 A(\hat f)+A(\fp^R)}+2S_\mathbf{x}^*\bm{\varepsilon}}_{\mathcal H'}\\ \nonumber
\leq & \brac{\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}}}\brac{\sqrt{\lambda}\norm{A(f_{\mathbf{z},\la})-A(\fp^R)}_{\mathcal H'}+\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}}\\ \nonumber
\leq &\brac{\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}}}\brac{\ell_A\sqrt{\lambda}\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H}+\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}}\\ \nonumber
\leq &\brac{\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}}}\brac{\ell\sqrt{\lambda}\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}+\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}},
\end{align}
where~$\ell_1=\norm{A(f_{\mathbf{z},\la})-2A(\hat f)+A(\fp^R)}_{\mathcal H'}$,~$\ell=\frac{\ell_A}{\ell_L}$, and~$\ell_A$,~$\ell_L$,~$\Theta_\mathbf{z}$,~$\Psi_\mathbf{x}$ are defined in Assumption~\ref{Ass:A}~(iii),~\eqref{L.unbound},~\eqref{theta.z},~\eqref{psi}, respectively.
Using the above estimates~\eqref{B2.1},~\eqref{B3.1} in~\eqref{B1.1} we obtain,
\begin{align*}
&\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^2+\frac{\lambda}{2}\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^2 \\
\leq &2C'^2\lambda R^{\frac{2p+2}{p+q}}\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^{\frac{2s(q-1)}{p+q}}+4 C'^2\lambda R^{\frac{2p+2}{p+q}}d_A(R)^{\frac{2s(q-1)}{(p+q)}}\\
&+2\norm{I_\nu\sbrac{A(\fp^R)-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}\\
&+(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}\\
&+\ell\sqrt{\lambda}(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}.
\end{align*}
Now, using the inequality~$ab\leq a^2+b^2$ we get,
\begin{align*}
&\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^2+\frac{\lambda}{2}\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^2 \\
\leq &2C'^2\lambda R^{\frac{2p+2}{p+q}}\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^{\frac{2s(q-1)}{p+q}}+4 C'^2\lambda R^{\frac{2p+2}{p+q}}d_A(R)^{\frac{2s(q-1)}{(p+q)}}\\
&+4\norm{I_\nu\sbrac{A(\fp^R)-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}^2+\frac{1}{4}\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^2\\
&+(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})^2+\frac{1}{4}\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^2\\
&+\ell^2(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})^2+\frac{\lambda}{4}\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^2,
\end{align*}
which implies
\begin{align*}
&\frac{1}{2}\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^2+\frac{\lambda}{4}\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^2 \\
\leq &2C'^2\lambda R^{\frac{2p+2}{p+q}}\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^{\frac{2s(q-1)}{p+q}}+4 C'^2\lambda R^{\frac{2p+2}{p+q}}d_A(R)^{\frac{2s(q-1)}{(p+q)}}\\
&+4d_A(R)^2+(\ell^2+1)(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})^2.
\end{align*}
Now by rearranging the terms we obtain,
\begin{align*}
&\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^2+\lambda\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^2 \\
\leq &\paren{8C'^2\lambda R^{\frac{2p+2}{p+q}}\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^{\frac{2s(q-1)}{p+q}}-\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^2}\\
&+16 C'^2\lambda R^{\frac{2p+2}{p+q}}d_A(R)^{\frac{2s(q-1)}{(p+q)}}+16d_A(R)^2+4(\ell^2+1)(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})^2\\
\leq & \sup_{\tau\geq 0}\paren{8C'^2\lambda R^{\frac{2p+2}{p+q}}\tau^{\frac{2s(q-1)}{p+q}}-\tau^2}+16 C'^2\lambda R^{\frac{2p+2}{p+q}}d_A(R)^{\frac{2s(q-1)}{(p+q)}}\\
&+16d_A(R)^2+4(\ell^2+1)(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})^2 \\
= & C''^2R^{\frac{2p+2}{(p+q)-s(q-1)}}\lambda^{\frac{p+q}{(p+q)-s(q-1)}}+16 C'^2\lambda R^{\frac{2p+2}{p+q}}d_A(R)^{\frac{2s(q-1)}{(p+q)}}\\
&+16d_A(R)^2+4(\ell^2+1)(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})^2,
\end{align*}
where~$C''=\paren{\frac{C'^2 8s(q-1)}{p+q}}^{\frac{(p+q)}{2(p+q)-2s(q-1)}}\paren{\frac{(p+q)-s(q-1)}{s(q-1)}}^{1/2}$.
Hence we get,
\begin{align}\label{err_1.1}
&\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}\\ \nonumber
\leq &2(\ell+1)(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})+C''R^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}+4 C'\sqrt{\lambda}R^{\frac{p+1}{p+q}}d_A(R)^{\frac{s(q-1)}{(p+q)}}+4d_A(R)
\end{align}
and
\begin{align}\label{p_s.1}
&\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}\\ \nonumber
\leq &\frac{1}{\sqrt{\lambda}}\brac{2(\ell+1)(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})+C''R^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}+4 C'\sqrt{\lambda}R^{\frac{p+1}{p+q}}d_A(R)^{\frac{s(q-1)}{(p+q)}}+4d_A(R)}.
\end{align}
In case~$d_A(R)=0$, for some fixed~$\bar{R}$, we get explicit bounds from~\eqref{err_1.1} and~\eqref{p_s.1} in terms of~$m$ and~$\lambda$ using~\eqref{Theta.bound} and~\eqref{Psi.bound}.
For~$d_A(R)\neq 0$ and~$\lambda>0$, we optimize the bounds by balancing the terms in~$R$ and~$\lambda$. Let~$R = R(\lambda)$ solve the equation~$\Gamma(R) := d_A(R) R^{-\frac{p+1}{(p+q)-s(q-1)}}= \lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}$. The function~$\Gamma(R)$ is a non-vanishing decreasing function; hence the inverse~$\Gamma^{-1}$ exists and is decreasing. With this, the error bounds can be expressed as
\begin{align}\label{err_1.2}
&\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}\\ \nonumber
\leq &2(\ell+1)(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})+C'''R(\lambda)^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}
\end{align}
and
\begin{align}\label{p_s.2}
&\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}\\ \nonumber
\leq &\frac{1}{\sqrt{\lambda}}\brac{2(\ell+1)(\ell_1\Psi_\mathbf{x}+2\Theta_{\mathbf{z}})+C'''R(\lambda)^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}},
\end{align}
where~$C'''=C''+4 C'+4$.
Now, using~\eqref{Theta.bound} and~\eqref{Psi.bound} in~\eqref{err_1.2},~\eqref{p_s.2}, we obtain with probability~$1-\eta$,
\begin{equation}\label{err1.3}
\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}\leq \widetilde{C}\brac{\sqrt{\frac{\mathcal{N}(\lambda)}{m}}+R(\lambda)^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}}\log\paren{\frac{4}{\eta}}
\end{equation}
and
\begin{equation}\label{ps.3}
\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}\leq \frac{\widetilde{C}}{\sqrt{\lambda}}\brac{\sqrt{\frac{\mathcal{N}(\lambda)}{m}}+R(\lambda)^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}}\log\paren{\frac{4}{\eta}},
\end{equation}
where~$\widetilde{C}$ depends on~$\ell$,~$\ell_1$,~$p$,~$q$,~$s$,~$\kappa$,~$M$,~$\Sigma$,~$\alpha$.
Using the interpolation inequality~\eqref{interpolation} and Assumption~\ref{Ass:A}~(iv), we get,
\begin{align*}
&\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H}\\
\leq & \norm{L(f_{\mathbf{z},\la}-\fp^R)}^{\frac{p}{p+1}}_{\mathcal H}\norm{f_{\mathbf{z},\la}-\fp^R}^{\frac{1}{p+1}}_{\mathcal H_{-p}}\\
\leq & \alpha^{\frac{1}{p+1}}\norm{L(f_{\mathbf{z},\la}-\fp^R)}_{\mathcal H}^{\frac{p}{p+1}}\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathscr{L}^2(X,\nu;Y)}^{\frac{s}{p+1}}\\
\leq & \widetilde{C}\lambda^{-\frac{p}{2(p+1)}}\brac{\sqrt{\frac{\mathcal{N}(\lambda)}{m}}+R(\lambda)^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}}^{\frac{(p+s)}{(p+1)}}\log\paren{\frac{4}{\eta}}.
\end{align*}
This completes the proof.
\end{proof}
\section{Explicit rates under source condition}\label{Sec:Explicit.rates}
Here, we consider the smoothness of~$\hat f$ given by a source condition in terms of the operator~$L^{-1}$, to get the explicit rates in terms of~$m$ and~$\lambda$. The smoothness parameter~$r$ influences the rates of convergence: a larger~$r$ (smoother~$\hat f$) leads to faster convergence rates.
\begin{assumption}[General source condition]\label{source.cond}
The true solution~$\hat f$ satisfies the condition:
\begin{equation*}
\hat f-\bar f= L^{-r}v \text{ and }\norm{v}_{\mathcal H} \leq R^\dagger.
\end{equation*}
\end{assumption}
The rates in Theorem~\ref{err.upper.bound.p.1} can be further simplified in two cases, based on the behaviour of the distance function~$d_A(R)$.
In the case~$d_A(R)=0$, we get the explicit error bounds in terms of~$\lambda$ and~$m$ from Theorem~\ref{err.upper.bound.p.1}. We get~$d_A(\bar{R})=0$ when~$\hat f-\bar f=L^{-q}v$ and~$\norm{v}\leq \bar{R}$ for some~$\bar{R}$, i.e.,~$r \geq q$. Consequently, this also implies~$\hat f^{\bar{R}}=\hat f$. So, the rates of convergence in the reconstruction norm and the prediction norm can be given as:
\begin{align*}
\mathcal P \Big\{\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\hat f)}}_{\mathcal{L}^2} \leq \widetilde{C}\lambda^{\frac{1}{2}}&\paren{\sqrt{\frac{\mathcal{N}(\lambda)}{m\lambda}}+\bar{R}^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{s(q-1)}{2(p+q)-2s(q-1)}}}\log\paren{\frac{4}{\eta}}\Big\}\geq 1-\eta,\\
\mathcal P \Big\{\norm{f_{\mathbf{z},\la}-\hat f}_{\mathcal H}\leq \widetilde{C}\lambda^{\frac{s}{2(p+1)}}&\paren{\sqrt{\frac{\mathcal{N}(\lambda)}{m\lambda}}+\bar{R}^{\frac{p+1}{(p+q)-s(q-1)}}\lambda^{\frac{s(q-1)}{2(p+q)-2s(q-1)}}}^{\frac{(p+s)}{(p+1)}}\log\paren{\frac{4}{\eta}}\Big\}\geq 1-\eta.
\end{align*}
By balancing the error terms, we choose the regularization parameter~$\lambda$ in terms of the sample size~$m$. Consequently, we get the explicit rates of convergence in terms of the sample size.
\begin{corollary}\label{cor.err.upper.bound.gen}
Under the same assumptions of Theorem~\ref{err.upper.bound.p.1} and Assumption~\ref{source.cond} with~$r\geq q$ and the a-priori choice of the regularization parameter~$\lambda^*=\Theta_{\mathcal{N},u}^{-1}\paren{\frac{1}{\sqrt{m}}}$ for~$\Theta_{\mathcal{N},u}(t)=\frac{t^u}{\sqrt{\mathcal{N}(t)}}$ and~$u=\frac{p+q}{2(p+q)-2s(q-1)}$, for all~$0<\eta<1$, the following error estimates hold with confidence~$1-\eta$:
\begin{align*}
\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\hat f)}}_{\mathcal{L}^2} \leq \overline{C}\paren{\lambda^*}^u\log\paren{\frac{4}{\eta}}
\end{align*}
and
\begin{align*}
\norm{f_{\mathbf{z},\la}-\hat f}_{\mathcal H}\leq & \overline{C}\paren{\lambda^*}^{\frac{2u(s+p)-p}{2(p+1)}}\log\paren{\frac{4}{\eta}}
= \overline{C}\paren{\lambda^*}^{\frac{sq}{2(p+q)-2s(q-1)}}\log\paren{\frac{4}{\eta}},
\end{align*}
where~$\overline{C}$ depends on~$\ell$,~$\ell_1$,~$p$,~$q$,~$s$,~$\kappa$,~$M$,~$\Sigma$,~$\alpha$,~$\bar{R}$.
\end{corollary}
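Since~$\Theta_{\mathcal{N},u}$ is increasing, the implicitly defined~$\lambda^*$ can be computed numerically, e.g., by bisection. A minimal sketch (assuming the polynomial bound~$\mathcal{N}(t)=Ct^{-b}$ considered below, in which case the closed form~$\lambda^*=C^{\frac{1}{2u+b}}m^{-\frac{1}{2u+b}}$ is recovered):
\begin{verbatim}
import numpy as np

def lambda_star(m, u, N, lo=1e-16, hi=1.0, iters=200):
    # solve Theta(t) = t**u / sqrt(N(t)) = 1/sqrt(m) by bisection (log scale)
    theta = lambda t: t**u / np.sqrt(N(t))
    target = 1.0 / np.sqrt(m)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if theta(mid) < target else (lo, mid)
    return np.sqrt(lo * hi)

u, b, C, m = 0.75, 0.5, 1.0, 10**6     # assumed parameters
N = lambda t: C * t**(-b)
print(lambda_star(m, u, N), m**(-1.0 / (2*u + b)))   # the two values agree
\end{verbatim}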
In case~$d_A(R)\neq 0$, we have to estimate the function~$d_A(R)$ explicitly. We utilize the result of~\cite[Theorem~5.9]{Hofmann2007} to estimate the distance function using the source condition. For the benchmark smoothness~$q$ and the given smoothness~$r$, we assume that~$q\geq r$ and~$2q\geq p+r$. Then, under Assumption~\ref{source.cond}, we get the bound
$$
d(R) \leq \frac{\paren{R^\dagger}^{\frac{q}{q-r}}}{R^{\frac{r}{q-r}}},\quad R>0.
$$
Following the analysis in~\cite[Theorem~5.9]{Hofmann2007} we also obtain the bounds for the distance function:
\begin{equation}\label{R.choice}
d^{p}(R):=\norm{\fp^R-\hat f}_{\mathcal H_{-p}} \leq \frac{\paren{R^\dagger}^{\frac{q+p}{q-r}}}{R^{\frac{r+p}{q-r}}},\quad R>0.
\end{equation}
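The behaviour of the distance function is easy to inspect in a diagonal (sequence-space) model: with~$Le_k=k\,e_k$ and~$\hat f-\bar f=L^{-r}u$, the infimum in~\eqref{Defi:dist} can be computed from the Lagrangian stationarity condition and a bisection on the multiplier. A sketch (all parameter values assumed; the last printed column is the scaling~$R^{-r/(q-r)}$ of the bound on~$d(R)$ above):
\begin{verbatim}
import numpy as np

k = np.arange(1.0, 2001.0)
r, q = 0.5, 1.0                       # assumed smoothness / benchmark
fhat = k**(-r) * (1.0 / k)            # hat f - bar f = L^{-r} u with u_k = 1/k

def d_of_R(R):
    # minimize ||L^{-q} v - fhat|| subject to ||v|| <= R; stationarity gives
    # v_k = k^{-q} fhat_k / (k^{-2q} + mu), and we bisect on mu >= 0
    lo, hi = 1e-16, 1e16
    for _ in range(200):
        mu = np.sqrt(lo * hi)
        v = k**(-q) * fhat / (k**(-2*q) + mu)
        lo, hi = (mu, hi) if np.linalg.norm(v) > R else (lo, mu)
    v = k**(-q) * fhat / (k**(-2*q) + np.sqrt(lo * hi))
    return np.linalg.norm(k**(-q) * v - fhat)

for R in (0.1, 1.0, 10.0):
    print(R, d_of_R(R), R**(-r / (q - r)))
\end{verbatim}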
To bound the distance function~$d_A(R)$, we assume the following, in addition to Assumption~\ref{Ass:A}~(iv) with the same parameters:
\begin{assumption}\label{Cond.est}
There exists a constant~$\beta>0$ such that
\begin{equation*}
\alpha\norm{I_\nu\sbrac{A(f)-A(\hat f)}}_{\mathscr{L}^2(X,\nu;Y)}^s \leq \beta\norm{f-\hat f}_{\mathcal H_{-p}}
\end{equation*}
holds for all~$f \in Q$.
\end{assumption}
Now, according to Theorem~\ref{err.upper.bound.p.1}, we have to solve the following equation in order to estimate~$R$ in terms of~$\lambda$:
$$d_A(R) R^{-\frac{p+1}{(p+q)-s(q-1)}}= \lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}.$$
Here, we get the estimate of~$d_A(R)$ from Assumption~\ref{Cond.est} and the bound~\eqref{R.choice}. By ignoring the multiplicative constant in Assumption~\ref{Cond.est} we get the following identity from the above equation:
$$\frac{\paren{R^\dagger}^{\frac{q+p}{s(q-r)}}}{R^{\frac{r+p}{s(q-r)}}} R^{-\frac{p+1}{(p+q)-s(q-1)}}= \lambda^{\frac{p+q}{2(p+q)-2s(q-1)}}.$$
This yields
$$
R(\lambda) = \paren{R^{\dag}}^{\frac{(p+q)-s(q-1)}{(p+r)-s(r-1)}} \lambda^{\frac{s(r-q)}{2(p+r)-2s(r-1)}}.
$$
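For completeness, this follows by collecting the powers of~$R$ in the previous display: abbreviating~$E:=(p+q)-s(q-1)$, the equation takes the form
$$
R^{\frac{r+p}{s(q-r)}+\frac{p+1}{E}}=\paren{R^\dagger}^{\frac{q+p}{s(q-r)}}\lambda^{-\frac{p+q}{2E}},
\qquad\text{with}\quad
\frac{r+p}{s(q-r)}+\frac{p+1}{E}=\frac{(p+q)\sbrac{(p+r)-s(r-1)}}{s(q-r)\,E},
$$
and raising both sides to the reciprocal of the latter exponent gives the stated formula.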
We get the explicit error bound from Theorem~\ref{err.upper.bound.p.1} in terms of the sample size~$m$ and~$\lambda$ using the above dependency~$\lambda \to R(\lambda)$.
\begin{align*}
\mathcal P \Big\{\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\fp^R)}}_{\mathcal{L}^2} \leq \widetilde{C}\lambda^{\frac{1}{2}}&\paren{\sqrt{\frac{\mathcal{N}(\lambda)}{m\lambda}}+\paren{R^\dagger}^{\frac{p+1}{(p+r)-s(r-1)}}\lambda^{\frac{s(r-1)}{2(p+r)-2s(r-1)}}}\log\paren{\frac{4}{\eta}}\Big\}\geq 1-\eta,\\
\mathcal P \Big\{\norm{f_{\mathbf{z},\la}-\fp^R}_{\mathcal H}\leq \widetilde{C}\lambda^{\frac{s}{2(p+1)}}&\paren{\sqrt{\frac{\mathcal{N}(\lambda)}{m\lambda}}+\paren{R^\dagger}^{\frac{p+1}{(p+r)-s(r-1)}}\lambda^{\frac{s(r-1)}{2(p+r)-2s(r-1)}}}^{\frac{(p+s)}{(p+1)}}\log\paren{\frac{4}{\eta}}\Big\}\geq 1-\eta.
\end{align*}
Now, we get the following error estimates using the identity~$f_{\mathbf{z},\la}-\hat f=(f_{\mathbf{z},\la}-\fp^R)+(\fp^R-\hat f)$ and the above estimates of the distance functions.
\begin{align*}
\mathcal P \Big\{\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\hat f)}}_{\mathcal{L}^2} \leq \widetilde{C}\lambda^{\frac{1}{2}}&\paren{\sqrt{\frac{\mathcal{N}(\lambda)}{m\lambda}}+\paren{R^\dagger}^{\frac{p+1}{(p+r)-s(r-1)}}\lambda^{\frac{s(r-1)}{2(p+r)-2s(r-1)}}}\log\paren{\frac{4}{\eta}}\Big\}\geq 1-\eta,\\
\mathcal P \Big\{\norm{f_{\mathbf{z},\la}-\hat f}_{\mathcal H}\leq \widetilde{C}\lambda^{\frac{s}{2(p+1)}}&\paren{\sqrt{\frac{\mathcal{N}(\lambda)}{m\lambda}}+\paren{R^\dagger}^{\frac{p+1}{(p+r)-s(r-1)}}\lambda^{\frac{s(r-1)}{2(p+r)-2s(r-1)}}}^{\frac{(p+s)}{(p+1)}}\log\paren{\frac{4}{\eta}}\Big\}\geq 1-\eta.
\end{align*}
By balancing the error terms, we choose the regularization parameter~$\lambda$ in terms of the sample size~$m$. Consequently, we get the explicit rates of convergence in terms of the sample size.
\begin{corollary}\label{cor.err.upper.bound.gen.l}
Under the same assumptions of Theorem~\ref{err.upper.bound.p.1} and Assumption~\ref{source.cond} with~$r\leq q$,~$r+p\leq 2q$ and the a-priori choice of the regularization parameter~$\lambda^*=\Theta_{\mathcal{N},u}^{-1}\paren{\frac{1}{\sqrt{m}}}$ for~$\Theta_{\mathcal{N},u}(t)=\frac{t^u}{\sqrt{\mathcal{N}(t)}}$ and~$u=\frac{p+r}{2(p+r)-2s(r-1)}$, for all~$0<\eta<1$, the following error estimates hold with confidence~$1-\eta$:
\begin{align*}
\norm{I_\nu\sbrac{A(f_{\mathbf{z},\la})-A(\hat f)}}_{\mathcal{L}^2} \leq \overline{C}\paren{\lambda^*}^u\log\paren{\frac{4}{\eta}}
\end{align*}
and
\begin{align*}
\norm{f_{\mathbf{z},\la}-\hat f}_{\mathcal H}\leq & \overline{C}\paren{\lambda^*}^{\frac{2u(s+p)-p}{2(p+1)}}\log\paren{\frac{4}{\eta}}
= \overline{C}\paren{\lambda^*}^{\frac{sr}{2(p+r)-2s(r-1)}}\log\paren{\frac{4}{\eta}},
\end{align*}
where~$\overline{C}$ depends on~$\ell$,~$\ell_1$,~$p$,~$q$,~$s$,~$\kappa$,~$M$,~$\Sigma$,~$\alpha$,~$R^\dagger$.
\end{corollary}
The effective dimension exhibits different behaviour under different choices of the kernel and of the unknown probability measure~\cite{Lu2020}. We consider the following decay conditions on it.
\begin{assumption}[Polynomial decay condition]\label{N(l).bound}
Assume that for some~$0<b<1$ there exists some positive constant~$C>0$ such that
\begin{equation*}
\mathcal{N}(\lambda):=Tr\left((T_\nu+\lambda I)^{-1}T_\nu\right) \leq C\lambda^{-b},\forall \lambda>0.
\end{equation*}
\end{assumption}
\begin{assumption}[Logarithmic decay condition]\label{log.decay}
Assume that there exists some positive constant~$C>0$ such that
\begin{equation*}
\mathcal{N}(\lambda)\leq C\log\left(\frac{1}{\lambda}\right),\forall \lambda>0.
\end{equation*}
\end{assumption}
\begin{corollary}\label{err.upper.bound.p.para}
Under the same assumptions of Theorem~\ref{err.upper.bound.p.1} and Assumptions~\ref{source.cond},~\ref{Cond.est},~\ref{N(l).bound}, with the a-priori choice of the regularization parameter~$\lambda^*=m^{-\frac{1}{2u+b}}$, for all~$0<\eta<1$, the following error estimates hold with confidence~$1-\eta$:
\begin{align*}
\norm{f_{\mathbf{z},\la}-\hat f}_{\mathcal H}\leq &\widetilde{C}\paren{\lambda^*}^{\frac{2u(s+p)-p}{2(p+1)}}\log\paren{\frac{4}{\eta}},\qquad u=\frac{p+q}{2(p+q)-2s(q-1)} \quad \text{for} \quad r\geq q.\\
\norm{f_{\mathbf{z},\la}-\hat f}_{\mathcal H}\leq &\overline{C}\paren{\lambda^*}^{\frac{2u(s+p)-p}{2(p+1)}}\log\paren{\frac{4}{\eta}},\qquad u=\frac{p+r}{2(p+r)-2s(r-1)}\quad \text{for} \quad r\leq q,~r+p\leq 2q.
\end{align*}
\end{corollary}
\begin{corollary}\label{err.upper.bound.cor.log}
Under the same assumptions of Theorem~\ref{err.upper.bound.p.1} and Assumptions~\ref{source.cond},~\ref{Cond.est},~\ref{log.decay}, with the a-priori choice of the regularization parameter~$\lambda^*=\left(\frac{\log m}{m}\right)^{\frac{1}{2u}}$, for all~$0<\eta<1$, we have the following convergence rates with confidence~$1-\eta$:
\begin{align*}
\norm{f_{\mathbf{z},\la}-\hat f}_{\mathcal H}\leq &\widetilde{C}\paren{\lambda^*}^{\frac{2u(s+p)-p}{2(p+1)}}\log\paren{\frac{4}{\eta}},\qquad u=\frac{p+q}{2(p+q)-2s(q-1)} \quad \text{for} \quad r \geq q.\\
\norm{f_{\mathbf{z},\la}-\hat f}_{\mathcal H}\leq &\overline{C}\paren{\lambda^*}^{\frac{2u(s+p)-p}{2(p+1)}}\log\paren{\frac{4}{\eta}},\qquad u=\frac{p+r}{2(p+r)-2s(r-1)}\quad \text{for} \quad r \leq q,~r+p\leq 2q.
\end{align*}
\end{corollary}
Now, we summarize the above results and their conditions. We presented the rates of convergence under different decay conditions on the effective dimension in Corollaries~\ref{err.upper.bound.p.para} and~\ref{err.upper.bound.cor.log}. In both corollaries, we first discuss the case when the actual smoothness is higher than the benchmark smoothness of the true solution. In this case, we get the rates of convergence corresponding to the benchmark smoothness~$q$ for~$1\leq q\leq \min(r,2+p)$,~$0<s \leq 1$, although the actual smoothness is higher. Second, we discuss the case when the actual smoothness is smaller than the benchmark smoothness. Here, we get the error estimates corresponding to the actual smoothness~$r$ for~$\max(1,p,r) \leq q \leq 2+p$,~$0<s\leq 1$. So, the rates are the same as what we would get by directly using the smoothness information of the true solution. At the intersection point~$q = r$, both rates coincide. Hence, this analysis suggests that if we consider the benchmark smoothness in the appropriate range, then we get the best rates of convergence. We emphasize that our analysis covers the oversmoothing case, i.e.,\ $r\leq 1$.
\section*{Acknowledgements}
This research has been partially funded by Deutsche Forschungsgemeinschaft (DFG) under The Berlin Mathematics Research Center MATH+ (EXC-2046/1 - 390685689).\\
The author is grateful for fruitful discussions with Peter Math{\'e} about regularization in Hilbert Scales.
\section{Introduction}
The wetting and adhesion of soft materials have recently become a quickly expanding domain and find applications in the design of innovative materials (adhesives~\cite{Li378}, slippery surfaces~\cite{Newton2011}, highly stretchable electronics~\cite{Rogers1603}), in the analysis of the mechanics of cells and biological tissues~\cite{Discher2005aa,Douezan2012aa}, and, in between, in the field of bioengineering (reversible adhesives~\cite{King2014}, e-skin~\cite{Zoueaaq0508}, etc.). Reticulated polymer networks are model soft materials with versatile properties. At small length and time scales their structure is liquid-like and highly deformable. At large scales, however, the presence of crosslinks gives the polymer networks a finite shear modulus $G$, such that they behave like elastic solids~\cite{binder2011glassy,de1979scaling,doi1988theory,rubinstein2003polymer}. The elasticity is of entropic origin, and as a consequence the elastic moduli of polymer networks can be exceedingly small compared to those of (poly)crystalline materials, whose elasticity is of enthalpic origin.
This dual liquid-solid character of polymer networks has recently led to a strong controversy on the so-called Shuttleworth effect~\cite{Shut50,AS16,SJHD2017,AS2020}, which describes the capillary forces at an elastic interface. The key question is whether the surface energy $\gamma$ of a soft solid, which is a nano-scale quantity, depends on the amount of stretching, i.e. on the macroscopically applied deformation. If such a dependency exists, then the excess force per unit length in the interfacial region of the solid, which is by definition the surface tension $\Upsilon$, is not equal to the excess energy per unit surface area $\gamma$. The two quantities are related by the Shuttleworth relation~\cite{Shut50},
\begin{equation}\label{eq:shworth}
\Upsilon=\gamma + \lambda \frac{d \gamma}{d \lambda},
\end{equation}
where $\lambda$ is the stretch of a surface element. This offers an exciting perspective analogous to surface rheology, where surface tension $\Upsilon(\lambda)$ depends on the state of the system -- potentially leading to stiffening or even softening of the interface. However, given that interfacial properties are determined at the nanoscale, the emergence of a Shuttleworth effect for soft polymeric networks is debated \cite{AS2020,Marchand2012c,Bostwick:2014aa,XuNatComm2017,xu2018,Schulman2018aa,Snoeijer2018,liang2018surface,Wu2018,MasurelPRL2019,ChenDanielsSM2019,Gorcum2020}. To a large extent, the discussion is due to a lack of a consistent analytical theory to interpret macroscopic experiments.
Hitherto, all observations on the Shuttleworth effect in polymer networks are based on ``Soft Wetting''~\cite{AS2020}, where a liquid partially wets the substrate. A drop of liquid sitting on a soft amorphous polymeric solid exhibits a shape that is globally similar to that on a non-deformable crystalline solid. However, intermolecular forces are able to deform the soft solid over a scale set by the balance between capillarity and elasticity, known as the elastocapillary length~\cite{Rusanov:1975aa,Shanahan1987aa,Carre1996a,White:2003aa,PC2008aa,Jerison2011a,PARKNATURE}. Below this length scale, the soft substrate takes the shape of a sharp ridge that is characterised by the solid angle at its tip. A fundamental question is then how the contact angles, the prime characteristics of wetting, are selected in the hybrid case where both capillarity and elasticity play a role. Is the liquid contact angle with respect to the undeformed substrate still selected by Young's law? Is the local structure of the interfaces at the contact line selected by a simple force balance, leading to a generalised Neumann's law? What is the role of contact line pinning? The controversies on the existence, or not, of the Shuttleworth effect in soft solids revolve around these questions. For example, recent experiments probing a strain-dependent surface tension~\cite{XuNatComm2017,xu2018} have been based on the measurement of the angle $\theta_S$ made by the solid below the contact line ($\theta_S$ defined in Fig.~\ref{fig:zoom}a). Indeed, such an angle receives a simple explanation when a Neumann force balance of surface tensions is assumed -- as was originally derived using the small deformation theory of linear elasticity~\cite{Marchand2012a,Style2012a,Limat2012a,Style2013b}. However, this interpretation has been challenged by molecular \cite{liang2018surface} and continuum simulations \cite{Wu2018,MasurelPRL2019}, suggesting that the elastic stress contributes to the force balance at the contact line -- potentially giving a change in $\theta_S$ without invoking any Shuttleworth effect. A recent proposal is that the wetting ridge below the contact line could behave like a disclination defect in a crystalline solid~\cite{MasurelPRL2019}: in the regime of large deformations, a singular Eshelby force could then emerge at the contact line, which would enter the force balance and invalidate Neumann's law. Numerical simulations using a finite element method may appear to support such an alternative description of the soft wetting problem \cite{Wu2018,MasurelPRL2019}, where no Shuttleworth effect is present but an elastic singularity appears at the contact line. However, no closed-form analytical theory is available to predict the properties of wetting ridges at large deformations \cite{Brummelen:2017sh}.
Before trying to analyse the microscopic origin of a potential Shuttleworth effect, implying a strain-dependent surface tension $\Upsilon(\lambda)$, there is an urgent need to clarify the mechanical consequences of the existence of such an effect. In particular, numerical simulations ultimately rely on a mechanical description which must be fully self-consistent, including the possibility of singularities. If such singularities do exist, then non-adaptive numerical approximations become unreliable for obtaining the correct solution of the problem.
In this paper we numerically resolve the problem of soft wetting, using an adaptive numerical technique that allows us to resolve the elastocapillary wetting ridge on all scales (Fig.~\ref{fig:zoom}a). This includes the possibility of singularities, large elastic deformations and the Shuttleworth effect. It is found that the elastic singularity at the wetting ridge is not sufficiently strong to interfere with the balance of surface tensions at the contact line, so that Neumann's law is universally valid -- irrespective of the presence of large deformations, Shuttleworth effect and pinning. Subsequently, we derive exact solutions of nonlinear elasticity that analytically resolve the ridge singularity in the presence of large deformations. These asymptotic solutions, valid near the singularity, are fully confirmed by the numerical results and offer a novel route to interpret experiments, via the surface stretch measured at the contact line. Applying our analysis to the strain measurements in~\cite{XuNatComm2017}, we provide further evidence for a strong Shuttleworth effect. Finally, we show how Eshelby-like forces can emerge when the substrate has true defects that act as pinning sites, and reveal their effect on the contact angles.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{Figure1_mod}
\caption{Symmetric wetting ridges under large deformation, with and without Shuttleworth effect. (a) Typical numerical solution, where successive magnifications show the adaptive resolution of the elastocapillary ridge. The example is a case without Shuttleworth effect, with equal liquid and solid surface energies $\gamma$ (giving a solid angle $\theta_S=120^\circ$). The scales are expressed in the corresponding elastocapillary length $\gamma/G$. (b) The solid angle $\theta_S$ versus the ratio of liquid-vapor surface tension $\gamma_{LV}$ and solid surface tension $\Upsilon_S$. Symbols are numerical results with Shuttleworth effect (open symbols, $\Upsilon_S$ measured at the contact line) and without Shuttleworth effect (closed symbols). We varied both $\gamma_{LV}$ (circles), and the amount of prestretch of the substrate from $\lambda_\infty=1$ to 2 (squares). The solid line corresponds to Neumann's law (\ref{eq:neumsymmetric}), with $\Upsilon_S$ based on its value at the contact line.}
\label{fig:zoom}
\end{figure}
\section{Free energy formulation}
In experiments, the drop size is usually large compared to the elastocapillary length, $\gamma/G$, where $\gamma$ is a typical surface energy (of the solid or the liquid), while $G$ is the shear modulus of the substrate. In this regime, the curvature of the contact line is negligible compared to the size of the wetting ridge, and the geometry is quasi-two-dimensional. Below, we therefore formulate the problem in a plane strain description and explain the numerical method that is used to adaptively resolve the singular nature of the elastocapillary ridge.
\subsection{Minimising the elastocapillary energy}
The statics of wetting amounts to finding the state of minimal elastocapillary energy. The substrate deformation is described by a mapping from the reference state prior to deformation, to a current state after deformation. Following standard notation, the mapping is written as
\begin{equation}
\mathbf x = \chi(\mathbf X),
\end{equation}
where $\mathbf X$ is the position of a material point on the reference domain, mapped onto its current position $\mathbf x$. We consider the geometry to be invariant along the contact line, so that the problem is two-dimensional (plane strain elasticity). Hyperelastic solids are described by an elastic energy density $W(\mathbf F)$, which depends on the deformation gradient tensor $\mathbf F = \partial \mathbf x/\partial \mathbf X$. We now turn to the interface, which in the (plane strain) two-dimensional description is one-dimensional. We define the arc-length material coordinate at the interface as $S$, and the current surface position $\mathbf x_s(S)=\chi(\mathbf X(S))$. The surface stretch, accounting for the change of length of surface elements, follows as
\begin{equation}
\lambda^2 = \frac{\partial \mathbf x_s}{\partial S} \cdot \frac{\partial \mathbf x_s}{\partial S},
\end{equation}
which is a scalar in this plane strain description. Now we can express capillarity, usually defined by the excess energy $\gamma$ per unit area of the \emph{deformed} state, as a free energy $\lambda\gamma$ per unit area in the \emph{reference} state.
Crucially, elastic media can exhibit a nontrivial capillarity where the surface energy $\gamma(\lambda)$ will itself be a function of the stretch $\lambda$ -- this is the Shuttleworth effect~\cite{Shut50,AS16,SJHD2017,AS2020}.
With this, the elastocapillary energy (per unit length along the contact line) takes the form
\begin{eqnarray}
\label{eq:Echi}
\mathcal E[\chi]
= \int d^2X \, W(\mathbf F) + \oint dS \, \lambda \gamma(\lambda),
\end{eqnarray}
respectively giving the total (bulk) elastic energy and the (surface) capillary energy. $\mathbf F$ and $\lambda$ are the corresponding bulk and surface measures of deformation, and are both defined by the map $\chi(\mathbf X)$. We anticipate that we will consider incompressible substrates, in which case the constraint of incompressibility will be included in $W(\mathbf F)$.
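As a minimal numerical illustration of the surface term in (\ref{eq:Echi}) -- our own sketch, not the production code described below -- one can sample the surface map, compute the stretch, and integrate $\lambda\gamma(\lambda)$ over the reference arc length:
\begin{verbatim}
import numpy as np

def surface_energy(xs, S, gamma):
    # xs: (n, 2) current surface positions x_s(S); S: (n,) material arc length;
    # gamma: callable surface energy gamma(lambda)
    dxdS = np.gradient(xs, S, axis=0)
    lam = np.linalg.norm(dxdS, axis=1)     # surface stretch lambda(S)
    g = lam * gamma(lam)                   # energy per unit *reference* length
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(S))  # trapezoidal rule

# sanity check: a flat interface uniformly stretched by lambda = 2, with
# constant gamma = 1, has energy lambda * gamma * L = 2 per unit reference length
S = np.linspace(0.0, 1.0, 201)
xs = np.column_stack((2.0 * S, np.zeros_like(S)))
print(surface_energy(xs, S, lambda lam: np.ones_like(lam)))  # ~2.0
\end{verbatim}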
Equilibrium configurations of the elastocapillary substrate are found by minimising the functional $\mathcal E$ with respect to $\chi(\mathbf X)$. Considering variations $\delta \mathbf x = \delta \chi(\mathbf X)$, we find
\begin{eqnarray}\label{eq:var}
\delta \mathcal E &=&
\int d^2X \, \left( \frac{\partial W}{\partial \mathbf F} : \delta \mathbf F \right)
+ \oint dS \, \frac{d(\lambda \gamma)}{d\lambda} \delta \lambda
=\int d^2X \, \left(\mathbf s : \mathrm{Grad}(\delta \mathbf x) \right)
+ \oint dS \, \left(\Upsilon \mathbf t \cdot \frac{\partial \delta \mathbf x}{\partial S}\right).
\end{eqnarray}
Here we introduced the nominal (or first Piola-Kirchhoff) stress tensor $\mathbf s$, and the surface tension $\Upsilon$,
\begin{equation}\label{eq:upsilondef}
\mathbf s = \frac{\partial W}{\partial \mathbf F}, \quad \quad \Upsilon = \frac{d(\lambda \gamma)}{d\lambda} = \gamma + \lambda \frac{d\gamma}{d\lambda},
\end{equation}
where for the latter we indeed recognise the Shuttleworth relation (\ref{eq:shworth}).
In addition, we used that $\delta \lambda = \mathbf t \cdot \partial \delta \mathbf x/\partial S$ along the boundary, where $\mathbf t$ is the surface-tangent unit vector in the current configuration.
To study the elastocapillary ridge, we still need to include the pull of the contact line, induced by the liquid drop that is wetting the solid. This can be achieved by making explicit the capillary energy of the drop, via its liquid-vapour surface energy $\gamma_{LV}$. The subtlety here is that one needs to impose a constraint at the contact line~\cite{LubbersJFM14,Snoeijer2018}: the position $\mathbf x$ of the liquid-vapour interface must (by definition) coincide with that of the solid interface. The effect of this constraint, imposed by a Lagrange multiplier, provides a localised traction on the substrate, pulling with a strength $\gamma_{LV}$ along the direction of the liquid-vapour interface $\mathbf t_{LV}$ \footnote{Note that in the present work the solid interface is treated as part of the substrate, so that the external traction of the liquid-vapour interface is $\gamma_{LV}\mathbf t_{LV}$. The force transmitted onto the elastic \emph{bulk} of the substrate, i.e. after passing the interface, is more intricate as discussed e.g. in~\cite{Marchand2012c,Bostwick:2014aa,AS16}}. The representation by a local force is indeed commonly used in modelling approaches~\cite{Style2012a,Limat2012a,Style2013b,Wu2018,MasurelPRL2019}. Here we therefore treat the contact line as a perfectly localised external traction, with the associated work functional $\mathcal R = \gamma_{LV} \mathbf t_{LV} \cdot \mathbf x(\mathbf X_{\rm cl})$, where $\mathbf X_{\rm cl}$ is the solid's material point at which the contact line is acting. During the variation this corresponds to a work
\begin{equation}\label{eq:forcing}
\delta \mathcal R = \gamma_{LV} \mathbf t_{LV} \cdot \delta \mathbf x(\mathbf X_{\rm cl}).
\end{equation}
The virtual work principle, $\delta \mathcal E = \delta \mathcal R$, then gives the equilibrium condition
\begin{eqnarray}\label{eq:weak}
\int d^2X \, \left(\mathbf s : \mathrm{Grad}(\delta \mathbf x) \right)
+ \oint dS \, \left(\Upsilon \mathbf t \cdot \frac{\partial \delta \mathbf x}{\partial S}\right) =
\gamma_{LV} \mathbf t_{LV} \cdot \delta \mathbf x(\mathbf X_{\rm cl}),
\end{eqnarray}
which should be satisfied for arbitrary $\delta \mathbf x$.
Equation~(\ref{eq:weak}) defines the elastocapillary equilibrium in the weak formulation. This equilibrium is indeed highly singular. Namely, the forcing on the right hand side appears as a point force, pulling at $\mathbf X_{\rm cl}$, while the elastocapillary energies on the left contains only surface and bulk contributions. The debate in the literature precisely revolves around the following question: Do singularities appear in surface (capillarity) or in bulk (elasticity), in order to balance the point force at the contact line?
\subsection{Numerical method}
Our interest pertains to finding equilibrium configurations of the elastocapillary problem, i.e. to minimisers of the energy functional in~(\ref{eq:Echi}) extended with the work functional $\mathcal R$ representing the contact line, subject to appropriate boundary conditions.
Specifically, we consider substrates that are flat in the reference configuration, with complete fixation at the bottom boundary and guided fixation (slip) at the lateral boundaries. We allow for the possibility to impose a prestretch $\lambda_\infty$, referring to the uniaxial stretch far away from the contact line. Besides the work associated to the point-forcing at the contact line, the top surface is free of traction, as is made explicit in the weak formulation (\ref{eq:weak}) of the minimisation problem. The constitutive relations for the strain-energy density and the surface energy are specified in Section~\ref{sec:FEMresults} below. In all simulations, the shear modulus $G$ and the relevant surface energies are chosen such that the wetting ridge is much smaller than the width of the domain, with a typical example given in Fig.~\ref{fig:zoom}(a). In that example the domain width and height are $8 \gamma_{LV}/G$ and $\frac{8}{3}\gamma_{LV}/G$, respectively, which is representative for all presented results.
Here we numerically approximate the minimiser of $\mathcal E - \mathcal R$ by means of a \emph{goal-adaptive finite-element method}~\cite{BeckerRannacher2001,Oden:2001ss}. In goal-adaptive methods, the finite-element approximation is locally refined on the basis of an a-posteriori error estimate, in such a manner that an optimal approximation to a predefined quantity of interest (the \emph{goal\/}) is obtained. Goal-adaptive finite-element methods generally proceed according to the SEMR (\texttt{Solve} $\rightarrow$ \texttt{Estimate} $\rightarrow$ \texttt{Mark} $\rightarrow$ \texttt{Refine}) process~\cite{Nochetto:2012hl,Brummelen:2017rr}. The SEMR process starts by solving a finite-element approximation on a coarse mesh. Next, the contribution of each element to the error in the goal quantity is estimated, based on a so-called dual problem~\cite{BeckerRannacher2001,Oden:2001ss,Brummelen:2017rr}.
The elements that yield the largest contribution to the error are marked according to a refinement strategy. These marked elements are subsequently refined by subdivision. This process is repeated until a certain threshold for the error estimate is satisfied or a preset number of refinement iterations has been executed. In accordance with our interest in minimisers of $\mathcal E - \mathcal R$, we take the energy itself as the goal functional. The optimality conditions are resolved by means of the Newton--Raphson method. The goal-adaptive finite-element method for the present problem has been implemented in the open-source software framework Nutils~\cite{nutils}. The optimality conditions~(\ref{eq:weak}) are in fact directly derived from an implementation of the energy functional $\mathcal E - \mathcal R$ via the automatic-differentiation functionality in Nutils.
An illustration of a goal-adaptive finite-element approximation is provided in Fig.~\ref{fig:zoom}(a). The approximation is based on 16 refinement iterations. Accordingly, the smallest elements in the adaptive approximation are $2^{16}$ times smaller than the initial element size. The initial mesh comprises $24\times8$ uniform quadrilateral elements and, correspondingly, the smallest elements are 5--6 orders of magnitude smaller than the elastocapillary length. Importantly, the adaptive procedure automatically introduces the local refinements in the vicinity of the contact line. This refinement pattern is in agreement with the singularity of the pressure towards the contact line, and we extensively verified the numerical convergence of the result. For the result shown in Fig.~\ref{fig:zoom}(a), the relative numerical error in the computed value of the solid opening angle $\theta_S$ is less than $10^{-6}$.
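The structure of the SEMR process can be conveyed by a self-contained one-dimensional toy problem: adaptive refinement of a quadrature mesh for an integrand with a logarithmic singularity, mimicking the pressure behaviour near the contact line. This is a schematic analogue of our own making; it does not use Nutils or the actual dual-problem estimator.
\begin{verbatim}
import numpy as np

f = lambda x: np.log(np.abs(x - 0.3) + 1e-12)**2  # log-singular integrand at x = 0.3

nodes = np.linspace(0.0, 1.0, 9)                  # coarse initial mesh
for _ in range(16):
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    h = np.diff(nodes)
    # Solve: element-wise trapezoid; Estimate: compare with a bisected evaluation
    coarse = 0.5 * (f(nodes[:-1]) + f(nodes[1:])) * h
    fine = 0.25 * (f(nodes[:-1]) + 2 * f(mid) + f(nodes[1:])) * h
    eta = np.abs(fine - coarse)                   # per-element error indicators
    marked = eta > 0.25 * eta.max()               # Mark: largest contributions
    nodes = np.sort(np.concatenate([nodes, mid[marked]]))  # Refine: bisect marked

# the mesh clusters automatically around the singularity, as in Fig. 1(a)
print(len(nodes), np.min(np.diff(nodes)))
\end{verbatim}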
\section{Elastocapillary ridges, with and without Shuttleworth effect}\label{sec:FEMresults}
We now present the adaptively resolved numerical results for the elastocapillary ridge. We will consider cases with constant surface energy and with variable surface energy, i.e. without and with Shuttleworth effect. For the bulk elasticity, we will consider materials with a neo-Hookean strain-energy density (using plane strain),
\begin{equation}\label{eq:Wneohookean}
W(\mathbf F) = \frac{1}{2}G \left( \mathbf F^T \!\!:\! \mathbf F - 2 \right) - p \left( \det \mathbf F - 1\right),
\end{equation}
where we introduced the pressure $p$ to impose the constraint of incompressibility. In contrast to bulk elasticity, there are no standard constitutive relations for the surface energy of soft solids. Here, we propose a surface energy of the form
\begin{equation}\label{eq:rheology}
\gamma_S(\lambda) = \gamma_0 \left( 1 - c_0 \log \lambda + c_1 (\lambda-1) \right).
\end{equation}
From now on we add the subscript ``$S$'' to indicate that we refer to the solid interface (to distinguish it from the liquid-vapour surface energy $\gamma_{LV}$). Expanding (\ref{eq:rheology}) around $\lambda=1$ up to quadratic order, one recovers the Ansatz for surface elasticity as proposed in \cite{Gorcum2020}, while if in addition $c_0=c_1$ one finds the linear surface elasticity proposed in \cite{xu2018}. An advantage of the constitutive relation (\ref{eq:rheology}) is that the logarithm conveniently keeps the system away from $\lambda \rightarrow 0$. The parameters $c_{0,1}$ must satisfy an admissibility condition such that the surface energy remains convex and that both the energy $\gamma_S$ and the surface tension $\Upsilon_S$ remain positive definite. According to the Shuttleworth relation of (\ref{eq:upsilondef}), the above surface energy gives a surface tension
\begin{equation}
\Upsilon_S(\lambda) = \gamma_0 \left( 1 + c_1- c_0 - c_0 \log \lambda + 2 c_1 (\lambda-1) \right),
\end{equation}
and one verifies that ensuring $\Upsilon_S>0$ is sufficient for the constants $c_{0,1}$ to be admissible. Below we present results for the case where $c_{0,1}=0$ (no Shuttleworth effect), and for $c_{0,1}=1$ (strong Shuttleworth effect), which are indeed in the admissible regime. For later reference we also define the associated ``chemical potential"
\begin{equation}\label{eq:gammamu}
\mu_S(\lambda) \equiv \lambda^2 \frac{d\gamma_S}{d\lambda}= \gamma_0 \left( c_1 \lambda^2 - c_0 \lambda \right),
\end{equation}
which will be relevant in Sec.~\ref{sec:pinning}.
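The expressions for $\Upsilon_S$ and $\mu_S$ follow from (\ref{eq:rheology}) by elementary algebra, which can be verified symbolically; the short script below is our own check.
\begin{verbatim}
import sympy as sp

lam, g0, c0, c1 = sp.symbols('lambda gamma_0 c_0 c_1', positive=True)

gamma_S = g0 * (1 - c0 * sp.log(lam) + c1 * (lam - 1))        # eq. (rheology)

Upsilon_S = sp.expand(gamma_S + lam * sp.diff(gamma_S, lam))  # Shuttleworth relation
mu_S = sp.expand(lam**2 * sp.diff(gamma_S, lam))              # chemical potential

# after grouping: gamma_0*(1 + c_1 - c_0 - c_0*log(lambda) + 2*c_1*(lambda - 1))
print(Upsilon_S)
# gamma_0*(c_1*lambda**2 - c_0*lambda)
print(mu_S)
\end{verbatim}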
In general, the solid-liquid and solid-vapour interfaces of course exhibit different surface constitutive relations, which we write $\gamma_{SL}(\lambda)$ and $\gamma_{SV}(\lambda)$, respectively. For most of the paper we focus on cases where the solid-liquid and solid-vapour energies are identical, and simply denoted $\gamma_{S}(\lambda)$. This renders the problem symmetric around the contact line, so that the equilibrium contact angle of the liquid is $90^\circ$ and the associated forcing is vertical. Also, this symmetry replaces the ``second boundary condition'' discussed in~\cite{Snoeijer2018,AS2020}. Asymmetric surface energies will be considered in Sec.~\ref{sec:pinning}, where we address the relation between pinning, the contact angle, and the second boundary condition.
\subsection{Universality of Neumann's law}
We first consider the solid angle $\theta_S$, as measured at the tip of the wetting ridge in FEM. Figure~\ref{fig:zoom}(b) shows $\theta_S$ plotted against $\gamma_{LV}/\Upsilon_S$, with the value of $\Upsilon_S$ taken at the tip. Clearly, $\theta_S$ follows a universal curve for all cases considered. In our simulations we varied the contact line force $\gamma_{LV}$ (compared to the value of $\gamma_0$ in (\ref{eq:rheology})), and considered solid surface tensions without and with Shuttleworth effect ($c_{0,1}=0$, respectively $c_{0,1}=1$). We also considered different amounts of prestretch of the substrate, ranging from $\lambda_\infty=1$ (no prestretch) to $\lambda_\infty=2$ (extending the length by 100\%). The universal curve for $\theta_S$ indeed follows Neumann's law, which for the specific case of identical solid-liquid and solid-vapour energies reads
\begin{equation}\label{eq:neumsymmetric}
2\Upsilon_S \sin \left( \frac{1}{2}(\pi - \theta_S) \right) = \gamma_{LV}.
\end{equation}
Here, we emphasise that owing to the Shuttleworth effect, the surface tension $\Upsilon_S(\lambda)$ depends on the strain. Since the Neumann balance is to be interpreted as a boundary condition at the contact line, we consider (\ref{eq:neumsymmetric}) with values of the stretch $\lambda_{\rm cl}$ taken at the contact line. The result of (\ref{eq:neumsymmetric}) is superimposed as the solid line in Fig.~\ref{fig:zoom}(b), providing a perfect description of the FEM results.
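Inverting (\ref{eq:neumsymmetric}) for the solid angle is elementary; as a minimal sketch, the following lines reproduce the $120^\circ$ ridge of Fig.~\ref{fig:zoom}(a) for equal surface tensions.
\begin{verbatim}
import numpy as np

def solid_angle(gamma_LV, Upsilon_S):
    # solve 2 * Upsilon_S * sin((pi - theta_S) / 2) = gamma_LV for theta_S
    return np.pi - 2.0 * np.arcsin(gamma_LV / (2.0 * Upsilon_S))

print(np.degrees(solid_angle(1.0, 1.0)))  # 120.0 degrees for gamma_LV = Upsilon_S
\end{verbatim}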
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{Figure2_mod}
\caption{Geometry of the elastocapillary ridge upon stretching the substrate. (a) Solid angle $\theta_S$ as a function of the stretch at the contact line $\lambda_{\rm cl}$. Open circles correspond to FEM in the presence of a strong Shuttleworth effect ($c_0=c_1=1$, with $\gamma_0=\gamma_{LV}$). The solid line is the analytical prediction by Neumann's law (\ref{eq:neumsymmetric}). The closed circles (several measurements superimposed) correspond to FEM without a Shuttleworth effect ($c_0=c_1=0$, with $\gamma_0=\gamma_{LV}$). (b) Relation between the stretch at the contact line $\lambda_{\rm cl}$ and the globally imposed stretch $\lambda_\infty$. In the presence of a strong Shuttleworth effect, the two stretches take on very similar values (the red dashed line, $\lambda_{\rm cl}=\lambda_\infty$, is a guide to the eye). Without Shuttleworth effect, $\lambda_{\rm cl}=\sqrt{\pi/\theta_S}$ takes on a constant value (dash-dotted line).}
\label{fig:thetavslambda}
\end{figure}
We thus reach a first major conclusion: Neumann's law (based on the local values of the surface tension) universally applies to elastocapillary wetting ridges, irrespective of the large elastic deformations at the contact line. This rejects the recent hypothesis that strong elastic nonlinearity, as encountered for narrow $\theta_S$ and large prestretch, would lead to a failure of Neumann's law~\cite{MasurelPRL2019}. The universal validity of Neumann's law has an immediate consequence for measurements of the surface-constitutive relation based on $\theta_S$, since we can safely conclude that $\theta_S$ gives direct access to the values of $\Upsilon_S$. Phrased differently, the experimental observation for PDMS that $\theta_S$ increases with $\lambda_\infty$~\cite{XuNatComm2017} can, in a macroscopic theory based on hyperelasticity, only be explained via a strong Shuttleworth effect.
To further illustrate this, we closely follow the experimental protocol of \cite{XuNatComm2017} in our simulations, and consider how the geometry of the ridge evolves when stretching the substrate by an increasing amount $\lambda_\infty$. Figure~\ref{fig:thetavslambda}(a) shows $\theta_S$ versus the stretch at the contact line $\lambda_{\rm cl}$. The open circles are FEM results with a Shuttleworth effect ($c_{0,1}=1$, and $\gamma_0=\gamma_{LV}$), showing an increase of the solid opening angle $\theta_S$. Indeed, the dependence of $\theta_S$ is perfectly predicted by Neumann's law (\ref{eq:neumsymmetric}), as is indicated by the solid line. In experiments, one of course does not control the stretch at the contact line $\lambda_{\rm cl}$, but rather the global stretch of the substrate $\lambda_\infty$. In Fig.~\ref{fig:thetavslambda}(b) we therefore plot these two stretches against one another. While $\lambda_{\rm cl}$ is not exactly identical to the imposed stretch $\lambda_\infty$, the differences turn out to be minor -- consistent with experiments \cite{XuNatComm2017}. As a guide to the eye, the dashed line in Fig.~\ref{fig:thetavslambda}(b) indicates $\lambda_{\rm cl}=\lambda_\infty$. We expect this near-homogeneity of $\lambda$'s to arise only for nearly symmetric $\gamma_{SL}$ and $\gamma_{SV}$, as asymmetry in general leads to stronger gradients of stretch (cf. Sec~\ref{sec:pinning}).
The scenario changes dramatically when the substrate does \emph{not} exhibit a Shuttleworth effect (i.e. $c_{0,1}=0$). In that case, both $\theta_S$ and $\lambda_{\rm cl}$ take on a constant value that is totally independent of the imposed $\lambda_\infty$. This is indicated in Fig.~\ref{fig:thetavslambda}(a) by the closed circle -- which in fact corresponds to various simulations with $\lambda_\infty$ ranging from 1 to 2. This invariance of $\theta_S$ with respect to $\lambda_\infty$ is easily understood from the Neumann balance. Namely, surface tensions are constant when $c_{0,1}=0$, and since we consider $\gamma_0=\gamma_{LV}$ we find that $\theta_S=120^\circ$. By contrast, the invariance of the stretch at the tip comes as a surprise and its explanation calls for a better understanding of the nature of the elastic singularity. Below, we will derive analytically that without the Shuttleworth effect, $\lambda_{\rm cl}=\sqrt{\pi/\theta_S}$, irrespective of the externally imposed prestretch $\lambda_\infty$ of the substrate.
Measurements of the stretch at the contact line thus provide important additional information on the Shuttleworth effect, which to date has not been explored. Namely, the experiments of \cite{XuNatComm2017} reveal an increase of stretch at the contact line upon a global stretching of the substrate. From the above it is clear that such a dependence cannot occur, in a macroscopic theory based on hyperelasticity, when there is no Shuttleworth effect.
\subsection{Stress singularity and the elastic Marangoni effect}
To further analyse the vicinity of the tip, we now turn to the elastic stress measured along the free surface. In Fig.~\ref{fig:pressure}(a,b) we plot the pressure $p$ as a function of the distance to the contact line $x$, on a semilogarithmic scale. In all cases the FEM simulations exhibit a weak singularity of the pressure, diverging logarithmically with the distance to the tip.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{Figure3_mod}
\caption{Elastic stress along the free surface near the ridge singularity (symmetric ridges). (a) Pressure $p$ vs distance to the contact line $x$, scaled as indicated on the axes. Data correspond to the situation without Shuttleworth ($c_{0,1}=0$) with different $\theta_S$ obtained by varying the ratio $\gamma_{LV}/\gamma_0$. (b) Pressure $p$ vs distance to the contact line $x$, scaled as indicated on the axes. Data correspond to the situation with Shuttleworth ($c_{0,1}=1$) with different amounts of prestretch $\lambda_\infty$. (c) Shear stress $\sigma_{nt}$ vs distance to the contact line $x$, scaled as indicated on the axes. Data correspond to the same case as in (b).}
\label{fig:pressure}
\end{figure}
Panel (a) corresponds to a case without Shuttleworth effect $(c_{0,1}=0)$, for different ratios $\gamma_{LV}/\gamma_0$. With this, we cover a broad range of $\theta_S$, down to very narrow angles of $20^\circ$. The prefactor of the logarithmic pressure singularity is larger for narrow $\theta_S$. The pressure plotted in Fig.~\ref{fig:pressure}(a) is scaled by $G\times\left(\frac{\pi}{\theta_S} -\frac{\theta_S}{\pi} \right)$, which indeed captures the $\theta_S$ dependence of the prefactor of the singularity. We remark that for very narrow angles the logarithmic asymptotics only emerge at distances much below the elastocapillary length $\gamma_0/G$; this illustrates the challenge of accurate numerical resolution for small $\theta_S$. Panel (b) corresponds to the case with a strong Shuttleworth effect $(c_{0,1}=1)$, for different amounts of substrate prestretch $\lambda_\infty$ (the corresponding $\theta_S$ are in Fig.~\ref{fig:thetavslambda}). Figure~\ref{fig:pressure}(b) again reveals a logarithmic singularity of the pressure, with a weak variation of the prefactor with $\lambda_\infty$.
Interestingly, the Shuttleworth effect allows for a new phenomenon induced by gradients of surface tension. For liquid interfaces, gradients in surface tension arise due to gradients in composition or in temperature -- this is known as the Marangoni effect, and leads to tangential interfacial stress. For the elastic interfaces considered here, the gradients in surface tension are due to gradients of $\lambda$ along the interface. Given this analogy, we refer to this as the \emph{elastic Marangoni effect}.
Figure~\ref{fig:pressure}(c) indeed reveals the emergence of elastic (Cauchy) shear stress $\sigma_{nt}$ along the interface, which we will refer to as elastic Marangoni stress. Somewhat surprisingly, the Marangoni stress is not singular, but converges to a constant value upon approaching the contact line. This elastic Marangoni stress can be positive or negative, depending on the prestretch that is imposed. Without prestretch ($\lambda_\infty=1$), the contact line region will have the largest surface tension, giving a Marangoni stress that is oriented towards the contact line ($\sigma_{nt} < 0$). Conversely, when the imposed $\lambda_\infty$ is large, the contact line region has the smallest surface tension and the Marangoni stress is directed away from the contact line ($\sigma_{nt} > 0$). This is further quantified in Fig.~\ref{fig:marangoni}, where the change of direction of the Marangoni effect is observed to be close to $\lambda_{\rm cl}\approx 1.2$. Indeed, this nearly coincides with the point where $\lambda_{\rm cl} \approx \lambda_\infty$ [cf. Fig.~\ref{fig:thetavslambda}(b)]. So the orientation of the Marangoni stress depends on whether the stretch at the tip is larger or smaller than the stretch imposed at large distance.
\section{Exact nonlinear solutions}\label{sec:theory}
\subsection{Splitting off the singularity}
We will now pursue a fully analytical theory for the numerical observations above. We have seen that the elastic singularity is weak, only logarithmic in the stress, so we first try to split off the singularity. For this, we perform an integration by parts on (\ref{eq:weak}) by writing
\begin{eqnarray}
\delta \mathcal E
&=& \int d^2X \, \left(
\mathrm{Div} (\mathbf s \cdot \delta \mathbf x)
- \mathrm{Div}\left( \mathbf s \right) \cdot \delta \mathbf x\right)
+ \oint dS \, \left( \frac{\partial}{\partial S}\left( \Upsilon \mathbf t \cdot \delta \mathbf x\right)
- \frac{\partial (\Upsilon \mathbf t)}{\partial S}\cdot \delta \mathbf x \right)
\end{eqnarray}
The integral over the third term indeed gives point-like contributions
\begin{equation}\label{eq:disc}
\oint dS \, \frac{\partial}{\partial S}\left( \Upsilon \mathbf t \cdot \delta \mathbf x\right) =
- \sum_{\mathrm{disc.} \, i} \left[ \Upsilon \mathbf t \right]^+_- \cdot \delta \mathbf x_i,
\end{equation}
where the sum runs over all possible discontinuities along the contour. The term $\mathrm{Div}(\mathbf s \cdot \delta \mathbf x)$ can be brought to the surface using the divergence theorem. For a smooth domain of integration, the divergence theorem holds for any vector field which is in ${\mathcal L}^1$ and whose spatial derivatives are in ${\mathcal L}^1$ \cite{Willem2013}. This is allowed as long as the corresponding stress singularity is weaker than $1/|\mathbf X|$, in which case
\begin{eqnarray}\label{eq:varbis}
\delta \mathcal E
&=&
-\int d^2X \, \mathrm{Div} \left( \mathbf s \right) \cdot \delta \mathbf x
+ \oint dS \, \left( \mathbf s \cdot \mathbf N - \frac{\partial (\Upsilon \mathbf t)}{\partial S}\right) \cdot \delta \mathbf x
- \sum_{\mathrm{disc.} \, i} \left[ \Upsilon \mathbf t \right]^+_- \cdot \delta \mathbf x_i
\nonumber \\
&=&
-\int d^2x \, \mathrm{div}\left( \boldsymbol{\sigma} \right) \cdot \delta \mathbf x
+ \oint ds \, \left( \boldsymbol{\sigma} \cdot \mathbf n - \frac{\partial (\Upsilon \mathbf t)}{\partial s}\right) \cdot \delta \mathbf x
- \sum_{\rm disc.\, i} \left[ \Upsilon \mathbf t \right]^+_- \cdot \delta \mathbf x_i,
\end{eqnarray}
where in the last step we transformed the result to the current domain, using the definition of the true stress (or Cauchy stress) according to $\boldsymbol{\sigma}=\mathbf s \cdot \mathbf F^T/ \det(\mathbf F)$.
The condition of equilibrium, $\delta \mathcal E=\delta \mathcal R$, obtained from (\ref{eq:forcing}) and (\ref{eq:varbis}), then splits into bulk, surface and point conditions:
\begin{eqnarray}
\mathrm{div}( \boldsymbol{\sigma} )&=& 0, \quad \mathbf x \in \mathcal D, \label{eq:bulk}
\\
\boldsymbol{\sigma} \cdot \mathbf n - \frac{\partial (\Upsilon \mathbf t)}{\partial s} &=& 0,
\quad \mathbf x \in \partial \mathcal D,\label{eq:laplacemarangoni}
\\
\left[ \Upsilon \mathbf t \right]^+_- + \gamma_{LV} \mathbf t_{LV} &=& 0,
\quad \mathbf x =\mathbf x_{\rm cl}, \label{eq:neumannvariation}
\end{eqnarray}
where $\mathcal D$ denotes the current domain of the deformed state. Besides the classical elastic stress equilibrium in bulk (\ref{eq:bulk}), the interface condition (\ref{eq:laplacemarangoni}) gives the Marangoni effect where $\sigma_{nt}\equiv \mathbf t \cdot \boldsymbol{\sigma} \cdot \mathbf n$ balances gradients in surface tension $\partial \Upsilon/\partial s$, while the normal component of elastic stress $\sigma_{nn}\equiv \mathbf n \cdot \boldsymbol{\sigma} \cdot \mathbf n$ balances the Laplace pressure. Finally, the Neumann condition appears at the contact line, equation (\ref{eq:neumannvariation}), expressed as a discontinuity of the surface tangents. The only assumption made in the derivation above is that the stress singularity is sufficiently weak for the divergence theorem to be applicable, as is the case for a logarithmic singularity.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{Figure4_mod}
\caption{Marangoni stress $\sigma_{nt}/G$ at the contact line, as a function of the stretch at the contact line $\lambda_{\rm cl}$. The Marangoni stress can be positive or negative, depending on whether the stretch at the tip is larger or smaller than $\lambda_\infty$. Open circles are obtained by FEM, the solid line from the similarity solutions described in Sec.~\ref{sec:theory}. The grids represent the geometry of the ridge and the deformation within it for negative (A) and positive (B) Marangoni stresses, as obtained from the similarity solutions. The grey lines denote the undeformed grid, and the arrows indicate the direction of the Marangoni stress.}
\label{fig:marangoni}
\end{figure}
\subsection{Similarity solutions}
We now analytically establish the nature of the elastic singularity, through an asymptotic analysis near the contact line. For this we express the mapping $\chi(\mathbf X)$ in polar coordinates, $(r,\varphi)$ and $(R,\Phi)$, respectively for the current and reference state. The contact line is located at $r=0$ and $R=0$, and without loss of generality the initially flat free surface is chosen to be along the lines $\Phi=0$ and $\Phi=\pi$. We make use of the fact that the boundary condition (\ref{eq:neumannvariation}) forces the solid into an angle $\theta_S$, which is defined by the property
\begin{equation}\label{eq:defcorner}
\theta_S = \lim_{R\rightarrow 0}\left( \varphi_{\Phi=\pi} - \varphi_{\Phi=0}\right).
\end{equation}
As is common with singularities~\cite{EggersFontelos2015}, we expect the asymptotics to be scale-invariant, so we propose a similarity Ansatz
\begin{eqnarray}
r(R,\Phi) &=& R^\alpha g_1(\Phi), \nonumber \\
\varphi(R,\Phi) &=& R^\beta g_2(\Phi).
\end{eqnarray}
Imposing (\ref{eq:defcorner}) one finds that $\beta=0$. A critical feature of soft elastic solids is that they are essentially incompressible, i.e. $\mathrm{det}(\mathbf F)=1$. Combined with $\beta=0$, this then dictates $\alpha=1$, which implies that the radial stretch $\lambda_r = dr/dR$ remains finite and is independent of $R$. In the azimuthal direction, incompressibility implies a relation between the functions $g_{1,2}$, which can be accounted for by writing
\begin{eqnarray}
r(R,\Phi) &=& \frac{R}{\sqrt{f(\Phi)}}, \nonumber \\
\varphi(R,\Phi) &=& \int_{\Phi_0}^\Phi dU \, f(U),\label{eq:phicorner}
\end{eqnarray}
so that the solid angle follows as $\theta_S = \int_0^\pi d\Phi \, f(\Phi)$. The deformation gradient tensor of this mapping reads
\begin{equation}
\mathbf F =
\begin{pmatrix}
F_{rR} & F_{r\Phi}\\
F_{\varphi R} & F_{\varphi \Phi} \\
\end{pmatrix}
=
\begin{pmatrix}
\frac{\partial r}{\partial R} & \frac{1}{R}\frac{\partial r}{\partial \Phi}\\
r\frac{\partial \varphi}{\partial R} & \frac{r}{R}\frac{\partial \varphi}{\partial \Phi} \\
\end{pmatrix}
=
\begin{pmatrix}
\frac{1}{\sqrt{f}}
& -\frac{1}{2\sqrt{f}} \frac{f'}{f} \\
0 & \sqrt{f} \\
\end{pmatrix},
\end{equation}
which indeed satisfies $\mathrm{det}(\mathbf F) =1$ for arbitrary $f(\Phi)$. The corresponding Finger tensor reads
\begin{equation}\label{eq:finger}
\mathbf B = \mathbf F \cdot \mathbf F^T =
\begin{pmatrix}
\frac{1}{f}\left(1 + \left(\frac{f'}{2f}\right)^2 \right)
& - \frac{f'}{2f} \\
- \frac{f'}{2f} & f \\
\end{pmatrix}.
\end{equation}
This defines the most general scale-invariant incompressible map that generates a corner.
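Both the incompressibility of this map for arbitrary $f(\Phi)$ and the form (\ref{eq:finger}) of the Finger tensor can be verified symbolically; the short script below is our own check.
\begin{verbatim}
import sympy as sp

R, Phi = sp.symbols('R Phi', positive=True)
f = sp.Function('f', positive=True)(Phi)

r = R / sp.sqrt(f)            # radial part of the similarity Ansatz
dphi_dR, dphi_dPhi = 0, f     # varphi depends on Phi only, with varphi' = f

F = sp.Matrix([[sp.diff(r, R), sp.diff(r, Phi) / R],
               [r * dphi_dR,   (r / R) * dphi_dPhi]])

print(sp.simplify(F.det()))   # 1: incompressible for any f(Phi)
print(sp.simplify(F * F.T))   # reproduces the Finger tensor B of eq. (finger)
\end{verbatim}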
For the special case where $f'=0$, one recovers the classical solution by Singh \& Pipkin~\cite{Singh1965}. However, that solution is shear-free (i.e. $\mathbf F$ and $\mathbf B$ are diagonal) and therefore cannot be universally valid. Here we derive the most general corner solution that satisfies mechanical equilibrium, $\mathrm{div}(\boldsymbol{\sigma})=0$. We focus on a neo-Hookean material defined by (\ref{eq:Wneohookean}), which has a Cauchy stress $\boldsymbol{\sigma} = G \mathbf B - p \mathbf I$, so that (\ref{eq:bulk}) becomes
\begin{equation}\label{eq:nablapressure}
\mathrm{grad}(p) = G \, \mathrm{div}(\mathbf B).
\end{equation}
This implies that $\textrm{div}(\mathbf B)$ must be irrotational, i.e. $\mathrm{curl}\left(\mathrm{div}(\mathbf B) \right)=0$, which here takes the form
\begin{equation}
\frac{\partial }{\partial \varphi}\left(\frac{\partial B_{r\varphi}}{\partial \varphi} + B_{rr}- B_{\varphi \varphi}\right) =0
\quad \Rightarrow \quad
\frac{\partial B_{r\varphi}}{\partial \varphi} + B_{rr} - B_{\varphi \varphi} =K,
\end{equation}
where $K$ is an integration constant. Inserting (\ref{eq:finger}) and bearing in mind that $\partial/\partial \varphi(\cdots ) =(\cdots)' /f$, we find
\begin{eqnarray}\label{eq:ODE}
- \left(\frac{f'}{2f}\right)' + 1+ \left( \frac{f'}{2f}\right)^2 - f^2 = Kf.
\end{eqnarray}
This is a nonlinear second order ODE for $f(\Phi)$. As boundary conditions we impose the stretch at each of the boundaries, which will subsequently give the shear stress via the connections
\begin{eqnarray}
\lambda_r&=& \frac{dr}{dR} = \frac{1}{\sqrt{f(\Phi)}} \quad \Rightarrow \quad \sigma_{r\varphi} = -\frac{G f'(\Phi)}{2f(\Phi)}.
\end{eqnarray}
We note that $\lambda_r$ in the similarity solution is independent of $R$, and can therefore be identified with the stretch at the contact line, $\lambda_r = \lambda_{\rm cl}$. The constant $K$ can be adjusted to accommodate the desired $\theta_S$. Explicit solutions will be presented below, and compared directly to FEM simulations.
Once a solution is found, one can explicitly integrate (\ref{eq:nablapressure}) to obtain the pressure
\begin{equation}\label{eq:plog}
p(r,\varphi) = G K \log r,
\end{equation}
up to an integration constant. This completes the analytical description of incompressible corner solutions in the fully nonlinear regime.
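The corner solutions can be computed with standard boundary-value solvers. The sketch below (our own illustration) recasts (\ref{eq:ODE}) as a first-order system in $f$ and $h = f'/(2f)$, augments it with the cumulative angle $F(\Phi)=\int_0^\Phi f\,dU$, and treats $K$ as an unknown parameter. With the shear-free boundary value $f_0=\theta_S/\pi$ it recovers the Singh--Pipkin solution with $K = \pi/\theta_S - \theta_S/\pi$ (as derived in the next subsection); other boundary values of $f$, i.e. other imposed surface stretches, produce sheared corners.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

theta_S = 2 * np.pi / 3            # 120-degree solid angle
f0 = theta_S / np.pi               # boundary value f = 1/lambda_r^2 at Phi = 0, pi

def rhs(Phi, y, p):
    # y = (f, h, F) with h = f'/(2f) and F' = f; p = (K,)
    f, h, F = y
    K = p[0]
    return np.vstack((2 * f * h, 1 + h**2 - f**2 - K * f, f))

def bc(ya, yb, p):
    # stretch imposed on both free surfaces, plus total opening angle theta_S
    return np.array([ya[0] - f0, yb[0] - f0, ya[2], yb[2] - theta_S])

Phi = np.linspace(0.0, np.pi, 101)
y_guess = np.vstack((f0 * np.ones_like(Phi), np.zeros_like(Phi), f0 * Phi))
sol = solve_bvp(rhs, bc, Phi, y_guess, p=[1.0])

print(sol.p[0], np.pi / theta_S - theta_S / np.pi)  # both ~0.833 (shear-free case)
\end{verbatim}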
\subsection{Theory compared to FEM}
The similarity solutions derived above capture all FEM results of Sec.~\ref{sec:FEMresults}, in the vicinity of the contact line. First, we consider the stress, which for a neo-Hookean solid is given by $\boldsymbol{\sigma} = G \mathbf B - p \mathbf I$. Our theory explains the FEM result that the normal stress diverges logarithmically, following the singularity of pressure (\ref{eq:plog}), and offers a way to compute the prefactor $K$. Furthermore, the corner solution shows that $\mathbf B$ as given in (\ref{eq:finger}) remains finite at the contact line. This explains why the Marangoni stress $\sigma_{nt}=\sigma_{r\varphi}$ remains finite at the contact line.
We now turn to a fully quantitative analysis, by solving (\ref{eq:ODE}) for various boundary conditions. Typical (symmetric) similarity solutions are represented graphically in Fig.~\ref{fig:grids}, showing the Lagrangian grid in both the undeformed (grey) and deformed (black) configurations. The three panels each correspond to $\theta_S=120^\circ$, with different amounts of stretch imposed on the free surfaces. In panel (a) we report the solution without shear stress, for which $f'=0$ for all $\varphi$. In this case, (\ref{eq:phicorner}) reduces to the classical solution by Singh \& Pipkin~\cite{Singh1965}, with the constant $f=\theta_S/\pi$. In the context of elastocapillary ridges, the absence of shear corresponds to a substrate without a Shuttleworth effect. This explains why, in the absence of a Shuttleworth effect, the stretch at the contact line $\lambda_{\rm cl}$ was found to be independent of $\lambda_\infty$ in our FEM simulations: in a shear-free corner, the stretch takes on a specific value that depends only on the solid angle, as $\lambda_r=\lambda_{\rm cl}=\sqrt{\pi/\theta_S}$. The stretch at the contact line is therefore locally determined by $\theta_S$, irrespective of the conditions imposed at large distance. Furthermore, in this specific case without shear stress, we find an analytical expression for the strength of the pressure singularity, the constant $K$ in (\ref{eq:plog}). Inserting $f=\theta_S/\pi$ in (\ref{eq:ODE}) gives $K=\frac{\pi}{\theta_S}-\frac{\theta_S}{\pi}$. Indeed, this was exactly the scaling used in Fig.~\ref{fig:pressure}(a), necessary to account for the $\theta_S$ dependence. This demonstrates that the corner solutions are fully quantitative and provide the correct asymptotics observed in FEM, valid in the strongly nonlinear regime.
The Shuttleworth effect dramatically changes the physical picture. Now, a variety of surface stretches $\lambda_r$ is possible, as shown in Fig.~\ref{fig:grids}(b,c). Each of these solutions comes with its own value of the elastic Marangoni stress. Figure~\ref{fig:marangoni} illustrates this point, where the prediction of the similarity solutions is shown as a solid line and compared directly to the Marangoni stress in FEM. For the symmetric surface tensions considered in our simulations, the corresponding similarity solution is naturally symmetric and can be found without any adjustable parameters: it follows directly from the surface constitutive relation (\ref{eq:rheology}), which in combination with Neumann's law determines the appropriate combination of $\theta_S$ and $\lambda$. The perfect prediction of the elastic Marangoni stress in Fig.~\ref{fig:marangoni} confirms that the corner solutions indeed offer the correct asymptotic description of the singularity -- also in the presence of the Shuttleworth effect.
As a conclusive remark, we emphasise again that the observation in PDMS that $\lambda_{\rm cl}$ increases upon varying $\lambda_\infty$ \cite{XuNatComm2017} cannot be explained in a hyperelastic theory without a Shuttleworth effect.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{Figure5_mod.eps}
\caption{Similarity solutions for symmetric corners obtained from (\ref{eq:phicorner}) and (\ref{eq:ODE}), all with $\theta_S=120^\circ$. (a) Without Shuttleworth effect, the shear stress vanishes at the interface and one recovers the Singh-Pipkin solution~\cite{Singh1965}. (b,c) The Shuttleworth effect induces Marangoni stresses, giving positive (b) or negative (c) elastic shear stress at the interface, the direction indicated by the arrows.}
\label{fig:grids}
\end{figure}
\section{Liquid contact angle, pinning and Eshelby forces}\label{sec:pinning}
\subsection{Hysteresis via a process zone}
So far we have considered an isolated contact line, at some prescribed position $\mathbf X_{\rm cl}$, pulling vertically with perfectly symmetric wetting conditions. In a real wetting problem, however, a droplet will spread dynamically until it reaches its equilibrium liquid angle -- simultaneously the contact line reaches an equilibrium material position $\mathbf X_{\rm cl}$, which is not known a priori but which needs to be found self-consistently. Hence, the full equilibration involves a free exploration of the contact line over the substrate. Technically, such an equilibrium without pinning implies that the change of material coordinate is energetically neutral.
Naturally, this is the case when the substrate is perfectly homogeneous in its reference state. Indeed, in contrast to the rigid case, there are various examples where well-prepared soft polymeric substrates are basically free from pinning and contact angle hysteresis~\cite{XuNatComm2017,Schulman2018aa,Snoeijer2018,Lhermerout2016aa}.
Here we take the \emph{opposite} perspective and consider the possibility that the presence of the contact line itself induces heterogeneity in the material -- in its reference state. Even when the originally prepared soft polymeric substrate does not exhibit permanent defects that provide a frozen surface energy landscape, the substrate can develop heterogeneities dynamically, due to the presence of the contact line. Indeed, a large-stress region builds up at small scale, which can lead to irreversible plastic flow, as in the ``Fracture Process Zone'' that forms at a crack tip. Although wetting-induced damaging processes have been evidenced in experiments where a soft gel exhibits fracture by wetting~\cite{bostwick2013capillary}, we focus here on non-damaging plastic deformations in the near-surface region -- so that the bulk reference is not affected. Plasticity typically occurs in situations of multistability, where multiple stable configurations coexist, which can lead to a hysteretic response upon contact line displacement \cite{Caroli1998}.
The large strain may indeed provide a configurational plasticity, without damaging the material. When chains between cross-linkers are long enough to produce entanglement, strain may trigger changes of glassy chain conformation. As an alternative mechanism, the contact line may lead to a local strengthening associated with the elongation of polymeric chains, producing a highly dissipative zone when the contact line explores its environment. Below we derive the consequences of a non-damaging, plastic process zone induced by the presence of the contact line. By analogy with fracture mechanics, or with defects in crystalline solids, such a plastic process zone can be described by a defect singularity in the theory of elasticity. The singularity then represents the effect of the plastic process zone on the elastic ``outer'' region. We reveal how the strength of such a defect directly relates to contact angle hysteresis.
\subsection{Displacing an elastocapillary defect}
The consequence of a defect, representing the effect of a process zone on the outer region, can be computed from the change in energy associated to a global displacement of the solution. This is illustrated in Fig.~\ref{fig:young}, showing such a displacement $\delta \mathbf X_{\rm cl}=\delta U \mathbf T$ on the reference domain (panel a), and on the current domain (panel b). The change in \emph{elastic} energy associated to the displacement of a defect is known as the Eshelby force \cite{Eshelby1975},
\begin{equation}\label{eq:eshelby}
f_{\rm Esh} = - \frac{\partial \mathcal E_{\rm el}}{\partial U} = \mathbf T \cdot \oint dS \, \mathbf \Pi \cdot \mathbf N,
\end{equation}
where the integral encloses the defect and we define the Eshelby's energy-momentum tensor
\begin{equation}\label{eq:pi}
\mathbf \Pi = W \mathbf I - \mathbf F^T \cdot \frac{\partial W}{\partial \mathbf F}.
\end{equation}
The Eshelby force reduces to the J-integral in small deformation (linear) elasticity, where it finds an interpretation as the energy release rate in fracture mechanics~\cite{Rice68}. To derive the \emph{capillary} energy released by moving a defect, it is instructive to follow the derivation of the classical result (\ref{eq:eshelby}), which is based on the application of Noether's theorem in the space of material coordinates \cite{Eshelby1975}.
\begin{figure}[t]
\centering
\includegraphics[width = 0.45\textwidth]{Figure6_mod}
\caption{Determining the liquid contact angle $\theta_L$ upon global displacement of the solution. (a) Lagrangian point of view: On the domain of material coordinates the shift is achieved by a change of the material point $\delta \mathbf X_{\rm cl} = \delta U \mathbf T$ at which the contact line force applies. Without pinning, the displacement is energetically neutral, while in the presence of a pinning defect an energy $-\Gamma \delta U$ is dissipated at the contact line. (b) Eulerian point of view: The displacement $\delta U$ leads to a variation of the entire solution as given in (\ref{eq:shift}). At large distance from the contact line, the change of the surface energies reads $(\gamma_{SL}-\gamma_{SV})\lambda_\infty \delta U$.}
\label{fig:young}
\end{figure}
On the reference domain, the displacement simply amounts to a translation $\delta \mathbf X_{\rm cl}=\delta U \mathbf T$ of the contact line force, as in Fig.~\ref{fig:young}(a). The corresponding translation on the current domain is sketched in Fig.~\ref{fig:young}(b). The idea of deriving the elastic energy released by displacing a defect is to interpret the translation $\delta U$ as a variation $\delta \mathbf x$, which can be expressed as
\begin{equation}\label{eq:shift}
\delta \mathbf x = \chi \left( \mathbf X - \delta U \mathbf T \right) - \chi\left( \mathbf X \right)
= - \frac{\partial \chi}{\partial \mathbf X} \cdot \mathbf T \delta U = - \delta U \mathbf T \cdot \mathbf F^T.
\end{equation}
The associated change in elastic energy can be computed from this variation, as
\begin{eqnarray}
\delta \mathcal E_{\rm el} &=&
\int d^2X \, \delta \mathbf x \cdot \left(\frac{\delta \mathcal E_{\rm el}}{\delta \mathbf x}\right) =
- \delta U\, \mathbf T \cdot \int d^2X \, \left( \mathbf F^T \cdot \frac{\delta \mathcal E_{\rm el}}{\delta \mathbf x} \right)
= - \delta U\, \mathbf T \cdot \int d^2X \, \mathrm{Div}\left(\mathbf \Pi \right).
\end{eqnarray}
Importantly, in the last step one uses that the (reference) substrate is homogeneous everywhere except at the defect \cite{Eshelby1975}. When in the vicinity of the singularity $\mathbf \Pi \sim 1/|\mathbf X-\mathbf X_{\rm cl}|$, the integral is finite and can be expressed as (\ref{eq:eshelby}). When the material is homogeneous everywhere, i.e. no defects, the Eshelby force uniformly vanishes as a consequence of translational invariance.
We now follow the same scheme for the capillary energy, upon replacing $W$ by $\lambda \gamma$, and the deformation gradient tensor $\mathbf F$ by its vectorial surface analogue $\mathbf F_s = \partial \mathbf x_s/\partial S$. Subsequently, we define the surface-equivalent of the Eshelby tensor (\ref{eq:pi}), which now is a scalar, and which takes the form:
\begin{equation}
\lambda \gamma - \mathbf F_s^T \cdot \left(\frac{\partial (\lambda \gamma)}{\partial \mathbf F_s}\right) =
\lambda \gamma - \lambda \Upsilon = - \lambda^2 \gamma' \equiv - \mu,
\end{equation}
which is the chemical potential anticipated in (\ref{eq:gammamu}).
Indeed, the associated change in capillary energy reads
\begin{eqnarray}
\delta \mathcal E_{\rm cap} &=& -
\delta U \int dS \, \left( \mathbf F_s^T \cdot \frac{\delta \mathcal E_{\rm cap}}{\delta \mathbf x} \right)
= \delta U \int dS \, \frac{d\mu}{dS}
= \delta U \left[ \mu \right]^+_-,
\end{eqnarray}
where the integral runs over an infinitesimal domain across the singularity. It is clear that a finite capillary defect-energy appears only when $\mu$ exhibits a discontinuity at the contact line, i.e. $[\mu]_-^+ \neq 0$.
We thus conclude that the total energy release rate $\Gamma$, liberated upon displacing the elastocapillary defect at the contact line, takes the form
\begin{equation}\label{eq:release}
\Gamma = - \frac{ \partial \mathcal E}{\partial U} = - \left[ \mu \right]^+_- + \mathbf T \cdot \oint dS \, \mathbf \Pi\cdot \mathbf N
= - \left[ \mu \right]^+_- + f_{\rm Esh}.
\end{equation}
Given that the defect represents a process zone, this indicates a loss of energy $-\Gamma \delta U$, dissipated inside the process zone during the translation. For the special case where there is no pinning defect and the contact line is free to move, the variation of the contact line position should be energetically neutral, so that $\Gamma=0$.
The notion of the (elastic) Eshelby force in wetting was recently proposed in \cite{MasurelPRL2019}, where it was argued that the formation of a ridge would already be sufficient to induce an elastic Eshelby force. However, from the above it is clear that this is not the case when the substrate is perfectly \emph{homogeneous in its reference state}, so that there is a translational invariance of the space of reference coordinates: applying Noether's theorem to this translational invariance \cite{Eshelby1975}, one finds $\partial \mathcal E_{\rm el}/\partial U=0$. This vanishing of the Eshelby force is indeed confirmed by our FEM results and analytical solutions: the stress is only logarithmically singular, so that for an infinitesimal integration volume around the contact line (\ref{eq:eshelby}) gives $f_{\rm Esh}=0$. Therefore, for homogeneous substrates, the condition $\Gamma=0$ reduces to $[\mu]_-^+ = 0$. The continuity of $\mu$ across the contact line can be interpreted as an ``equality of chemical potential'', necessary for a free exchange of material points across the contact line. This condition of no-pinning was previously derived within the strong restrictions of linear elasticity \cite{Snoeijer2018} -- but it turns out to be valid also when deformations are large.
For a non-damaging process zone, i.e. the reference state remains intact, we expect the Eshelby force to vanish owing to translational invariance. Nonetheless, a capillary defect $\Gamma$ could still emerge, associated with the interfacial microstate of the polymer.
\subsection{The liquid contact angle}
Up to here we have considered properties of the solid, without explicitly discussing the liquid. Yet, the liquid contact angle $\theta_L$ is the prime feature that characterises the wetting of a liquid drop. To complete the theory, we now show how the equilibration determines $\theta_L$ on homogeneous substrates -- and how the maximum strength of a contact line defect can be related to contact angle hysteresis on elastic substrates.
We restrict ourselves to the case of a sufficiently large drop, so that far away from the contact line one encounters a flat substrate (Fig.~\ref{fig:young}). At a large distance from the contact line, the substrate has a solid-liquid energy $\gamma_{SL}(\lambda_\infty)$ on one side and a solid-vapour energy $\gamma_{SV}(\lambda_\infty)$ on the other. The usual argument leading to Young's law for the contact angle considers a global horizontal displacement of the contact line~\cite{deGe02}. In the present case the (Eulerian) displacement reads $\lambda_\infty \delta U$, so that the solid capillary energy increases by $(\gamma_{SL}-\gamma_{SV})\lambda_\infty \delta U$, the value of which has to be taken far away from the contact line. This balances the work $-\gamma_{LV}\cos \theta_L \lambda_\infty \delta U$ performed by the liquid-vapour interface, which together gives Young's law. The situation is modified by the presence of a defect: as described above, such a displacement also involves a dissipation inside the process zone, indicating a loss of energy $-\Gamma \delta U$. Consequently, we find a modification of Young's law
\begin{eqnarray}
\lambda_\infty \left(\gamma_{SL} - \gamma_{SV}\right)_{\lambda_\infty} + \Gamma = - \lambda_\infty \gamma_{LV} \cos \theta_L
\quad
\Rightarrow \quad \gamma_{LV} \left(\cos \theta_L - \cos \theta_{Y,\lambda_\infty}\right) = \lambda_\infty^{-1} \left[\mu \right]_{SL}^{SV}
\label{eq:youngplus}
\end{eqnarray}
where in the second equality we anticipate that $f_{\rm Esh}=0$ (owing to the weak logarithmic elastic singularity). For homogeneous substrates $\Gamma = 0$, and we recover Young's law for the liquid contact angle. We remark that $\theta_{Y,\lambda_\infty}$ is based on the surface energies evaluated at $\lambda_\infty$.
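
To make (\ref{eq:youngplus}) concrete, the following minimal numerical sketch (in Python) evaluates the liquid angle predicted by the modified Young's law for a few values of the chemical-potential jump; all parameter values are illustrative placeholders, not taken from any experiment.
\begin{verbatim}
import numpy as np

# Minimal sketch of the modified Young's law; assumed, illustrative values.
gamma_lv = 1.0               # liquid-vapour surface tension (normalised)
lam_inf  = 1.2               # far-field stretch lambda_infty
theta_y  = np.deg2rad(90.0)  # Young angle at stretch lam_inf

def liquid_angle(mu_jump):
    # gamma_lv * (cos th_L - cos th_Y) = mu_jump / lam_inf
    c = np.cos(theta_y) + mu_jump / (lam_inf * gamma_lv)
    return np.rad2deg(np.arccos(np.clip(c, -1.0, 1.0)))

for mu_jump in [-0.2, 0.0, 0.2]:  # [mu]_{SL}^{SV}; defect strength = -mu_jump
    print(f"[mu] = {mu_jump:+.2f} -> theta_L = {liquid_angle(mu_jump):6.2f} deg")
\end{verbatim}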
The analysis above, in particular (\ref{eq:youngplus}), can be verified against the FEM simulations. In the numerics, we fix a priori the material position $\mathbf X_{\rm cl}$ of the pulling force, so that we effectively work with a pinned contact line.
For symmetric surface tensions and vertical pulling, this is equivalent to the unpinned case, but we can consider any liquid angle $\theta_L$ by changing the pulling direction $\mathbf t_{LV}=(-\cos \theta_L,\sin \theta_L)$ in (\ref{eq:weak}). We then measure the jump $[\mu]_-^+=[\mu]_{SL}^{SV}$ across the contact line obtained for the corresponding solution, as a function of $\theta_L$. We consider two cases: (i) symmetric surface energies $\gamma_{SL} = \gamma_{SV}$ (so that $\theta_Y=90^\circ$), and (ii) asymmetric surface energies $\gamma_{SL} \neq \gamma_{SV}$ (here with $\theta_{Y}=113.6^\circ$).
The result is presented in Fig.~\ref{fig:pinning}(a). It is clear that both cases, symmetric and asymmetric, are in perfect agreement with (\ref{eq:youngplus}) with $\Gamma = - [\mu]_{SL}^{SV}$. This implies that $f_{\rm Esh}=0$, consistent with the weak logarithmic singularity. Hence, $\theta_L$ can differ from its equilibrium value $\theta_Y$ in the presence of a non-damaging process zone, represented by a capillary defect. In that case, interfacial plasticity could be associated with a contact angle hysteresis. A typical asymmetric similarity solution is shown via the grid representation in Fig.~\ref{fig:pinning}(b), for which there is a jump in stretch across the contact line. We remark that in all cases, Neumann's law was still observed to be valid, irrespective of the defect.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\textwidth]{Figure7_mod}
\caption{(a) The strength of the surface defect, quantified by the discontinuity of chemical potential $[\mu]_-^+$, plotted versus the liquid contact angle $\theta_L$. Solid lines show the theory of (\ref{eq:youngplus}); open symbols are FEM results with Shuttleworth effect ($c_0=c_1=1$). Blue data: symmetric surface energies $\gamma_{0,SV}=\gamma_{0,SL}=\gamma_{LV}$, so that the equilibrium angle $\theta_Y=90^\circ$. Red data: asymmetric surface energies $\gamma_{0,SV}=\frac{4}{5}\gamma_{LV}$ and $\gamma_{0,SL}=\frac{6}{5}\gamma_{LV}$, so that based on $\lambda_\infty =1$ we find $\cos \theta_Y = -2/5$. The numerics confirm that $[\mu]_-^+$ provides the pinning force; when pulling at $\theta_L=\theta_Y$ there is no pinning and $[\mu]_-^+=0$. (b) The grid plot represents the asymmetric ridge for symmetric surface energies, resulting from a contact angle $\theta_L>\theta_Y$. This ridge corresponds to the data point marked by an arrow in the main panel.}
\label{fig:pinning}
\end{figure}
\section{Discussion}
In summary, we have explored analytically and numerically the macroscopic theory for elastocapillary ridges, based on the minimisation of a bulk elastic free energy and a surface capillary free energy. This for the first time offers a fully self-consistent description of ``Soft Wetting'', including the possibility that capillarity depends on strain (Shuttleworth effect), large elastic deformation, and pinning. In this macroscopic theory there is a perfect separation of scales between the elastocapillary length $\gamma/G$ and the molecular scale $a$, since effectively $a\rightarrow 0$ in the continuum. This limit is relevant for typical experiments, and it is of theoretical importance in order to reveal the nature of the ridge-singularity as predicted from large deformation elasticity. We now discuss these new theoretical results in comparison to recent literature on the Shuttleworth effect.
\subsection{Theory}
\textbf{First boundary condition.}
In this macroscopic description, it was found that the stress singularity associated with the contact line ridge is weak (i.e. logarithmic) and therefore integrable, under all conditions that were considered. Hence, the singularity does not behave analogously to an elastic disclination defect and no qualitative difference emerges when the substrate is globally stretched. As a consequence, in this limit where $\gamma/Ga \to \infty$, the Neumann tension balance at the contact line is strictly valid. In the scheme of energy minimisation, Neumann's law emerges as an auxiliary condition (\ref{eq:neumannvariation}), and as such serves as a \emph{first boundary condition} at the contact line. We have no explanation for why previous continuum simulations suggested a deviation from Neumann's law \cite{Wu2018,MasurelPRL2019}. We emphasise, however, that the present numerics are based on an adaptive method, which was necessary to fully resolve the elastic singularity, and that we extensively verified that the results are fully converged. Furthermore we derived new analytical solutions of nonlinear elasticity that describe the singularity -- these are indeed perfectly recovered by the numerics.
How can one understand the deviation from Neumann's law observed in molecular dynamics simulations of wetting on cross-linked polymer networks \cite{liang2018surface}? This deviation finds its origin in the lack of scale separation between $\gamma/G$ and the molecular scale $a$, which is inevitable in molecular simulations -- the scale $a$ there enters as a molecular cutoff of the continuum and also gives a finite width of the interface. As argued in \cite{SJHD2017,AS2020}, the elastic contributions near the contact line can be computed by integrating the elastic stress over a small but finite region -- in molecular simulations, the smallest possible size for this region would be $a$. In the present work we have demonstrated that the stress singularity is always logarithmic, $\sigma \sim G \log(rG/\gamma)$. Hence, the integral over stress gives an elastic contribution
\begin{equation}\label{eq:MD}
\int_{-a}^a dr \, \sigma \sim Ga \log(aG/\gamma),
\end{equation}
which needs to be compared to the surface tensions. In molecular simulations, where typically $\gamma/Ga$ is of order 1 to 100, a measurable elastic correction to Neumann's law indeed appears. We refer to~\cite{AS2020} for a quantitative test of (\ref{eq:MD}) as the elastic correction to Neumann's law. However, in typical experiments performed with polymeric gels, where $\gamma/G$ is well above the micron scale, the scale-separation correction would be of order $10^{-4}$, and one approaches the macroscopic continuum limit. In such experiments, one safely concludes that Neumann's law holds.
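
As a rough numerical illustration of this scale separation, the short sketch below (in Python) evaluates the relative correction $\sim (Ga/\gamma)\log(\gamma/Ga)$; the two values of $\gamma/(Ga)$ are assumptions chosen to represent a molecular simulation and a typical gel experiment, not measured quantities.
\begin{verbatim}
import numpy as np

# Order-of-magnitude sketch of the elastic correction (eq:MD) relative to
# surface tension.  Assumed ratios: ~10 for a molecular simulation; ~1e5
# for a gel with gamma/G ~ 10 um and a ~ 0.1 nm.
for ratio in [10.0, 1.0e5]:          # ratio = gamma / (G a)
    correction = np.log(ratio) / ratio
    print(f"gamma/(Ga) = {ratio:9.0f} -> relative correction ~ {correction:.1e}")
\end{verbatim}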
\textbf{Second boundary condition versus pinning.} While much theoretical work focussed on the validity (or not) of Neumann's law, very little attention was given to the implications of contact line pinning \cite{AS2020}. In many experiments on soft polymer networks contact line pinning is virtually absent, as quantified by a very small hysteresis \cite{XuNatComm2017,Schulman2018aa,Snoeijer2018,Lhermerout2016aa}. This implies that the contact line can move freely, exchanging the substrate's material point touching the liquid-vapour interface without any energetic cost. Here we demonstrated that such free motion occurs only under a very specific condition: namely, the chemical potential defined by $\mu = \lambda^2 d\gamma/d\lambda$ must be continuous across the contact line. This is the \emph{second boundary condition} that needs to be imposed when there is no contact line pinning. Such a condition was previously derived under the restrictive assumption of linear elasticity~\cite{Snoeijer2018} -- here we demonstrated it to be valid also at large deformation, and explored its consequences in numerical simulations. In particular, we have confirmed numerically that Young's law is only recovered when the second boundary condition, $\mu_{SV}=\mu_{SL}$, is satisfied at the contact line. For asymmetric surface energies, the second boundary condition in general implies a jump in stretch across the contact line, so that in the presence of a Shuttleworth effect one generically expects large deformations.
The possibility of pinning is interesting in itself. Depending on the material strength, large deformations might lead to fracture, as observed in \cite{bostwick2013capillary}, or to local plasticity. We demonstrated how such a local ``process zone'' can be accounted for by introducing a defect in the elastocapillary continuum theory. With the defect, one can accommodate a range of angles $\theta_L$ by adjusting the strength of the defect at the contact line. In our simulations we only encountered a weak logarithmic singularity of elastic stress, which implies that the strength of the defect received no contribution from elasticity (i.e. the elastic Eshelby force vanishes). The defect strength, in fact, was found to be equal to the discontinuity in chemical potential at the contact line, i.e. $\Gamma = - [\mu]_-^+$, giving rise to a modified Young's law (\ref{eq:youngplus}). In practice, one would expect the defect to exhibit a ``toughness'', just like in fracture, which is the maximum value that can be sustained before depinning occurs. Given (\ref{eq:youngplus}), we immediately infer that this implies a contact angle hysteresis, with advancing and receding angles related by $\cos \theta_r - \cos \theta_a = 2|[\mu]_{-}^+|_{\rm max}/(\lambda_\infty \gamma_{LV})$. Future theoretical work should be dedicated to a more detailed description of the interior of the process zone.
\subsection{Experiments and outlook}
Experiments that probe the strain-dependence of surface tension have so far been based on wetting experiments, with a key role played by the contact angles of the solid and of the liquid. Having established the elastocapillary continuum framework for soft wetting, for the first time consistently accounting for large deformations and the Shuttleworth effect, we can now critically assess the experimental situation.
Different series of experiments have been performed with stretched PDMS gels, for static \cite{XuNatComm2017,xu2018} and dynamical wetting \cite{Snoeijer2018}. They consistently show a change of the solid angle $\theta_S$ under stretching. Similarly, the solid angle was found to change in dynamical experiments on PVS \cite{Gorcum2020}. To date, these experimental observations have not received any other explanation than via a surface tension that depends on the strain (or, on the history of strain). Hence, they offer a convincing case for a nontrivial surface constitutive relation in soft polymer networks, at least for two different systems.
Another direct piece of evidence for the Shuttleworth effect is that experiments in \cite{XuNatComm2017} reveal an increase of stretch at the contact line upon a global stretching of the substrate. This information was previously not used to interpret results in the context of a Shuttleworth effect. However, our numerical and analytical results show that such a variation of the stretch at the contact line can only occur in the presence of elastic Marangoni stresses, induced by a Shuttleworth effect -- if surface tension were constant, the stretch at the contact line would take on a constant value.
This evokes an important question that remains to be resolved: What is the microscopic origin of the coupling between surface energy and strain? The polymer is expected to be liquid-like at small scale, where surface tension is exerted: What can produce the coupling between the microscopic scale and the deformation of the network of reticulation (or entanglement) points? A possible scenario is that the coupling emerges from a superficial layer where the mechanical and structural properties are different from bulk~\cite{AS16,Schulman2018aa}. Related to this open question is the experimental observation that, in contrast to the solid angle $\theta_S$, the liquid contact angle $\theta_L$ turns out \emph{not} to depend on stretching \cite{Schulman2018aa} -- a property that was confirmed for six different liquid-substrate systems in \cite{Schulman2018aa}, and which also holds for PDMS \cite{XuNatComm2017,Snoeijer2018}, for substrates stretched up to 100\%. This is surprising, since Young's law for the liquid angle should be valid for sufficiently large drops, but with the surface energy difference $\gamma_{SV}-\gamma_{SL}$ evaluated at the externally imposed stretch $\lambda_\infty$~\cite{Schulman2018aa}. This interpretation of Young's law is confirmed in Sec.~\ref{sec:pinning}, in an analysis where the large deformation elasticity and the Shuttleworth effect are explicitly accounted for. The implication of the experimental invariance of $\theta_L$ (within an experimental resolution of $\pm 1^\circ$) is that, for all imposed $\lambda_\infty$, the strain dependence $d\gamma/d\lambda$ must be nearly the same on both sides of the contact line. While there is no understanding of the microscopic/mesoscopic origin of the strain dependent surface energy, there is {\it a fortiori} no real understanding of this property, observed to be valid for many different pairs of liquids and reticulated polymers.
Another assessment of the Shuttleworth effect makes use of an elastic Wilhelmy plate, where a polymeric wire is partially immersed in a liquid reservoir -- allowing one to measure the stretch discontinuity across the contact line. In \cite{ChenDanielsSM2019}, it was found that the strain remains very small and no discontinuity was observed -- implying once more that $d\gamma/d\lambda$ is equal on both sides. In the initial experiment in~\cite{Marchand2012c}, conversely, a strong discontinuity of strain was observed at the contact line, implying a jump in $d\gamma/d\lambda$. Given that strains remain very small in these experiments, we can assume that the measured strain reflects the actual strains close to the contact line. Therefore, one can interpret these experiments using the no-pinning condition of Sec.~\ref{sec:pinning}, i.e. $[\mu]_-^+=0$, which at small strains implies the continuity of $d\gamma/d\lambda$ across the contact line -- in perfect agreement with the observation in \cite{ChenDanielsSM2019}. It was argued in~\cite{ChenDanielsSM2019} that discontinuous strains could be an artefact due to swelling. As an alternative interpretation, we note that in~\cite{Marchand2012c} a strong contact angle hysteresis was observed, which in the Shuttleworth-interpretation would also be consistent with a breakdown of the no-pinning condition $[\mu]_-^+=0$.
In conclusion, this research opens the promising perspective of identifying different conditions or different preparation protocols to get, or not, polymer networks with intricate surface properties. The main open question is to understand the microscopic origin of the Shuttleworth effect, which in the present understanding is unambiguously confirmed for at least two different systems. We emphasise that mechanically, none of the experimental observations are in contradiction with the presence of a Shuttleworth effect, in particular since $\theta_L$ and the elastic Wilhelmy plate only probe the \emph{difference} of strain-dependence on either side of the contact line. By contrast, the independent measurements of \emph{both} the solid angle and the stretch at the contact line \cite{XuNatComm2017,xu2018} cannot be explained by a hyperelastic theory without explicitly accounting for a strong Shuttleworth effect. Future experiments on a broad class of soft materials should therefore simultaneously explore both contact angles $\theta_L$ and $\theta_S$, as well as the strains near the contact line. Combined with the fully nonlinear numerics as presented here, this will offer a systematic quantification of the capillarity of soft solids. A next step is to extend the numerical method to a ridge travelling at constant velocity, including the substrate's bulk viscoelasticity, and possibly history-dependent surface rheology \cite{Gorcum2018}.
\emph{Acknowledgments.}~
We thank J. Eggers, M. van Gorcum, T. Salez and R. Style for discussions. We acknowledge financial support from ERC (European Research Council) Consolidator Grant number 616918 (to A.P. and S.K.), from ANR (French National Agency for Research) grant SMART (to B.A.), from NWO through VICI Grant No. 680-47-632 (to J.H.S.), and from an Industrial Partnership Program (a joint research program of Canon Production Printing, Eindhoven University of Technology, University of Twente, and NWO) (to E.H.B.).
\section{Introduction}
The main purpose of this paper is to calculate a heat kernel on $SL(2,\mathbb{R})$ explicitly, where we treat $SL(2,\mathbb{R})$ as a Riemannian manifold equipped with a certain invariant metric. The main theorem (Theorem \ref{MyThm1}) is stated at the end of Section 7.
First of all, we present the significance of our problem as well as some background history. In 1961, V. Bargmann (\cite{b1}) introduced a Hilbert space
\begin{align*}
\mathfrak{F}_{n,t}=\{F:\mathbb{C}^n\rightarrow\mathbb{C}\ \mathrm{holomorphic}\, |\, \int_{\mathbb{C}^n}|F(z)|^2\rho_{\mathbb{C}^n}(t,z)dxdy<\infty\},
\end{align*}
where $\rho_{\mathbb{C}^n}(t,z)=\frac{1}{(\pi t)^n}e^{-\frac{|z|^2}{t}}$, and an integral operator $A_{t}:L^2(\mathbb{R}^n,dx)\rightarrow\mathfrak{F}_{n,t}$ given by
\begin{align*}
A_{t}f(z)=\frac{1}{(\pi t)^{\frac{n}{4}}}\int_{\mathbb{R}^n}e^{\frac{-z^2+2\sqrt{2}z\cdot x-x^2}{2t}}f(x)dx,\, \, \, \, f\in L^2(\mathbb{R}^n,dx)
\end{align*}
for any $t>0$ and any $n\in\mathbb{N}$. Bargmann proved that $A_{t}$ is a unitary operator. The space $\mathfrak{F}_{n,t}$ is called the Segal-Bargmann space and the operator $A_{t}$ is called the Segal-Bargmann transform, since I. E. Segal considered almost the same objects at the same time (\cite{s1}). The Segal-Bargmann space, the Segal-Bargmann transform and their generalizations are important research objects in mathematical physics, probability theory and representation theory today, with several open problems. We remark that the function $\rho_{\mathbb{C}^n}(t,z)$ satisfies the heat equation on $\mathbb{C}^n\cong\mathbb{R}^{2n}$:
\begin{align*}
\frac{\partial \rho_{\mathbb{C}^n}}{\partial t}=\frac{1}{4}\Delta_{\mathbb{R}^{2n}}\rho_{\mathbb{C}^n},
\end{align*}
where $\Delta_{\mathbb{R}^{2n}}$ is the Euclidean Laplacian on $\mathbb{R}^{2n}$. We call the function $\rho_{\mathbb{C}^n}$ the heat kernel on $\mathbb{C}^n$. After Bargmann's and Segal's works, in 1994, B. Hall observed that an operator slightly modified from the Segal-Bargmann transform can be treated as a convolution operator (see \cite{bh1}, \cite{bh2}). We introduce the operator following \cite[Section 6.2]{bh2}. Let $\rho_{\mathbb{R}^n}:(0,\infty)\times\mathbb{R}^n\rightarrow\mathbb{R}$ be a function defined by
\begin{align*}
\rho_{\mathbb{R}^n}(t,x)=\frac{1}{(2\pi t)^{\frac{n}{2}}}e^{-\frac{|x|^2}{2t}}.
\end{align*}
We remark that the function $\rho_{\mathbb{R}^n}$ satisfies the heat equation
\begin{align*}
\frac{\partial \rho_{\mathbb{R}^n}}{\partial t}=\frac{1}{2}\Delta_{\mathbb{R}^n}\rho_{\mathbb{R}^n}
\end{align*}
and that $\rho_{\mathbb{R}^n}$ has an analytic continuation to $\mathbb{C}^n$. We call the function $\rho_{\mathbb{R}^n}$ the heat kernel on $\mathbb{R}^n$. We define $\rho_{t}(x)=\rho_{\mathbb{R}^n}(t,x)$. For $t>0$, let $M_{\sqrt{\rho_{t}}}:L^2(\mathbb{R}^n,\rho_{t}dx)\rightarrow L^2(\mathbb{R}^n,dx)$ be a multiplication operator defined by
\begin{align*}
M_{\sqrt{\rho_{t}}}f(x)=\sqrt{\rho_{t}(x)}f(x).
\end{align*}
Then, $M_{\sqrt{\rho_{t}}}$ is a unitary operator. Let $B_{t}=A_{t}\circ M_{\sqrt{\rho_{t}}}$. Then, the operator $B_{t}$ is a unitary operator from $L^2(\mathbb{R}^n,\rho_{t}dx)$ onto $\mathfrak{F}_{n,t}$ given by
\begin{align*}
B_{t}f(z)=\int_{\mathbb{R}^n}\rho_{t}(z-x)f(x)dx.
\end{align*}
This is a convolution operator. We call $B_{t}$ also the Segal-Bargmann transform. Furthermore, Hall introduced a different kind of Segal-Bargmann transform, where $\mathbb{R}^n$ is replaced by any compact Lie group $K$ and $\mathbb{C}^n$ is replaced by a complex Lie group $K_{\mathbb{C}}$ which is a complexification of $K$ (see \cite[Theorem 1']{bh1}). To understand this theory deeply, we need to work out examples, and to do so we must compute the heat kernels on Lie groups. The universal covering group of $K$ is a direct product of a vector group $\mathbb{R}^N$ and connected simply-connected compact simple Lie groups. Thus the simplest examples are the cases where $K$ is a torus or $K$ is a connected simply-connected compact simple Lie group. If $K$ is a torus, we can calculate the heat kernels on $K$ and $K_{\mathbb{C}}$ concretely. Next, we consider the case where $K$ is a connected simply-connected compact simple Lie group. The simplest example is the case $K=SU(2)$. We can calculate the heat kernel on $SU(2)$ (see Theorem \ref{cpt_heat}). However, the heat kernel on $K_{\mathbb{C}}=SL(2,\mathbb{C})$ has not been calculated explicitly as far as we know. Thus, if we can calculate the heat kernel on $SL(2,\mathbb{C})$ explicitly, we will obtain the first explicit example of the Segal-Bargmann transform for a simple Lie group $K$. In this way, we pose the problem of calculating the heat kernel on $SL(2,\mathbb{C})$.
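
As a sanity check on the two heat equations quoted above, the following symbolic sketch (in Python, using sympy, for $n=1$) verifies that $\rho_{\mathbb{R}^n}$ satisfies $\partial_t\rho=\frac{1}{2}\Delta\rho$ and that $\rho_{\mathbb{C}^n}$ satisfies $\partial_t\rho=\frac{1}{4}\Delta\rho$.
\begin{verbatim}
import sympy as sp

# Symbolic check (n = 1) of the two heat equations quoted above.
t = sp.symbols('t', positive=True)
x, y = sp.symbols('x y', real=True)

# rho_R satisfies  d rho/dt = (1/2) d^2 rho/dx^2
rho_R = sp.exp(-x**2/(2*t)) / sp.sqrt(2*sp.pi*t)
print(sp.simplify(sp.diff(rho_R, t) - sp.diff(rho_R, x, 2)/2))   # prints 0

# rho_C = (pi t)^{-1} e^{-|z|^2/t} satisfies  d rho/dt = (1/4) Laplacian(rho)
rho_C = sp.exp(-(x**2 + y**2)/t) / (sp.pi*t)
lap = sp.diff(rho_C, x, 2) + sp.diff(rho_C, y, 2)
print(sp.simplify(sp.diff(rho_C, t) - lap/4))                    # prints 0
\end{verbatim}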
Generalizing the problem, we consider the heat equation and the heat kernel on $SL(2,\mathbb{R})$. The heat kernel on $SL(2,\mathbb{R})$ seems not to have been calculated explicitly either. We think that this problem is easier than the one for $SL(2,\mathbb{C})$. Indeed, since we have found a good approach to the problem for $SL(2,\mathbb{R})$ in this paper, we may apply it similarly to the problem for $SL(2,\mathbb{C})$ in the future and thereby give the example of the Segal-Bargmann transform. In addition, we can then understand some open problems about the Segal-Bargmann transform more deeply.
Before discussing the details, we recall heat problems on Riemannian manifolds $M$ in two cases. The first case is when $M$ is a compact Lie group. This situation is discussed in Stein's book (see \cite[Chapter 2, Theorem 1]{st}). To obtain the heat kernel, one calculates the heat semigroup by using the Peter-Weyl theorem. The second case is when $M$ is a Riemannian symmetric space. This situation is discussed by Gangolli (see \cite[Proposition 3.1]{gan}). In that case, one uses the spherical transform on symmetric spaces (see \cite[Chapter 4]{helg2}). Our work is a generalization of both. In fact, we use the Peter-Weyl theorem in Section 3 and general spherical transforms in Section 5.
In this paper, we use a general method to calculate the heat kernel on $SL(2,\mathbb{R})$. We expect that the heat kernel on $SL(2,\mathbb{C})$ (more generally, on a non-compact semisimple Lie group $G$ having a multiplicity-free subgroup $K$) can be calculated in a similar way in the future.
Let us explain the contents of each section. In Section 2, we introduce Hall's Segal-Bargmann transform. This section is based mainly on \cite{bh1}. In Section 3, we discuss the heat kernels on Lie groups. We formulate the heat equation in terms of Riemannian symmetric pairs in Section 3.1. Next, we show existence and uniqueness of the heat kernel on Lie groups. In Section 4, we give a decomposition of the $L^2$ space on a semisimple Lie group. Each subspace can be understood as a space of sections of a homogeneous vector bundle. In Section 5, we discuss the spherical transforms for homogeneous vector bundles. This theory was first introduced by R. Camporesi (see \cite{ca1}). In Section 6, we discuss the Helgason-Fourier transform for homogeneous vector bundles. This theory was also introduced by R. Camporesi (see \cite{ca2}). In Section 7, we calculate the heat kernel on $SL(2,\mathbb{R})$.
\section*{Acknowledgement}
I am very grateful to Professor Hideyuki Ishi for giving me advice in writing this paper. I am also grateful to Professors Hiroshi Oda and Nobukazu Shimeno for giving me insightful suggestions related to this research.
\section{Segal-Bargmann transform}\label{SBtrans}
Let $K$ be a connected compact Lie group, $dk$ be the normalized Haar measure on $K$, and $\mathfrak{k}$ be the Lie algebra of $K$. We take an $\mathrm{Ad}(K)$ invariant inner product $\langle , \rangle_\mathfrak{k}$ on $\mathfrak{k}$. Then $\langle , \rangle_\mathfrak{k}$ induces a $K$ bi-invariant Riemannian metric on $K$. Let $\{ X_1, ... , X_m\}$ be an orthonormal basis of $\mathfrak{k}$, and $\tilde{X}_1, ... , \tilde{X}_m$ be left invariant vector fields on $K$ induced by the elements $X_1, ... , X_m$. Let $\Delta_K = \sum_{j = 1}^{m}\tilde{X}_j\circ\tilde{X}_j$. The differential operator $\Delta_K$ is a Laplace-Beltrami operator on $K$ as a Riemannian manifold (see \cite[Theorem 1]{ura}).
\begin{defi}(cf. \cite[(10)]{bh1}, \cite{ne1}, \cite[Chapter 2]{st})\\
We call a function $\rho_K :(0,\infty)\times K \rightarrow (0,\infty)$ a heat kernel on $K$ if $\rho_K$ satisfies the conditions below:
\begin{eqnarray}
&\mathrm{(i)}& \frac{\partial}{\partial t}(\rho_K * f)=\frac{1}{2}\Delta_K (\rho_K * f), \nonumber \\
&\mathrm{(ii)}& \parallel \rho_K * f - f \parallel _{L^2(K, dk)} \rightarrow 0 \, \, (t \rightarrow +0),\, \, f \in L^2(K,dk). \nonumber
\end{eqnarray}
Here, $f_1 * f_2 (k) = \int_K f_1(k'^{-1}k)f_2(k')dk'.$
\end{defi}
A classical theorem gives a series expression of the heat kernel $\rho_K$.
\begin{thm}(cf. \cite[Chapter 2, Theorem 1]{st})\label{cpt_heat}\\
The heat kernel $\rho_K$ exists uniquely, and it is given by
\begin{align*}
\rho_K(t,k)=\sum_{\tau \in \hat{K}}(\mathrm{dim}V_{\tau})e^{-c_{\tau}t}\chi_{\tau}(k),
\end{align*}
where $\hat{K}$ is the set of equivalence classes of irreducible unitary representations of $K$ and $c_{\tau}$ is the non-negative real number determined by the equation $\sum_{j=1}^{m}d\tau(X_j)^2=-c_{\tau}\,\mathrm{id}_{V_{\tau}}$.
\end{thm}
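
To make Theorem \ref{cpt_heat} concrete, the following numerical sketch (in Python) sums partial series for $K=SU(2)$, whose irreducible representations are labelled by their dimensions $n=1,2,\dots$ and have characters $\chi_{n}(k_{\theta})=\sin(n\theta/2)/\sin(\theta/2)$ on the conjugacy class of rotation angle $\theta$; the normalization $c_{n}=(n^{2}-1)/4$ of the Casimir constants is an assumption tied to a particular choice of metric, made here purely for illustration.
\begin{verbatim}
import numpy as np

# Partial sums of the heat kernel series for K = SU(2).  The Casimir
# constants are proportional to n^2 - 1; the factor 1/4 is an assumed
# normalization depending on the chosen Ad-invariant metric.
def heat_kernel_su2(t, theta, n_max=200):
    n = np.arange(1, n_max + 1)
    c = (n**2 - 1) / 4.0
    chi = np.sin(n*theta/2) / np.sin(theta/2)
    return np.sum(n * np.exp(-c*t) * chi)

for t in [0.05, 0.5, 2.0]:
    print(f"t = {t:4.2f}: rho_K(t, k_theta) ~ {heat_kernel_su2(t, 1.0):.6f}")
\end{verbatim}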
We shall introduce a specific complexification of the compact Lie group $K$.
\begin{thm}(cf. \cite[Chapter 17.5, Theorem 5.1]{hoc})\\
A complexification $(K_{\mathbb{C}}, \iota : K\rightarrow K_{\mathbb{C}})$ of $K$ with the following property exists: If $H$ is a complex Lie group and $\phi : K \rightarrow H$ is a homomorphism, there exists a unique holomorphic homomorphism $\tilde{\phi} : K_{\mathbb{C}}\rightarrow H$ such that $\phi=\tilde{\phi}\circ\iota$. Moreover, such a complexification is unique up to a complex Lie group isomorphism.
\end{thm}
We call $K_{\mathbb{C}}$ a universal complexification of $K$.
\begin{thm}(cf. \cite[Proposition 7.5.]{knapp})\\
If $K$ is semisimple, a complexification of $K$ exists and is unique up to a complex Lie group isomorphism. In particular, any complexification is universal.
\end{thm}
The heat kernel $\rho_K$ has an analytic continuation to $K_{\mathbb{C}}$ (see \cite[Proposition 1]{bh1}). Next, we fix notation related to $K_{\mathbb{C}}$. Let $dg$ be a Haar measure on $K_{\mathbb{C}}$ and $\mathfrak{k}_{\mathbb{C}}$ be the Lie algebra of $K_{\mathbb{C}}$. We define an inner product $\langle , \rangle_{\mathfrak{k}_{\mathbb{C}}}$ by
\begin{align*}
\langle X + iX', X'' + iX''' \rangle_{\mathfrak{k_{\mathbb{C}}}} = \langle X, X'' \rangle_{\mathfrak{k}} + \langle X', X''' \rangle_{\mathfrak{k}},\, \, X, X', X'', X'''\in \mathfrak{k}.
\end{align*}
Let $Y_{j}=iX_{j} (1\leq j \leq m)$. Then, $\{ X_1, ... , X_m, Y_1, ... , Y_m\}$ is an orthonormal basis of $\mathfrak{k}_{\mathbb{C}}$.
\begin{rem}
A decomposition $\mathfrak{k}_{\mathbb{C}}=\mathfrak{k}\oplus i\mathfrak{k}$ is exactly the Cartan decomposition for the Riemannian symmetric pair $(K_{\mathbb{C}}, K)$.
\end{rem}
The inner product $\langle , \rangle_{\mathfrak{k}_{\mathbb{C}}}$ induces a left $K_{\mathbb{C}}$ invariant and right $K$ invariant Riemannian metric on $K_{\mathbb{C}}$. Let $\Delta_{K_{\mathbb{C}}} = \sum_{j = 1}^{m}(\tilde{X_j}\circ\tilde{X_j}+ \tilde{Y_j}\circ\tilde{Y_j})$. The differential operator $\Delta_{K_{\mathbb{C}}}$ is a Laplace-Beltrami operator on $K_{\mathbb{C}}$ as a Riemannian manifold. Following Hall \cite{bh1}, we give a definition of a heat kernel and recall an existence and uniqueness theorem, which are slightly different from the ones for $K$.
\begin{defi}\label{cpxheat}
We call a function $\rho_{K_{\mathbb{C}}} :(0,\infty)\times K_{\mathbb{C}} \rightarrow (0,\infty)$ a heat kernel of $K_{\mathbb{C}}$ if $\rho_{K_{\mathbb{C}}}$ satisfies the conditions below.
\begin{eqnarray}
&\mathrm{(i)}& \frac{\partial}{\partial t}(\rho_{K_{\mathbb{C}}} * f)=\frac{1}{4}\Delta_{K_{\mathbb{C}}} (\rho_{K_{\mathbb{C}}} * f),\nonumber \\
&\mathrm{(ii)}& \parallel \rho_{K_{\mathbb{C}}} * f - f \parallel _{L^2(K_{\mathbb{C}}, dg)} \rightarrow 0 \, \, (t \rightarrow +0),\, \, f \in L^2(K_{\mathbb{C}},dg). \nonumber
\end{eqnarray}
\end{defi}
\begin{thm}(\cite{ne1})
Let $A$ be the closure of $\frac{1}{4}\Delta_{K_{\mathbb{C}}}$ as an operator on $L^2(K_{\mathbb{C}},dg)$ and let $e^{tA}$ be the semigroup generated by $A$. Then, $e^{tA}$ is a convolution operator and its integral kernel is a heat kernel of $K_{\mathbb{C}}$ (we denote it by $\rho_{K_{\mathbb{C}}}$).
\end{thm}
Now, we recall B. Hall's theorem. Let $HL^2(K_{\mathbb{C}},\alpha(g)dg)$ be the weighted Bergman space of holomorphic $L^2$ functions on $K_{\mathbb{C}}$ with respect to the weight $\alpha$ which is a positive continuous function on $K_{\mathbb{C}}$.
\begin{thm}(\cite[Theorem 1']{bh1})\\
Fix a positive number $t>0$. Let $B_{t}: L^2(K, \rho_{K}(t,k)dk)\rightarrow HL^2(K_{\mathbb{C}}, \rho_{K_{\mathbb{C}}}(t,g)dg)$ be a linear operator given by
\begin{align*}
B_{t}f(g)=\int_{K}\rho_{K}(t,k^{-1}g)f(k)dk.
\end{align*}
Then, $B_{t}$ is a unitary isomorphism.
\end{thm}
We want to observe closely what actually happens in Theorem 8 for concrete examples. To see the simplest case $K=SU(2)$, we pose a problem.
\begin{prob}\label{SL2Cheat}
Calculate the heat kernel on $SL(2,\mathbb{C})$ explicitly.
\end{prob}
\section{Heat equations}
\subsection{Heat equations on semisimple Lie groups}
In this section, we generalize Problem \ref{SL2Cheat}. Let $G$ be a noncompact connected semisimple Lie group, $dg$ be a Haar measure on $G$, $K\subset G$ be a maximal compact subgroup and $dk$ be the normalized Haar measure on $K$. The pair $(G,K)$ is a Riemannian symmetric pair (see \cite[Chapter 5]{helg1}). Let $\sigma:G\rightarrow G$ be a Cartan involution with respect to the pair $(G,K)$. Let $\mathfrak{g}$ be the Lie algebra of $G$, $\mathfrak{k}$ be the Lie algebra of $K$ and $d\sigma:\mathfrak{g}\rightarrow\mathfrak{g}$ be the differential of $\sigma$. Then, $\mathfrak{k}=\{X\in\mathfrak{g}\, |\, d\sigma(X)=X\}$. We denote by $\mathfrak{p}$ the set $\{X\in\mathfrak{g}\, |\, d\sigma(X)=-X\}$. Then, we get the decomposition $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ as a linear space. We take an $\mathrm{Ad}(K)$ invariant inner product $\langle , \rangle_\mathfrak{g}$ on $\mathfrak{g}$. Then, $\langle , \rangle_\mathfrak{g}$ induces a left $G$ invariant and right $K$ invariant Riemannian metric on $G$. Let $\{ X_1, ... , X_m\}$ be an orthonormal basis of $\mathfrak{k}$, $\{ Y_1, ... , Y_n\}$ be an orthonormal basis of $\mathfrak{p}$, and $\tilde{X}_1, ... , \tilde{X}_m, \tilde{Y}_1, ... , \tilde{Y}_n$ be left invariant vector fields on $G$ which are induced by elements $X_1, ... , X_m, Y_1, ... , Y_n$. Let $\Delta_G = \sum_{j = 1}^{m}\tilde{X}_j\circ\tilde{X}_j +\sum_{k = 1}^{n}\tilde{Y}_k\circ\tilde{Y}_k$. The differential operator $\Delta_G$ is a Laplace-Beltrami operator on $G$ as a Riemannian manifold.
\begin{defi}\label{riemannheat}
We call a function $\rho_{G} :(0,\infty)\times G \rightarrow (0,\infty)$ a heat kernel on $G$ if $\rho_{G}$ satisfies the conditions below:
\begin{eqnarray}
&\mathrm{(i)}& \frac{\partial}{\partial t}(\rho_{G} * f)=\Delta_{G} (\rho_{G} * f),\nonumber \\
&\mathrm{(ii)}& \parallel \rho_{G} * f - f \parallel _{L^2(G, dg)} \rightarrow 0 \, \, (t \rightarrow +0),\, \, f \in L^2(G,dg). \nonumber
\end{eqnarray}
\end{defi}
\begin{rem}
Definition \ref{cpxheat} in Section \ref{SBtrans} is, up to the normalization of the time variable, a special example of Definition \ref{riemannheat} because $(K_{\mathbb{C}},K)$ is a Riemannian symmetric pair.
\end{rem}
Now, we pose a generalized problem.
\begin{prob}\label{G_heat}
Calculate the heat kernel on $G$ explicitly. Especially, calculate the heat kernel on $SL(2,\mathbb{R})$.
\end{prob}
\subsection{Existence and uniqueness of the heat kernel}
First, we recall basic notions. Let $M$ be a Riemannian manifold. Let $\mathfrak{X}(M)$ be the set of all vector fields on $M$, $\nabla:\mathfrak{X}(M)\times\mathfrak{X}(M)\rightarrow\mathfrak{X}(M)$ be the Levi-Civita connection, $R(X,Y):=\nabla_{X}\nabla_{Y}-\nabla_{Y}\nabla_{X}-\nabla_{[X,Y]}$ be the curvature of the pair $(X,Y)\in\mathfrak{X}(M)\times\mathfrak{X}(M)$ and let $\mathrm{Ric}:\mathfrak{X}(M)\times\mathfrak{X}(M)\rightarrow C^{\infty}(M)$ be the Ricci curvature given by
\begin{align*}
\mathrm{Ric}(X,Y)=\sum_{j=1}^{m}\langle R(X,E_{j})Y,E_{j} \rangle,\, \, X, Y\in\mathfrak{X}(M),
\end{align*}
where $\{E_j\}_{j=1}^m$ is a frame field on $M$ (see \cite[Lemma 52 in Chapter 3]{neil}). We recall an important notion about the Ricci curvature.
\begin{defi}\label{riccibound}(cf. \cite[Section 3.3]{cha})
We say that the Ricci curvature is bounded from below if there exists $\kappa\in\mathbb{R}$ such that
\begin{align*}
\mathrm{Ric}(u,u)\geq\kappa||u||_{x}^2,\, \, u\in T_{x}M, x\in M.
\end{align*}
\end{defi}
Next, we discuss the heat kernel. Let $dv$ be the volume form on $M$ and $\Delta$ be the Laplace-Beltrami operator.
\begin{defi}
A function $\rho$ on $(0,\infty)\times M\times M$ is called a heat kernel if the following holds: for each $f_{0}\in L^2(M,dv)$, if we put $\displaystyle f(t,x):=\int_{M}\rho(t,x,y)f_{0}(y)dv(y)\, (t>0)$, then $f(t,\bullet)\in L^2(M,dv)$ for all $t>0$ and $f$ satisfies the conditions below.
\begin{eqnarray}
&\mathrm{(i)}& \frac{\partial f}{\partial t}=\Delta f,\nonumber \\
&\mathrm{(ii)}& ||f(t,\bullet)-f_{0}||_{L^2(M,dv)}\rightarrow 0\,\, (t\rightarrow +0).\nonumber
\end{eqnarray}
\end{defi}
We show existence and uniqueness of the heat kernel on $G$. Dodziuk's work \cite{dod} tells us that if $M$ is complete with Ricci curvature bounded from below, then there exists a unique heat kernel. Thus, it is enough to show that $G$ is a complete Riemannian manifold with Ricci curvature bounded from below. The statements below are well-known, but we include their proofs for completeness.
\begin{lem}
$G$ is complete.
\end{lem}
\begin{proof}
We show that every Cauchy sequence converges. Let $d$ be the Riemannian distance on $G$ and $\{g_{\nu}\}_{\nu=1}^{\infty}$ be any Cauchy sequence in $G$. Since $G$ is locally compact, there exists $r>0$ such that the open subset $U=\{g\in G|d(g,e)<r\}$ is relatively compact. Since $\{g_{\nu}\}_{\nu=1}^{\infty}$ is a Cauchy sequence, there exists $N\in\mathbb{N}$ such that $d(g_{\nu},g_{\nu'})<r$ for all $\nu, \nu'\geq N$. For $g\in G$, let $l_{g}:G\ni x\mapsto gx\in G$. These are isometries. We consider the sequence $\{l_{g_{N}^{-1}}(g_{\nu})\}_{\nu=1}^{\infty}$. This is also a Cauchy sequence and $\{l_{g_{N}^{-1}}(g_{\nu})\}_{\nu=N}^{\infty}\subset U$. Since $U$ is relatively compact, the sequence $\{l_{g_{N}^{-1}}(g_{\nu})\}_{\nu=1}^{\infty}$ converges in $G$. Then, the sequence $\{g_{\nu}\}_{\nu=1}^{\infty}$ converges to $\displaystyle l_{g_{N}}(\lim_{\nu\to\infty}l_{g_{N}^{-1}}(g_{\nu}))$.
\end{proof}
\begin{lem}
The Ricci curvature is bounded from below.
\end{lem}
\begin{proof}
It is known that $\nabla_{\tilde{X}}\tilde{Y}$ is left invariant for $X, Y\in\mathfrak{g}$ (see \cite[Section 1.3 in Chapter 2]{helg1}). Thus, $R(\tilde{X},\tilde{Y})\tilde{Z}$ is also left invariant for $X,Y,Z\in\mathfrak{g}$. We take $\{E_{j}\}=\{\tilde{X_{j}}\}_{j=1}^{m}\cup\{\tilde{Y_{k}}\}_{k=1}^{n}$ as a frame field. Then $\mathrm{Ric}(\tilde{X},\tilde{Y})$ is a constant function on $G$ for each $X,Y\in\mathfrak{g}$, so the values of $\mathrm{Ric}$, and hence the constant $\kappa$ in Definition \ref{riccibound}, are determined on $T_{e}G$.
\end{proof}
\begin{cor}
The heat kernel on $G$ exists uniquely.
\end{cor}
\section{Decomposition of the $L^2$ space}\label{decom}
We introduce a decomposition of $L^2(G,dg)$ to be used later. Since the pair $(G,K)$ is a Riemannian symmetric pair, the map $\mathfrak{p}\times K\ni (Y,k)\mapsto e^{Y}k\in G$ is a diffeomorphism (see \cite[Theorem 6.31]{knapp}). In this situation, $\mathfrak{p}$ is diffeomorphic to $G/K$ naturally. Thus $G/K\times K$ and $G$ are diffeomorphic. Let $\iota:G/K\times K\rightarrow G$ be the diffeomorphism and $dp$ be a left $G$ invariant measure on $G/K$ such that $\iota^{*}dg = dp dk$. Then,
\begin{align*}
L^2(G,dg) \cong L^2(G/K,dp)\otimes L^2(K,dk).
\end{align*}
In this equality, the right hand side denotes the completion of the tensor product space. Using the Peter-Weyl theorem:
\begin{align*}
L^2(K,dk)\cong\sum_{\tau\in \hat{K}}V_{\tau}\otimes V_{\tau}^*,
\end{align*}
we get
\begin{align*}
L^2(G,dg) \cong \sum_{\tau\in \hat{K}}(L^2(G/K,dp)\otimes V_{\tau})\otimes V_{\tau}^{*}.
\end{align*}
Let
\begin{align*}
L^2(G,\tau)=\{f:G\rightarrow V_{\tau}\, |\, f(gk)=\tau(k^{-1})f(g)\, (k\in K, g\in G), \, \int_{G}||f(g)||^2_{V_{\tau}}dg<\infty \},
\end{align*}
which is regarded naturally as the space of $L^2$-sections of a homogeneous vector bundle $G\times_{K}V_{\tau}$ over $G/K$ (see \cite[Definition 3 in Chapter 4]{mits}). Note that $L^2(G,\tau)$ is preserved by the left translation $L_h\, (h\in G)$ given by $L_hf(g)=f(h^{-1}g)$.
\begin{lem}
The map
\begin{align*}
L^2(G/K,dp)\otimes V_{\tau}\ni f\otimes v \mapsto (\iota(p,k)\mapsto \tau(k^{-1})f(p)v)\in L^2(G,\tau)\, \, \, \, \, \, \, \, \, (1)
\end{align*}
gives an isomorphism as Hilbert spaces:
\begin{align*}
L^2(G/K,dp)\otimes V_{\tau}\cong L^2(G,\tau).
\end{align*}
\end{lem}
\begin{proof}
Let $\{v_{1}, ... , v_{d}\}$ be an orthonormal basis of $V_{\tau}$. For a function $f\in L^2(G,\tau)$ and $g\in G$, we obtain the expression $f(g)=\sum_{s=1}^{d}f_{s}(g)v_{s}$. Then the map
\begin{align*}
L^2(G,\tau)\ni f\mapsto \sum_{s=1}^{d}f_{s}(\iota(\bullet, e_{K}))\otimes v_{s}\in L^2(G/K,dp)\otimes V_{\tau}
\end{align*}
is an inverse of the map $(1)$.
\end{proof}
Therefore we have
\begin{align*}
L^2(G,dg)\cong\sum_{\tau\in \hat{K}}L^2(G,\tau)\otimes V_{\tau}^*.
\end{align*}
We define an action of $G\times K$ on $L^2(G,dg)$ by
\begin{align*}
(h,k)\cdot f(g)=f(h^{-1}gk),\, \, h\in G,\, k\in K
\end{align*}
and an action of $G\times K$ on $L^2(G,\tau)\otimes V_{\tau}^{*}$ by
\begin{align*}
(h,k)\cdot f\otimes v^{*} =L_hf\otimes \tau^{*}(k)v^{*},\, \, h\in G,\, k\in K.
\end{align*}
In this situation, the isomorphism in the decomposition of $L^2(G,dg)$ above is an intertwining operator as $G\times K$ modules. We treat the differential operator $\Delta_{G}$ as an operator on $L^2(G,dg).$ Then, $\Delta_{G}$ preserves each subspace which is isomorphic to $L^2(G,\tau)\otimes V_{\tau}^{*}$. Thus, we can reduce the heat equation on $G$ to heat equations on homogeneous vector bundles $G\times_{K}V_{\tau}$.
\section{The spherical transforms}\label{ST}
\subsection{A multiplicity free subgroup}
Let us recall Camporesi's work here. Let $G$ be a locally compact group satisfying the second axiom of countability and $K$ be a compact subgroup of $G$. For $\tau\in\hat{K}$ and $U\in\hat{G}$, let $m(\tau,U)$ be the multiplicity of $\tau$ in $U|_{K}$.
\begin{defi}(\cite{koo})
A compact subgroup $K$ of $G$ is said to be a multiplicity free subgroup if $m(\tau,U)\leq 1$ for all $\tau\in\hat{K}$ and all $U\in\hat{G}$.
\end{defi}
Examples of pairs $(G,K)$ where $K$ is a multiplicity free subgroup of $G$ are found in \cite[Theorem 1]{koo}. One of the examples is $(SU(1,1), S(U(1)\times U(1)))$. Let $c=\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & i \\
i & 1
\end{pmatrix}
$. Since $cSU(1,1)c^{-1}=SL(2,\mathbb{R})$ and $cS(U(1)\times U(1))c^{-1}=SO(2)$, the compact group $SO(2)$ is a multiplicity free subgroup of $SL(2,\mathbb{R})$.
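
The conjugation can be verified directly at the Lie algebra level. The following symbolic sketch (in Python with sympy) checks that a standard basis of $\mathfrak{su}(1,1)$ -- chosen here purely for illustration -- is mapped by $\mathrm{Ad}(c)$ to real traceless matrices, i.e. into $\mathfrak{sl}(2,\mathbb{R})$.
\begin{verbatim}
import sympy as sp

# Check that Ad(c) maps su(1,1) into sl(2,R): each conjugated basis
# element is real and traceless.  (A sketch, not a full proof.)
I = sp.I
c = sp.Matrix([[1, I], [I, 1]]) / sp.sqrt(2)
su11_basis = [sp.Matrix([[I, 0], [0, -I]]),   # i*diag(1,-1)
              sp.Matrix([[0, 1], [1,  0]]),
              sp.Matrix([[0, I], [-I, 0]])]
for X in su11_basis:
    Y = (c * X * c.inv()).applyfunc(sp.simplify)
    assert sp.simplify(Y.trace()) == 0
    assert all(sp.simplify(sp.im(entry)) == 0 for entry in Y)
    print(Y.tolist())
\end{verbatim}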
\subsection{The spherical transforms}
We discuss the spherical transform of ``radial systems of sections'' (denoted by $C_{0}^{\infty}(G,\tau,\tau)$ below, see \cite{ca1}). Let $G$ be a connected noncompact semisimple Lie group with finite center and $K$ be a maximal compact subgroup of $G$. We assume that $K$ is a multiplicity free subgroup of $G$. Let $\tau\in\hat{K}$ and
\begin{flushleft}
$C_{0}^{\infty}(G,\tau,\tau)=$\\
$\{F\in C_{0}^{\infty}(G,\mathrm{End}(V_{\tau}))\, |\, F(k_{1}gk_{2})=\tau(k_{2}^{-1})F(g)\tau(k_{1}^{-1}),\, \, g \in G,\, k_{1},\, k_{2}\in K\}$.\\
\end{flushleft}
For any $F\in C_{0}^{\infty}(G,\tau,\tau)$ and $v\in V_{\tau}$, a vector-valued function $f(g)=F(g)v\, (g\in G)$ defines a section of a homogeneous vector bundle $G\times_{K}V_{\tau}$. Since $G=KAK$ (Cartan decomposition), $f$ is determined by its values on $A$. Because of this observation, $f$ is called a ``radial section'' and $F$ is called a ``radial system''. The convolution product on $C_{0}^{\infty}(G,\tau,\tau)$ is defined by
\begin{align*}
F_{1}*F_{2}(g)=\int_{G}F_{1}(g'^{-1}g)F_{2}(g')dg'.
\end{align*}
In fact, $F_{1}*F_{2}\in C_{0}^{\infty}(G,\tau,\tau)$ because
\begin{eqnarray}
F_{1}*F_{2}(k_{1}gk_{2})&=&\int_{G}F_{1}(g'^{-1}(k_{1}gk_{2}))F_{2}(g')dg' \nonumber \\
&=&\int_{G}F_{1}(g'^{-1}gk_{2})F_{2}(k_{1}g')dg' \nonumber \\
&=&\tau(k_{2}^{-1})\Bigl( \int_{G}F_{1}(g'^{-1}g)F_{2}(g')dg'\Bigr)\tau(k_{1}^{-1}) \nonumber \\
&=&\tau(k_{2}^{-1})F_{1}*F_{2}(g)\tau(k_{1}^{-1}). \nonumber
\end{eqnarray}
Let $\hat{G}(\tau)=\{ U\in\hat{G}\, |\, m(\tau,U)\neq 0\}$. We fix $U\in\hat{G}(\tau)$ and we define the spherical function on $G$ (see \cite[(10)]{ca1}). Let $H_{U}$ be a representation space of $U$, $H_{\tau}\subset H_{U}$ be
the isotypic component of $\tau$ and $P_{\tau}:H_{U}\rightarrow H_{\tau}$ be the orthogonal projection operator. Since $m(\tau,U)=1$, there exists an isomorphism $\iota_{\tau}:V_{\tau}\rightarrow H_{\tau}$. Let $\Phi_{\tau}^{U}:G\rightarrow \mathrm{End}(V_{\tau})$ be
\begin{align*}
\Phi_{\tau}^{U}(g)=\iota_{\tau}^{-1}P_{\tau}U(g)\iota_{\tau}.
\end{align*}
We call this function $\Phi_{\tau}^{U}$ a spherical function of type $\tau$ (\cite{ca1}). Next, we define the spherical transform.
\begin{defi}(\cite[Definition 3.3]{ca1})
The spherical transform of $F\in C_{0}^{\infty}(G,\tau,\tau)$ is the function $\hat{F}:\hat{G}(\tau)\rightarrow \mathbb{C}$ defined by
\begin{align*}
\hat{F}(U)=\frac{1}{\mathrm{dim}V_{\tau}}\int_{G}\mathrm{tr}(\Phi_{\tau}^{U}(g)F(g))dg.
\end{align*}
\end{defi}
The spherical transform preserves products.
\begin{thm}(\cite[After Lemma 3.4]{ca1})\label{product}
For any $F_{1}, F_{2}\in C_{0}^{\infty}(G,\tau,\tau)$, we have
\begin{align*}
\widehat{(F_{1}*F_{2})}(U)=\hat{F_{1}}(U)\hat{F_{2}}(U),\, \,U\in\hat{G}(\tau).
\end{align*}
\end{thm}
The spherical transform has an inverse map. To describe this, we recall the Plancherel measure on $\hat{G}$. For any $f\in C_{0}(G)$ and $U\in\hat{G}$, let $U(f)$ be the linear operator on $H_{U}$ defined by
\begin{align*}
U(f)=\int_{G}f(g)U(g)dg
\end{align*}
(see \cite[Section 4.1]{wa1}).
\begin{thm}(cf. \cite[Theorem 7.2.1.1]{wa2})
There exists a unique measure $d\mu$ on $\hat{G}$ such that
\begin{align*}
\int_{G}|f(g)|^2dg=\int_{\hat{G}}\mathrm{tr}(U(f)U(f)^{*})d\mu(U),\, \, f\in L^1(G,dg)\cap L^2(G,dg).
\end{align*}
\end{thm}
Such $d\mu$ is called the Plancherel measure on $\hat{G}$.
\begin{thm}(\cite[Theorem 3.9]{ca1})
For any $F\in C_{0}^{\infty}(G,\tau,\tau)$, we have
\begin{align*}
F(g)=\frac{1}{\mathrm{dim}V_{\tau}}\int_{\hat{G}(\tau)}\Phi_{\tau}^{U}(g^{-1})\hat{F}(U)d\mu(U).
\end{align*}
\end{thm}
In addition, the spherical transform is a unitary map.
\begin{thm}(\cite[Corollary 3.10]{ca1})\label{plancherel}
The spherical transform extends to a unitary map $L^2(G,\tau,\tau)\rightarrow L^2(\hat{G}(\tau),d\mu)$.
\end{thm}
\section{The Helgason-Fourier transforms}
\subsection{Principal series representations}\label{Principle series}
First, we discuss parabolic subgroups according to \cite[Section 7.7]{knapp}. Let $G$ be a linear connected reductive group, $K$ be a maximal compact subgroup, $\Theta$ be the global Cartan involution, $B$ be a nondegenerate bilinear form on $\mathfrak{g}$ and $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ be the Cartan decomposition. Let $\mathfrak{a}\subset\mathfrak{p}$ be a maximal abelian subspace, $\Sigma(\mathfrak{g},\mathfrak{a})$ be the set of restricted roots, $\Sigma^{+}(\mathfrak{g},\mathfrak{a})$ be a set of positive restricted roots with some linear order, $\mathfrak{m}\subset\mathfrak{k}$ be the centralizer of $\mathfrak{a}$ in $\mathfrak{k}$, and $\mathfrak{n}=\oplus_{\lambda\in\Sigma^{+}(\mathfrak{g},\mathfrak{a})}\mathfrak{g}_{\lambda}$. Let $A$ and $N$ be the analytic subgroups of $G$ corresponding to $\mathfrak{a}$ and $\mathfrak{n}$ respectively, and $M\subset K$ be the centralizer of $A$ in $K$. Then, $\mathfrak{m}=\mathrm{Lie}(M)$ and we get an Iwasawa decomposition $G=KAN$ (see \cite[Proposition 7.31]{knapp}) and a closed subgroup $Q=MAN$ called a minimal parabolic subgroup (see \cite[After Proposition 7.31]{knapp}). Let $\mathfrak{q}=\mathfrak{m}\oplus\mathfrak{a}\oplus\mathfrak{n}$. A Lie subalgebra $\mathfrak{q}'\subset\mathfrak{g}$ is called a $\mathfrak{q}$-parabolic subalgebra if $\mathfrak{q}\subset\mathfrak{q}'$. We call it just a parabolic subalgebra for simplicity in what follows. Let $\Pi(\mathfrak{g},\mathfrak{a})$ be the set of simple roots.
\begin{prop}(cf. \cite[Proposition 7.76]{knapp})
Let $\mathfrak{q}'$ be a parabolic subalgebra. Then, there exists a subset $\Pi'\subset\Pi(\mathfrak{g},\mathfrak{a})$ such that
\begin{align*}
\mathfrak{q}'=\mathfrak{m}\oplus\mathfrak{a}\oplus(\bigoplus_{\beta\in\Gamma}\mathfrak{g}_{\beta}),
\end{align*}
where $\Gamma=\Sigma^{+}(\mathfrak{g},\mathfrak{a})\cup(\Sigma(\mathfrak{g},\mathfrak{a})\cap\mathrm{span}(\Pi'))$.
\end{prop}
We put
\begin{eqnarray}
\mathfrak{a}'&=&\bigcap_{\beta\in\Gamma\cap(-\Gamma)}\mathrm{Ker}\beta, \nonumber \\
\mathfrak{m}'&=&\mathfrak{m}\oplus\mathfrak{a}'^{\perp}\oplus(\bigoplus_{\beta\in\Gamma\cap(-\Gamma)}\mathfrak{g}_{\beta}), \nonumber \\
\mathfrak{n}'&=&\bigoplus_{\beta\in\Gamma\setminus(-\Gamma)}\mathfrak{g}_{\beta}.\nonumber
\end{eqnarray}
Let $A'$ and $N'$ be analytic subgroups such that $\mathrm{Lie}(A')=\mathfrak{a'}$ and $\mathrm{Lie}(N')=\mathfrak{n'}$. Let $\mathfrak{z}(\mathfrak{a}')\subset \mathfrak{g}$ be the centralizer of $\mathfrak{a}'$, $Z(A')\subset G$ be the centralizer of $A'$, $M'_{ss}\subset G$ be the analytic subgroup corresponding to $[\mathfrak{z}(\mathfrak{a}'),\mathfrak{z}(\mathfrak{a}')]$ and $M'$ be the group $(K\cap Z(A'))M'_{ss}$. Then we have $\mathfrak{q'}=\mathfrak{m'}\oplus\mathfrak{a'}\oplus\mathfrak{n'}$ (see \cite[After Proposition 7.76]{knapp}) and $Q'=M'A'N'$ is a closed subgroup of $G$ (see \cite[After Corollary 7.81]{knapp}). These decompositions are called the Langlands decompositions. Moreover, we call $Q'$ a parabolic subgroup. Let $\theta=d\Theta$. A parabolic subgroup $Q'=M'A'N'$ is called cuspidal if there is a $\theta$-stable compact Cartan subalgebra in $\mathfrak{m}'$.
Next, we discuss induced representations according to \cite[Section 7.1]{knapp2}. We put $(\mathfrak{a}')^{*}=\mathrm{Hom}_{\mathbb{R}}(\mathfrak{a}',\mathbb{R})$ and $(\mathfrak{a}'_{\mathbb{C}})^{*}=\mathrm{Hom}_{\mathbb{R}}(\mathfrak{a}',\mathbb{C})$. We take $\sigma'\in\hat{M'}$, $\nu'\in(\mathfrak{a}'_{\mathbb{C}})^{*}$ and $\rho_{\mathfrak{a}'}=\frac{1}{2}\sum_{\alpha\in\Sigma^{+}(\mathfrak{g},\mathfrak{a}')}(\mathrm{dim}\mathfrak{g}_{\alpha})\alpha$. Let $V_{\sigma'}$ be a representation space of $\sigma'$, $\mathscr{H}(Q',\sigma',\nu')$ be
the space of $V_{\sigma'}$-valued measurable functions $F$ such that $F(gm'a'n')=e^{-(\nu'+\rho_{\mathfrak{a}'})(\log a')}\sigma'(m')^{-1}F(g)$ for a.e. $g\in G$, $m'\in M'$, $a'\in A'$ and $n'\in N'$ with $||F||^2:=\int_{K}|F(k)|^2dk<\infty$. The induced representation $U(Q',\sigma',\nu')=\mathrm{ind}_{Q'}^{G}(\sigma'\otimes e^{\nu'}\otimes 1)$ is defined on $\mathscr{H}(Q',\sigma',\nu')$ by
\begin{align*}
U(Q',\sigma',\nu')(g)F(h)=F(g^{-1}h),\, \, g, h\in G.
\end{align*}
Then, the pair $(U(Q',\sigma',\nu'),\mathscr{H}(Q',\sigma',\nu'))$ is a continuous representation of $G$ (see \cite[Section 7.2]{knapp2}), which is called a principal series representation. It is known that $U(Q',\sigma',\nu')$ is unitary if $\nu'$ is purely imaginary on $\mathfrak{a}'$. Moreover, if $\sigma'$ has a real infinitesimal character and no root in $\Sigma(\mathfrak{g},\mathfrak{a}')$ is orthogonal to $\nu'$, then $U(Q',\sigma',\nu')$ is irreducible (see \cite[Theorem 14.93]{knapp2}).
\subsection{The Helgason-Fourier transforms}\label{helgfourier}
First, we discuss the Helgason-Fourier transforms. We assume that $G$ is a semisimple Lie group having a multiplicity-free subgroup $K$ and that $Q'$ is cuspidal. We take $A_{1}, N_{1}\subset G$ so that $A=A'A_{1}$ and $N=N'N_{1}$ as in \cite[Proposition 7.14]{knapp2}. It is known that $MA_{1}N_{1}$ is a minimal parabolic subgroup of $M'$ (see \cite[Section 8.10]{knapp2}).
\begin{prop}(\cite[Proposition 4.1]{ca2})
Let $\sigma'\in\hat{M'}$ be a discrete series representation (see \cite[Theorem 8.51]{knapp2}). Then, there exists $\tilde{\sigma}'\in\hat{M}$ and $\mu_{1}\in\mathfrak{a}_{1}^{*}$ such that $\sigma'$ is infinitesimally equivalent with a subrepresentation of $\mathrm{ind}_{MA_{1}N_{1}}^{M'}(\tilde{\sigma'}\otimes e^{\mu_{1}}\otimes 1)$.
\end{prop}
We write $g=\bold{k}(g)e^{H(g)}n(g)\in KAN$ for each $g\in G$. Let $\tau\in\hat{K}$,
\begin{align*}
C_{0}^{\infty}(G,\tau)=\{f:G\rightarrow V_{\tau}\, |\, f(gk)=\tau(k^{-1})f(g)\, (k\in K, g\in G), \, f\in C_{0}^{\infty}(G,V_{\tau}) \}
\end{align*}
and $F^{\lambda}:G\rightarrow\mathrm{End}_{\mathbb{C}}(V_{\tau})$ be a map given by $F^{\lambda}(g)=e^{\lambda(H(g))}\tau(\bold{k}(g))$ for each $g\in G$ and $\lambda\in\mathfrak{a}_{\mathbb{C}}^{*}$. We define the inner product on $\mathrm{Hom}_{M}(V_{\tau},V_{\tilde{\sigma}'})$ by $\langle S,T\rangle=\frac{1}{\mathrm{dim}V_{\tilde{\sigma}'}}\mathrm{tr}(S^{*}T)$, where $S^{*}$ is the adjoint operator of $S$. We take an orthonormal basis $\{T_{\xi}\}$ of $\mathrm{Hom}_{M}(V_{\tau},V_{\tilde{\sigma}'})$ and set $T_{\tilde{\sigma}'}=\sum_{\xi}T_{\xi}^{*}T_{\xi}$.
\begin{defi}(\cite[(3.18)]{ca2})
For $f\in C_{0}^{\infty}(G,\tau)$, we define the function $\tilde{f}:\mathfrak{a}^{*}_{\mathbb{C}}\times K\to V_{\tau}$ by
\begin{align*}
\tilde{f}(\lambda,k)=\int_{G}F^{i\bar{\lambda}-\rho_{\mathfrak{a}}}(g^{-1}k)^{*}f(g)dg.
\end{align*}
We say $\tilde{f}$ is the Helgason-Fourier transform of $f$.
\end{defi}
\begin{thm}\label{Plancherel_formula}(\cite[After Theorem 4.3]{ca2})
Let $f_{1}, f_{2}\in C_{0}^{\infty}(G,\tau)$. Then, there exists $c_{Q'}>0$ for each $Q'$ such that
\begin{flushleft}
$\langle f_{1},f_{2}\rangle_{L^2(G,dg)}$\\
$\displaystyle =\sum_{Q'}c_{Q'}\sum_{\sigma'}\frac{1}{\mathrm{dim}V_{\tilde{\sigma}'}}\int_{\mathfrak{a}'^{*}\times K}\langle T_{\tilde{\sigma}'}\tilde{f}_{1}(\nu'+i\mu_{1},k),T_{\tilde{\sigma}'}\tilde{f}_{2}(\nu'-i\mu_{1},k)\rangle_{V_{\tau}}p_{\sigma'}(\nu')d\nu'dk$,
\end{flushleft}
where $p_{\sigma'}(\nu')d\nu'=d\mu(U(Q',\sigma',i\nu'))$ and each $\sigma'$ is a discrete series of $M'$ such that $U(Q',\sigma',i\nu')|_{K}$ contains $\tau$.
\end{thm}
Next, we describe relations between the Helgason-Fourier transforms and the spherical transforms in a special case. Let $\Psi\in C_{0}^{\infty}(G,\tau,\tau)$, $v\in V_{\tau}$ and $\psi(g)=\Psi(g)v$ for $g\in G$. Then, $\psi\in C_{0}^{\infty}(G,\tau)$. In this situation, we get
\begin{align*}
\tilde{\psi}(\nu,k)=\sum_{\sigma\in\hat{M}, \sigma\subset\tau|_{M}}\ ^{t}\hat{\Psi}(U(Q,\sigma,i\nu))P_{\sigma}\tau(k^{-1})v,\, \, \nu\in\mathfrak{a}^{*}, k\in K
\end{align*}
(\cite[(5.28)]{ca2}). In relation to this, we show an important formula which does not appear in \cite{ca2}. This is a generalization of \cite[Lemma 1.4 in Chapter 3]{helg3}. Let $\Psi\in C_{0}^{\infty}(G,\tau,\tau)$ and $f\in C_{0}^{\infty}(G,\tau)$. We define the convolution between $\Psi$ and $f$ by
\begin{align*}
\Psi*f(g)=\int_{G}\Psi(g'^{-1}g)f(g')dg'.
\end{align*}
\begin{thm}\label{conv_mult}
Let $\Psi\in C_{0}^{\infty}(G,\tau,\tau)$ and $f\in C_{0}^{\infty}(G,\tau)$. Then,
\begin{align*}
\widetilde{\Psi*f}(\nu,k)=\Bigl( \sum_{\sigma\in\hat{M}, \sigma\subset\tau|_{M}}\ ^{t}\hat{\Psi}(U(Q,\sigma,i\nu))P_{\sigma}\Bigr) \tilde{f}(\nu,k),\, \, \nu\in\mathfrak{a}^{*}, k\in K.
\end{align*}
\end{thm}
\begin{proof}
Let $h, h'\in G$ and $l\in K$. We rewrite $hh'l$:
\begin{eqnarray}
hh'l&=&h\bold{k}(h'l)e^{H(h'l)}n(h'l) \nonumber \\
&=&\bold{k}(h\bold{k}(h'l))e^{H(h\bold{k}(h'l))}n(h\bold{k}(h'l))e^{H(h'l)}n(h'l) \nonumber \\
&=&\bold{k}(h\bold{k}(h'l))e^{H(h\bold{k}(h'l))+H(h'l)}e^{-H(h'l)}n(h\bold{k}(h'l))e^{H(h'l)}n(h'l). \nonumber
\end{eqnarray}
Since $A$ normalizes $N$, we get $\bold{k}(hh'l)=\bold{k}(h\bold{k}(h'l))$ and $H(hh'l)=H(h\bold{k}(h'l))+H(h'l)$.
Then,
\begin{eqnarray}
&&\widetilde{\Psi*f}(\nu,k)\nonumber \\
&=&\int_{G}e^{-(i\nu+\rho_{\mathfrak{a}})(H(g^{-1}k))}\tau(\bold{k}(g^{-1}k))^{-1}\Psi*f(g)dg \nonumber \\
&=&\int_{G}\int_{G}e^{-(i\nu+\rho_{\mathfrak{a}})(H(g^{-1}k))}\tau(\bold{k}(g^{-1}k))^{-1}\Psi(g'^{-1}g)f(g')dgdg'. \nonumber
\end{eqnarray}
We change the variable from $g$ to $g'g$:
\begin{eqnarray}
&&\widetilde{\Psi*f}(\nu,k)\nonumber \\
&=&\int_{G}\int_{G}e^{-(i\nu+\rho_{\mathfrak{a}})(H(g^{-1}g'^{-1}k))}\tau(\bold{k}(g^{-1}g'^{-1}k))^{-1}\Psi(g)f(g')dgdg'. \nonumber
\end{eqnarray}
Next, we use the formula above with $h=g^{-1}$, $h'=g'^{-1}$ and $l=k$:
\begin{eqnarray}
&&\widetilde{\Psi*f}(\nu,k)\nonumber \\
&=&\int_{G}\int_{G}e^{-(i\nu+\rho_{\mathfrak{a}})(H(g^{-1}\bold{k}(g'^{-1}k))+H(g'^{-1}k))}\tau(\bold{k}(g^{-1}\bold{k}(g'^{-1}k)))^{-1}\Psi(g)f(g')dgdg'. \nonumber
\end{eqnarray}
We change the variable from $g$ to $\bold{k}(g'^{-1}k)g$:
\begin{eqnarray}
&&\widetilde{\Psi*f}(\nu,k)\nonumber \\
&=&\int_{G}\int_{G}e^{-(i\nu+\rho_{\mathfrak{a}})(H(g^{-1})+H(g'^{-1}k))}\tau(\bold{k}(g^{-1}))^{-1}\Psi(\bold{k}(g'^{-1}k)g)f(g')dgdg' \nonumber \\
&=&\int_{G}\int_{G}e^{-(i\nu+\rho_{\mathfrak{a}})(H(g^{-1})+H(g'^{-1}k))}\tau(\bold{k}(g^{-1}))^{-1}\Psi(g)\tau(\bold{k}(g'^{-1}k))^{-1}f(g')dgdg' \nonumber \\
&=&\int_{G}e^{-(i\nu+\rho_{\mathfrak{a}})(H(g^{-1}))}\tau(\bold{k}(g^{-1}))^{-1}\Psi(g)dg\tilde{f}(\nu,k). \nonumber
\end{eqnarray}
Thus, we get
\begin{align*}
\widetilde{\Psi*f}(\nu,k)=\Bigl( \sum_{\sigma\in\hat{M}, \sigma\subset\tau|_{M}}\ ^{t}\hat{\Psi}(U(Q,\sigma,i\nu))P_{\sigma}\Bigr) \tilde{f}(\nu,k).
\end{align*}
\end{proof}
In general, let $Q'$ be any parabolic subgroup, $\Psi\in C_{0}^{\infty}(G,\tau,\tau)$, $v\in V_{\tau}$ and $\psi(g)=\Psi(g)v$ for $g\in G$. It is known that
\begin{align*}
T_{\tilde{\sigma}'}\tilde{\psi}(\nu'-i\mu_{1},k)=\ ^{t}\hat{\Psi}(U(Q',\sigma',\nu'))T_{\tilde{\sigma}'}\tau(k^{-1})v
\end{align*}
(see \cite[After (5.30)]{ca2}).
\section{The heat kernel on $SL(2,\mathbb{R})$}
\subsection{The heat equation on $SL(2,\mathbb{R})$}
First, we fix some notations and define a differential operator. Let $G=SL(2,\mathbb{R})$ and $K=SO(2)$. Since the map $\theta :G\ni g\mapsto (^{t}g)^{-1}\in G$ is a Cartan involution, the pair $(G,K)$ is a Riemannian symmetric pair. Let $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{R})$, $\mathfrak{k}=\mathfrak{so}(2)$ and $\mathfrak{p}=\{X\in\mathfrak{g}\, |\, d\theta(X)=-X\}$. Then $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$. Let $\langle, \rangle_{\mathfrak{g}}$ be an inner product on $\mathfrak{g}$ defined by $\langle X, Y \rangle_{\mathfrak{g}}=4\mathrm{tr}(^tXY)$. Then the inner product satisfies the conditions below:
\begin{flushleft}
$\langle X,Y \rangle_{\mathfrak{g}}=-4\mathrm{tr}(XY),\, \, X, Y\in\mathfrak{k}$,\\
$\langle X,Y \rangle_{\mathfrak{g}}=+4\mathrm{tr}(XY),\, \, X, Y\in\mathfrak{p}$,\\
$\langle X,Y \rangle_{\mathfrak{g}}=0,\, \, X\in\mathfrak{k},Y\in\mathfrak{p}$.
\end{flushleft}
These properties are related to the Killing form. Moreover, $\langle, \rangle_{\mathfrak{g}}$ is $\mathrm{Ad}(K)$-invariant. Let
\begin{align*}
X_{1}=\frac{1}{\sqrt{8}}
\begin{pmatrix}
0 & -1 \\
1 & 0
\end{pmatrix}
,\, Y_{1}=\frac{1}{\sqrt{8}}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
,\, Y_{2}=\frac{1}{\sqrt{8}}
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}.
\end{align*}
Then, $\{X_{1}\}$ is an orthonormal basis of $\mathfrak{k}$ and $\{Y_{1},Y_{2}\}$ is an orthonormal basis of $\mathfrak{p}$. Thus the differential operator which we should consider is
\begin{align*}
\Delta_{G}=\tilde{X_{1}}\circ\tilde{X_{1}}+\tilde{Y_{1}}\circ\tilde{Y_{1}}+\tilde{Y_{2}}\circ\tilde{Y_{2}}.
\end{align*}
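As a quick consistency check, the normalization of this basis follows directly from the definition of $\langle, \rangle_{\mathfrak{g}}$; for instance,
\begin{align*}
\langle X_{1},X_{1} \rangle_{\mathfrak{g}}=-4\mathrm{tr}(X_{1}X_{1})=-4\cdot\Bigl(-\frac{2}{8}\Bigr)=1,\qquad
\langle Y_{1},Y_{1} \rangle_{\mathfrak{g}}=4\mathrm{tr}(Y_{1}Y_{1})=4\cdot\frac{2}{8}=1,
\end{align*}
and similarly $\langle Y_{2},Y_{2} \rangle_{\mathfrak{g}}=1$, while $\mathrm{tr}(Y_{1}Y_{2})=0$.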
Next, we define a Haar measure $dg$ on $G$ according to \cite[Proposition 5.1 in Chapter 5]{mits}. Let
\begin{align*}
k_{\theta}=
\begin{pmatrix}
\mathrm{cos}\frac{\theta}{2} & \mathrm{sin}\frac{\theta}{2} \\
-\mathrm{sin}\frac{\theta}{2} & \mathrm{cos}\frac{\theta}{2}
\end{pmatrix}
,\, a_{s}=
\begin{pmatrix}
e^{\frac{s}{2}} & 0 \\
0 & e^{-\frac{s}{2}}
\end{pmatrix}
,\, n_{x}=
\begin{pmatrix}
1 & x \\
0 & 1
\end{pmatrix}
\end{align*}
for $\theta, s, x\in\mathbb{R}$ and
\begin{align*}
A=\{a_{s}\, |\, s\in\mathbb{R}\}, N=\{n_{x}\, |\, x\in\mathbb{R}\}.
\end{align*}
Then, we get the Iwasawa decomposition
\begin{align*}
G=KAN\cong K\times A\times N
\end{align*}
and the Cartan decomposition
\begin{align*}
G=KAK.
\end{align*}
We define $dg$ by
\begin{align*}
\int_{G}f(g)dg=\frac{1}{4\pi}\int_{(0,4\pi)\times\mathbb{R}\times\mathbb{R}}f(k_{\theta}a_{s}n_{x})e^{s}d\theta dsdx,\, \, f\in C_{0}(G).
\end{align*}
\subsection{Decomposition of $L^2(SL(2,\mathbb{R}),dg)$}
For $n\in\mathbb{Z}$, let $\tau_{n}:K\rightarrow GL(1,\mathbb{C})\cong \mathbb{C}^{\times}$ be the character defined by $\tau_{n}(k_{\theta})=e^{in\frac{\theta}{2}}$. Then, $\hat{K}=\{\tau_{n}\}_{n\in\mathbb{Z}}$. Note that each representation space $V_{\tau_{n}}$ is $\mathbb{C}$. According to Section \ref{decom}, we get the following decomposition:
\begin{align*}
L^2(G,dg)=\sum_{n\in\mathbb{Z}}L^2(G,\tau_{n}),
\end{align*}
where $L^2(G,\tau_{n})=\{f\in L^2(G)\, |\, f(gk)=\tau_{n}(k^{-1})f(g),\, \, g\in G,\, k\in K\}$. In this situation, $L^2(G,\tau_{n},\tau_{n})\subset L^2(G,\tau_{n})$. Let $P_{\tau_{n}}:L^2(G,dg)\rightarrow L^2(G,\tau_{n})$ be the orthogonal projection operator. Then we have
\begin{align*}
P_{\tau_{n}}f(g)=\int_{K}\tau_{n}(k)f(gk)dk
\end{align*}
since
\begin{align*}
\int_{K}\tau_{n}(k')f((gk)k')dk'=\int_{K}\tau_{n}(k^{-1}k')f(gk')dk'=\tau_{n}(k^{-1})\int_{K}\tau_{n}(k')f(gk')dk'
\end{align*}
for $g\in G,\, k\in K$.
\subsection{Irreducible unitary representations of $SL(2,\mathbb{R})$}
First, we describe important irreducible unitary representations according to \cite[Chapter 5]{mits}. For $\varepsilon=0,1$ and $\nu\in\mathbb{R}$, let $U_{\varepsilon, \nu}:G\rightarrow U(L^2(\mathbb{R},\frac{1}{\pi}dx))$ be the unitary representation of $G$ given by
\begin{align*}
U_{\varepsilon, \nu}(g)f(x)=(\mathrm{sgn}(cx+d))^{\varepsilon}|cx+d|^{i2\nu-1}f\Bigl(\frac{ax+b}{cx+d}\Bigr),
\end{align*}
where $g^{-1}=
\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}
\in G$. It is known that $U_{\varepsilon, -\nu}\cong U_{\varepsilon, \nu}$. The set $\{U_{\varepsilon, \nu}\}_{\varepsilon=0,1, \nu>0}$ is called the continuous series. We denote by $\mathbb{H}$ the upper half plane in $\mathbb{C}$. For an integer $m\geq 2$, let
\begin{flushleft}
$\mathscr{H}_{m}^{+}=\{f:\mathbb{H}\rightarrow\mathbb{C}\, |\, f:$\, holomorphic$, ||f||^2=\int_{\mathbb{H}}|f(z)|^2y^{m-2}dxdy<\infty\}$,\\
$\mathscr{H}_{m}^{-}=\{f:\mathbb{H}\rightarrow\mathbb{C}\, |\, f:$\, anti-holomorphic$, ||f||^2=\int_{\mathbb{H}}|f(z)|^2y^{m-2}dxdy<\infty\}$.
\end{flushleft}
These are Hilbert spaces. Let $U_{m}^{+}:G\rightarrow U(\mathscr{H}_{m}^{+})$ be the unitary representation of $G$ given by
\begin{align*}
U_{m}^{+}(g)f(z)=(cz+d)^{-m}f(\frac{az+b}{cz+d})
\end{align*}
and $U_{m}^{-}:G\rightarrow U(\mathscr{H}_{m}^{-})$ be the unitary representation of $G$ given by
\begin{align*}
U_{m}^{-}(g)f(z)=(c\bar{z}+d)^{-m}f(\frac{az+b}{cz+d}),
\end{align*}
where $g^{-1}=
\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}
\in G$. The set $\{U_{m}^{+}\}_{m\geq 2}\cup\{U_{m}^{-}\}_{m\geq 2}$ is called the regular discrete series. Let $\hat{G}_{p}=\{U_{\varepsilon, \nu}\}_{\varepsilon=0,1, \nu>0}\cup\{U_{m}^{+}\}_{m\geq 2}\cup\{U_{m}^{-}\}_{m\geq 2}$. This set is called the regular principal series.
Next, we discuss characters according to \cite[Section 5.6]{mits}. Let $\mathfrak{g}_{\mathbb{C}}$ be the complexification of $\mathfrak{g}$, $U(\mathfrak{g}_{\mathbb{C}})$ be the enveloping algebra of $\mathfrak{g}_{\mathbb{C}}$, $\mathfrak{Z}\subset U(\mathfrak{g}_{\mathbb{C}})$ be the center of $U(\mathfrak{g}_{\mathbb{C}})$ and $U\in\hat{G}$. For $u\in \mathfrak{Z}$, we see that $dU(u)$ is a scalar operator by Schur's lemma, so that we can write $dU(u)=\chi_{U}(u)Id_{H_{U}}$ with $\chi_{U}(u)\in\mathbb{C}$. This defines a homomorphism $\mathfrak{Z}\ni u\mapsto\chi_{U}(u)\in\mathbb{C}$, called the character of $U$. Since $U(\mathfrak{g}_{\mathbb{C}})$ is isomorphic to the space $\mathbb{D}_{L}(G)$ of left invariant differential operators (see \cite[Section 2.5]{mits}), we can identify $\mathfrak{Z}$ with the center of $\mathbb{D}_{L}(G)$. Let
\begin{align*}
C_{G}=-\tilde{X_{1}}\circ\tilde{X_{1}}+\tilde{Y_{1}}\circ\tilde{Y_{1}}+\tilde{Y_{2}}\circ\tilde{Y_{2}},
\end{align*}
then $C_{G}\in\mathfrak{Z}$ and $\Delta_{G}=2\tilde{X_{1}}\circ\tilde{X_{1}}+C_{G}$. The element $C_{G}$ is called the Casimir operator of $G$. We can describe the values of the characters at the Casimir element $C_{G}$.
\begin{thm}\label{character}(cf. \cite[Theorem 6.4 in Chapter 5]{mits})
We have
\begin{eqnarray}
\chi_{U_{\varepsilon,\nu}}(C_{G})&=&-\frac{1}{2}(\nu^2+\frac{1}{4}), \nonumber \\
\chi_{U_{m}^{+}}(C_{G})&=&\frac{m(m-2)}{8}, \nonumber \\
\chi_{U_{m}^{-}}(C_{G})&=&\frac{m(m-2)}{8}. \nonumber
\end{eqnarray}
\end{thm}
Finally, we describe the weights and $\hat{G}(\tau_{n})_{p}=\hat{G}(\tau_{n})\cap\hat{G}_{p}$ according to \cite[Section 5.6]{mits}. For $U\in\hat{G}$, let $\Lambda_{U}=\{n\in\mathbb{Z}\, |\, m(\tau_{n},U)\neq 0\}$, which is called the set of weights of $U$. For $n\in\Lambda_{U}$, let $e_{\tau_{n}}^{U}\in H_{U}$ be a normalized weight vector of weight $n$. Now, we can give the sets of weights.
\begin{thm}(cf. \cite[Theorem 6.4 in Chapter 5]{mits})\label{weight}
We have
\begin{eqnarray}
\Lambda_{U_{\varepsilon,\nu}}&=&\{n\in\mathbb{Z}\, |\, n\equiv\varepsilon\, (\mathrm{mod} 2)\}, \nonumber \\
\Lambda_{U_{m}^{+}}&=&\{n\in\mathbb{Z}\, |\, n\leq -m,\, n\equiv m\, (\mathrm{mod} 2)\}, \nonumber \\
\Lambda_{U_{m}^{-}}&=&\{n\in\mathbb{Z}\, |\, n\geq m,\, n\equiv m\, (\mathrm{mod} 2)\}. \nonumber
\end{eqnarray}
\end{thm}
Using Theorem \ref{weight}, we can describe $\hat{G}(\tau_{n})_{p}$.
\begin{cor}\label{intedom}
If $n \geq 0$, we have
\begin{align*}
\hat{G}(\tau_{n})_{p}=\{U_{\varepsilon,\nu}\, |\, \varepsilon\equiv n\, (\mathrm{mod}\, 2),\, \nu>0\}\cup\{U_{m}^{-}\, |\, 2\leq m\leq |n|,\, m\equiv n\, (\mathrm{mod}\, 2)\}.
\end{align*}
If $n < 0$, we have
\begin{align*}
\hat{G}(\tau_{n})_{p}=\{U_{\varepsilon,\nu}\, |\, \varepsilon\equiv n\, (\mathrm{mod}\, 2),\, \nu>0\}\cup\{U_{m}^{+}\, |\, 2\leq m\leq |n|,\, m\equiv n\, (\mathrm{mod}\, 2)\}.
\end{align*}
\end{cor}
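For example, for the trivial $K$-type $n=0$, the condition $2\leq m\leq |n|$ cannot be satisfied, so the discrete series part is empty and
\begin{align*}
\hat{G}(\tau_{0})_{p}=\{U_{0,\nu}\, |\, \nu>0\};
\end{align*}
that is, only the $\varepsilon=0$ continuous series enters the analysis of right $K$-invariant functions.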
\subsection{The spherical functions on $SL(2,\mathbb{R})$}\label{SFonSL2R}
First, we discuss derivatives of the function $\Phi_{\tau_{n}}^{U}(g^{-1})$. For $U\in\hat{G}$, let $\langle,\rangle_{H_{U}}$ be the inner product on $H_{U}$. We get $\Phi_{\tau_{n}}^{U}(g)=\langle U(g)e_{\tau_{n}}^{U},e_{\tau_{n}}^{U}\rangle_{H_{U}}$. On the other hand,
\begin{align*}
\Phi_{\tau_{n}}^{U}(g^{-1})=\langle U(g^{-1})e_{\tau_{n}}^{U},e_{\tau_{n}}^{U}\rangle_{H_{U}}=\langle e_{\tau_{n}}^{U},U(g)e_{\tau_{n}}^{U}\rangle_{H_{U}}=\overline{\Phi_{\tau_{n}}^{U}(g)}.
\end{align*}
We put $\eta:G\ni g\mapsto g^{-1}\in G$ and $\tilde{\Phi}_{\tau_{n}}^{U}=\Phi_{\tau_{n}}^{U}\circ\eta$. Let $X\in\mathfrak{g}$ and $\tilde{X}$ be the left invariant vector field on $G$ induced by $X$. Then,
\begin{eqnarray}
\tilde{X}\tilde{\Phi}_{\tau_{n}}^{U}(g)&=&\frac{d}{dt}\langle e_{\tau_{n}}^{U},U(ge^{tX})e_{\tau_{n}}^{U}\rangle_{H_{U}}|_{t=0} \nonumber \\
&=&\langle e_{\tau_{n}}^{U},U(g)dU(X)e_{\tau_{n}}^{U}\rangle_{H_{U}}. \nonumber
\end{eqnarray}
To compute $\tilde{X}\tilde{\Phi}_{\tau_{n}}^{U}(g)$ further, we need an expression for $dU(X)e_{\tau_{n}}^{U}$, which can be given as follows.
\begin{thm}\label{derivative}(cf. \cite[Section 6.5, 6.6]{lang})
We have
\begin{eqnarray}
&dU_{\varepsilon, \nu}(X_{1})e_{\tau_{n}}^{U_{\varepsilon, \nu}}=&-\frac{1}{\sqrt{8}}ine_{\tau_{n}}^{U_{\varepsilon, \nu}}, \nonumber \\
&dU_{\varepsilon, \nu}(Y_{1})e_{\tau_{n}}^{U_{\varepsilon, \nu}}=&\frac{1}{2\sqrt{8}}\biggl( (i2\nu +1-n)e_{\tau_{n-2}}^{U_{\varepsilon, \nu}} + (i2\nu +1+n)e_{\tau_{n+2}}^{U_{\varepsilon, \nu}} \biggr) , \nonumber \\
&dU_{\varepsilon, \nu}(Y_{2})e_{\tau_{n}}^{U_{\varepsilon, \nu}}=&\frac{i}{2\sqrt{8}}\biggl( (i2\nu +1-n)e_{\tau_{n-2}}^{U_{\varepsilon, \nu}} - (i2\nu +1+n)e_{\tau_{n+2}}^{U_{\varepsilon, \nu}} \biggr) , \nonumber \\
&dU_{m}^{\pm}(X_{1})e_{\tau_{n}}^{U_{m}^{\pm}}=&-\frac{1}{\sqrt{8}}ine_{\tau_{n}}^{U_{m}^{\pm}}, \nonumber \\
&dU_{m}^{\pm}(Y_{1})e_{\tau_{n}}^{U_{m}^{\pm}}=&\frac{1}{2\sqrt{8}}\biggl( (m-n)e_{\tau_{n-2}}^{U_{m}^{\pm}} + (m+n)e_{\tau_{n+2}}^{U_{m}^{\pm}} \biggr), \nonumber \\
&dU_{m}^{\pm}(Y_{2})e_{\tau_{n}}^{U_{m}^{\pm}}=&\frac{i}{2\sqrt{8}}\biggl( (m-n)e_{\tau_{n-2}}^{U_{m}^{\pm}} - (m+n)e_{\tau_{n+2}}^{U_{m}^{\pm}} \biggr) . \nonumber
\end{eqnarray}
\end{thm}
By Theorem \ref{character} and Theorem \ref{derivative}, we obtain $\tilde{X_{1}}\circ\tilde{X_{1}}\tilde{\Phi}_{\tau_{n}}^{U}(g)=-\frac{1}{8}n^2\tilde{\Phi}_{\tau_{n}}^{U}(g)$ and $C_{G}\tilde{\Phi}_{\tau_{n}}^{U}(g)=\chi_{U}(C_{G})\tilde{\Phi}_{\tau_{n}}^{U}(g)$, so that
\begin{align*}
\Delta_{G}\tilde{\Phi}_{\tau_{n}}^{U}(g)=\lambda_{\tau_{n}}^{U}\tilde{\Phi}_{\tau_{n}}^{U}(g),\, \, \mathrm{where}\, \, \lambda_{\tau_{n}}^{U}=-\frac{1}{4}n^2+\chi_{U}(C_{G})<0.\, \, \, (2)
\end{align*}
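The negativity of $\lambda_{\tau_{n}}^{U}$ can be verified case by case from Theorem \ref{character}; for the discrete series one also uses Theorem \ref{weight}, which forces $|n|\geq m$:
\begin{align*}
\lambda_{\tau_{n}}^{U_{\varepsilon,\nu}}=-\frac{1}{4}n^2-\frac{1}{2}\Bigl(\nu^2+\frac{1}{4}\Bigr)<0,\qquad
\lambda_{\tau_{n}}^{U_{m}^{\pm}}=-\frac{1}{4}n^2+\frac{m(m-2)}{8}\leq-\frac{1}{4}m^2+\frac{m(m-2)}{8}=-\frac{m(m+2)}{8}<0.
\end{align*}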
Next, we give explicit expressions for the function $\tilde{\Phi}_{\tau_{n}}^{U}(g)$ and its derivatives. To do so, it is enough to know explicit expressions for the functions $\langle e_{\tau_{n_{1}}}^{U},U(g)e_{\tau_{n_{2}}}^{U}\rangle_{H_{U}}$. By using the Cartan decomposition $G=KAK$, we have
\begin{eqnarray}
\langle e_{\tau_{n_{1}}}^{U},U(k_{\theta_{1}}a_{s}k_{\theta_{2}})e_{\tau_{n_{2}}}^{U}\rangle_{H_{U}}&=&\langle U(k_{\theta_{1}}^{-1})e_{\tau_{n_{1}}}^{U},U(a_{s})U(k_{\theta_{2}})e_{\tau_{n_{2}}}^{U}\rangle_{H_{U}} \nonumber \\
&=&e^{-i\frac{n_{1}\theta_{1}+n_{2}\theta_{2}}{2}}\langle e_{\tau_{n_{1}}}^{U},U(a_{s})e_{\tau_{n_{2}}}^{U}\rangle_{H_{U}} \nonumber
\end{eqnarray}
for any $\theta_{1}, \theta_{2}\in\mathbb{R}, s>0$. Thus the function $\langle e_{\tau_{n_{1}}}^{U},U(g)e_{\tau_{n_{2}}}^{U}\rangle_{H_{U}}$ is determined by the function $\langle e_{\tau_{n_{1}}}^{U},U(a_{s})e_{\tau_{n_{2}}}^{U}\rangle_{H_{U}}$. The explicit expression of $\langle e_{\tau_{n_{1}}}^{U},U(a_{s})e_{\tau_{n_{2}}}^{U}\rangle_{H_{U}}$ can be given in two ways; we present both.
\begin{thm}\label{hypergeo}(cf. \cite[Proposition 7.16 in Chapter 5]{mits})
Let $\nu$ be a complex number such that $\chi_{U}(-2C_{G})=\nu^2+\frac{1}{4}$, $a=i\nu+\frac{1}{2}+\frac{|n_{1}-n_{2}|}{4}+\frac{n_{1}+n_{2}}{4}$ and $b=i\nu+\frac{1}{2}+\frac{|n_{1}-n_{2}|}{4}-\frac{n_{1}+n_{2}}{4}$. Then
\begin{align*}
\langle e_{\tau_{n_{1}}}^{U},U(a_{s})e_{\tau_{n_{2}}}^{U}\rangle_{H_{U}}=\Bigl( \mathrm{tanh}\frac{s}{2} \Bigr) ^{\frac{|n_{1}-n_{2}|}{2}}\Bigl( \mathrm{cosh}\frac{s}{2}\Bigr)^{-i2\nu-1}F(a,b,1,(\mathrm{tanh}\frac{s}{2})^2)
\end{align*}
where $F$ is the hypergeometric function.
\end{thm}
\begin{thm}\label{mtxelementSL2R}(cf. \cite[(2.15)]{koo2})
The function $\langle U(a_{s})e_{\tau_{n_{1}}}^{U},e_{\tau_{n_{2}}}^{U}\rangle_{H_{U}}$ is given as follows.
\begin{eqnarray}
&\langle U_{\varepsilon,\nu}(a_{s})e_{\tau_{n_{1}}}^{U_{\varepsilon,\nu}},e_{\tau_{n_{2}}}^{U_{\varepsilon,\nu}}\rangle_{H_{U_{\varepsilon,\nu}}}=\Bigl( \mathrm{cosh}\frac{s}{2}\Bigr)^{-i2\nu-1}\nonumber \\
&\frac{1}{4\pi}\int_{0}^{4\pi}\Bigl( 1-\mathrm{tanh}\frac{s}{2}\, e^{i\psi} \Bigr)^{-i\nu+\frac{n_1}{2}-\frac{1}{2}}\Bigl(1- \mathrm{tanh}\frac{s}{2}\, e^{-i\psi} \Bigr)^{-i\nu-\frac{n_1}{2}-\frac{1}{2}}e^{i\frac{n_2-n_1}{2}\psi}d\psi. \nonumber \\
&\langle U_{m}^{\pm}(a_{s})e_{\tau_{n_{1}}}^{U_{m}^{\pm}},e_{\tau_{n_{2}}}^{U_{m}^{\pm}}\rangle_{H_{U_{m}^{\pm}}}=\Bigl( \mathrm{cosh}\frac{s}{2}\Bigr)^{-m}\nonumber \\
&\frac{1}{4\pi}\int_{0}^{4\pi}\Bigl( 1-\mathrm{tanh}\frac{s}{2}\, e^{i\psi} \Bigr)^{\frac{-m+n_1}{2}}\Bigl(1- \mathrm{tanh}\frac{s}{2}\, e^{-i\psi} \Bigr)^{\frac{-m-n_1}{2}}e^{i\frac{n_2-n_1}{2}\psi}d\psi. \nonumber
\end{eqnarray}
\end{thm}
Thus, we can calculate $\tilde{\Phi}_{\tau_{n}}^{U}(g)=\Phi_{\tau_{n}}^{U}(g^{-1})$ and its derivatives explicitly. Moreover, we discuss another explicit form of the spherical function $\Phi_{\tau_{n}}^{U}$. Let $\alpha, \beta, \lambda\in\mathbb{C}$ $(\alpha\neq -1, -2, \dots)$. Consider the following differential equation on $\mathbb{R}$:
\begin{align*}
\frac{d^2\phi}{dt^2}+\Bigl( (2\alpha+1)\frac{1}{\mathrm{tanh}t}+(2\beta+1)\mathrm{tanh}t \Bigr)\frac{d\phi}{dt}+\Bigl( \lambda^2 +(\alpha+\beta+1)^2 \Bigr)\phi =0.
\end{align*}
When we assume that $\phi(0)=1$ and that $\phi$ is even, the unique solution is called a Jacobi function and is denoted by $\phi_{\lambda}^{(\alpha,\beta)}$. We can write $\Phi_{\tau_{n}}^{U}$ by using Jacobi functions (see \cite[Theorem 2.1]{koo2}):
\begin{eqnarray}
\Phi_{\tau_{n}}^{U_{\varepsilon,\nu}}(a_{s})&=& (\mathrm{cosh}s)^{n}\phi_{2\nu}^{(0,n)}(\frac{s}{2}), \nonumber \\
\Phi_{\tau_{n}}^{U_{m}^{\pm}}(a_{s})&=&(\mathrm{cosh}s)^{n}\phi_{i(m-1)}^{(0,n)}(\frac{s}{2}).\nonumber
\end{eqnarray}
These are expressions of $\Phi_{\tau_{n}}^{U}$ using Jacobi functions.
Finally, we discuss the Plancherel measure $d\mu$ on $\hat{G}$. It is known that $d\mu$ is described by the following formulas (see \cite[Section 8.4]{lang}, \cite[Section 7.2.1]{wa2}):
\begin{eqnarray}
d\mu(U_{0,\nu})&=&\frac{1}{2\pi}\nu\, \mathrm{tanh}\pi\nu \, d\nu, \nonumber \\
d\mu(U_{1,\nu})&=&\frac{1}{2\pi}\nu\, \frac{1}{\mathrm{tanh}\pi\nu} \, d\nu, \nonumber \\
d\mu(U_{m}^{\pm})&=&\frac{m-1}{4\pi}, \nonumber \\
d\mu(\hat{G}\setminus\hat{G}_{p})&=&0. \, \, \, (3) \nonumber
\end{eqnarray}
\begin{rem}
The Plancherel measure $d\mu$ is determined uniquely by the Haar measure $dg$ (cf. \cite[Theorem 7.2.1.1]{wa2}). Of course, an expression of $d\mu$ depends on a parametrization of $\hat{G}$. We describe $d\mu$ following \cite[(8.3), (8.4) in Chapter 5]{mits}.
\end{rem}
\subsection{Principal series representations of $SL(2,\mathbb{R})$}
We apply the argument of Section \ref{Principle series} to $SL(2,\mathbb{R})$. We can show $M=\{\pm I_{2}\}$. Let $Q=MAN$. Since each parabolic subgroup $Q'$ is a block upper triangular subgroup (see \cite[Section 5.5]{knapp2}), we get $Q'=Q$ or $G$. When $Q'=Q$, we have $M'=M$, $A'=A$ and $N'=N$. When $Q'=G$, we have $M'=G$ and both $A'$ and $N'$ are trivial. In addition, both $Q$ and $G$
are cuspidal. Next, we describe the induced representations when $Q'=Q$. We denote the set of irreducible unitary representations of $M$ by $\{\sigma_{0},\sigma_{1}\}$, where $\sigma_{0}$ is the trivial representation and $\sigma_{1}$ is defined by $\sigma_{1}(\pm I_{2})=\pm 1$.
\begin{thm}(cf. \cite[Proposition 2.10 in Chapter 5]{mits})
For each $\varepsilon=0,1$ and $\nu>0$, $U_{\varepsilon,\nu}\cong U(Q,\sigma_{\varepsilon},i\nu)$.
\end{thm}
\subsection{The Helgason-Fourier transforms on $SL(2,\mathbb{R})$}
We rewrite Theorem \ref{Plancherel_formula} when $G=SL(2,\mathbb{R})$. We fix $n\in\mathbb{Z}$. First, we consider the case $Q'=Q$. Since $A'=A$, the group $A_{1}$ is trivial, and hence every $\mu_{1}$ is trivial. Since $M'=M$, we have $\sigma'=\tilde{\sigma}'$. In addition, there is at most one $\sigma$ such that $m(\tau_{n},U(Q,\sigma,\nu))\neq0$, and $T_{\tilde{\sigma}}$ is the identity map. We denote this $\sigma$ by $\sigma_{\varepsilon}$. Next, we consider the case $Q'=G$. Since $A'$ is trivial, every $\nu'$ is trivial, and $T_{\tilde{\sigma}'}$ is also the identity map. In addition, the sum $\sum_{\sigma'}$ is a finite sum (recall Corollary \ref{intedom}). In conclusion, we get the following formula. Let $f_{1},f_{2}\in C_{0}^{\infty}(G,\tau)$. Then,
\begin{flushleft}
$\langle f_{1},f_{2}\rangle_{L^2(G,dg)}$\\
$\displaystyle =c_{Q}\int_{\mathfrak{a}^{*}\times K}\langle \tilde{f}_{1}(\nu,k),\tilde{f}_{2}(\nu,k)\rangle_{V_{\tau_{n}}}p_{\sigma_{\varepsilon}}(\nu)d\nu dk$\\
$\displaystyle +c_{G}\sum_{\sigma'_{m}}\int_{K}\langle \tilde{f}_{1}(i\mu_{1},k),\tilde{f}_{2}(-i\mu_{1},k)\rangle_{V_{\tau_{n}}}p_{\sigma'_{m}}dk$,
\end{flushleft}
where $p_{\sigma_{\varepsilon}}(\nu)d\nu=d\mu(U_{\varepsilon,\nu})$, $p_{\sigma'_{m}}=d\mu(U_{m}^{\pm})$ and each $\sigma'_{m}$ is a discrete series representation of $G$ such that $\sigma'_{m}\cong U_{m}^{\pm}\in \hat{G}(\tau_{n})_{p}$. We call this formula the Plancherel formula for $SL(2,\mathbb{R})$.
\subsection{Calculation of the heat kernel on $SL(2,\mathbb{R})$}
Let $\rho_{G}(t,g)$ be the heat kernel on $G=SL(2,\mathbb{R})$. Then, $\rho_{G}=\sum_{n\in\mathbb{Z}}P_{\tau_{n}}\rho_{G}$. To calculate $\rho_{G}$, it is enough to calculate $P_{\tau_{n}}\rho_{G}$ for each $n\in\mathbb{Z}$. We fix $n\in\mathbb{Z}$. Let $\rho_{t,n}(g)=\int_{\hat{G}(\tau_{n})}\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}d\mu(U)$. If $\rho_{t,n}$ satisfies the conditions below:
\begin{eqnarray}
&\mathrm{(i)}& \frac{\partial}{\partial t}(\rho_{t,n} * f)=\Delta_{G} (\rho_{t,n} * f),\nonumber \\
&\mathrm{(ii)}& \parallel \rho_{t,n} * f - f \parallel _{L^2(G, dg)} \rightarrow 0 \, \, (t \rightarrow +0),\, \, f \in C_{0}^{\infty}(G)\cap L^2(G,\tau_{n}), \nonumber
\end{eqnarray}
then $\rho_{t,n}(g)=P_{\tau_{n}}\rho_{G}(t,g)$ by the uniqueness of the heat kernel. Thus, to calculate $P_{\tau_{n}}\rho_{G}$, it is enough to verify the conditions above. First, we show $\mathrm{(i)}$.
\begin{lem}\label{each-conti}
The function $\rho_{t,n}(g)$ is continuous on $(0,\infty)\times G$.
\end{lem}
\begin{proof}
For $T>0$, we prove that the function $e^{T\lambda_{\tau_{n}}^{U}}$ is a dominant function of the function $\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}$ on $[T,\infty)\times G$. By the Cauchy--Schwarz inequality, we have $|\tilde{\Phi}_{\tau_{n}}^{U}(g)|\leq 1$. Thus $|\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}|\leq e^{T\lambda_{\tau_{n}}^{U}}$ since $\lambda_{\tau_{n}}^{U}<0$. We calculate an upper bound of $\parallel e^{T\lambda_{\tau_{n}}^{U}} \parallel_{L^1(\hat{G}(\tau_{n}),d\mu)}$. We have
\begin{align*}
\parallel e^{T\lambda_{\tau_{n}}^{U}} \parallel_{L^1(\hat{G}(\tau_{n}),d\mu)}=\int_{\hat{G}(\tau_{n})}e^{T\lambda_{\tau_{n}}^{U}}d\mu(U)=\int_{\hat{G}(\tau_{n})_{p}\cap\{U_{\varepsilon,\nu}\}}+\int_{\hat{G}(\tau_{n})_{p}\cap\{U_{m}^{\pm}\}}.\, \, \mathrm{(4)}
\end{align*}
Here the splitting uses $d\mu(\hat{G}\setminus\hat{G}_{p})=0$ from $(3)$. The first term in (4) is evaluated as
\begin{eqnarray}
\int_{\hat{G}(\tau_{n})_{p}\cap\{U_{\varepsilon,\nu}\}}&\leq&\int_{0}^{\infty}e^{T(-\frac{1}{4}n^2-\frac{1}{2}(\nu^2+\frac{1}{4}))}\frac{1}{2\pi}\nu\, \mathrm{max}\{\mathrm{tanh}\,\pi\nu,\frac{1}{\mathrm{tanh}\,\pi\nu}\}d\nu \nonumber \\
&\leq&\frac{e^{-\frac{1}{8}T-\frac{1}{4}Tn^2}}{\pi}\biggl(\int_{0}^{1}d\nu+\int_{1}^{\infty}e^{-\frac{1}{2}T\nu^2}\nu d\nu \biggr)\nonumber \\
&=&\frac{e^{-\frac{1}{8}T}}{\pi}\biggl(1+\frac{e^{-\frac{1}{2}T}}{T}\biggr)e^{-\frac{1}{4}Tn^2}. \nonumber
\end{eqnarray}
On the other hand, the second term in (4) is estimated as
\begin{eqnarray}
\int_{\hat{G}(\tau_{n})_{p}\cap\{U_{m}^{\pm}\}}&=&\sum_{2\leq m\leq |n|, m\equiv n (\mathrm{mod} 2)}e^{T(-\frac{1}{4}n^2+\frac{m(m-2)}{8})}\frac{m-1}{4\pi} \nonumber \\
&\leq&e^{T(-\frac{1}{4}n^2+\frac{1}{8}n^2)}\frac{|n|}{4\pi}\sum_{2\leq m\leq |n|, m\equiv n (\mathrm{mod} 2)}1 \nonumber \\
&\leq&\frac{1}{8\pi}e^{-\frac{1}{8}Tn^2}n^2. \nonumber
\end{eqnarray}
Therefore the function $e^{T\lambda_{\tau_{n}}^{U}}$ is a dominant function.
\end{proof}
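We also record that the bounds obtained in the proof above are summable in $n$, which guarantees the convergence of the sum over $n\in\mathbb{Z}$ appearing in Theorem \ref{MyThm1} below:
\begin{align*}
\sum_{n\in\mathbb{Z}}\parallel e^{T\lambda_{\tau_{n}}^{U}} \parallel_{L^1(\hat{G}(\tau_{n}),d\mu)}\leq\frac{e^{-\frac{1}{8}T}}{\pi}\biggl(1+\frac{e^{-\frac{1}{2}T}}{T}\biggr)\sum_{n\in\mathbb{Z}}e^{-\frac{1}{4}Tn^2}+\frac{1}{8\pi}\sum_{n\in\mathbb{Z}}n^2e^{-\frac{1}{8}Tn^2}<\infty.
\end{align*}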
\begin{lem}\label{L2}
For $t>0$, we have $\rho_{t,n}\in L^2(G,dg)$.
\end{lem}
\begin{proof}
By Theorem \ref{plancherel}, we have
\begin{align*}
\parallel \rho_{t,n} \parallel_{L^2(G,dg)}=\parallel e^{t\lambda_{\tau_{n}}^{U}} \parallel_{L^2(\hat{G}(\tau_{n}),d\mu)}.
\end{align*}
In the same way as in the proof of Lemma \ref{each-conti}, we obtain
\begin{align*}
\parallel e^{t\lambda_{\tau_{n}}^{U}} \parallel_{L^2(\hat{G}(\tau_{n}),d\mu)}^2\leq\frac{e^{-\frac{1}{4}t}}{\pi}\biggl(1+\frac{e^{-t}}{2t}\biggr)e^{-\frac{1}{2}tn^2}+\frac{1}{8\pi}e^{-\frac{1}{4}tn^2}n^2.
\end{align*}
\end{proof}
\begin{lem}
For $X\in\mathfrak{g}$, we have
\begin{eqnarray}
\frac{\partial \rho_{t,n}}{\partial t}(g)&=&\int_{\hat{G}(\tau_{n})}\lambda_{\tau_{n}}^{U}\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}d\mu(U), \nonumber \\
\tilde{X}\rho_{t,n}(g)&=&\int_{\hat{G}(\tau_{n})}\tilde{X}\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}d\mu(U), \nonumber \\
\tilde{X}\circ\tilde{X}\rho_{t,n}(g)&=&\int_{\hat{G}(\tau_{n})}\tilde{X}\circ\tilde{X}\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}d\mu(U). \nonumber
\end{eqnarray}
\end{lem}
\begin{proof}
We see that $\rho_{t,n}(g)$ is smooth in $t>0$ and $g\in G$ by interchanging differentiation and integration. To illustrate the argument, we prove that $\rho_{t,n}$ can be differentiated by $\tilde{Y}_{1}$. For each $T>0$, we find a dominant function of $\tilde{Y}_{1}\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}$ on $[T,\infty)\times G$. By Theorem \ref{derivative}, we compute
\begin{eqnarray}
\tilde{Y}_{1}\tilde{\Phi}_{\tau_{n}}^{U_{\varepsilon, \nu}}(g)=\frac{1}{2\sqrt{8}}\biggl(&(-i2\nu +1-n)\langle e_{\tau_{n}}^{U_{\varepsilon, \nu}},U_{\varepsilon,\nu}(g)e_{\tau_{n-2}}^{U_{\varepsilon, \nu}}\rangle_{H_{U_{\varepsilon,\nu}}}&\nonumber \\
&+ (-i2\nu +1+n)\langle e_{\tau_{n}}^{U_{\varepsilon, \nu}},U_{\varepsilon,\nu}(g)e_{\tau_{n+2}}^{U_{\varepsilon, \nu}}\rangle_{H_{U_{\varepsilon,\nu}}}& \biggr) \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\tilde{Y}_{1}\tilde{\Phi}_{\tau_{n}}^{U_{m}^{\pm}}(g)=\frac{1}{2\sqrt{8}}\biggl( &(m-n)\langle e_{\tau_{n}}^{U_{m}^{\pm}},U_{m}^{\pm}(g)e_{\tau_{n-2}}^{U_{m}^{\pm}}\rangle_{H_{U_{m}^{\pm}}}&\nonumber \\
&+ (m+n)\langle e_{\tau_{n}}^{U_{m}^{\pm}},U_{m}^{\pm}(g)e_{\tau_{n+2}}^{U_{m}^{\pm}}\rangle_{H_{U_{m}^{\pm}}}& \biggr). \nonumber
\end{eqnarray}
By the Cauchy--Schwarz inequality, we have
\begin{align*}
|\langle e_{\tau_{n}}^{U},U(g)e_{\tau_{n-2}}^{U}\rangle_{H_{U}}|\leq 1\, \, \mbox{and}\,\, |\langle e_{\tau_{n}}^{U},U(g)e_{\tau_{n+2}}^{U}\rangle_{H_{U}}|\leq 1.
\end{align*}
Let $\varphi_{n}$ be a function on $\hat{G}(\tau_{n})$ defined by
\begin{eqnarray}
\varphi_{n}(U)=
\begin{cases}
|-i2\nu +1-n|+ |-i2\nu +1+n| & (U=U_{\varepsilon,\nu}) \\
|m-n|+|m+n| & (U=U_{m}^{\pm}).
\end{cases} \nonumber
\end{eqnarray}
Then, we can show that $\varphi_{n}(U)e^{T\lambda_{\tau_{n}}^{U}}$ is a dominant function of $\tilde{Y}_{1}\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}$ by a method similar to that used in the proof of Lemma \ref{each-conti}. Thus, $\rho_{t,n}$ can be differentiated by $\tilde{Y}_{1}$ and
\begin{align*}
\tilde{Y}_{1}\rho_{t,n}(g)=\int_{\hat{G}(\tau_{n})}\tilde{Y}_{1}\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}d\mu(U).
\end{align*}
\end{proof}
\begin{lem}\label{conv-each-conti}
The function $\rho_{t,n}*f$ is continuous on $(0,\infty)\times G$.
\end{lem}
\begin{proof}
For $T>0$, we prove that the function $e^{T\lambda_{\tau_{n}}^{U}}|f(h)|$ is a dominant function of the function $\tilde{\Phi}_{\tau_{n}}^{U}(h^{-1}g)e^{t\lambda_{\tau_{n}}^{U}}f(h)$ with respect to $(t,g)\in [T,\infty)\times G$, where the integration domain is $(h,U)\in G\times \hat{G}(\tau_{n})$. Since $|\tilde{\Phi}_{\tau_{n}}^{U}(h^{-1}g)|\leq 1$, $|\tilde{\Phi}_{\tau_{n}}^{U}(h^{-1}g)e^{t\lambda_{\tau_{n}}^{U}}f(h)|\leq e^{T\lambda_{\tau_{n}}^{U}}|f(h)|$. We calculate an upper bound of $\parallel e^{T\lambda_{\tau_{n}}^{U}}|f(h)|\parallel_{L^1(G\times\hat{G}(\tau_{n}),dhd\mu)}$. Since variables are separated, we have
\begin{align*}
\parallel e^{T\lambda_{\tau_{n}}^{U}}|f(h)|\parallel_{L^1(G\times\hat{G}(\tau_{n}),dhd\mu)}=\parallel e^{T\lambda_{\tau_{n}}^{U}}\parallel_{L^1(\hat{G}(\tau_{n}),d\mu)}\parallel f(h)\parallel_{L^1(G,dh)}.
\end{align*}
An upper bound of $\parallel e^{T\lambda_{\tau_{n}}^{U}}\parallel_{L^1(\hat{G}(\tau_{n}),d\mu)}$ was calculated in the proof of Lemma \ref{each-conti}.
\end{proof}
\begin{lem}\label{conv-L2}
We have $\rho_{t,n}*f\in L^2(G,dg)$.
\end{lem}
\begin{proof}
By Theorem \ref{Plancherel_formula} and Theorem \ref{conv_mult},
\begin{flushleft}
$\parallel \rho_{t,n}*f \parallel_{L^2(G,dg)}^2$\\
$\displaystyle =c_{Q}\int_{\mathfrak{a}^{*}\times K}e^{2t\lambda_{\tau_{n}}^{U_{\varepsilon,\nu}}}\parallel \tilde{f}(\nu,k)\parallel^2_{V_{\tau_{n}}}p_{\sigma_{\varepsilon}}(\nu)d\nu dk$\\
$\displaystyle +c_{G}\sum_{\sigma'_{m}}e^{2t\lambda_{\tau_{n}}^{U_{m}^{\pm}}}\int_{K}\langle \tilde{f}(i\mu_{1},k),\tilde{f}(-i\mu_{1},k)\rangle_{V_{\tau_{n}}}p_{\sigma'_{m}}dk$.
\end{flushleft}
The right-hand side converges since $\lambda_{\tau_{n}}^{U}<0$.
\end{proof}
\begin{lem}
For $X\in\mathfrak{g}$, we have
\begin{eqnarray}
\frac{\partial (\rho_{t,n}*f)}{\partial t}(g)&=&\biggl(\frac{\partial \rho_{t,n}}{\partial t}\biggr)*f(g), \nonumber \\
\tilde{X}(\rho_{t,n}*f)(g)&=&(\tilde{X}\rho_{t,n})*f(g) \nonumber \\
\mathrm{and\, \, }\tilde{X}\circ\tilde{X}(\rho_{t,n}*f)(g)&=&(\tilde{X}\circ\tilde{X}\rho_{t,n})*f(g). \nonumber
\end{eqnarray}
\end{lem}
The proof of this lemma is almost the same as that of Lemma \ref{conv-each-conti}. Thus, $\mathrm{(i)}$ is proved. Finally, we show $\mathrm{(ii)}$.
\begin{lem}
$\parallel \rho_{t,n}*f-f\parallel_{L^2(G,dg)}\rightarrow 0\, \, (t\rightarrow +0)$.
\end{lem}
\begin{proof}
By Theorem \ref{Plancherel_formula} and Theorem \ref{conv_mult},
\begin{flushleft}
$\parallel \rho_{t,n}*f - f\parallel_{L^2(G,dg)}^2$\\
$\displaystyle =c_{Q}\int_{\mathfrak{a}^{*}\times K}(1-e^{t\lambda_{\tau_{n}}^{U_{\varepsilon,\nu}}})^2\parallel \tilde{f}(\nu,k)\parallel^2_{V_{\tau_{n}}}p_{\sigma_{\varepsilon}}(\nu)d\nu dk$\\
$\displaystyle +c_{G}\sum_{\sigma'_{m}}(1-e^{t\lambda_{\tau_{n}}^{U_{m}^{\pm}}})^2\int_{K}\langle \tilde{f}(i\mu_{1},k),\tilde{f}(-i\mu_{1},k)\rangle_{V_{\tau_{n}}}p_{\sigma'_{m}}dk$.
\end{flushleft}
The first term converges to $0$ by the monotone convergence theorem. The other terms clearly converge to $0$.
\end{proof}
Then, $\mathrm{(ii)}$ is proved. Thus, we get the main result.
\begin{thm}\label{MyThm1}
Let $G=SL(2,\mathbb{R})$. The function
\begin{align*}
\rho(t,g)=\sum_{n\in\mathbb{Z}}\int_{\hat{G}(\tau_{n})}\tilde{\Phi}_{\tau_{n}}^{U}(g)e^{t\lambda_{\tau_{n}}^{U}}d\mu(U),\, \, (t,g)\in(0,\infty)\times G
\end{align*}
is the heat kernel on $G$.
\end{thm}
In addition, we can describe $\rho(t,g)$ explicitly by Corollary \ref{intedom}, Theorem \ref{hypergeo} (or Theorem \ref{mtxelementSL2R}), $(2)$ and $(3)$.
\section{Future problems}\label{futher}
In this paper, we have calculated the heat kernel on $SL(2,\mathbb{R})$. Our next problem is Problem \ref{SL2Cheat} in Section 2 about the heat kernel on $SL(2,\mathbb{C})$. We have discussed the general situation in Sections 3--6. Thus, if $G$ is a non-compact semisimple Lie group having a multiplicity-free subgroup $K$, the heat kernel on $G$ can be calculated in a similar way.
Furthermore, there are problems about the heat kernel related to the Segal-Bargmann space, which we introduce below. Let $G$ be a Lie group and $\rho_{G}$ be the heat kernel on $G$. First, we discuss problems for $G=SL(2,\mathbb{C})$.
\begin{prob}(\cite[Open problem 1]{bh3})
Let $t>0$. Prove that the set of finite linear combinations of matrix entries of finite dimensional holomorphic representations is a dense subspace of $HL^2(SL(2,\mathbb{C}),\rho_{SL(2,\mathbb{C})}(t,g)dg)$.
\end{prob}
\begin{prob}(\cite[Open problem 2]{bh3})
Let $t,\varepsilon>0$. Prove that the space \\$HL^2(SL(2,\mathbb{C}),\rho_{SL(2,\mathbb{C})}(t+\varepsilon,g)dg)$ is a dense subspace of the space \\$HL^2(SL(2,\mathbb{C}),\rho_{SL(2,\mathbb{C})}(t,g)dg)$.
\end{prob}
These problems seem to be still open. Generalizing them, we can consider the following problems.
\begin{prob}
Let $t>0$. Prove that the set of finite linear combinations of matrix entries of finite dimensional representations is a dense subspace of $L^2(SL(2,\mathbb{R}),\rho_{SL(2,\mathbb{R})}(t,g)dg)$.
\end{prob}
\begin{prob}
Let $t,\varepsilon>0$. Prove that the space \\
$L^2(SL(2,\mathbb{R}),\rho_{SL(2,\mathbb{R})}(t+\varepsilon,g)dg)$ is a dense subspace of the space\\$L^2(SL(2,\mathbb{R}),\rho_{SL(2,\mathbb{R})}(t,g)dg)$.
\end{prob}
Knowing an explicit expression of the heat kernel may provide a hint for approaching these problems.
\section{Introduction}
The standard model (SM) of electroweak interactions stands as a remarkably successful theory of the fundamental interactions of nature. Ever since the discovery of weak neutral currents in 1973 in a neutrino scattering experiment in the Gargamelle bubble chamber at CERN, the SM has been corroborated by a wealth of experimental observations. The discovery of the Higgs boson marks the culmination of the particle spectrum of the SM. Although the SM has so far withstood direct experimental scrutiny, there are several observations which point to physics beyond the SM. These include the matter-antimatter asymmetry of the universe and the existence of dark matter and dark energy. Further, gravity is not included in the SM. Therefore a complete theory of the fundamental interactions of nature still remains out of reach.
Evidence of physics beyond the SM has already begun to emerge on several fronts. This includes observables related to the decays of $B$ mesons. These anomalous discrepancies can be classified into two categories: decays induced by the charged current transition $b \to c \ell \nu$ ($\ell=e,\,\mu,\, \tau$) and those induced by the neutral current transition $b \to s \ell \ell$ ($\ell=e,\,\mu$). In this work, we focus on decays induced by the $b \to c \ell \nu$ transition, which occurs at the tree level in the SM. A series of measurements by the Belle, BaBar and LHCb collaborations over the last decade have provided several intriguing hints of new physics in this sector.
The BaBar~\cite{Lees:2012xj,Lees:2013uzd}, Belle~\cite{Huschle:2015rga,Sato:2016svk,Hirose:2016wfn} and LHCb~\cite{Aaij:2015yra} collaborations measured the following flavor ratios
\begin{equation}
R_{D^{(*)}}\equiv \frac{{\Gamma}(B \to D^{(*)}\, \tau\, \bar \nu)}{{\Gamma}(B \to D^{(*)}\, (e,\, \mu) \,\bar \nu)}.
\end{equation}
The average values of these measurements differ from their respective SM predictions at the level of 3.9$\sigma$~\cite{hflav-2016}.
These deviations are hints of lepton flavor universality violation. All of these experiments were based on methodologies where the $\tau$ lepton was identified through kinematical information rather than reconstruction. The reconstruction technique was employed by the LHCb collaboration using the $3\pi$ decay mode of the $\tau$ lepton~\cite{Aaij:2017uff}. This resulted in a distinct measurement of $R_{D^*}$. Including this measurement, the tension of the
$R_D$-$R_{D^*}$ data with the SM predictions increased to $4.1\sigma$~\cite{hflav-2017}.
In 2019, the Belle collaboration announced a new measurement of $R_D$ and $R_{D^*}$~\cite{Abdesselam:2019dgh}, which is consistent with the SM prediction. Including these measurements, the discrepancy with the SM is reduced from $4.1\sigma$ to $3.1\sigma$.
Apart from $R_{D^{(*)}}$, the LHCb collaboration measured the following ratio in $B_c\rightarrow J/\psi \, \ell \, \bar{\nu}$ decay modes
\begin{equation}
R_{J/\psi} = \frac{\Gamma(B_c\rightarrow J/\psi \, \tau \, \bar{\nu})}{\Gamma(B_c\rightarrow J/\psi \, \mu\, \bar{\nu})}\,.
\end{equation}
These decays are generated by the same quark level transition which induces $R_{D^{(*)}}$. The measured value is $1.7\sigma$ higher than the SM prediction~\cite{Aaij:2017tyk}. These tensions with the SM can be attributed to new physics in the $\tau$, $\mu$ or $e$ sectors. However, in \cite{Alok:2017qsi} it was shown that new physics only in the $\mu$ or $e$ sectors cannot accommodate these measurements. This is mainly due to measurements of the ratios $
R^{\mu/e}_D = \Gamma(B\rightarrow D\,\mu\,\nu)/\Gamma(B\rightarrow D\,e\,\nu)= 0.995\pm 0.022\, {\rm(stat.)}\pm 0.039\, {\rm (syst.)}$ and
$
R^{e/\mu}_{D^*} = \Gamma(B\rightarrow D^* \,e \, \nu)/\Gamma(B\rightarrow D^* \,\mu \, \nu) = 1.04\pm 0.05\, {\rm(stat.)}\, \pm 0.01 {\rm (syst.)}
$ \cite{Glattauer:2015teq, Abdesselam:2017kjf}.
The measured values of these ratios are in agreement with their SM predictions. Hence new physics only in $b\rightarrow c\,\mu\,\bar{\nu}$ or $b\rightarrow c\, e \,\bar{\nu}$ would spoil this agreement. Therefore new physics in $b \to c \tau \nu$ is imperative to accommodate the current measurements of flavor ratios in these sectors \footnote{In \cite{Carvunis:2021dss}, it was shown that new physics only in muons can accommodate the entire $b\rightarrow c\, l \,\bar{\nu}$ data using a different set of combinations of new physics operators.}.
In May 2022, the LHCb collaboration reported the first observation of the semileptonic $b$-baryon decay $\Lambda_b \to \Lambda_c^+ \tau^- \bar{\nu}_{\tau}$ with a significance of 6.1$\sigma$ \cite{LHCb:2022piu}. This was obtained from a data sample corresponding to 3 $\rm fb^{-1}$ of integrated luminosity collected at centre-of-mass energies of 7 and 8 TeV. The LFU ratio $R(\Lambda_c)$ was measured to be \cite{LHCb:2022piu}
\begin{equation}
R(\Lambda_c) =\frac{\mathcal{B}(\Lambda_b \to \Lambda_c^+ \tau^- \bar{\nu}_{\tau})}{\mathcal{B}(\Lambda_b \to \Lambda_c^+ \mu^- \bar{\nu}_{\mu})}= 0.242 \pm 0.026 \,(\rm stat.) \pm 0.040 \,(\rm syst.) \pm 0.059\,.
\end{equation}
Here the last error is due to the external branching fraction uncertainty
from the channel $\Lambda_b \to \Lambda_c^+ \mu^- \bar{\nu_{\mu}}$. The measured value is consistent with the SM prediction of $0.324 \pm 0.004$ \cite{Bernlochner:2018bfn}.
Besides these LFU observables, we also have measurements of a few angular observables. The Belle collaboration has measured the $\tau$ polarization, $P^{D^*}_{\tau}$, in the $B \to D^* \tau \bar{\nu}$ decay. The measured value \cite{Hirose:2016wfn}
\begin{equation}
P^{D^*}_{\tau} = - 0.38 \pm 0.51\, (\rm stat.) ^{+0.21}_{-0.16}\, (\rm syst.),
\end{equation}
is consistent with its SM prediction of $-0.497\pm0.013$ \cite{Tanaka:2012nw}. In 2018,
the Belle collaboration reported the measurement of the $D^*$ longitudinal polarization fraction $f_L^{D^*}$ in the decay $B \to D^* \tau \bar{\nu}$. The measured value \cite{Abdesselam:2019wbt}
\begin{equation}
f_L^{D^*} = 0.60 \pm 0.08\, (\rm stat.)\pm 0.04\, (\rm syst.)\,
\end{equation} is $1.6\sigma$ higher than the SM prediction of $0.46\pm 0.04$~\cite{Alok:2016qyh}.
The possible new physics effects in the $b\rightarrow c\tau\bar{\nu}$ decay can be analyzed in a model independent way using the language of effective field theory. There are many such analyses; see, e.g., \cite{Freytsis:2015qca,Jung:2018lfu,Bhattacharya:2018kig,Hu:2018veh,Alok:2019uqc,Asadi:2019xrc,Murgui:2019czp,Bardhan:2019ljo,Blanke:2019qrx,Shi:2019gxi,Becirevic:2019tpx,Sahoo:2019hbu,Cheung:2020sbq,Cardozo:2020uol}. These analyses identified possible Lorentz structures of new physics; however, the solution is not unique. Depending upon the adopted methodology and assumptions, there are multiple new physics operators with specific values of the corresponding WCs which can provide a good fit to the data. A unique determination of the Lorentz structure of new physics would require measurements of additional observables in the $b\rightarrow c\tau\bar{\nu}$ sector \cite{Alok:2018uft}.
The allowed model independent solutions can be realized in specific new physics models, of which there are a good number. In the context of some of these models, it would be interesting to see whether correlations exist between the observables in the $b \to c$ sector and other sectors; in other words, what implications measurements in the $b \to c$ sector have for other sectors. In this work we explore such correlations in the $b \to u$ sector in the context of the $U_1$ leptoquark (LQ) model. In particular, we study imprints of $b \to c$ measurements on several observables in the $\Lambda_b \to p \tau \bar{\nu}$ decay mode. The baryonic decay mode $\Lambda_b \to p \tau \bar{\nu}$ has been studied in the literature \cite{Dutta:2015ueb,Ray:2018hrx}.
The quark level transition $b\rightarrow u\tau\bar{\nu}$ induces decays such as $B^+ \to \tau \bar{\nu}$, $B \to \pi \tau \bar{\nu}$, $B \to \rho \tau \bar{\nu}$, $B \to \omega \tau \bar{\nu}$ and $\Lambda_b \to p \tau \bar{\nu}$. Out of these decays, currently, the only observed channel is the purely leptonic decay $B^+ \to \tau \bar{\nu}$ \cite{pdg}. The measured value of its branching ratio is $(1.09 \pm 0.24) \times 10^{-4}$, which is consistent with the SM value $(8.80 \pm 0.73) \times 10^{-5}$. Further, the Belle collaboration provides an upper bound on the branching ratio of the semileptonic decay $B \to \pi \tau \bar{\nu}$: at 90\% C.L., the branching ratio of $B \to \pi \tau \bar{\nu}$ can be as high as $2.5 \times 10^{-4}$ \cite{Belle:2015qal}. Thus, due to the lack of measurements, any model independent analysis would allow large new physics effects in some of the observables in the $b\rightarrow u\tau\bar{\nu}$ transition. However, in the context of the $U_1$ leptoquark model considered in this work, we show that the necessary couplings in the $b\rightarrow u\tau\bar{\nu}$ decay are all related to the couplings in the $b\rightarrow c\tau\bar{\nu}$ sector. Given the fact that we have relatively accurate measurements of a number of observables in this sector, it would be interesting to see the extent up to which new physics effects are allowed in the $b\rightarrow u\tau\bar{\nu}$ sector. In particular, we study the impact of the $b\rightarrow c\tau\bar{\nu}$ measurements on several observables in the $\Lambda_b \to p \tau \bar{\nu}$ decay mode.
The plan of this work is as follows. In Sec.~\ref{tf}, we provide the theoretical framework of this work. Starting with the effective Hamiltonian, we provide all the necessary theoretical expressions in this section. This includes various observables in the $b \to c\, \tau \, \bar{\nu}$ sector and the $\Lambda_b \to p \tau \bar{\nu}$ decay. In the next section, we first provide constraints on the $b \to c\, \tau \, \bar{\nu}$ couplings by performing a fit. Using the allowed parameter space of these couplings, we obtain predictions for several observables in the $\Lambda_b \to p \tau \bar{\nu}$ decay. The conclusions are presented in Sec.~\ref{concl}.
\section{Theoretical Framework}
\label{tf}
\subsection{Effective Hamiltonian}
\label{eff}
Within the SM, the effective Hamiltonian for the quark level transition $b \to q\, \tau \, \bar{\nu}$ with $q = u,c$ is given by
\begin{equation}
H_{eff}^{\rm SM} = \frac{4 G_F}{\sqrt{2}} V_{qb} \,O_{V_L}\,,
\label{effH}
\end{equation}
where ${O}_{V_L} =(\bar{q} \gamma_\mu P_L b)\,(\bar{\tau} \gamma^\mu P_L \nu)$. In the presence of new physics, the effective Hamiltonian takes the form
\begin{equation}
H_{eff} = \frac{4 G_F}{\sqrt{2}} V_{qb} \left[ (1+C_{V_L}) O_{V_L} + C_{V_R} O_{V_R} + C_{S_L} O_{S_L} + C_{S_R} O_{S_R} + C_T O_T \right]\,,
\end{equation}
where
\begin{eqnarray}
{O}_{V_R} &=& (\bar{q} \gamma_\mu P_R b)\,(\bar{\tau} \gamma^\mu P_L \nu)\,,\\
{O}_{S_R}&=& (\bar{q} P_R b)\,(\bar{\tau} P_L \nu)\,,\\
{O}_{S_L}&=& (\bar{q} P_L b)\,(\bar{\tau} P_L \nu)\,,\\
{O}_T&=& (\bar{q}\sigma^{\mu\nu}P_L b)\,(\bar{\tau}\sigma_{\mu\nu}P_L \nu)\,.
\end{eqnarray}
The interactions between the vector singlet $U_1$ LQ and the SM quarks and leptons can be written as \cite{Dorsner:2016wpm,Bhaskar:2021pml}
\begin{equation}
H_{eff}^{U_1} = h^L_{ij}\bar{Q}^i \gamma_{\mu}U_1^{\mu}P_L L^j + h^R_{ij}\bar{d}^i \gamma_{\mu}U_1^{\mu}P_R l^j_R + {\it h.c.,}
\end{equation}
where $Q^i$ and $L^j$ are the SM left-handed quark and lepton doublets, and $d_R^i$ and $l_R^j$ are the right-handed down-type quarks and charged leptons.
Here $h^L_{ij}$ and $h^R_{ij}$ are $3\times 3$ matrices in flavor space. This LQ contributes to $b \to c \tau \bar{\nu}$ at the tree level. As we only require the $\bar{c}\,\nu \,U_1$ and $\bar{b}\, \tau\, U_1$ couplings to be non-zero, we have
\begin{equation}
h^L=\begin{pmatrix}
0 & 0 & 0\\
0 & 0 & h^L_{23}\\
0 & 0 & h^L_{33}
\end{pmatrix}\,,
\quad \quad
h^R=\begin{pmatrix}
0 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & h^R_{33}
\end{pmatrix}
\end{equation}
Assuming mixing in the up-type quark sector, the interaction Hamiltonian can be written as
\begin{eqnarray}
H_{eff} &=& \Bigg[\left(V_{us}h^L_{23}+V_{ub}h^L_{33} \right)\bar{u}_L\gamma_{\mu}\nu_L + \left(V_{cb}h^L_{33}+V_{cs}h^L_{23} \right)\bar{c}_L\gamma_{\mu}\nu_L \nonumber\\
&& + h^L_{23} \bar{s}_L\gamma_{\mu}\tau_L + h^L_{33} \bar{b}_L\gamma_{\mu}\tau_L + h^R_{33} \bar{b}_R\gamma_{\mu}\tau_R \Bigg]U_1^{\mu}\,.
\end{eqnarray}
It is evident from the above interaction Hamiltonian that only $O_{V_L}$ and $O_{S_R}$ contribute to
$b \to c \tau \bar{\nu}$ and $b \to u \tau \bar{\nu}$ processes. Also, the same couplings appear in both decay modes. The relevant WCs for $b \to c \tau \bar{\nu}$ decay can be written as
\begin{eqnarray}
C_{V_L}^{b \to c} &=& \frac{1}{2\sqrt{2}G_F V_{cb}} \frac{\left(V_{cb}h^L_{33}+V_{cs}h^L_{23}\right)h_{33}^L}{M^2_{U_1}}\,,\\
C_{S_R}^{b \to c} &=& -\frac{1}{\sqrt{2}G_F V_{cb}} \frac{\left(V_{cb}h^L_{33}+V_{cs}h^L_{23}\right)h_{33}^R}{M^2_{U_1}}\,.
\end{eqnarray}
The WCs for $b \to u \tau \bar{\nu}$ decay are
\begin{eqnarray}
C_{V_L}^{b \to u} &=& \frac{1}{2\sqrt{2}G_F V_{ub}} \frac{\left(V_{ub}h^L_{33}+V_{us}h^L_{23}\right)h_{33}^L}{M^2_{U_1}}\,,\\
C_{S_R}^{b \to u} &=& -\frac{1}{\sqrt{2}G_F V_{ub}} \frac{\left(V_{ub}h^L_{33}+V_{us}h^L_{23}\right)h_{33}^R}{M^2_{U_1}}\,.
\end{eqnarray}
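Taking ratios makes the correlation between the two sectors explicit: both WCs are rescaled by one and the same coupling-dependent factor,
\begin{align*}
\frac{C_{V_L}^{b \to u}}{C_{V_L}^{b \to c}}=\frac{C_{S_R}^{b \to u}}{C_{S_R}^{b \to c}}=\frac{V_{cb}}{V_{ub}}\,\frac{V_{ub}h^L_{33}+V_{us}h^L_{23}}{V_{cb}h^L_{33}+V_{cs}h^L_{23}}\,,
\end{align*}
so that a fit to the $b \to c\, \tau \, \bar{\nu}$ data fixes the size of the allowed effects in $b \to u\, \tau \, \bar{\nu}$ up to this factor.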
Thus the $b \to c$ couplings determine the new physics contributions to $b \to u$ as well. Therefore we need to analyze observables in the $b \to c\, \tau \, \bar{\nu}$ sector. In the next subsection we provide theoretical expressions for the $b \to c\, \tau \, \bar{\nu}$ observables used in our analysis to constrain the new physics parameter space.
\subsection{Observables in $b \to c\, \tau \, \bar{\nu}$ sector}
\label{b2c}
We consider the following observables in our analysis:
\begin{itemize}
\item the flavor ratios $R_D$, $R_{D^*}$ and $R(\Lambda_c)$,
\item tau polarization in $B \to D^* \tau \bar{\nu}$ decays,
\item $D^*$ longitudinal polarization fraction in $B \to D^* \tau \bar{\nu}$ decay,
\item branching ratio of $B_c\rightarrow \tau\bar{\nu}$.
\end{itemize}
We do not include the flavor ratio $R_{J/\psi}$ due to large theoretical errors. The theoretical expressions for $R_D$, $R_{D^*}$ and $ R_{\Lambda_c}$ in terms of WCs are given as \cite{Blanke:2018yud}
\begin{eqnarray}
R^{th}_D & \simeq & R_D^{\rm SM} \Big\{\vert 1+C_{V_L}\vert^2+1.54 \,\mathrm{Re}\,[(1+C_{V_L}) C_{S_R}]+1.09 \vert C_{S_R}\vert^2 \Big\}\,, \label{eq:rd} \\
R^{th}_{D^*} &\simeq& R_{D^*}^{\rm SM} \Big\{\vert 1+C_{V_L}\vert^2 + 0.13\,\mathrm{Re}\,[(1+C_{V_L})C_{S_R}]+0.05\vert C_{S_R}\vert^2\Big\} \,,\label{eq:rds} \\
R^{th}_{\Lambda_c} & \simeq & R_{\Lambda_c}^{\rm SM} \Big\{\vert 1 + C_{V_L}\vert^2 + 0.50 \,\mathrm{Re}\,[(1 + C_{V_L}) C_{S_R} ] +0.33 \vert C_{S_R} \vert^2 \Big\}.
\label{eq:rlc}
\end{eqnarray}
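For orientation, in the purely left-handed limit $C_{S_R}=0$ all three ratios are rescaled by the common factor $\vert 1+C_{V_L}\vert^2$. For instance, a real $C_{V_L}= 0.05$ gives
\begin{align*}
\frac{R^{th}_{D}}{R_{D}^{\rm SM}}=\frac{R^{th}_{D^*}}{R_{D^*}^{\rm SM}}=\frac{R^{th}_{\Lambda_c}}{R_{\Lambda_c}^{\rm SM}}=\vert 1+C_{V_L}\vert^2\simeq 1.10,
\end{align*}
i.e. a universal enhancement of about $10\%$.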
Tau polarization in $B \to D^* \tau \bar{\nu}$ decay, $P_\tau^{D^*}$, in the $U_1$ LQ model is given as \cite{Blanke:2018yud}
\begin{eqnarray}
P_{\tau}^{D^*\, th} \simeq \left(\frac{R^{th}_{D^*}}{R_{D^*}^{\rm SM} }\right)^{-1}\Big\{-0.49 \vert1 + C_{V_L}\vert^2 + 0.13 \,\mathrm{Re}\,[(1 +C_{V_L}) C_{S_R}] + 0.05 \vert C_{S_R}\vert^2 \Big\}\,.
\end{eqnarray}
The expression for $D^*$ longitudinal polarization fraction, $f_L^{D^*}$ , in $B \to D^* \tau \bar{\nu}$ decay is \cite{Blanke:2018yud}
\begin{eqnarray}
f_L^{D^*\, th} & \simeq & \left(\frac{R^{th}_{D^*}}{R_{D^*}^{\rm SM} }\right)^{-1}\Big\{0.46 \vert1 + C_{V_L}\vert^2 + 0.13 \,\mathrm{Re}\,[(1 + C_{V_L}) C_{S_R}] + 0.05 \vert C_{S_R}\vert^2 \Big\}\,.
\end{eqnarray}
We also consider the constraints coming from the purely leptonic decay $B_c\rightarrow \tau\,\bar{\nu}$.
This decay mode is free from helicity suppression when the transition is induced through pseudo-scalar operators. The branching ratio of $B_c\rightarrow \tau\bar{\nu}$ in the $U_1$ LQ model can be written as
\begin{eqnarray}
{\cal B}(B_c\rightarrow \tau\bar{\nu})& \simeq & 0.02\bigg(\frac{f_{B_c}}{0.43\,\text{GeV}}\bigg)^2 \Big\vert 1+C_{V_L} + 4.3\,C_{S_R}\Big\vert ^2\,.
\end{eqnarray}
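Note that setting $C_{V_L}=C_{S_R}=0$ reproduces the SM expectation ${\cal B}(B_c\rightarrow \tau\bar{\nu})\simeq 0.02$. Moreover, for $f_{B_c}=0.43$ GeV, the bound ${\cal B}(B_c\rightarrow \tau\bar{\nu})<0.3$ imposed in Sec.~\ref{fit} translates into
\begin{align*}
\Big\vert 1+C_{V_L} + 4.3\,C_{S_R}\Big\vert^2 < \frac{0.3}{0.02}=15,\qquad \text{i.e.}\qquad \Big\vert 1+C_{V_L} + 4.3\,C_{S_R}\Big\vert \lesssim 3.9\,.
\end{align*}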
\subsection{Observables in $\Lambda_b \to p \tau \bar{\nu}$ decay mode}
\label{b2u}
In this section, we provide theoretical expressions for the various $\Lambda_b \to p \tau \bar{\nu}$ observables used in our analysis. These observables can be defined with the help of the angular differential decay distribution of this mode.
The two-fold angular differential distribution for $\Lambda_b \to p l\bar{\nu}$ can be written in terms of $q^2$ and $\cos\theta_l$, where $q^2$ is the momentum transfer squared and $\theta_l$ is the angle between the daughter baryon and the lepton in the di-lepton rest frame:
\begin{equation}
\frac{d^2\Gamma(\Lambda_b \to p l\bar{\nu})}{dq^2\, d\cos\theta_l} = N \Big(1-\frac{m_l^2}{q^2}\Big)^2\Big[A + \frac{m_l^2}{q^2}B + 2\,C + \frac{4m_l}{\sqrt{q^2}}D\Big]\,,
\label{twofold}
\end{equation}
where
\begin{eqnarray}
A &=& 2\sin^2\theta_l\Big(H^2_{\small{\frac{1}{2},0}} + H^2_{\small{-\frac{1}{2},0}}\Big) + \Big(1-\cos\theta_l\Big)^2 H^2_{\small{\frac{1}{2},1}} + \Big(1+\cos\theta_l\Big)^2H^2_{\small{-\frac{1}{2},-1}},\\
B &=& 2\cos^2\theta_l\Big(H^2_{\small{\frac{1}{2},0}} + H^2_{\small{-\frac{1}{2},0}}\Big) + \sin^2\theta_l \Big(H^2_{\small{\frac{1}{2},1}} + H^2_{\small{-\frac{1}{2},-1}}\Big) + 2\Big(H^2_{\small{\frac{1}{2},t}} + H^2_{\small{-\frac{1}{2},t}}\Big)\nonumber \\
&& - 4\cos\theta_l\Big( H_{\small{\frac{1}{2},t}} H_{\small{\frac{1}{2},0}} + H_{\small{-\frac{1}{2},t}} H_{\small{-\frac{1}{2},0}}\Big)\\
C &=& \Big(H^{SP}_{\small{\frac{1}{2},0}}\Big)^2 + \Big(H^{SP}_{\small{-\frac{1}{2},0}}\Big)^2,\\
D &=& -\cos\theta_l\Big(H_{\small{\frac{1}{2},0}} H^{SP}_{\small{\frac{1}{2},0}} + H_{\small{-\frac{1}{2},0}} H^{SP}_{\small{-\frac{1}{2},0}}\Big) + \Big(H_{\small{\frac{1}{2},t}} H^{SP}_{\small{\frac{1}{2},0}} + H_{\small{-\frac{1}{2},t}} H^{SP}_{\small{-\frac{1}{2},0}}\Big)\,.
\end{eqnarray}
The differential decay rate for $\Lambda_b \to p l\bar{\nu}$ is obtained by integrating eq. \eqref{twofold} over $\cos\theta_l$ \cite{Shivashankara:2015cta}:
\begin{equation}
\frac{d\Gamma(\Lambda_b \to p l\bar{\nu})}{dq^2} = \frac{8N}{3}\Big(1-\frac{m_l^2}{q^2}\Big)^2\Big[E + \frac{m_l^2}{2q^2}F + \frac{3}{2}G + \frac{3m_l}{\sqrt{q^2}}H\Big]\,.
\end{equation}
Here $N = \frac{G_F^2|V_{ub}|^2 q^2|\vec{p}_p|}{512\pi^3 m_{\Lambda_b}^2}$ and $|\vec{p}_p| = \sqrt{\lambda(m^2_{\Lambda_b},m_p^2,q^2)}/(2m_{\Lambda_b})$, with $\lambda(a,b,c) = a^2 + b^2 + c^2 - 2(ab+bc+ca)$ the K\"all\'en function. Further,
\begin{eqnarray}
E &=& H^2_{\small{\frac{1}{2}0}} + H^2_{\small{-\frac{1}{2}0}} + H^2_{\small{\frac{1}{2}1}} + H^2_{\small{-\frac{1}{2}-1}},\\
F &=& H^2_{\small{\frac{1}{2}0}} + H^2_{\small{-\frac{1}{2}0}} + H^2_{\small{\frac{1}{2}1}} + H^2_{\small{-\frac{1}{2}-1}} + 3(H^2_{\small{\frac{1}{2}t}} + H^2_{\small{-\frac{1}{2}t}}),\\
G &=& (H^{SP}_{\small{\frac{1}{2}0}})^2 + (H^{SP}_{\small{-\frac{1}{2}0}})^2,\\
H &=& H_{\small{\frac{1}{2}t}} H^{SP}_{\small{\frac{1}{2}0}} + H_{\small{-\frac{1}{2}t}} H^{SP}_{\small{-\frac{1}{2}0}}\,.
\end{eqnarray}
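These expressions follow from eq. \eqref{twofold} by means of the elementary angular integrals $\int_{-1}^{1}\sin^2\theta_l\, d\cos\theta_l=\frac{4}{3}$, $\int_{-1}^{1}\cos^2\theta_l\, d\cos\theta_l=\frac{2}{3}$ and $\int_{-1}^{1}\cos\theta_l\, d\cos\theta_l=0$, which give
\begin{align*}
\int_{-1}^{1}A\, d\cos\theta_l=\frac{8}{3}E,\quad
\int_{-1}^{1}B\, d\cos\theta_l=\frac{4}{3}F,\quad
\int_{-1}^{1}2\,C\, d\cos\theta_l=4\,G,\quad
\int_{-1}^{1}D\, d\cos\theta_l=2\,H,
\end{align*}
reproducing the differential decay rate above.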
The differential branching fraction can then be written as
\begin{equation}
\frac{d\mathcal{B}(\Lambda_b \to p l \bar{\nu})}{dq^2} = \tau_{\Lambda_b} \frac{d\Gamma}{dq^2}\,.
\label{dbr}
\end{equation}
One can also define the LFU ratio of the differential decay rates:
\begin{equation}
R_p(q^2) = \frac{d\Gamma(\Lambda_b \to p \tau\bar{\nu})/dq^2}{d\Gamma(\Lambda_b \to p \mu\bar{\nu})/dq^2}.
\label{rlfu}
\end{equation}
The lepton forward-backward asymmetry is defined as
\begin{equation}
A_{FB} = \frac{\int_{0}^{1} (d^2\Gamma/dq^2\,d\cos\theta)d\cos\theta - \int_{-1}^{0} (d^2\Gamma/dq^2\,d\cos\theta)d\cos\theta}{\int_{0}^{1} (d^2\Gamma/dq^2\,d\cos\theta)d\cos\theta + \int_{-1}^{0} (d^2\Gamma/dq^2\,d\cos\theta)d\cos\theta}\,.
\label{afb}
\end{equation}
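Only the terms odd in $\cos\theta_l$ in eq. \eqref{twofold} contribute to the numerator of eq. \eqref{afb}. Using $\int_{0}^{1}\cos\theta_l\, d\cos\theta_l-\int_{-1}^{0}\cos\theta_l\, d\cos\theta_l=1$, a short computation gives
\begin{align*}
A_{FB}(q^2) = -\frac{H^2_{\small{\frac{1}{2},1}}-H^2_{\small{-\frac{1}{2},-1}} + \frac{2m_l^2}{q^2}\Bigl(H_{\small{\frac{1}{2},t}}H_{\small{\frac{1}{2},0}}+H_{\small{-\frac{1}{2},t}}H_{\small{-\frac{1}{2},0}}\Bigr) + \frac{2m_l}{\sqrt{q^2}}\Bigl(H_{\small{\frac{1}{2},0}}H^{SP}_{\small{\frac{1}{2},0}}+H_{\small{-\frac{1}{2},0}}H^{SP}_{\small{-\frac{1}{2},0}}\Bigr)}{\frac{4}{3}\Bigl[E + \frac{m_l^2}{2q^2}F + \frac{3}{2}G + \frac{3m_l}{\sqrt{q^2}}H\Bigr]}\,.
\end{align*}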
Moreover, the longitudinal polarizations of the final state baryon and the $\tau$ lepton are given by
\begin{eqnarray}
P^L_{p} &=& \frac{d\Gamma^{\lambda_p = 1/2}/dq^2 - d\Gamma^{\lambda_p = -1/2}/dq^2}{d\Gamma^{\lambda_p = 1/2}/dq^2 + d\Gamma^{\lambda_p = -1/2}/dq^2}\,,\\
P^L_{\tau} &=& \frac{d\Gamma^{\lambda_{\tau} = 1/2}/dq^2 - d\Gamma^{\lambda_{\tau} = -1/2}/dq^2}{d\Gamma^{\lambda_{\tau} = 1/2}/dq^2 + d\Gamma^{\lambda_{\tau} = -1/2}/dq^2}\,.
\label{pol}
\end{eqnarray}
The convexity parameter, which is a measure of the curvature of the $\cos\theta$ distribution, is defined as
\begin{equation}
C_F^l(q^2) = \frac{1}{\int d\cos\theta\, W(\theta)}\frac{d^2 W(\theta)}{d(\cos\theta)^2}
\label{conv}
\end{equation}
with
$$W(\theta) = \frac{3}{8}\Big[A + \frac{m_l^2}{q^2} B +2\,C + \frac{4m_l}{\sqrt{q^2}}D\Big].$$
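Since $W(\theta)$ is proportional to the two-fold distribution of eq. \eqref{twofold}, only the terms quadratic in $\cos\theta_l$ survive the second derivative, and a direct computation gives
\begin{align*}
\frac{d^2 W(\theta)}{d(\cos\theta)^2} = \frac{3}{4}\Bigl(1-\frac{m_l^2}{q^2}\Bigr)\Bigl[H^2_{\small{\frac{1}{2},1}}+H^2_{\small{-\frac{1}{2},-1}} - 2\Bigl(H^2_{\small{\frac{1}{2},0}}+H^2_{\small{-\frac{1}{2},0}}\Bigr)\Bigr],
\end{align*}
so that $C_F^l$ vanishes at the kinematic endpoint $q^2=m_l^2$.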
The helicity amplitudes defined in terms of the form factors are given in Appendix \ref{appen}.
\section{Results and Discussions}
\label{res}
\subsection{Fit results}
\label{fit}
From Sec.~\ref{eff}, it is apparent that in the context of the $U_1$ LQ model, the WCs in the $b \to u$ transition can be written in terms of the $b \to c$ couplings. Therefore the observables in the $b \to u$ sector are expected to have strong correlations with the $b \to c$ observables. In other words, the extent up to which new physics effects can be generated in the $b \to u$ observables is determined by the parameter space of couplings allowed by the current $b \to c$ data. Given the fact that we have a relatively large number of measured observables in this sector, some of which are accurately measured and predicted fairly well within the SM, it would be interesting to see the possible deviations in the $\Lambda_b \to p \tau \bar{\nu}$ observables allowed by the $b \to c$ data.
{\rowcolors{2}{black!50!white!50}{black!50!white!40}
\begin{table}
\centering
\begin{tabular}{ |c|c| }
\hline
Observable& Experimental Values \\
\hline
$ R_D$ & $ 0.340\pm 0.027\pm 0.013$ \cite{avg19} \\
$R_{D^*} $ & $ 0.295\pm 0.011\pm 0.008$ ~\cite{avg19}\\
$R_{\Lambda_c} $ & $ 0.242 \pm 0.026 \,(\rm stat.) \pm 0.040 \,(\rm syst.) \pm 0.059 $ \cite{LHCb:2022piu} \\
$ P_{\tau}^{D^*}$ & $-0.38\pm 0.51^{+0.21}_{-0.16} $ \cite{Hirose:2016wfn} \\
$ f_L^{D^*}$ & $0.60\pm 0.08\,(\rm stat.)\pm 0.04\,(\rm syst.) $ \cite{Adamczyk:2019wyt,Abdesselam:2019wbt} \\
\hline
\end{tabular}
\caption{Experimental values of the observables used in the fit. The third error in $R_{\Lambda_c}$ is due to the external branching fraction measurement.}
\label{fit-obs}
\end{table}
The theoretical expressions of observables $R_D$, $R_{D^*}$, $R_{\Lambda_c}$ , $P_\tau^{D^*}$ and $f_L^{D^*}$ as functions of the relevant WCs are given in Sec. \ref{b2c}. By fitting these expressions to the measured values of the observables, we obtain the values of WCs which
are consistent with the data.
The corresponding $\chi^2$ is defined as
\begin{eqnarray}
\chi^2(C_i)&=&\sum_{m,n= R_D, R_{D^*}}\left(O^{th}(C_i)-O^{exp}\right)_{m}\left(V^{exp}+V^{SM}\right)^{-1}_{mn}\left(O^{th}(C_i)-O^{exp}\right)_{n}\nonumber\\
& &+ \frac{(R_{\Lambda_c}^{th}(C_i)-R_{\Lambda_c}^{exp})^2}{\sigma^2_{R_{\Lambda_c}}} + \frac{(P_{\tau}^{D^*\, th}(C_i)-P_{\tau}^{D^*\,exp})^2}{\sigma^2_{P_{\tau}}} + \frac{(f_L^{D^*\, th}(C_i)-f_L^{D^*\,exp})^2}{\sigma^2_{f_L}}.
\label{chi2}
\end{eqnarray}
Here $O^{th}(C_i)$ are the theoretical predictions for $R_D$ and $R_{D^*}$, whereas $R_{\Lambda_c}^{th}$, $P_{\tau}^{D^*\, th}$ and $f_L^{D^*\, th}$ are the theoretical expressions for $R_{\Lambda_c}$, $P_{\tau}^{D^*}$ and $f_L^{D^*}$, respectively. These expressions depend upon the new physics WCs $C_{V_L}$ and $C_{S_R}$, which in turn are functions of the $h_{23}^L$, $h_{33}^L$ and $h_{33}^R$ couplings. $O^{exp}$ are the corresponding experimental measurements. $V^{exp}$ and $V^{SM}$ are the experimental and SM covariance matrices in the $R_D$, $R_{D^*}$ space, respectively. The matrix $V^{exp}$ includes the correlation in the combined experimental determination of $R_D$ and $R_{D^*}$. In eq.~(\ref{chi2}), $\sigma_{R_{\Lambda_c}}$, $\sigma_{P_{\tau}}$ and $\sigma_{f_L}$ are the uncertainties in the measurements of $R_{\Lambda_c}$, $P_\tau^{D^*}$ and $f_L^{D^*}$, respectively. The measured values are given in Table \ref{fit-obs}. The fit results are shown in Table \ref{one}. It is evident that the SM does not provide a good fit to the data, as $\chi^2_{\rm min}\sim 26.06$, whereas for the $U_1$ LQ model the fit is significantly improved, as indicated by the $\chi^2_{\rm min}$ value of $\sim 6.64$. The best-fit values of the new physics couplings are also shown in Table \ref{one}.
While obtaining the allowed parameter space of the new physics couplings,
we imposed the additional constraint ${\cal B}(B_c\rightarrow \tau\bar{\nu}) < 0.3$, which follows from the lifetime of the $B_c$ meson \cite{Alonso:2016oyd}.
{\rowcolors{2}{black!50!white!50}{black!50!white!40}
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline\hline
& Best fit value(s) & $\chi^2_{\rm min}$ \\
SM & $C_{i}=0$ & 26.06 \\
$U_1$ LQ & $h^L_{33} = 1.2 \pm 2.3, \, h^L_{23} = 0.05 \pm 0.2, \, h^R_{33} = 0.1 \pm 0.3$ & 6.64 \\
\hline
\end{tabular}
\caption{Best fit values of the new physics couplings obtained by using the data on $R_D$, $R_{D^*}$, $R_{\Lambda_c}$, $P_{\tau}^{D^*}$ and $f_L^{D^*}$ in the fit.}
\label{one}
\end{table}
Using the allowed values of the new physics couplings obtained in this section, in the next subsection we predict several observables in the $\Lambda_b \to p l\bar{\nu}$ decay.
\subsection{Predictions}
\begin{figure*}[htb]
\centering
\includegraphics[width = 3.1in]{01-dbr}
\includegraphics[width = 3.1in]{02-r}\\
\includegraphics[width = 3.1in]{03-P_p}
\includegraphics[width = 3.1in]{04-P_tau}\\
\includegraphics[width = 3.1in]{05-afb}
\includegraphics[width = 3.1in]{06-clf}
\caption{Predictions for various observables in $\Lambda_b \to p l\bar{\nu}$ decay. The band corresponds to the SM uncertainties.}
\label{fig-pred}
\end{figure*}
We consider the following $\Lambda_b \to p l\bar{\nu}$ observables in our analysis:
\begin{itemize}
\item differential branching ratio $d\mathcal{B}/dq^2$, defined in eq. \eqref{dbr}
\item LFU ratio $R_p$, defined in eq. \eqref{rlfu}
\item longitudinal polarization of the final state baryon, defined in eq. \eqref{pol}
\item longitudinal polarization of the $\tau$ lepton, defined in eq. \eqref{pol}
\item lepton forward-backward asymmetry $A_{FB}$, defined in eq. \eqref{afb}
\item convexity parameter $C_F^l$, defined in eq. \eqref{conv}.
\end{itemize}
The SM predictions of these observables along with the 1$\sigma$ upper and lower new physics bounds are illustrated in Fig.~\ref{fig-pred}. From the left panel of the top row, it is evident that new physics can enhance the branching ratio by an order of magnitude. Even the 1$\sigma$ new physics lower bound is about three times the SM 1$\sigma$ upper limit. Thus the current $b \to c$ data does allow a large enhancement in the branching ratio of $\Lambda_b \to p l\bar{\nu}$. This feature carries over to the LFU ratio $R_p$, as can be seen from the right panel of the top row. Here too an order of magnitude enhancement is allowed, which follows from the fact that a large enhancement is viable for the differential branching ratio. Therefore the $\Lambda_b \to p l\bar{\nu}$ decay mode can serve as an important channel to probe LFU violation in the $b \to u$ sector.
The predictions for the longitudinal polarizations of the final state baryon and the $\tau$ lepton in the $\Lambda_b \to p l\bar{\nu}$ decay are shown in the left and right panels of the middle row of Fig.~\ref{fig-pred}. For $q^2>17$ $\rm GeV^2$, $P^L_{p}(q^2)$ is consistent with the SM prediction, whereas for $q^2<17$ $\rm GeV^2$ there is a marginal deviation from the SM prediction; the deviation is more prominent for lower values of $q^2$. On the other hand, the prediction for the tau polarization is consistent with the SM in the entire $q^2$ region. The same is true for the lepton forward-backward asymmetry and the convexity parameter, as can be seen from the left and right panels of the bottom row, respectively.
\section{Conclusions}
\label{concl}
In this work we analyze new physics effects in the $\Lambda_b \to p \tau \bar{\nu}$ decay in the $U_1$ leptoquark model. This decay mode is induced by the quark level transition $b \to u \tau \bar{\nu}$. A model independent analysis of new physics in $b \to u \tau \bar{\nu}$ can allow large effects because, as of now, we only have one measurement in this sector. However, in the $U_1$ leptoquark model considered in this work, the new physics couplings in the $b \to u \tau \bar{\nu}$ transition can be expressed in terms of the couplings in $b \to c \tau \bar{\nu}$ decay along with suitable combinations of elements of the CKM matrix. Therefore one expects strong correlations between these two sectors. Given that, unlike the $b \to u \tau \bar{\nu}$ sector, there are measurements of a number of observables in decays induced by the $b \to c \tau \bar{\nu}$ transition, one expects that meaningful constraints on the new physics parameter space can be obtained. It is then interesting to see whether such constraints can allow large enhancements in some of the observables in $\Lambda_b \to p \tau \bar{\nu}$ decay.
In order to obtain constraints on the new physics couplings, we perform a fit to all $b \to c \tau \bar{\nu}$ data. For the allowed parameter space of the couplings, we obtain predictions for the branching ratio, the LFU ratio, the longitudinal polarizations of the final state baryon and the $\tau$ lepton, the lepton forward-backward asymmetry and the convexity parameter in the decay $\Lambda_b \to p \tau \bar{\nu}$. We find that
\begin{itemize}
\item The branching ratio as well as the LFU ratio can be enhanced by an order of magnitude over the SM value.
\item There can be a marginal deviation from the SM in the longitudinal polarization of final state baryon in the low-$q^2$ region.
\item The longitudinal polarization of $\tau$, lepton forward-backward asymmetry as well as the convexity parameter are consistent with the SM.
\end{itemize}
{\bf Acknowledgements}: The work of DK is supported by the SERB, India under the research grant no. SERB/EEQ/2021/000965.
\newpage
\section{Appendix}
\subsection{$\Lambda_b \to p$ transition form factors and Helicity amplitudes}
\label{appen}
The $q^2$ dependence of the helicity form factors in the lattice QCD calculation is parametrized as \cite{Detmold:2015aaa}:
\begin{equation}
f_i(q^2) = \dfrac{1}{1-q^2/(m^f_{\rm pole})^2}\left[a_0^f+a_1^f\, z(q^2)\right]\,,
\end{equation}
where $i = +, \perp, 0$ and the expansion parameter is defined as
\begin{equation}
z(q^2) = \dfrac{\sqrt{t_+-q^2}-\sqrt{t_+-t_0}}{\sqrt{t_+-q^2}+\sqrt{t_+-t_0}}\,.
\end{equation}
Here $t_+ = (m_{B_1} + m_{B_2})^2$ and $t_0 = (m_{B_1} - m_{B_2})^2$, where $m_{B_1}$ and $m_{B_2}$ denote the masses of the parent and daughter baryons, respectively. The nominal form factor parameters $a_{0,1}^{f(g)}$ and the pole masses $m^f_{\rm pole}$ for the $\Lambda_b \to p$ transition are taken from \cite{Detmold:2015aaa}.
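Evaluating this parametrization is straightforward; the short Python sketch below does so with placeholder values for $(a_0^f, a_1^f, m^f_{\rm pole})$, since the actual nominal parameters are those of \cite{Detmold:2015aaa}.
\begin{verbatim}
# z-expansion form factor (sketch; the coefficients passed in the example
# call are placeholders, not the lattice values of Detmold et al.).
import numpy as np

mB1, mB2 = 5.6195, 0.9383            # Lambda_b and proton masses in GeV
t_plus = (mB1 + mB2) ** 2
t_zero = (mB1 - mB2) ** 2

def z(q2):
    a, b = np.sqrt(t_plus - q2), np.sqrt(t_plus - t_zero)
    return (a - b) / (a + b)

def form_factor(q2, a0, a1, m_pole):
    return (a0 + a1 * z(q2)) / (1.0 - q2 / m_pole ** 2)

print(form_factor(10.0, 0.42, -1.1, 5.32))   # e.g. f_+ at q^2 = 10 GeV^2
\end{verbatim}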
The decay $\Lambda_b \to p l\bar{\nu}$ proceeds through $\Lambda_b \to pW^*$, with the off-shell $W^*$ decaying to $l\bar{\nu}$. The helicity amplitudes for the vector and axial-vector currents are defined by
\begin{eqnarray}
H_{\lambda_p, \lambda_W} &=& H^V_{\lambda_p,\lambda_W} - H^A_{\lambda_p,\lambda_W}\,,\\
H^V_{\lambda_p,\lambda_W} &=& \epsilon^{\dagger\mu}(\lambda_W)\braket{p,\lambda_p|\bar{u}\gamma_{\mu}b|\Lambda_b, \lambda_{\Lambda_b}}\,,\\
H^A_{\lambda_p,\lambda_W} &=& \epsilon^{\dagger\mu}(\lambda_W)\braket{p,\lambda_p|\bar{u}\gamma_{\mu}\gamma_5 b|\Lambda_b, \lambda_{\Lambda_b}}\,.
\end{eqnarray}
Similarly, the helicity amplitudes for the scalar and pseudo-scalar currents are given by
\begin{eqnarray}
H^S_{\lambda_p} &=& \braket{p,\lambda_p|\bar{u}b|\Lambda_b, \lambda_{\Lambda_b}}\,,\\
H^P_{\lambda_p} &=& \braket{p,\lambda_p|\bar{u}\gamma_5 b|\Lambda_b, \lambda_{\Lambda_b}}\,.
\end{eqnarray}
One can show from parity arguments, or by explicit calculation, that $H^V_{-\lambda_p, -\lambda_W} = H^V_{\lambda_p, \lambda_W}$, $H^A_{-\lambda_p, -\lambda_W} = -H^A_{\lambda_p, \lambda_W}$, $H^S_{-\lambda_p} = H^S_{\lambda_p}$ and $H^P_{-\lambda_p} = -H^P_{\lambda_p}$.
The helicity amplitudes can be expressed in terms of the helicity form factors as \cite{Shivashankara:2015cta}:
\begin{eqnarray}
H^V_{\frac{1}{2},0} &=& (1+C_{V_L}+C_{V_R}) \dfrac{\sqrt{Q_-}}{\sqrt{q^2}}(m_{B_1}+m_{B_2})f_{+}(q^2)\,,\\
H^A_{\frac{1}{2},0} &=& (1+C_{V_L}-C_{V_R})\dfrac{\sqrt{Q_+}}{\sqrt{q^2}}(m_{B_1}-m_{B_2})g_+(q^2)\,,\\
H^V_{\frac{1}{2},1} &=& - (1+C_{V_L}+C_{V_R}) \sqrt{2Q_-}\,f_\perp(q^2)\,,\\
H^A_{\frac{1}{2},1} &=& - (1+C_{V_L}-C_{V_R}) \sqrt{2Q_+}\,g_\perp(q^2)\,,\\
H^V_{\frac{1}{2},t} &=& (1+C_{V_L}+C_{V_R}) \dfrac{\sqrt{Q_+}}{\sqrt{q^2}}(m_{B_1}-m_{B_2})f_0(q^2)\,,\\
H^A_{\frac{1}{2},t} &=& (1+C_{V_L}-C_{V_R}) \dfrac{\sqrt{Q_-}}{\sqrt{q^2}}(m_{B_1}+m_{B_2})g_0(q^2)\,,
\end{eqnarray}
where
\begin{equation}
Q_{\pm} = (m_{B_1} \pm m_{B_2})^2-q^2\,.
\end{equation}
The scalar and pseudo-scalar helicity amplitudes are defined as:
\begin{eqnarray}
H^{SP}_{\frac{1}{2},0} &=& H^S_{\frac{1}{2},0}-H^P_{\frac{1}{2},0}\,,\\
H^S_{\frac{1}{2},0} &=& (C_{S_L}+C_{S_R}) \dfrac{\sqrt{Q_+}}{m_b-m_{u}}(m_{B_1}-m_{B_2})f_0(q^2)\,,\\
H^P_{\frac{1}{2},0} &=& (C_{S_L}-C_{S_R}) \dfrac{\sqrt{Q_-}}{m_b+m_{u}}(m_{B_1}+m_{B_2})g_0(q^2)\,.
\end{eqnarray}
The helicity-dependent differential decay rates are required to compute the longitudinal polarization asymmetries of the final state baryon and of the $\tau$; they are defined as
\begin{eqnarray}
\frac{d\Gamma^{\lambda_p = \frac{1}{2}}}{dq^2} &=& \frac{m_l^2}{q^2}\Big[\frac{4}{3}\Big(H^2_{\frac{1}{2},1} + H^2_{\frac{1}{2},0} + 3H^2_{\frac{1}{2},t}\Big)\Big] + \frac{8}{3}\Big( H^2_{\frac{1}{2},0} + H^2_{\frac{1}{2},1}\Big) + 4 H^{SP^2}_{\frac{1}{2},0} + \frac{8m_l}{\sqrt{q^2}}H_{\frac{1}{2},t} H^{SP}_{\frac{1}{2},0}\,,\\
\frac{d\Gamma^{\lambda_p = -\frac{1}{2}}}{dq^2} &=& \frac{m_l^2}{q^2}\Big[\frac{4}{3}\Big(H^2_{-\frac{1}{2},-1} + H^2_{-\frac{1}{2},0} + 3H^2_{-\frac{1}{2},t}\Big)\Big] + \frac{8}{3}\Big( H^2_{-\frac{1}{2},0} + H^2_{-\frac{1}{2},-1}\Big) + 4 H^{SP^2}_{-\frac{1}{2},0} \nonumber\\
&&+ \frac{8m_l}{\sqrt{q^2}}H_{-\frac{1}{2},t} H^{SP}_{-\frac{1}{2},0}\,,\\
\frac{d\Gamma^{\lambda_{\tau} = \frac{1}{2}}}{dq^2} &=& \frac{m_l^2}{q^2}\Big[\frac{4}{3}\Big(H^2_{\frac{1}{2},1} + H^2_{\frac{1}{2},0} + H^2_{-\frac{1}{2},-1} + H^2_{-\frac{1}{2},0}\Big)+ 4\Big( H^2_{\frac{1}{2},t} + H^2_{-\frac{1}{2},t}\Big)\Big] + 4\Big( H^{SP^2}_{\frac{1}{2},0} + H^{SP^2}_{-\frac{1}{2},0}\Big) \nonumber\\
&&+ \frac{8m_l}{\sqrt{q^2}}\Big(H_{\frac{1}{2},t} H^{SP}_{\frac{1}{2},0} + H_{-\frac{1}{2},t} H^{SP}_{-\frac{1}{2},0}\Big)\,,\\
\frac{d\Gamma^{\lambda_{\tau} = -\frac{1}{2}}}{dq^2} &=& \frac{8}{3}\Big(H^2_{\frac{1}{2},1} + H^2_{\frac{1}{2},0} + H^2_{-\frac{1}{2},-1} + H^2_{-\frac{1}{2},0}\Big)\,.
\end{eqnarray}
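Given these helicity-dependent rates, the polarization asymmetries follow immediately. As a minimal sketch, and assuming the conventional definition $P=(d\Gamma^{\lambda=+1/2}-d\Gamma^{\lambda=-1/2})/(d\Gamma^{\lambda=+1/2}+d\Gamma^{\lambda=-1/2})$ for both $P^L_p$ and $P_\tau$, one may write:
\begin{verbatim}
# Polarization asymmetry from helicity-dependent rates (sketch; the
# inputs below are purely illustrative numbers, not model predictions).
def polarization(dGamma_plus, dGamma_minus):
    return (dGamma_plus - dGamma_minus) / (dGamma_plus + dGamma_minus)

print(polarization(0.8, 2.4))   # -0.5 for these illustrative inputs
\end{verbatim}
The same function applies to the baryon polarization with the $\lambda_p=\pm\frac{1}{2}$ rates and to the tau polarization with the $\lambda_\tau=\pm\frac{1}{2}$ rates.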
\section{Introduction}
Control over communication networks has been a hot research topic in the past decade \cite{Zaidi2014}. This is mainly motivated by the rapid development of wireless communication technology that enables the connection of geographically distributed systems and devices. However, the insertion of wireless communication networks also poses challenges in analysis and design of control systems due to constraints and uncertainties in communications. One must take the communication networks into consideration and analyze how they affect the stability and performance of the closed-loop control systems.
Until now, there have been plentiful results that reveal requirements on communication channels to ensure stabilizability. For noiseless digital channels, the celebrated data rate theorem is given in \cite{Nair2004SIAM}. For noisy channels, the problem is complicated by the fact that different channel capacities are required under different stability definitions. For almost sure stability, \cite{Matveev2007} shows that the Shannon capacity in relation to the unstable dynamics of a system constitutes the critical condition for its stabilizability. For moment stability, \cite{Sahai2006} shows that the Shannon capacity is too optimistic while the zero-error capacity is too pessimistic, and that the anytime capacity introduced therein characterizes the stabilizability conditions. Essentially, to keep the $\eta$-moment of the state of an unstable scalar plant bounded, it is necessary and sufficient for the feedback channel's anytime capacity corresponding to anytime-reliability $\alpha=\eta \mathrm{log}_2|\lambda|$ to be greater than $\mathrm{log}_2 |\lambda|$, where $\lambda$ is the unstable eigenvalue of the plant. The anytime capacity has a more stringent reliability requirement than the Shannon capacity. However, it is worth noting that there exists no systematic method to calculate the anytime capacities of channels.
In the control community, the anytime capacity is usually studied under the mean square stability requirement, in which case it is commonly referred to as the mean square capacity. For example, \cite{Elia2005} characterizes the mean square capacity of a fading channel. \cite{Braslavsky2007} studies the mean square stabilization problem over a power constrained AWGN channel and characterizes the critical capacity to ensure mean square stabilizability. It is further shown in \cite{Freudenberg2010} that extending from linear encoders/decoders to more general causal encoders/decoders cannot provide the additional benefit of increasing this channel capacity.
The results stated above deal with fading channels or AWGN channels separately, while in wireless communications it is more realistic to consider both effects together. In this paper, we are interested in a power constrained fading channel that is corrupted by both fading and AWGN. We aim to find the critical condition on the channel to ensure mean square stabilizability of the system. Note that \cite{Xiao2011} has derived the necessary and sufficient condition for such a channel to ensure mean square stabilizability under linear encoders/decoders. It has remained unknown whether a higher channel capacity can be achieved with more general causal strategies; this paper provides a positive answer to that question.
This paper is organized as follows. Problem formulation and some preliminaries are given in Section 2. Section 3 provides the results for scalar systems. Section 4 discusses the extension to vector systems. Section 5 provides numerical illustrations and this paper ends with some concluding remarks in Section 6.
\section{Problem Formulation and Preliminaries}
This paper studies the following single-input discrete-time linear system
\begin{equation}
\label{LTIDynamics}
x_{t+1}=A x_{t}+B u_{t}
\end{equation}
where $x\in \mathbb{R}^n$ is the system state and $u \in \mathbb{R}$ is the control input. Without loss of generality, we assume that all the eigenvalues of $A$ are unstable, i.e., $|\lambda_i(A)|\ge 1 $ for all $i=1,2,\ldots, n$ \cite{Freudenberg2010}. The initial value $x_0$ is randomly generated from a Gaussian distribution with zero mean and bounded covariance ${ \Sigma_{x_0}}$. The system state $x_t$ is observed by a sensor and then encoded and transmitted to the controller through a power constrained fading channel. The communication channel is modeled as
\begin{equation}
\label{channel1}
r_t=g_ts_t+n_t
\end{equation}
in which $s_t$ denotes the channel input; $r_t$ represents the channel output; $\{g_t\}$ is an i.i.d. stochastic process modeling the fading effects and $\{n_t\}$ is the additive white Gaussian noise with zero-mean and known variance $\sigma_n^2$. The channel input $s_t$ must satisfy an average power constraint, i.e., $\mathbb{E} \{s_t^2\}\le P$. We also assume that $x_0, g_0, n_0, g_1, n_1, \ldots$ are independent. In the paper, it is assumed that after each transmission, the instantaneous value of the fading factor $g_t$ is known to the decoder, which is a reasonable assumption for slowly varying channels with channel estimation \cite{Goldsmith1997}.
The instantaneous Shannon channel capacity is $c_t=\frac{1}{2}\mathrm{ln}\big( 1+\frac{g_t^2P}{\sigma_n^2} \big)$ with $c_t$ being measured in nats/transmission. The feedback configuration among the plant, the sensor and the controller, and the channel encoder/decoder structure are depicted in Fig. 1.
\begin{figure}[htpb]
\centering
\includegraphics[width=0.4\textwidth]{figs/ncsStructure.pdf}\\
\caption{Networked control structure over a power constrained fading channel}
\end{figure}
In this paper, we try to find requirements on the power constrained fading channel such that there exists a pair of causal encoder/decoder $\{f_t\}, \{h_t\}$ that can mean square stabilize the LTI dynamics \eqref{LTIDynamics}, i.e., to render $\mathrm{lim}_{t \rightarrow \infty} \mathbb{E} \{x_tx_t'\}=0$.
To solve this problem, the following preliminaries are needed, which are borrowed from \cite{Freudenberg2010}. Throughout the paper, a sequence $\{\chi_i\}_{i=0}^t$ is denoted by $\chi^t$; random variables are denoted by uppercase letters, and their realizations by lower case letters. All random variables are assumed to exist on a common probability space with measure $\mathcal{P}$. The probability density of a random variable $X$ in Euclidean space with respect to Lebesgue measure on the space is denoted by $p_X$, and the probability density of $X$ conditioned on the $\sigma$-field generated by the event $Y=y$ by $p_{X|y}$. Let the expectation operator be denoted by $\mathbb{E} $, and the expectation conditioned on the event $Y=y$ by $\mathbb{E} _{y}$. We use $\mathrm{log}$ to denote the logarithm to the base two, and $\mathrm{ln}$ to denote the natural logarithm.
The differential entropy of $X$ is defined by $H(X)=-\mathbb{E} \{\mathrm{ln} p_X \}$, provided that the defining integral exists. Denote the conditional entropy of $X$ given the event $Y=y$ by $H_y(X)=H(X|Y=y)=-\mathbb{E} _y\{ \mathrm{ln} p_{X|y} \}$, and the random variable associated with $H_y(X)$ by $H_Y(X)$. The average conditional entropy of $X$ given the event $Y=y$ and averaged over $Y$ is defined by $H(X|Y)=\mathbb{E} \{H_Y(X) \}$, and the average conditional entropy of $X$ given the events $Y=y$ and $Z=z$ and averaged only over $Y$ by $H_z(X|Y)=\mathbb{E} _{z}\{H_{Y,Z}(X)\}$. The conditional mutual information between two random variables $X$ and $Y$ given the event $Z=z$ is defined by $I_z(X;Y)=H_z(X)-H_z(X|Y)$. Given a random variable $X\in \mathbb{R}^{n}$ with entropy $H(X)$, the entropy power of $X$ is defined by $N(X)=\frac{1}{2\pi e} e^{\frac{2}{n}H(X)}$. Denote the conditional entropy power of $X$ given the event $Y=y$ by $N_y(X)=\frac{1}{2\pi e}e^{\frac{2}{n}H_y(X)}$, and the random variable associated with $N_y(X)$ by $N_Y(X)$. The average conditional entropy power of $X$ given the event $Y=y$ and averaged over $Y$ is defined by $N(X|Y)=\mathbb{E} \{N_Y(X)\}$, and the average conditional entropy power of $X$ given the events $Y=y$ and $Z=z$ and averaged only over $Y$ by $N_z(X|Y)=\mathbb{E} _z \{N_{Y,Z}(X)\}$. The following lemma shows that the entropy power of a random variable provides an estimation of the lower bound for its variance.
\begin{lemma}[\cite{Freudenberg2010}]
\label{lemma:varianceIsBoundedByEntropyPower}
Let $X$ be an $n$-dimensional random variable. Then $N_y(X)\le \frac{1}{n} \mathbb{E} _y\{\|X\|^2\}$.
\end{lemma}
\begin{lemma}
\label{lemma:mutualInformationEqual}
Let $X$ be an $n$-dimensional random variable, $f(X)$ be a function of $X$, and $Y=f(X)+N$ with $N$ being a random variable that is independent of $X$. Then $I(X;Y)=I(f(X);Y)$.
\end{lemma}
\begin{proof}
Since $H(Y|X)=H(Y|X,f(X))\le H(Y|f(X))$, we have $H(Y)=I(X;Y)+H(Y|X)\le I(X; Y)+H(Y|f(X))$. Thus $ H(Y)-H(Y|f(X)) = I(Y;f(X))\le I(X;Y)$.
Besides, noting that $X\rightarrow f(X)\rightarrow Y$ forms a Markov chain, the data processing inequality \cite{Cover2006} implies that $I(X;Y) \le I(f(X); Y)$. Combining the two facts, we have $I(X;Y)=I(f(X);Y)$.
\end{proof}
\begin{remark}
Lemma \ref{lemma:mutualInformationEqual} indicates that for the AWGN channel, the amount of information that the channel output contains about the source is equal to the amount of information that the channel output contains about the channel input.
\end{remark}
\section{Scalar Systems}
To better convey our ideas, we start with scalar systems. Consider the following scalar system
\begin{equation}
\label{scalarDynamics}
x_{t+1}=\lambda x_t+u_t
\end{equation}
where $|\lambda|\ge 1$ and $\mathbb{E} \{x_0^2\}=\sigma_{x_0}^2$.
With the communication channel given in \eqref{channel1}, the stabilizability result is stated in the following theorem.
\begin{theorem}
\label{theorem:theorem1}
There exists a causal encoder/decoder pair $\{f_t\}, \{h_t\}$, such that the system \eqref{scalarDynamics} can be stabilized over the communication channel \eqref{channel1} in mean square sense if and only if
\begin{equation}
\label{iffConditionForScalarSystem}
\mathrm{log} |\lambda| < - \frac{1}{2} \mathrm{log} \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \}
\end{equation}
\end{theorem}
Theorem \ref{theorem:theorem1} indicates that the mean square capacity of the power constrained fading channel is $C_{\mathrm{MSC}}=-\frac{1}{2} \mathrm{log} {\mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \}}$. In the following, we prove the necessity and sufficiency of Theorem \ref{theorem:theorem1} in turn. The proof essentially follows the same steps as in \cite{Minero2009, Freudenberg2010, Kumar2014}, with some differences due to the channel structure.
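As a quick numerical illustration, $C_{\mathrm{MSC}}$ can be estimated by Monte Carlo once a fading distribution is fixed. The Python sketch below uses Rayleigh fading with unit scale purely as an example, and also evaluates the average Shannon capacity for comparison.
\begin{verbatim}
# Monte Carlo estimate of C_MSC (sketch; Rayleigh fading is only an
# illustrative choice for the distribution of g_t).
import numpy as np

rng = np.random.default_rng(0)
P, sigma_n2 = 1.0, 1.0
g = rng.rayleigh(scale=1.0, size=1_000_000)

C_msc = -0.5 * np.log2(np.mean(sigma_n2 / (sigma_n2 + g**2 * P)))
C_shannon = np.mean(0.5 * np.log2(1.0 + g**2 * P / sigma_n2))
print(C_msc, C_shannon)  # C_msc <= C_shannon, as shown in the remark below
\end{verbatim}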
\subsection{Proof of Necessity}
The proof of necessity follows from the intuition below. In view of Lemma \ref{lemma:varianceIsBoundedByEntropyPower}, the entropy power provides a lower bound for the mean square value of the system state. We can thus use the average entropy power as a measure of the uncertain region of the system state and analyze its update. At time $t$, the controller maintains knowledge of the uncertain region of $x_t$. When it acts on the plant, the average uncertain region of $x_{t+1}$ predicted by the controller is expanded to $\lambda^2$ times that of $x_t$. This is the iteration we term the dynamics update, which describes the update of the uncertain region of $x$ maintained by the controller from time $t$ to $t+1$. After receiving information about $x_{t+1}$ from the sensor through the communication channel, the controller can reduce the prediction error of the uncertain region of $x_{t+1}$ by a factor of $\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}$. This is the iteration we term the communication update, which describes the update of the uncertain region of $x$ maintained by the controller at time $t+1$ after it has received the information about $x_{t+1}$ from the sensor through the communication channel. Thus, to ensure mean square stability, the average expanding factor $\lambda^2 \mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \}$ of the system state's uncertain region should be smaller than one, which gives the necessary requirement in Theorem \ref{theorem:theorem1}. The formal proof is stated as follows. Here we use the uppercase letters $X, S, R, G$ to denote the random variables of the system state, the channel input, the channel output and the channel fading coefficient, and the lowercase letters $x, s, r, g$ to denote their realizations.
\subsubsection{Communication Update}
The average entropy power of $X_t$ conditioned on $(R^t,G^t)$ is
$\scriptstyle N(X_t|R^t,G^t)= \mathbb{E} \{ N_{R^t,G^t}(X_t) \} \overset{(a)}{=} \mathbb{E} \{ \mathbb{E} \{N_{R^t,G^t}(X_t) | R^{t-1}, G^t \} \}
\overset{(b)}{=}\frac{1}{2\pi e} \mathbb{E} \{ \mathbb{E} \{ e^{ {2} H_{R^t,G^t}(X_t)} | R^{t-1}, G^t \}\} $
where $(a)$ follows from the law of total expectation and $(b)$ follows from the definition of entropy power.
Since
$
\begin{aligned}
& \mathbb{E} \{ e^{2 H_{R^t,G^t} (X_t) } | R^{t-1}= r^{t-1}, G^t= g^t \} \\
& \overset{(c)}{\ge} e^{ 2 \mathbb{E} \{ H_{R^t,G^t} (X_t) | R^{t-1}= r^{t-1}, G^t= g^t \} }\\
&\overset{(d)}{=} e^{2 H(X_t| R_t, R^{t-1}= r^{t-1}, G^t= g^t) }\\
&\overset{(e)}{=} e^{2 \left( H(X_t| R^{t-1}= r^{t-1}, G^t= g^t) - I(X_t, R_t| R^{t-1}= r^{t-1}, G^t= g^t) \right)}\\
&\overset{(f)}{=} e^{2 \left( H(X_t| R^{t-1}= r^{t-1}, G^t= g^t) - I(S_t, R_t| R^{t-1}= r^{t-1}, G^t= g^t) \right) }\\
& \overset{(g)}{\ge} e^{2 \left( H(X_t | R^{t-1}= r^{t-1}, G^t= g^t) - c_t \right) }\\
& \overset{(h)}{=} e^{- 2 c_t} e^{2 H(X_t|R^{t-1}=r^{t-1},G^{t-1}=g^{t-1})}
\end{aligned}
$
where $(c)$ follows from Jensen's inequality; $(d)$ follows from the definition of conditional entropy; $(e)$ follows from the definition of conditional mutual information; $(f)$ follows from Lemma \ref{lemma:mutualInformationEqual}; $(g)$ follows from the definition of channel capacity, i.e., $I(S_t, R_t|R^{t-1}=r^{t-1}, G^t=g^t )\le c_t$; and $(h)$ follows from the fact that $G_t$ is independent of $X_t$, we have
$\scriptstyle
N(X_t|R^t,G^t)
\ge\frac{1}{2\pi e} \mathbb{E} \{ e^{-2 C_t} e^{2 H_{R^{t-1},G^{t-1}}(X_t)} \}
=\mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \} N(X_t|R^{t-1},G^{t-1})
$.
\subsubsection{Dynamics Update}
Since
$e^{2H(X_{t+1}|R^t=r^t,G^t=g^t)} = e^{2 H(\lambda X_t+U_t|R^t=r^t,G^t=g^t)} \overset{(i)}{=} e^{2 H (\lambda X_t|R^t=r^t,G^t=g^t)}
\overset{(j)}{=} e^{2H(X_t|R^t=r^t,G^t=g^t)+2\ln|\lambda|}
= \lambda^2 e^{2 H(X_t|R^t=r^t,G^t=g^t)}
$
where $(i)$ follows from the fact that $u_t=h_t(r^t, g^t)$ and $(j)$ follows from
Theorem 8.6.4 in \cite{Cover2006}, we have
$ \scriptstyle N(X_{t+1}|R^t,G^t) \ge \mathbb{E} \left\{\frac{1}{2\pi e} \lambda^2 e^{2 H_{R^t,G^t}(X_t)} \right\} = \lambda^2 N(X_t|R^t,G^t) $.
\subsubsection{Proof of Necessity}
Combining the results of communication update and dynamics update, we have
$ \scriptstyle N(X_{t+1}|R^t,G^t) \ge \lambda^2 \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \} N(X_t|R^{t-1}, G^{t-1})$.
In view of Lemma \ref{lemma:varianceIsBoundedByEntropyPower}, $N(X_{t+1}|R^t,G^t)$ should converge to zero asymptotically. Thus $\lambda^2 \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \} <1$, which is \eqref{iffConditionForScalarSystem} and this proves the necessity.
\subsection{Proof of Sufficiency}
To prove the sufficiency, we need to construct a pair of encoder and decoder. The encoder and decoder are designed following an ``estimation then control'' strategy: the controller consecutively estimates the initial state $x_0$ using the information received over the channel and then applies an equivalent control to the plant. The reason for adopting such a strategy is as follows. The response of the linear system is $x_t=\lambda^t (x_0-\hat{x}_t)$ with $\hat{x}_t=-\sum_{i=0}^{t-1} \lambda^{-1-i} u_i$, which means $\mathbb{E} \{x_t^2\}=\lambda^{2t} \mathbb{E} \{(x_0-\hat{x}_t)^2\}$. We can treat $\hat{x}_t$ as the controller's estimate of the initial state $x_0$. If the estimation error $\mathbb{E} \{(x_0-\hat{x}_t)^2\}$ converges to zero at a speed greater than $\lambda^2$, i.e., if there exist $\eta>\lambda^2$ and $\alpha>0$ such that $\mathbb{E} \{(x_0-\hat{x}_t)^2\}\le \frac{\alpha}{\eta^t}$, the mean square value of the system state would be bounded by
$
\mathbb{E} \{x_t^2\}\le \alpha \left(\frac{\lambda^2}{\eta}\right)^{t}
$.
Thus $\underset{t\rightarrow \infty}{\mathrm{lim}}\mathbb{E} \{x_t^2\}=0$, i.e., system \eqref{scalarDynamics} is mean square stable. This intuition can be formalized using the following lemma.
\begin{lemma}[\cite{Kumar2014}]
\label{lemma:estimationThenControl}
If there exists an estimation scheme $\hat{x}_t$ for the initial system state $x_0$, such that the estimation error $e_t=\hat{x}_t-x_0$ satisfies the following property,
\begin{eqnarray}
\label{eq:estimationThenControlRequirement}
\mathbb{E} \{ e_t \}=0 \label{eq:estimationThenControlRequirement1} \\
\lim_{t\rightarrow \infty} A^t \mathbb{E} \{ e_te_t' \} (A')^t=0 \label{eq:estimationThenControlRequirement2}
\end{eqnarray}
then the system \eqref{LTIDynamics} can be mean square stabilized by the controller
$ u_t=K\left( A^t \hat{x}_t+ \sum_{i=1}^t A^{t-i} Bu_{i-1} \right) $
with $K$ being selected such that $A+BK$ is stable.
\end{lemma}
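In the scalar case $A=\lambda$, $B=1$, the controller of Lemma \ref{lemma:estimationThenControl} takes a particularly simple form; the following Python sketch (with hypothetical variable names) makes the computation explicit.
\begin{verbatim}
# Controller of the lemma specialized to A = lam, B = 1 (sketch).
# xhat_t is the current estimate of x0; u_hist = [u_0, ..., u_{t-1}].
def control_input(K, lam, t, xhat_t, u_hist):
    correction = sum(lam ** (t - i) * u_hist[i - 1] for i in range(1, t + 1))
    return K * (lam ** t * xhat_t + correction)
\end{verbatim}
The quantity in parentheses is the controller's reconstruction of the current state $x_t$ from its estimate of $x_0$ and the past inputs, and $K$ is chosen such that $|\lambda+K|<1$.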
When $g_t$ is known at the receiver, channel \eqref{channel1} resembles an AWGN channel. Shannon shows that when estimating a Gaussian random variable through an AWGN channel, the minimal mean square estimation error can be attained by using linear encoders and decoders \cite{Gattami2014}; for a source scaled to the channel input power $P$, the minimal error variance is $\frac{P\sigma_n^2}{\sigma_n^2+g_t^2P}$. Thus through one channel use, we can at best decrease the estimation error by a factor of $\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}$. Since $\{g_t\}$ is i.i.d., we can feed the estimation error back from the decoder to the encoder and iteratively conduct the minimal mean square estimation process. The estimation error then decreases on average at a rate of $\mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}$ per channel use. If $\lambda^2\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}<1$, in view of Lemma \ref{lemma:estimationThenControl}, system \eqref{scalarDynamics} can be mean square stabilized. The estimation strategy follows the principle of the well-known scheme of Schalkwijk \cite{Schalkwijk1966}, which utilizes the noiseless feedback link to consecutively refine the estimation error. The detailed encoder/decoder design and stability analysis are given as follows.
\subsubsection{Encoder/Decoder Design}
Suppose the estimation of $x_0$ formed by the decoder is $\hat{x}_t$ at time $t$ and the estimation error is $e_t=\hat{x}_t-x_0$. The encoder is designed as
\begin{equation}
\label{encoder}
\begin{aligned}
s_0 &=\sqrt{\frac{P}{\sigma_{x_0}^2}} x_0\\
s_t&=\sqrt{\frac{P}{\sigma^2_{e_{t-1}}}}\left( \hat{x}_{t-1} - x_0 \right), \;\; t\ge 1\\
\end{aligned}
\end{equation}
The decoder is designed as
\begin{equation}
\label{decoder}
\begin{aligned}
\hat{x}_0 & =\sqrt{\frac{\sigma_{x_0}^2}{P}}r_0\\
\hat{x}_t&=\hat{x}_{t-1}-\frac{\mathbb{E} \{r_t e_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}}r_t, \;\; t\ge 1
\end{aligned}
\end{equation}
with $\sigma^2_{e_{t-1}}$ representing the variance of $e_{t-1}$.
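The scheme can be checked numerically. The following Python sketch simulates the encoder \eqref{encoder} and decoder \eqref{decoder} for one realization of Bernoulli fading (erasure probability $0.2$ and unit powers, chosen only for illustration), tracking the conditional error variance given the fading history, and verifies that $\lambda^{2t}\,\mathbb{E}\{e_t^2\}$ decays.
\begin{verbatim}
# Simulation of the consecutive refinement scheme (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
P, sigma_n2, sigma_x2 = 1.0, 1.0, 1.0
lam, T = 1.1, 200                        # lam^2 * E{...} = 0.726 < 1 here

x0 = rng.normal(0.0, np.sqrt(sigma_x2))
g = (rng.random(T) > 0.2).astype(float)  # Bernoulli fading, erasure prob 0.2
n = rng.normal(0.0, np.sqrt(sigma_n2), T)

# t = 0: transmit the scaled initial state itself
r = g[0] * np.sqrt(P / sigma_x2) * x0 + n[0]
xhat = np.sqrt(sigma_x2 / P) * r
var_e = (g[0] - 1.0) ** 2 * sigma_x2 + sigma_x2 * sigma_n2 / P

# t >= 1: consecutively refine the estimate of x0
for t in range(1, T):
    s = np.sqrt(P / var_e) * (xhat - x0)         # encoder
    r = g[t] * s + n[t]
    gain = g[t] * np.sqrt(P * var_e) / (sigma_n2 + g[t] ** 2 * P)
    xhat = xhat - gain * r                       # decoder
    var_e *= sigma_n2 / (sigma_n2 + g[t] ** 2 * P)

print(abs(xhat - x0), lam ** (2 * T) * var_e)    # both should be tiny
\end{verbatim}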
\subsubsection{Proof of Sufficiency}
Since $r_0=g_0s_0+n_0$, in view of \eqref{encoder} and \eqref{decoder}, we have $ e_0 =(g_0-1)x_0+ \sqrt{\frac{\sigma_{x_0}^2}{P}}n_0
$. Because $g_0$, $x_0$, $n_0$ are independent and $x_0$, $n_0$ follow zero mean Gaussian distributions, we know that the conditional probability distribution of $e_0$ given the event $g_0$ is Gaussian with $ \mathbb{E} \{e_0|g_0\} =0$ and $\mathbb{E} \{e_0^2|g_0\} = (g_0-1)^2 \sigma_{x_0}^2+ \frac{\sigma_{x_0}^2\sigma_n^2}{P}$. Thus $\mathbb{E} \{e_0\}=\mathbb{E} \{\mathbb{E} \{e_0|g_0\} \} =0$ and $ \mathbb{E} \{e^2_0\} =\mathbb{E} \{ \mathbb{E} \{e_0^2|g_0\} \} = \mathbb{E} \{(g_0-1)^2\} \sigma_{x_0}^2+ \frac{\sigma_{x_0}^2\sigma_n^2}{P}$.
For $t\ge 1$, in view of \eqref{encoder} and \eqref{decoder}, we have
\begin{multline*}
e_t=e_{t-1}-\frac{\mathbb{E} \{r_te_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}}r_t\\
=\Big(1-g_t \sqrt{\frac{P}{\sigma_{e_{t-1}}^2}} \frac{\mathbb{E} \{r_te_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}} \Big) e_{t-1} -\frac{\mathbb{E} \{ r_t e_{t-1}|g_t \}}{\mathbb{E} \{r_t^2|g_t\}} n_t
\end{multline*}
Thus the conditional probability distribution for $e_t$ given the event $g_t$ is Gaussian.
We also have
\begin{equation*}
\begin{aligned}
& \mathbb{E} \{e_t\} = \mathbb{E} \{ \mathbb{E} \{ e_t|g_t \} \}\\
&= \mathbb{E} \Big\{ \Big(1-g_t \sqrt{\frac{P}{\sigma_{e_{t-1}}^2}} \frac{\mathbb{E} \{r_te_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}} \Big) \mathbb{E} \{e_{t-1}|g_t\} \Big\} \\
&\overset{(a)}{=} \mathbb{E} \Big\{ \Big(1-g_t \sqrt{\frac{P}{\sigma_{e_{t-1}}^2}} \frac{\mathbb{E} \{r_te_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}} \Big) \Big\} \mathbb{E} \{e_{t-1}\}
\end{aligned}
\end{equation*}
where $(a)$ follows from the fact that $g_t$ is independent with $e_{t-1}$. Since $\mathbb{E} \{e_0\}=0$, we further know that $\mathbb{E} \{e_t\}\equiv 0$. The sufficient condition \eqref{eq:estimationThenControlRequirement1} is satisfied.
Since $e_{t-1}$, $g_t$ and $n_t$ are independent, we have $\mathbb{E} \{e_{t-1}^2|g_t\}=\mathbb{E} \{e_{t-1}^2\}$ and $\mathbb{E} \{n_t^2|g_t\}=\mathbb{E} \{n_t^2\}$, which implies
$ \mathbb{E} \{r_t^2|g_t\} = \mathbb{E} \big\{ \big(g_t \sqrt{\frac{P}{\sigma^2_{e_{t-1}}}} e_{t-1} +n_t\big)^2|g_t \big\}
=\sigma_n^2+g_t^2P
$
and
$
\mathbb{E} \{r_te_{t-1}|g_t\} =\mathbb{E} \big\{e_{t-1}\big(g_t \sqrt{\frac{P}{\sigma^2_{e_{t-1}}}} e_{t-1} +n_t \big)|g_t \big\}
= g_t \sqrt{P \sigma_{e_{t-1}}^2}
$.
Since $ \mathbb{E} \{e_t^2|g_t\}=\mathbb{E} \{e_{t-1}^2|g_t\} -\frac{\mathbb{E} \{r_te_{t-1}|g_t\}^2}{\mathbb{E} \{r_t^2|g_t\}} $, we also have
$
\mathbb{E} \{e_t^2|g_t\} =\mathbb{E} \{e_{t-1}^2\}-\frac{g_t^2P \mathbb{E} \{e_{t-1}^2\}}{\sigma_n^2 + g_t^2P}
=\mathbb{E} \{e_{t-1}^2\} \frac{\sigma_n^2}{\sigma_n^2 + g_t^2P}
$, which implies
$ \mathbb{E} \{e_t^2\} = \mathbb{E} \{\mathbb{E} \{e_t^2|g_t\}\}
=\mathbb{E} \{e_{t-1}^2\} \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2 + g_t^2P}\}
$.
Thus if $\lambda^2 \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2 + g_t^2P}\}<1$, the designed encoder/decoder pair can guarantee \eqref{eq:estimationThenControlRequirement2}. In view of Lemma \ref{lemma:estimationThenControl}, the sufficiency of Theorem \ref{theorem:theorem1} is proved.
\begin{remark}
We can show that $C_{\mathrm{MSC}}$ is smaller than the Shannon capacity, which is $C_{\mathrm{Shannon}}=\mathbb{E} \{c_t\}$ \cite{Goldsmith1997}. From Jensen's inequality, we know that $\mathbb{E} \{2^{-2c_t}\}\ge 2^{-2\mathbb{E} \{c_t\}}$ and the equality holds if and only if $c_t$ is a constant. Thus it follows that $ C_{\mathrm{MSC}} = \frac{1}{2}\mathrm{log}\frac{1}{\mathbb{E} \{2^{-2c_t}\}}\le \frac{1}{2}\mathrm{log} \frac{1}{2^{-2\mathbb{E} \{c_t\}}}=\mathbb{E} \{c_t\}=C_{\mathrm{Shannon}} $ and the equality holds if and only if $c_t$ is a constant.
\end{remark}
\begin{remark}
By letting $g_t$ in~\eqref{iffConditionForScalarSystem} follow the Bernoulli distribution with failure probability $\epsilon$, and taking the limits $\sigma_n^2 \rightarrow 0$ and $P \rightarrow \infty$, we recover the necessary and sufficient condition $\epsilon < \frac{1}{\lambda^2}$ for the real erasure channel, as in \cite{Elia2005}; indeed, in this case $\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\} = \epsilon + (1-\epsilon)\frac{\sigma_n^2}{\sigma_n^2+P} \rightarrow \epsilon$. If we let $g_t$ be a constant with $g_t=1$, the studied power constrained fading channel degenerates to the AWGN channel and \eqref{iffConditionForScalarSystem} degenerates to $\mathrm{log} |\lambda| < \frac{1}{2} \mathrm{log} (1+\frac{P}{\sigma_n^2})$, which recovers the result in \cite{Sahai2006, Braslavsky2007}. If $\sigma_n^2=0$ and the event $g_t=0$ has zero probability measure, the right hand side of \eqref{iffConditionForScalarSystem} becomes infinite, so \eqref{iffConditionForScalarSystem} holds for any $\lambda$. This is reasonable: since $g_t$ is known at the decoder side, in the absence of additive noise the channel resembles a perfect communication link, and since \eqref{scalarDynamics} is controllable, we can always find a pair of encoder and decoder to stabilize the system.
\end{remark}
\section{Vector Systems}
For vector systems, the situation is complicated by the fact that we have $n$ sources $x_{i,0}$, where $x_{i,0}$ denotes the $i$-th element of $x_0$, but only one channel. We first analyze the achievable minimal mean square estimation error for estimating $x_0$ over the channel \eqref{channel1} during one channel use. Consider the following Markov chain
\begin{equation*}
X_0\rightarrow S_t= f_t(X_0)\rightarrow R_t\rightarrow \hat{X}_t=h_t(R_t)
\end{equation*}
where $X_0\in \mathbb{R}^n$ denotes the Gaussian initial state with covariance matrix $\Sigma_{x_0}$; $f_t(\cdot)$ is a scalar-valued function denoting the channel encoder for \eqref{channel1}; $R_t$ denotes the channel output and $\hat{X}_t$ is the estimation of $X_0$ formed by the decoder with decoding rule $h_t(\cdot)$.
Denote the estimation error as $e_t=X_0-\hat{X}_t$, in view of Lemma \ref{lemma:varianceIsBoundedByEntropyPower}, we have
$
\frac{1}{n} \mathrm{tr} \mathbb{E} \{ e_te_t' \} \ge \frac{1}{2\pi e} e^{\frac{2}{n} H(e_t|R_t)}
$.
Since
\begin{equation*}
\begin{aligned}
H(e_t|R_t) &= H(X_0-h_t(R_t)|R_t)=H(X_0|R_t)\\
&=H(X_0)-I(X_0;R_t)\\
&\overset{(a)}=H(X_0)-I(f_t(X_0);R_t)\\
&\ge \frac{1}{2} \mathrm{ln} ((2 \pi e)^n \mathrm{det}(\Sigma_{x_0}))-\frac{1}{2} \mathrm{ln}(1+\frac{g_t^2P}{\sigma_n^2})\\
\end{aligned}
\end{equation*}
where $(a)$ follows from Lemma \ref{lemma:mutualInformationEqual}, thus we have
\begin{equation*}
\mathrm{tr}\, \mathbb{E} \{ e_te_t' \} \ge n \; \mathrm{det} (\Sigma_{x_0})^{\frac{1}{n}} \big( \frac{\sigma_n^2}{g_t^2P+\sigma_n^2} \big)^{\frac{1}{n}}
\end{equation*}
From the above inequality, we know that the minimal mean square error is given in terms of $\frac{\sigma_n^2}{g_t^2P+\sigma_n^2} $. However, this only bounds the sum of the estimation errors $e_{i,t}$, with $e_{i,t}$ being the $i$-th element of $e_t$; it says nothing about the convergence speed of each individual $e_{i,t}$. Lemma \ref{lemma:estimationThenControl} implies that we should design the encoder/decoder to render $\mathrm{lim}_{t \rightarrow \infty} \lambda_i^{2t} \mathbb{E} \{e_{i,t}^2\}=0$ for all $i$, which places separate requirements on the convergence speed of each $e_{i,t}$. Thus we need to optimally allocate channel resources to the unstable state variables.
The previous analysis also implies that we should treat the unstable modes of $A$ separately. Here we focus on the real Jordan canonical form of system \eqref{LTIDynamics}. Let $\lambda_1, \ldots, \lambda_d$ be the distinct unstable eigenvalues (if $\lambda_i$ is complex, we exclude from this list the complex conjugates $\lambda_i^*$) of $A$ in \eqref{LTIDynamics}, and let $m_i$ be the algebraic multiplicity of each $\lambda_i$. The real Jordan canonical form $J$ of $A$ then has the block diagonal structure $J=\mathrm{diag}(J_1,\ldots,J_d)\in \mathbb{R}^{n\times n}$, where the block $J_i\in \mathbb{R}^{\mu_i\times \mu_i}$ and $\mathrm{det} J_i =\lambda_i^{\mu_i}$, with
\begin{equation*}
\mu_i =\left\{ \begin{matrix}
m_i & \mathrm{if} \;\; \lambda_i \in \mathbb{R}\\
2m_i & \mathrm{otherwise}
\end{matrix} \right.
\end{equation*}
It is clear that we can equivalently study the following dynamical system instead of \eqref{LTIDynamics}
\begin{equation}
\label{realJordanCanonicalForm}
x_{k+1}=Jx_k+TBu_k
\end{equation}
for some similarity matrix $T$. Let $\mathcal{U}=\{1,\ldots, d\}$ denote the index set of unstable eigenvalues.
\begin{theorem}
\label{theorem:VectorResult}
There exists a causal encoder/decoder pair $\{f_t\}, \{h_t\}$, such that the LTI dynamics \eqref{LTIDynamics} can be stabilized over the communication channel \eqref{channel1} in mean square sense if
\begin{equation}
\label{SufficientConditionForVectorSystem}
\sum _{i=1}^d \mu_i\mathrm{log} |\lambda_i| < - \frac{1}{2} \mathrm{log} { \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \}}
\end{equation}
and only if $(\mathrm{log}|\lambda_1|, \ldots, \mathrm{log}|\lambda_d|) \in \mathbb{R}^{d}$ satisfy that for all $v_i \in \{0, \ldots, m_i\}$ and $i\in \mathcal{U}$
\begin{equation}
\label{necessityForVectorSystem}
\sum_{i \in \mathcal{U}} a_i v_i \mathrm{log}|\lambda_i|< - \frac{v}{2} \mathrm{log} {\mathbb{E} \big\{ \big(\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\big)^{\frac{1}{v}}\big\}}
\end{equation}
where $v=\sum_{i\in \mathcal{U}} a_iv_i$, and $a_i=1$ if $\lambda_i \in \mathbb{R}$, and $a_i=2$ otherwise.
\end{theorem}
\begin{proof}
For the proof of necessity, notice that each block $J_i$ has an invariant real subspace $\mathcal{A}_{v_i}$ of dimension $a_i v_i$, for any $v_i \in \{ 0 , \ldots,m_i\}$. Consider the subspace $\mathcal{A}$ formed by taking the product of the invariant subspaces $\mathcal{A}_{v_i}$ of the real Jordan blocks. The total dimension of $\mathcal{A}$ is $v=\sum_{i\in \mathcal{U}} a_iv_i$. Denote by $x^{\mathcal{V}}$ the components of $x$ belonging to $\mathcal{A}$. Then $x^{\mathcal{V}}$ evolves as
\begin{equation}
\label{stackedDynamics}
x_{k+1}^{\mathcal{V}}=J^{\mathcal{V}}x_{k}^{\mathcal{V}} +QT u_k
\end{equation}
where $Q$ is a transformation matrix and $\mathrm{det} J^{\mathcal{V}}=\Pi_{i\in \mathcal{U}} \lambda_i^{ a_i v_i}$.
Since $x_k$ is mean square stable by assumption, it is necessary that the subdynamics \eqref{stackedDynamics} be mean square stable. Similar to the necessity proof of Theorem \ref{theorem:theorem1}, we can then derive the necessary condition \eqref{necessityForVectorSystem}. This completes the proof of necessity.
Here we prove the sufficiency using the idea of Time Division Multiple Access (TDMA).
Based on the previous encoder/decoder design for scalar systems, the following information transmission strategy is designed for the vector system. For simplicity, we assume here that $\lambda_1,\ldots, \lambda_d$ are real and $m_i=1$; for the other cases, readers may refer to the analysis in Chapter 2 of \cite{Zaidi2014}. Under this assumption, $J$ is a diagonal matrix and $d=n$. The sensor transmits periodically with a period of $\tau$. During one channel use, the sensor only transmits the estimation error of the $j$-th component of $x_0$, using the scheme devised for scalar systems. The relative transmission frequency for the $j$-th component of $x_0$ is scheduled to be $\alpha_j$ within the transmission period $\tau$, with $\sum_{j=1}^n\alpha_j= 1$. The receiver maintains an array that represents the most recent estimate of $x_0$, which is initialized to $0$ at $t=0$. When information about the $j$-th component of $x_0$ is transmitted, only the estimate of the $j$-th component is updated at the decoder side; the other estimates remain unchanged. After updating the estimate, the controller acts as designed in Lemma \ref{lemma:estimationThenControl}.
If the diagonal elements of $ A^t \mathbb{E} \{e_te_t'\}(A')^t$ converge to zero asymptotically, i.e., $\mathrm{lim}_{t\rightarrow \infty} \lambda_i^{2t} \mathbb{E} \{e_{i,t}^2\}=0$ for $i=1,\ldots, n$, the conditions in Lemma \ref{lemma:estimationThenControl} are satisfied. Since the transmission is scheduled periodically, we only need to require that $\mathrm{lim}_{k\rightarrow \infty} \lambda_i^{2k\tau}
\mathbb{E} \{e_{i, k\tau}^2\}=0$ for all $i=1,\ldots, n$. Following the designed transmission scheme, we have $\mathbb{E} \{e_{i,k\tau}^2\}=\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}^{\alpha_i k\tau}\mathbb{E} \{e_{i,0}^2\}$. Hence, if $\lambda_i ^{2}\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}^{\alpha_i }<1 $ for all $i=1,\ldots, n$, the sufficient condition in Lemma \ref{lemma:estimationThenControl} is satisfied. To complete the proof, we only need to show the equivalence between this requirement and \eqref{SufficientConditionForVectorSystem}. On one hand, since $\sum_{i=1}^n \alpha_i=1$, if $\lambda_i ^{2}\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}^{\alpha_i }<1 $ for all $i=1,\ldots, n$, then \eqref{SufficientConditionForVectorSystem} holds. On the other hand, if \eqref{SufficientConditionForVectorSystem} holds, we can simply choose $\alpha_i=\frac{\mathrm{log}|\lambda_i|}{\sum_j \mathrm{log}|\lambda_j|}$, which satisfies $\sum_{i=1}^n \alpha_i=1$ and $\lambda_i ^{2}\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}^{\alpha_i }<1 $ for all $i=1,\ldots, n$. The sufficiency is proved. \end{proof}
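To make the allocation concrete, the following Python sketch computes the proposed $\alpha_i$ for two illustrative real eigenvalues and checks the per-mode condition; the value of $\mathbb{E}\{\sigma_n^2/(\sigma_n^2+g_t^2P)\}$ is likewise illustrative.
\begin{verbatim}
# TDMA allocation for the vector case (illustrative sketch).
import numpy as np

lams = np.array([1.2, 1.05])                 # |lambda_i|, real, m_i = 1
rho = 0.6                                    # E{sigma_n^2/(sigma_n^2+g^2 P)}

alpha = np.log(lams) / np.sum(np.log(lams))  # proposed allocation
print(alpha, lams ** 2 * rho ** alpha < 1.0) # per-mode stability check
\end{verbatim}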
\section{Numerical Illustrations}
\subsection{Scalar Systems}
The authors in \cite{Xiao2011} derive the mean square capacity of a power constrained fading channel with linear encoders/decoders. The necessary and sufficient condition for scalar systems is $ \frac{1}{2} \mathrm{log}(1+\frac{\mu^2_gP}{\sigma_g^2 P+\sigma_n^2}) > \mathrm{log} |\lambda| $, with $\mu_g$ and $\sigma_g^2$ being the mean and variance of $g_t$. We can accordingly define the mean square capacity of the power constrained fading channel with linear encoders/decoders as $C_{\mathrm{MSL}}=\frac{1}{2} \mathrm{log}(1+\frac{\mu^2_gP}{\sigma_g^2P+\sigma_n^2})$. Assuming for simplicity that the fading follows a Bernoulli distribution with failure probability $\epsilon$, the Shannon capacity, the mean square capacity achievable with causal encoders/decoders and the mean square capacity achievable with linear encoders/decoders are given by
$ C_{\mathrm{Shannon_{BD}}} = \frac{1-\epsilon}{2} \mathrm{log}\big(1+\frac{P}{\sigma_n^2}\big) $,
$ C_{\mathrm{MSC_{BD}}} =-\frac{1}{2} \mathrm{log} \big(\frac{\sigma_n^2+\epsilon P}{\sigma_n^2+P}\big) $,
$C_{\mathrm{MSL_{BD}}} = \frac{1}{2} \mathrm{log} \big(1+\frac{(1-\epsilon)^2P}{(1-\epsilon) \epsilon P+\sigma_n^2}\big) $.
For fixed $P$ and $\sigma_n^2$, the channel capacities are functions of $\epsilon$. Letting $P=1$ and $\sigma_n^2=1$, the channel capacities in relation to the erasure probability are plotted in Fig. \ref{comparisonOfChannelCapacity}. It is clear that $C_{\mathrm{Shannon_{BD}}} \ge C_{\mathrm{MSC_{BD}}}\ge C_{\mathrm{MSL_{BD}}}$ at any given erasure probability $\epsilon$. This is expected: we have proved that the Shannon capacity is no smaller than the mean square capacity with causal encoders/decoders, and there is more freedom in designing causal encoders/decoders than linear ones, which allows a higher capacity to be achieved. The three capacities coincide when $\epsilon=0$ and $\epsilon=1$, corresponding to the AWGN channel and the disconnected channel, respectively.
\begin{figure}
\centering
\includegraphics[width=0.37\textwidth]{figs/comparisonAmongCapacity.pdf}\\
\caption{Comparison of different channel capacities when $P=1$, $\sigma_n^2=1$}\label{comparisonOfChannelCapacity}
\end{figure}
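The curves in Fig.~\ref{comparisonOfChannelCapacity} follow directly from the three closed-form expressions above, as in the short Python sketch below.
\begin{verbatim}
# The three capacities for Bernoulli fading versus erasure probability.
import numpy as np

P, sigma_n2 = 1.0, 1.0
eps = np.linspace(0.0, 1.0, 101)

C_shannon = 0.5 * (1 - eps) * np.log2(1 + P / sigma_n2)
C_msc = -0.5 * np.log2((sigma_n2 + eps * P) / (sigma_n2 + P))
mu2, var = (1 - eps) ** 2, eps * (1 - eps)
C_msl = 0.5 * np.log2(1 + mu2 * P / (var * P + sigma_n2))

assert np.all(C_shannon >= C_msc - 1e-12)   # ordering observed in Fig. 2
assert np.all(C_msc >= C_msl - 1e-12)
\end{verbatim}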
\subsection{Vector Systems}
Consider the two dimensional LTI system \eqref{realJordanCanonicalForm} with
$
J=\left[
\begin{smallmatrix}
\lambda_1 & 0\\
0 & \lambda_2
\end{smallmatrix}
\right]
$, and the communication channel is \eqref{channel1} in which the fading follows the Bernoulli distribution with failure probability $\epsilon$. In view of Theorem \ref{theorem:VectorResult}, a sufficient condition to ensure mean square stabilizability is that $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ should lie in the region of
$ \mathrm{log}|\lambda_1|+ \mathrm{log} |\lambda_2 | < C_{\mathrm{MSC_{BD}}} $. The necessary requirement is given by the following region in $(\mathrm{log}|\lambda_1|,\mathrm{log}|\lambda_2|)$ plane
\begin{equation*}
\left\{
\begin{aligned}
&\log{|\lambda_1|} < C_{\mathrm{MSC_{BD}}}, \;\; \log { |\lambda_2 |} < C_{\mathrm{MSC_{BD}}} \\
& \log {|\lambda_1|} +\log{| \lambda_2 |} < - \log{{\big( \epsilon+(1-\epsilon) \big( \frac{\sigma_n^2}{\sigma_n^2+P} \big)^{\frac{1}{2}} \big) }}
\end{aligned}\right.
\end{equation*}
The necessary and sufficient condition to ensure mean square stability using linear encoders/decoders for this system is given in \cite{Xiao2011}, which states that $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ should be in the region constrained by
$ \mathrm{log}|\lambda_1|+ \mathrm{log} |\lambda_2 | < C_{\mathrm{MSL_{BD}}}$.
Selecting $P = 1$, $\sigma_n^2 = 1$ and $\epsilon = 0.8$, we plot in Fig. \ref{Fig.vectorSystemStabilityRegion} the regions for $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ indicated by the sufficiency and necessity conditions in Theorem \ref{theorem:VectorResult}, together with the region indicated by Theorem 3.1 in \cite{Xiao2011}. We observe that the region of $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ that can be stabilized with the causal encoders/decoders designed in Section 4 is much larger than that stabilizable by the linear encoders/decoders of \cite{Xiao2011}. Thus, by extending the encoders/decoders from linear to general causal ones, a larger class of unstable systems can be tolerated.
\begin{figure}
\centering
\includegraphics[width=0.29\textwidth]{figs/vectorSystemStabilityRegion.pdf}\\
\caption{Stability region of $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ indicated by Theorem 2 for a vector system}\label{Fig.vectorSystemStabilityRegion}
\end{figure}
\section{Conclusion}
This paper characterized the requirement on a power constrained fading channel for the existence of a causal encoder/decoder pair that can mean square stabilize a discrete-time LTI system. The mean square capacity of the power constrained fading channel with causal encoders/decoders was given. It was shown that this mean square capacity is smaller than the Shannon capacity, with the two coinciding in some special situations. Throughout the paper, the capacity was derived under the assumption that there exists a perfect feedback link from the channel output to the channel input. What the capacity would be for power constrained fading channels without such a feedback link, or with only a noisy feedback link, is still under investigation.
\bibliographystyle{ieeetr}
\section{Introduction}
Control over communication networks has been a hot research topic in the past decade \cite{Zaidi2014}. This is mainly motivated by the rapid development of wireless communication technology that enables the connection of geographically distributed systems and devices. However, the insertion of wireless communication networks also poses challenges in analysis and design of control systems due to constraints and uncertainties in communications. One must take the communication networks into consideration and analyze how they affect the stability and performance of the closed-loop control systems.
Until now, there have been plentiful results that reveal requirements on communication channels to ensure the stabilizability. For noiseless digital channels, the celebrated data rate theorem is given in \cite{Nair2004SIAM}. For noisy channels, the problem is complicated by the fact that different channel capacities are required under different stability definitions. For almost sure stability, \cite{Matveev2007} shows that the Shannon capacity in relation to unstable dynamics of a system constitutes the critical condition for its stabilizability. While for moment stability, \cite{Sahai2006} shows that the Shannon capacity is too optimistic while the zero-error capacity is too pessimistic, and the anytime capacity introduced in this paper characterizes the stabilizability conditions. Essentially, to keep the $\eta$-moment of the state of an unstable scalar plant bounded, it is necessary and sufficient for the feedback channel's anytime capacity corresponding to anytime-reliability $\alpha=\eta \mathrm{log}_2|\lambda|$ to be greater than $\mathrm{log}_2 |\lambda|$, where $\lambda$ is the unstable eigenvalue of the plant. The anytime capacity has a more stringent reliability requirement than the Shannon capacity. However, it is worthy noting that there exist no systematic method to calculate the anytime capacities of channels.
In control community, the anytime capacity is usually studied under the mean square stability requirement, for which the anytime capacity is commonly named as the mean square capacity. For example, \cite{Elia2005} characterizes the mean square capacity of a fading channel. \cite{Braslavsky2007} studies the mean square stabilization problem over a power constrained AWGN channel and characterizes the critical capacity to ensure mean square stabilizability. They further show that the extension from linear encoders/decoders to more general causal encoders/decoders cannot provide additional benefits of increasing the channel capacity \cite{Freudenberg2010}.
Specifically, the results stated above deal with fading channels or AWGN channels separately. While in wireless communications, it is practical to consider them as a whole. In this paper, we are interested in a power constrained fading channel which is corrupted by both fading and AWGN. We aim to find the critical condition on the channel to ensure the mean square stabilizability of the system. Note that \cite{Xiao2011} has derived the necessary and sufficient condition for such kind of channel to ensure mean square stabilizability under a linear encoder/decoder. It is still unknown whether we can achieve a higher channel capacity with more general causal strategies. This paper provides a positive answer to this question.
This paper is organized as follows. Problem formulation and some preliminaries are given in Section 2. Section 3 provides the results for scalar systems. Section 4 discusses the extension to vector systems. Section 5 provides numerical illustrations and this paper ends with some concluding remarks in Section 6.
\section{Problem Formulation and Preliminaries}
This paper studies the following single-input discrete-time linear system
\begin{equation}
\label{LTIDynamics}
x_{t+1}=A x_{t}+B u_{t}
\end{equation}
where $x\in \mathbb{R}^n$ is the system state and $u \in \mathbb{R}$ is the control input. Without loss of generality, we assume that all the eigenvalues of $A$ are unstable, i.e., $|\lambda_i(A)|\ge 1 $ for all $i=1,2,\ldots, n$ \cite{Freudenberg2010}. The initial value $x_0$ is randomly generated from a Gaussian distribution with zero mean and bounded covariance ${ \Sigma_{x_0}}$. The system state $x_t$ is observed by a sensor and then encoded and transmitted to the controller through a power constrained fading channel. The communication channel is modeled as
\begin{equation}
\label{channel1}
r_t=g_ts_t+n_t
\end{equation}
in which $s_t$ denotes the channel input; $r_t$ represents the channel output; $\{g_t\}$ is an i.i.d. stochastic process modeling the fading effects and $\{n_t\}$ is the additive white Gaussian noise with zero-mean and known variance $\sigma_n^2$. The channel input $s_t$ must satisfy an average power constraint, i.e., $\mathbb{E} \{s_t^2\}\le P$. We also assume that $x_0, g_0, n_0, g_1, n_1, \ldots$ are independent. In the paper, it is assumed that after each transmission, the instantaneous value of the fading factor $g_t$ is known to the decoder, which is a reasonable assumption for slowly varying channels with channel estimation \cite{Goldsmith1997}.
The instantaneous Shannon channel capacity is $c_t=\frac{1}{2}\mathrm{ln}\big( 1+\frac{g_t^2P}{\sigma_n^2} \big)$ with $c_t$ being measured in nats/transmission. The feedback configuration among the plant, the sensor and the controller, and the channel encoder/decoder structure are depicted in Fig. 1.
\begin{figure}[htpb]
\centering
\includegraphics[width=0.4\textwidth]{figs/ncsStructure.pdf}\\
\caption{Network control structure over power constraint fading channel}
\end{figure}
In this paper, we try to find requirements on the power constrained fading channel such that there exists a pair of causal encoder/decoder $\{f_t\}, \{h_t\}$ that can mean square stabilize the LTI dynamics \eqref{LTIDynamics}, i.e., to render $\mathrm{lim}_{t \rightarrow \infty} \mathbb{E} \{x_tx_t'\}=0$.
To solve this problem, the following preliminaries are needed, which are borrowed from \cite{Freudenberg2010}. Throughout the paper, a sequence $\{\chi_i\}_{i=0}^t$ is denoted by $\chi^t$; random variables are denoted by uppercase letters, and their realizations by lower case letters. All random variables are assumed to exist on a common probability space with measure $\mathcal{P}$. The probability density of a random variable $X$ in Euclidean space with respect to Lebesgue measure on the space is denoted by $p_X$, and the probability density of $X$ conditioned on the $\sigma$-field generated by the event $Y=y$ by $p_{X|y}$. Let the expectation operator be denoted by $\mathbb{E} $, and the expectation conditioned on the event $Y=y$ by $\mathbb{E} _{y}$. We use $\mathrm{log}$ to denote the logarithm to the base two, and $\mathrm{ln}$ to denote the natural logarithm.
The differential entropy of $X$ is defined by $H(X)=-\mathbb{E} \{\mathrm{ln} p_X \}$, provided that the defining integral exists. Denote the conditional entropy of $X$ given the event $Y=y$ by $H_y(X)=H(X|Y=y)=-\mathbb{E} _y\{ \mathrm{ln} p_{X|y} \}$, and the random variable associated with $H_y(X)$ by $H_Y(X)$. The average conditional entropy of $X$ given the event $Y=y$ and averaged over $Y$ is defined by $H(X|Y)=\mathbb{E} \{H_Y(X) \}$, and the average conditional entropy of $X$ given the events $Y=y$ and $Z=z$ and averaged only over $Y$ by $H_z(X|Y)=\mathbb{E} _{z}\{H_{Y,Z}(X)\}$. The conditional mutual information between two random variables $X$ and $Y$ given the event $Z=z$ is defined by $I_z(X;Y)=H_z(X)-H_z(X|Y)$. Given a random variable $X\in \mathbb{R}^{n}$ with entropy $H(X)$, the entropy power of $X$ is defined by $N(X)=\frac{1}{2\pi e} e^{\frac{2}{n}H(X)}$. Denote the conditional entropy power of $X$ given the event $Y=y$ by $N_y(X)=\frac{1}{2\pi e}e^{\frac{2}{n}H_y(X)}$, and the random variable associated with $N_y(X)$ by $N_Y(X)$. The average conditional entropy power of $X$ given the event $Y=y$ and averaged over $Y$ is defined by $N(X|Y)=\mathbb{E} \{N_Y(X)\}$, and the average conditional entropy power of $X$ given the events $Y=y$ and $Z=z$ and averaged only over $Y$ by $N_z(X|Y)=\mathbb{E} _z \{N_{Y,Z}(X)\}$. The following lemma shows that the entropy power of a random variable provides an estimation of the lower bound for its variance.
\begin{lemma}[\cite{Freudenberg2010}]
\label{lemma:varianceIsBoundedByEntropyPower}
Let $X$ be an $n$-dimensional random variable. Then $N_y(X)\le \frac{1}{n} \mathbb{E} _y\{\|X\|^2\}$.
\end{lemma}
\begin{lemma}
\label{lemma:mutualInformationEqual}
Let $X$ be an $n$-dimensional random variable, $f(X)$ be a function of $X$, and $Y=f(X)+N$ with $N$ being a random variable that is independent with $X$. Then $I(X;Y)=I(f(X);Y)$.
\end{lemma}
\begin{proof}
Since $H(Y|X)=H(Y|X,f(X))\le H(Y|f(X))$, we have $H(Y)=I(X;Y)+H(Y|X)\le I(X; Y)+H(Y|f(X))$. Thus $ H(Y)-H(Y|f(X)) = I(Y;f(X))\le I(X;Y)$.
Besides, noting that $X\rightarrow f(X)\rightarrow Y$ forms a Markov chain, the data processing inequality \cite{Cover2006} implies that $I(X;Y) \le I(f(X); Y)$. Combining the two facts, we have $I(X;Y)=I(f(X);Y)$.
\end{proof}
\begin{remark}
Lemma \ref{lemma:mutualInformationEqual} indicates that for the AWGN channel, the amount of information that the channel output contains about the source is equal to the amount of information that the channel output contains about the channel input.
\end{remark}
\section{Scalar Systems}
To better convey our ideas, we start with scalar systems. Consider the following scalar system
\begin{equation}
\label{scalarDynamics}
x_{t+1}=\lambda x_t+u_t
\end{equation}
where $|\lambda|\ge 1$ and $\mathbb{E} \{x_0^2\}=\sigma_{x_0}^2$.
With the communication channel given in \eqref{channel1}, the stabilizability result is stated in the following theorem.
\begin{theorem}
\label{theorem:theorem1}
There exists a causal encoder/decoder pair $\{f_t\}, \{h_t\}$, such that the system \eqref{scalarDynamics} can be stabilized over the communication channel \eqref{channel1} in mean square sense if and only if
\begin{equation}
\label{iffConditionForScalarSystem}
\mathrm{log} |\lambda| < - \frac{1}{2} \mathrm{log} \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \}
\end{equation}
\end{theorem}
Theorem \ref{theorem:theorem1} indicates that the mean square capacity of the power constraint fading channel is $C_{\mathrm{MSC}}=-\frac{1}{2} \mathrm{log} {\mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \}}$. In the following, we will prove the necessity and sufficiency of Theorem \ref{theorem:theorem1}, respectively. The proof essentially follows the same steps as in \cite{Minero2009, Freudenberg2010, Kumar2014}, however, with some differences due to the channel structure.
\subsection{Proof of Necessity}
The proof of necessity follows from the intuition below. In view of Lemma \ref{lemma:varianceIsBoundedByEntropyPower}, the entropy power provides a lower bound for the mean square value of the system state. We thus can use the average entropy power as a measure of the uncertain region of the system state and analyze its update. At time $t$, the controller maintains a knowledge of the uncertain region of $x_t$. When it takes action on the plant, the average uncertain region of $x_{t+1}$ predicated by the controller is expanded to $\lambda^2$ times that of $x_t$. This is the iteration we term as dynamics update, which describes the update of the uncertain region of $x$ maintained by the controller from time $t$ to $t+1$. After receiving information about $x_{t+1}$ from the sensor through the communication channel, the controller can reduce the predication error of the uncertain region of $x_{t+1}$ by a factor of $\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}$. This is the iteration we term as communication update, which describes the update of the uncertain region of $x$ maintained by the controller at time $t+1$ after it has received the information about $x_{t+1}$ from the sensor through the communication channel. Thus to ensure mean square stability, the average expanding factor $\lambda^2 \mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \}$ of the system state's uncertain region should be smaller than one, which gives the necessary requirement in Theorem \ref{theorem:theorem1}. The formal proof is stated as follows. Here we use the uppercase letters $X, S, R, G$ to denote the random variables of the system state, the channel input, the channel output and the channel fading coefficient. We use the lowercase letters $x, s, r, g$ to denote their realizations.
\subsubsection{Communication Update}
The average entropy power of $X_t$ conditioned on $(R^t,G^t)$ is
$\scriptstyle N(X_t|R^t,G^t)= \mathbb{E} \{ N_{R^t,G^t}(X_t) \} \overset{(a)}{=} \mathbb{E} \{ \mathbb{E} \{N_{R^t,G^t}(X_t) | R^{t-1}, G^t \} \}
\overset{(b)}{=}\frac{1}{2\pi e} \mathbb{E} \{ \mathbb{E} \{ e^{ {2} H_{R^t,G^t}(X_t)} | R^{t-1}, G^t \}\} $
where $(a)$ follows from the law of total expectation and $(b)$ follows from the definition of entropy power.
Since
$
\begin{aligned}
& \mathbb{E} \{ e^{2 H_{R^t,G^t} (X_t) } | R^{t-1}= r^{t-1}, G^t= g^t \} \\
& \overset{(c)}{\ge} e^{ 2 \mathbb{E} \{ H_{R^t,G^t} (X_t) | R^{t-1}= r^{t-1}, G^t= g^t \} }\\
&\overset{(d)}{=} e^{2 H(X_t| R_t, R^{t-1}= r^{t-1}, G^t= g^t) }\\
&\overset{(e)}{=} e^{2 \left( H(X_t| R^{t-1}= r^{t-1}, G^t= g^t) - I(X_t; R_t| R^{t-1}= r^{t-1}, G^t= g^t) \right)}\\
&\overset{(f)}{=} e^{2 \left( H(X_t| R^{t-1}= r^{t-1}, G^t= g^t) - I(S_t; R_t| R^{t-1}= r^{t-1}, G^t= g^t) \right) }\\
& \overset{(g)}{\ge} e^{2 \left( H(X_t | R^{t-1}= r^{t-1}, G^t= g^t) - c_t \right) }\\
& \overset{(h)}{=} e^{- 2 c_t} e^{2 H(X_t|R^{t-1}=r^{t-1},G^{t-1}=g^{t-1})}
\end{aligned}
$
where $(c)$ follows from Jensen's inequality; $(d)$ follows from the definition of conditional entropy; $(e)$ follows from the definition of conditional mutual information; $(f)$ follows from Lemma \ref{lemma:mutualInformationEqual}; $(g)$ follows from the definition of channel capacity, i.e., $I(S_t; R_t|R^{t-1}=r^{t-1}, G^t=g^t )\le c_t$; and $(h)$ follows from the fact that $G_t$ is independent of $X_t$, we have
$\scriptstyle
N(X_t|R^t,G^t)
\ge\frac{1}{2\pi e} \mathbb{E} \{ e^{-2 C_t} e^{2 H_{R^{t-1},G^{t-1}}(X_t)} \}
=\mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \} N(X_t|R^{t-1},G^{t-1})
$.
\subsubsection{Dynamics Update}
Since
$e^{2H(X_{t+1}|R^t=r^t,G^t=g^t)} = e^{2 H(\lambda X_t+U_t|R^t=r^t,G^t=g^t)} \overset{(i)}{=} e^{2 H (\lambda X_t|R^t=r^t,G^t=g^t)}
\overset{(j)}{=} e^{2H(X_t|R^t=r^t,G^t=g^t)+2\ln|\lambda|}
= \lambda^2 e^{2 H(X_t|R^t=r^t,G^t=g^t)}
$
where $(i)$ follows from the fact that $u_t=h_t(r^t, g^t)$ and $(j)$ follows from
Theorem 8.6.4 in \cite{Cover2006}, we have
$ \scriptstyle N(X_{t+1}|R^t,G^t) \ge \mathbb{E} \left\{\frac{1}{2\pi e} \lambda^2 e^{2 H_{R^t,G^t}(X_t)} \right\} = \lambda^2 N(X_t|R^t,G^t) $.
\subsubsection{Proof of Necessity}
Combining the results of communication update and dynamics update, we have
$ \scriptstyle N(X_{t+1}|R^t,G^t) \ge \lambda^2 \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \} N(X_t|R^{t-1}, G^{t-1})$.
In view of Lemma \ref{lemma:varianceIsBoundedByEntropyPower}, $N(X_{t+1}|R^t,G^t)$ should converge to zero asymptotically. Thus $\lambda^2 \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \} <1$, which is \eqref{iffConditionForScalarSystem} and this proves the necessity.
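As a quick numerical illustration of this recursion, one can iterate the lower bound and watch it diverge whenever \eqref{iffConditionForScalarSystem} fails. The following Python sketch assumes Bernoulli fading; all parameter values are illustrative choices of ours:
\begin{verbatim}
# Iterate the entropy-power lower bound
#   N_{t+1} >= lambda^2 * E{sigma_n^2/(sigma_n^2 + g^2 P)} * N_t
# for Bernoulli fading: g = 0 w.p. eps, g = 1 w.p. 1 - eps.
lam, P, sigma_n2, eps = 1.3, 1.0, 1.0, 0.8   # illustrative values
rho = eps + (1 - eps) * sigma_n2 / (sigma_n2 + P)
N = 1.0                                      # any positive initial value
for t in range(50):
    N *= lam**2 * rho
print("contraction factor:", lam**2 * rho)   # 1.521 > 1 here
print("N after 50 steps:", N)                # grows without bound
\end{verbatim}
Here $\lambda^2 \mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\} = 1.521 > 1$, so the entropy power, and hence the state variance, cannot converge to zero.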
\subsection{Proof of Sufficiency}
To prove the sufficiency, we need to construct a pair of encoder and decoder. The encoder and decoder are designed following an ``estimation then control'' strategy: the controller successively estimates the initial state $x_0$ using the information received from the channel and then applies an equivalent control to the plant. The reason for adopting such a strategy is explained as follows. The response of the linear system is $x_t=\lambda^t (x_0-\hat{x}_t)$ with $\hat{x}_t=-\sum_{i=0}^{t-1} \lambda^{-1-i} u_i$, which means $\mathbb{E} \{x_t^2\}=\lambda^{2t} \mathbb{E} \{(x_0-\hat{x}_t)^2\}$. We can treat $\hat{x}_t$ as the controller's estimate of the initial state $x_0$. If the estimation error $\mathbb{E} \{(x_0-\hat{x}_t)^2\}$ converges to zero faster than $\lambda^{2t}$ grows, i.e., if there exist $\eta>\lambda^2$ and $\alpha>0$ such that $\mathbb{E} \{(x_0-\hat{x}_t)^2\}\le \frac{\alpha}{\eta^t}$, the mean square value of the system state is bounded by
$
\mathbb{E} \{x_t^2\}\le \alpha \left(\frac{\lambda^2}{\eta}\right)^{t}
$.
Thus $\underset{t\rightarrow \infty}{\mathrm{lim}}\mathbb{E} \{x_t^2\}=0$, i.e., system \eqref{scalarDynamics} is mean square stable. This intuition can be formalized using the following lemma.
\begin{lemma}[\cite{Kumar2014}]
\label{lemma:estimationThenControl}
If there exists an estimation scheme $\hat{x}_t$ for the initial system state $x_0$ such that the estimation error $e_t=\hat{x}_t-x_0$ satisfies the following properties,
\begin{eqnarray}
\label{eq:estimationThenControlRequirement}
\mathbb{E} \{ e_t \}=0 \label{eq:estimationThenControlRequirement1} \\
\lim_{t\rightarrow \infty} A^t \mathbb{E} \{ e_te_t' \} (A')^t=0 \label{eq:estimationThenControlRequirement2}
\end{eqnarray}
then the system \eqref{LTIDynamics} can be mean square stabilized by the controller
$ u_t=K\left( A^t \hat{x}_t+ \sum_{i=1}^t A^{t-i} Bu_{i-1} \right) $
with $K$ being selected such that $A+BK$ is stable.
\end{lemma}
When $g_t$ is known at the receiver, channel \eqref{channel1} resembles an AWGN channel. Shannon showed that when estimating a Gaussian random variable through an AWGN channel, the minimal mean square estimation error can be attained by using linear encoders and decoders \cite{Gattami2014}, and the minimal mean square error variance is given by $\frac{P\sigma_n^2}{\sigma_n^2+g_t^2P}$. Thus, through one channel use, we can at best decrease the estimation error by a factor of $\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}$. Since $\{g_t\}$ is i.i.d., we can transmit the estimation error from the decoder back to the encoder and iteratively conduct the minimal mean square estimation process. The estimation error then decreases on average at a rate of $\mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}$ per channel use. If $\lambda^2\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}<1$, in view of Lemma \ref{lemma:estimationThenControl}, system \eqref{scalarDynamics} can be mean square stabilized. The estimation strategy follows the principle of the well-known scheme of Schalkwijk \cite{Schalkwijk1966}, which utilizes the noiseless feedback link to successively refine the estimation error. The detailed encoder/decoder design and stability analysis are given as follows.
\subsubsection{Encoder/Decoder Design}
Suppose the estimate of $x_0$ formed by the decoder at time $t$ is $\hat{x}_t$, and the estimation error is $e_t=\hat{x}_t-x_0$. The encoder is designed as
\begin{equation}
\label{encoder}
\begin{aligned}
s_0 &=\sqrt{\frac{P}{\sigma_{x_0}^2}} x_0\\
s_t&=\sqrt{\frac{P}{\sigma^2_{e_{t-1}}}}\left( \hat{x}_{t-1} - x_0 \right), \;\; t\ge 1\\
\end{aligned}
\end{equation}
The decoder is designed as
\begin{equation}
\label{decoder}
\begin{aligned}
\hat{x}_0 & =\sqrt{\frac{\sigma_{x_0}^2}{P}}r_0\\
\hat{x}_t&=\hat{x}_{t-1}-\frac{\mathbb{E} \{r_t e_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}}r_t, \;\; t\ge 1
\end{aligned}
\end{equation}
with $\sigma^2_{e_{t-1}}$ representing the variance of $e_{t-1}$.
\subsubsection{Proof of Sufficiency}
Since $r_0=g_0s_0+n_0$, in view of \eqref{encoder} and \eqref{decoder}, we have $ e_0 =(g_0-1)x_0+ \sqrt{\frac{\sigma_{x_0}^2}{P}}n_0
$. Because $g_0$, $x_0$, $n_0$ are independent and $x_0$, $n_0$ follow zero-mean Gaussian distributions, we know that the conditional distribution of $e_0$ given $g_0$ is Gaussian with $ \mathbb{E} \{e_0|g_0\} =0$ and $\mathbb{E} \{e_0^2|g_0\} = (g_0-1)^2 \sigma_{x_0}^2+ \frac{\sigma_{x_0}^2\sigma_n^2}{P}$. Thus $\mathbb{E} \{e_0\}=\mathbb{E} \{\mathbb{E} \{e_0|g_0\} \} =0$ and $ \mathbb{E} \{e^2_0\} =\mathbb{E} \{ \mathbb{E} \{e_0^2|g_0\} \} = \mathbb{E} \{(g_0-1)^2\} \sigma_{x_0}^2+ \frac{\sigma_{x_0}^2\sigma_n^2}{P}$.
For $t\ge 1$, in view of \eqref{encoder} and \eqref{decoder}, we have
\begin{multline*}
e_t=e_{t-1}-\frac{\mathbb{E} \{r_te_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}}r_t\\
=\Big(1-g_t \sqrt{\frac{P}{\sigma_{e_{t-1}}^2}} \frac{\mathbb{E} \{r_te_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}} \Big) e_{t-1} -\frac{\mathbb{E} \{ r_t e_{t-1}|g_t \}}{\mathbb{E} \{r_t^2|g_t\}} n_t
\end{multline*}
Thus the conditional distribution of $e_t$ given $g_t$ is Gaussian.
We also have
\begin{equation*}
\begin{aligned}
& \mathbb{E} \{e_t\} = \mathbb{E} \{ \mathbb{E} \{ e_t|g_t \} \}\\
&= \mathbb{E} \Big\{ \Big(1-g_t \sqrt{\frac{P}{\sigma_{e_{t-1}}^2}} \frac{\mathbb{E} \{r_te_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}} \Big) \mathbb{E} \{e_{t-1}|g_t\} \Big\} \\
&\overset{(a)}{=} \mathbb{E} \Big\{ \Big(1-g_t \sqrt{\frac{P}{\sigma_{e_{t-1}}^2}} \frac{\mathbb{E} \{r_te_{t-1}|g_t\}}{\mathbb{E} \{r_t^2|g_t\}} \Big) \Big\} \mathbb{E} \{e_{t-1}\}
\end{aligned}
\end{equation*}
where $(a)$ follows from the fact that $g_t$ is independent of $e_{t-1}$. Since $\mathbb{E} \{e_0\}=0$, we further know that $\mathbb{E} \{e_t\}\equiv 0$, so condition \eqref{eq:estimationThenControlRequirement1} is satisfied.
Since $e_{t-1}$, $g_t$ and $n_t$ are independent, we have $\mathbb{E} \{e_{t-1}^2|g_t\}=\mathbb{E} \{e_{t-1}^2\}$ and $\mathbb{E} \{n_t^2|g_t\}=\mathbb{E} \{n_t^2\}$, which implies
$ \mathbb{E} \{r_t^2|g_t\} = \mathbb{E} \big\{ \big(g_t \sqrt{\frac{P}{\sigma^2_{e_{t-1}}}} e_{t-1} +n_t\big)^2|g_t \big\}
=\sigma_n^2+g_t^2P
$
and
$
\mathbb{E} \{r_te_{t-1}|g_t\} =\mathbb{E} \big\{e_{t-1}\big(g_t \sqrt{\frac{P}{\sigma^2_{e_{t-1}}}} e_{t-1} +n_t \big)|g_t \big\}
= g_t \sqrt{P \sigma_{e_{t-1}}^2}
$.
Since $ \mathbb{E} \{e_t^2|g_t\}=\mathbb{E} \{e_{t-1}^2|g_t\} -\frac{\mathbb{E} \{r_te_{t-1}|g_t\}^2}{\mathbb{E} \{r_t^2|g_t\}} $, we also have
$
\mathbb{E} \{e_t^2|g_t\} =\mathbb{E} \{e_{t-1}^2\}-\frac{g_t^2P \mathbb{E} \{e_{t-1}^2\}}{\sigma_n^2 + g_t^2P}
=\mathbb{E} \{e_{t-1}^2\} \frac{\sigma_n^2}{\sigma_n^2 + g_t^2P}
$, which implies
$ \mathbb{E} \{e_t^2\} = \mathbb{E} \{\mathbb{E} \{e_t^2|g_t\}\}
=\mathbb{E} \{e_{t-1}^2\} \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2 + g_t^2P}\}
$.
Thus if $\lambda^2 \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2 + g_t^2P}\}<1$, the designed encoder/decoder pair can guarantee \eqref{eq:estimationThenControlRequirement2}. In view of Lemma \ref{lemma:estimationThenControl}, the sufficiency of Theorem \ref{theorem:theorem1} is proved.
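The geometric decay of $\mathbb{E} \{e_t^2\}$ can also be verified by direct simulation of the encoder \eqref{encoder} and decoder \eqref{decoder}. The following Monte Carlo sketch assumes Bernoulli fading; all numerical values are illustrative assumptions of ours:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P, sigma_n2, sigma_x02, eps = 1.0, 1.0, 1.0, 0.2  # illustrative values
T, M = 30, 100_000                                # horizon, sample paths

x0 = rng.normal(0.0, np.sqrt(sigma_x02), M)
g = (rng.random((T + 1, M)) > eps).astype(float)  # Bernoulli fading
n = rng.normal(0.0, np.sqrt(sigma_n2), (T + 1, M))

# t = 0: transmit the scaled initial state, cf. (encoder)/(decoder)
r0 = g[0] * np.sqrt(P / sigma_x02) * x0 + n[0]
e = np.sqrt(sigma_x02 / P) * r0 - x0              # e_0 = xhat_0 - x_0

rho = eps + (1 - eps) * sigma_n2 / (sigma_n2 + P)
var_e = eps * sigma_x02 + sigma_x02 * sigma_n2 / P  # E{e_0^2}
for t in range(1, T + 1):
    s = np.sqrt(P / var_e) * e                    # encoder, E{s^2} = P
    r = g[t] * s + n[t]                           # channel
    gain = g[t] * np.sqrt(P * var_e) / (sigma_n2 + g[t]**2 * P)
    e = e - gain * r                              # decoder update
    var_e *= rho                                  # theoretical recursion

print("empirical E{e_T^2} :", np.mean(e**2))
print("theoretical value  :", var_e)
\end{verbatim}
With these values the error variance contracts by $\rho = 0.6$ per step, and the empirical average should agree with the theoretical recursion up to Monte Carlo error.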
\begin{remark}
We can show that $C_{\mathrm{MSC}}$ is no larger than the Shannon capacity, which is $C_{\mathrm{Shannon}}=\mathbb{E} \{c_t\}$ \cite{Goldsmith1997}. From Jensen's inequality, we know that $\mathbb{E} \{2^{-2c_t}\}\ge 2^{-2\mathbb{E} \{c_t\}}$, with equality if and only if $c_t$ is a constant. Thus it follows that $ C_{\mathrm{MSC}} = \frac{1}{2}\mathrm{log}\frac{1}{\mathbb{E} \{2^{-2c_t}\}}\le \frac{1}{2}\mathrm{log} \frac{1}{2^{-2\mathbb{E} \{c_t\}}}=\mathbb{E} \{c_t\}=C_{\mathrm{Shannon}} $, with equality if and only if $c_t$ is a constant.
\end{remark}
\begin{remark}
By letting $g_t$ in~\eqref{iffConditionForScalarSystem} follow a Bernoulli distribution with failure probability $\epsilon$, and taking the limits $\sigma_n^2 \rightarrow 0$ and $P \rightarrow \infty$, we can show that the necessary and sufficient condition to ensure mean square stabilizability for the real erasure channel is $\epsilon < \frac{1}{\lambda^2}$, which recovers the result in \cite{Elia2005}. If we let $g_t$ be the constant $1$, the studied power constrained fading channel degenerates to the AWGN channel and \eqref{iffConditionForScalarSystem} degenerates to $\mathrm{log} |\lambda| < \frac{1}{2} \mathrm{log} (1+\frac{P}{\sigma_n^2})$, which recovers the result in \cite{Sahai2006, Braslavsky2007}. If $\sigma_n^2=0$ and the event $g_t=0$ has zero probability measure, the right hand side of \eqref{iffConditionForScalarSystem} becomes infinite, so that \eqref{iffConditionForScalarSystem} holds automatically for any $\lambda$. This is reasonable: since we have assumed that $g_t$ is known at the decoder side, in the absence of additive noise the channel resembles a perfect communication link, and since \eqref{scalarDynamics} is controllable, we can always find a pair of encoder and decoder to stabilize the system.
\end{remark}
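The limiting behavior invoked in the last remark is easy to check numerically. In the following sketch the parameter values are illustrative; as $P \rightarrow \infty$ the right hand side of \eqref{iffConditionForScalarSystem} approaches $-\frac{1}{2}\mathrm{log}\,\epsilon$, recovering the erasure-channel condition $\lambda^2 < \frac{1}{\epsilon}$:
\begin{verbatim}
import math

eps, sigma_n2 = 0.3, 1.0          # illustrative values
for P in [1e1, 1e3, 1e6]:
    rho = eps + (1 - eps) * sigma_n2 / (sigma_n2 + P)
    # base-2 logs, matching 2^{-2 c_t} in the remark above
    print(f"P={P:g}: RHS = {-0.5 * math.log2(rho):.4f}")
print(f"limit    : {-0.5 * math.log2(eps):.4f}")
\end{verbatim}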
\section{Vector Systems}
For vector systems, the situation is complicated by the fact that we have $n$ sources $x_{i,0}$ and only one channel, where $x_{i,0}$ denotes the $i$-th element of $x_0$. First, we analyze the achievable minimal mean square estimation error for estimating $x_0$ over the channel \eqref{channel1} during one channel use. Consider the following Markov chain
\begin{equation*}
X_0\rightarrow S_t= f_t(X_0)\rightarrow R_t\rightarrow \hat{X}_t=h_t(R_t)
\end{equation*}
where $X_0\in \mathbb{R}^n$ denotes the Gaussian initial state with covariance matrix $\Sigma_{x_0}$; $f_t(\cdot)$ is a scalar-valued function denoting the channel encoder for \eqref{channel1}; $R_t$ denotes the channel output and $\hat{X}_t$ is the estimation of $X_0$ formed by the decoder with decoding rule $h_t(\cdot)$.
Denote the estimation error by $e_t=X_0-\hat{X}_t$. In view of Lemma \ref{lemma:varianceIsBoundedByEntropyPower}, we have
$
\frac{1}{n} \mathrm{tr} \mathbb{E} \{ e_te_t' \} \ge \frac{1}{2\pi e} e^{\frac{2}{n} H(e_t|R_t)}
$.
Since
\begin{equation*}
\begin{aligned}
H(e_t|R_t) &= H(X_0-h_t(R_t)|R_t)=H(X_0|R_t)\\
&=H(X_0)-I(X_0;R_t)\\
&\overset{(a)}=H(X_0)-I(f_t(X_0);R_t)\\
&\ge \frac{1}{2} \mathrm{ln} ((2 \pi e)^n \mathrm{det}(\Sigma_{x_0}))-\frac{1}{2} \mathrm{ln}(1+\frac{g_t^2P}{\sigma_n^2})\\
\end{aligned}
\end{equation*}
where $(a)$ follows from Lemma \ref{lemma:mutualInformationEqual} and the last step uses the Gaussianity of $X_0$ together with the AWGN mutual information bound, we have
\begin{equation*}
\mathrm{tr} \mathbb{E} \{ e_te_t' \} \ge n \; \mathrm{det} (\Sigma_{x_0})^{\frac{1}{n}} \big( \frac{\sigma_n^2}{g_t^2P+\sigma_n^2} \big)^{\frac{1}{n}}
\end{equation*}
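Spelling out the last step: exponentiating the entropy bound and inserting it into the entropy power inequality above gives
\begin{equation*}
\frac{1}{n} \mathrm{tr} \mathbb{E} \{ e_te_t' \} \ge \frac{1}{2\pi e} e^{\frac{2}{n} H(e_t|R_t)} \ge \frac{1}{2\pi e} \Big( (2 \pi e)^n \, \mathrm{det}(\Sigma_{x_0}) \, \frac{\sigma_n^2}{g_t^2P+\sigma_n^2} \Big)^{\frac{1}{n}} = \mathrm{det} (\Sigma_{x_0})^{\frac{1}{n}} \Big( \frac{\sigma_n^2}{g_t^2P+\sigma_n^2} \Big)^{\frac{1}{n}},
\end{equation*}
and multiplying through by $n$ yields the displayed bound.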
From the above inequality, we know that the minimal mean square error is characterized by $\frac{\sigma_n^2}{g_t^2P+\sigma_n^2}$. However, this only bounds the sum of the estimation error variances $\mathbb{E}\{e_{i,t}^2\}$, with $e_{i,t}$ being the $i$-th element of $e_t$; it says nothing about the convergence speed of each individual $e_{i,t}$. Lemma \ref{lemma:estimationThenControl} implies that we should design the encoder/decoder so that $\mathrm{lim}_{t \rightarrow \infty} \lambda_i^{2t} \mathbb{E} \{e_{i,t}^2\}=0$ for all $i$, which places a separate requirement on the convergence speed of each $e_{i,t}$. Thus we need to optimally allocate channel resources to each unstable state variable.
The previous analysis also implies that we should treat the unstable modes of $A$ separately. Here we focus on the real Jordan canonical form of system \eqref{LTIDynamics}. Let $\lambda_1, \ldots, \lambda_d$ be the distinct unstable eigenvalues (if $\lambda_i$ is complex, we exclude from this list the complex conjugates $\lambda_i^*$) of $A$ in \eqref{LTIDynamics}, and let $m_i$ be the algebraic multiplicity of each $\lambda_i$. The real Jordan canonical form $J$ of $A$ then has the block diagonal structure $J=\mathrm{diag}(J_1,\ldots,J_d)\in \mathbb{R}^{n\times n}$, where the block $J_i\in \mathbb{R}^{\mu_i\times \mu_i}$ satisfies $|\mathrm{det}\, J_i| =|\lambda_i|^{\mu_i}$, with
\begin{equation*}
\mu_i =\left\{ \begin{matrix}
m_i & \mathrm{if} \;\; \lambda_i \in \mathbb{R}\\
2m_i & \mathrm{otherwise}
\end{matrix} \right.
\end{equation*}
It is clear that we can equivalently study the following dynamical system instead of \eqref{LTIDynamics}
\begin{equation}
\label{realJordanCanonicalForm}
x_{k+1}=Jx_k+TBu_k
\end{equation}
for some similarity matrix $T$. Let $\mathcal{U}=\{1,\ldots, d\}$ denote the index set of unstable eigenvalues.
\begin{theorem}
\label{theorem:VectorResult}
There exists a causal encoder/decoder pair $\{f_t\}, \{h_t\}$, such that the LTI dynamics \eqref{LTIDynamics} can be stabilized over the communication channel \eqref{channel1} in mean square sense if
\begin{equation}
\label{SufficientConditionForVectorSystem}
\sum _{i=1}^d \mu_i\mathrm{log} |\lambda_i| < - \frac{1}{2} \mathrm{log} { \mathbb{E} \{ \frac{\sigma_n^2}{\sigma_n^2+g_t^2P} \}}
\end{equation}
and only if $(\mathrm{log}|\lambda_1|, \ldots, \mathrm{log}|\lambda_d|) \in \mathbb{R}^{d}$ satisfy that for all $v_i \in \{0, \ldots, m_i\}$ and $i\in \mathcal{U}$
\begin{equation}
\label{necessityForVectorSystem}
\sum_{i \in \mathcal{U}} a_i v_i \mathrm{log}|\lambda_i|< - \frac{v}{2} \mathrm{log} {\mathbb{E} \big\{ \big(\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\big)^{\frac{1}{v}}\big\}}
\end{equation}
where $v=\sum_{i\in \mathcal{U}} a_iv_i$, and $a_i=1$ if $\lambda_i \in \mathbb{R}$, and $a_i=2$ otherwise.
\end{theorem}
\begin{proof}
For the proof of necessity, notice that each block $J_i$ has an invariant real subspace $\mathcal{A}_{v_i}$ of dimension $a_i v_i$, for any $v_i \in \{ 0 , \ldots,m_i\}$. Consider the subspace $\mathcal{A}$ formed by taking the product of the invariant subspaces $\mathcal{A}_{v_i}$ for each real Jordan block. The total dimension of $\mathcal{A}$ is $v=\sum_{i\in \mathcal{U}} a_iv_i$. Denote by $x^{\mathcal{V}}$ the components of $x$ belonging to $\mathcal{A}$. Then $x^{\mathcal{V}}$ evolves as
\begin{equation}
\label{stackedDynamics}
x_{k+1}^{\mathcal{V}}=J^{\mathcal{V} }x_{k}^{\mathcal{V}} +QT u_k
\end{equation}
where $Q$ is a transformation matrix and $|\mathrm{det}\, J^{\mathcal{V}}|=\prod_{i\in \mathcal{U}} |\lambda_i|^{ a_i v_i}$.
Since $x_k$ is mean square stable, the subdynamics \eqref{stackedDynamics} must be mean square stable as well. Proceeding as in the necessity proof of Theorem \ref{theorem:theorem1}, we can derive the necessary condition \eqref{necessityForVectorSystem}. This completes the proof of necessity.
Here we prove the sufficiency using the idea of Time Division Multiple Access (TDMA).
Based on the previous encoder/decoder design for scalar systems, the following information transmission strategy is designed for the vector system. Without loss of generality, we assume here that $\lambda_1,\ldots, \lambda_d$ are real and $m_i=1$; for the other cases, readers may refer to the analysis in Chapter 2 of \cite{Zaidi2014}. Under this assumption, $J$ is a diagonal matrix and $d=n$. The sensor transmits periodically with a period of $\tau$. During one channel use, the sensor transmits only the estimation error of the $j$-th element of $x_0$, using the scheme devised for scalar systems. Within each transmission period of length $\tau$, a fraction $\alpha_j$ of the channel uses is allocated to the $j$-th element of $x_0$, with $\sum_{j=1}^n\alpha_j= 1$. The receiver maintains an array that represents the most recent estimate of $x_0$, which is set to $0$ at $t=0$. When information about the $j$-th element of $x_0$ is transmitted, only the estimate of the $j$-th element of $x_0$ is updated at the decoder side; the other estimates remain unchanged. After updating the estimate, the controller applies the control designed in Lemma \ref{lemma:estimationThenControl}.
If the diagonal elements of $ A^t \mathbb{E} \{e_te_t'\}(A')^t$ converge to zero asymptotically, i.e., $\mathrm{lim}_{t\rightarrow \infty} \lambda_i^{2t} \mathbb{E} \{e_{i,t}^2\}=0$ for $i=1,\ldots, n$, the conditions in Lemma \ref{lemma:estimationThenControl} are satisfied. Since the transmission is scheduled periodically, we only need to require that $\mathrm{lim}_{k\rightarrow \infty} \lambda_i^{2k\tau} \mathbb{E} \{e_{i, k\tau}^2\}=0$ for all $i=1,\ldots, n$. Following the designed transmission scheme, we have $\mathbb{E} \{e_{i,k\tau}^2\}=\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}^{\alpha_i k\tau}\mathbb{E} \{e_{i,0}^2\}$. If $\lambda_i ^{2}\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}^{\alpha_i }<1 $ for all $i=1,\ldots, n$, the sufficient condition in Lemma \ref{lemma:estimationThenControl} is satisfied. To complete the proof, we only need to show the equivalence between the requirement $\lambda_i ^{2}\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}^{\alpha_i }<1 $ for all $i=1,\ldots, n$ and \eqref{SufficientConditionForVectorSystem}. On the one hand, since $\sum_{i=1}^n \alpha_i=1$, if $\lambda_i ^{2}\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}^{\alpha_i }<1 $ for all $i=1,\ldots, n$, then \eqref{SufficientConditionForVectorSystem} holds. On the other hand, if \eqref{SufficientConditionForVectorSystem} holds, we can simply choose $\alpha_i=\frac{\mathrm{log}|\lambda_i|}{\sum_i \mathrm{log}|\lambda_i|}$, which satisfies $\sum_{i=1}^n \alpha_i=1$ and $\lambda_i ^{2}\mathbb{E} \{\frac{\sigma_n^2}{\sigma_n^2+g_t^2P}\}^{\alpha_i }<1 $ for all $i=1,\ldots, n$. The sufficiency is proved. \end{proof}
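As a small numerical sanity check of the allocation rule used in the proof above (a sketch; the eigenvalues and channel parameters are illustrative assumptions of ours):
\begin{verbatim}
import math

lams = [1.10, 1.05]               # |lambda_i| of the unstable modes
P, sigma_n2, eps = 1.0, 1.0, 0.2
rho = eps + (1 - eps) * sigma_n2 / (sigma_n2 + P)

# sufficient condition: sum_i log|lambda_i| < -1/2 log rho
print(sum(math.log(l) for l in lams), "<", -0.5 * math.log(rho))

# allocation alpha_i = log|lambda_i| / sum_j log|lambda_j|
total = sum(math.log(l) for l in lams)
for l in lams:
    a = math.log(l) / total
    # both factors come out below 1, so each mode is stabilized
    print(f"|lambda|={l}: lambda^2 * rho^alpha = {l**2 * rho**a:.4f}")
\end{verbatim}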
\section{Numerical Illustrations}
\subsection{Scalar Systems}
The authors in \cite{Xiao2011} derive the mean square capacity of a power constrained fading channel with linear encoders/decoders. The necessary and sufficient condition for scalar systems is $ \frac{1}{2} \mathrm{log}(1+\frac{\mu^2_gP}{\sigma_g^2 P+\sigma_n^2}) > \mathrm{log} |\lambda| $, with $\mu_g$ and $\sigma_g^2$ being the mean and variance of $g_t$. We can similarly define the mean square capacity of the power constrained fading channel with linear encoders/decoders as $C_{\mathrm{MSL}}=\frac{1}{2} \mathrm{log}(1+\frac{\mu^2_gP}{\sigma_g^2P+\sigma_n^2})$. Assume for simplicity that the fading follows a Bernoulli distribution with failure probability $\epsilon$; then the Shannon capacity, the mean square capacity achievable with causal encoders/decoders and the mean square capacity achievable with linear encoders/decoders are given as
$ C_{\mathrm{Shannon_{BD}}} = \frac{1-\epsilon}{2} \mathrm{log}\big(1+\frac{P}{\sigma_n^2}\big) $,
$ C_{\mathrm{MSC_{BD}}} =-\frac{1}{2} \mathrm{log} \big(\frac{\sigma_n^2+\epsilon P}{\sigma_n^2+P}\big) $,
$C_{\mathrm{MSL_{BD}}} = \frac{1}{2} \mathrm{log} \big(1+\frac{(1-\epsilon)^2P}{(1-\epsilon) \epsilon P+\sigma_n^2}\big) $.
For fixed $P$ and $\sigma_n^2$, the channel capacities are functions of $\epsilon$. With $P=1$ and $\sigma_n^2=1$, the channel capacities are plotted against the erasure probability in Fig. \ref{comparisonOfChannelCapacity}. It is clear that $C_{\mathrm{Shannon_{BD}}} \ge C_{\mathrm{MSC_{BD}}}\ge C_{\mathrm{MSL_{BD}}}$ at any given erasure probability $\epsilon$. This is expected: we have proved that the Shannon capacity is no smaller than the mean square capacity with causal encoders/decoders, and there is more freedom in designing causal encoders/decoders than linear ones, which allows a higher capacity to be achieved. The three capacities coincide at $\epsilon=0$ and $\epsilon=1$, which correspond to the AWGN channel and the disconnected channel, respectively.
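The ordering shown in Fig. \ref{comparisonOfChannelCapacity} can be reproduced in a few lines of Python (a sketch; the function name and evaluation grid are ours, and base-2 logarithms are assumed):
\begin{verbatim}
import math

def capacities(eps, P=1.0, sigma_n2=1.0):
    shannon = (1 - eps) / 2 * math.log2(1 + P / sigma_n2)
    msc = -0.5 * math.log2((sigma_n2 + eps * P) / (sigma_n2 + P))
    msl = 0.5 * math.log2(1 + (1 - eps)**2 * P
                          / ((1 - eps) * eps * P + sigma_n2))
    return shannon, msc, msl

for eps in [0.0, 0.2, 0.5, 0.8, 1.0]:
    s, c, l = capacities(eps)
    assert s >= c >= l - 1e-12    # Shannon >= MSC >= MSL
    print(f"eps={eps:.1f}: {s:.4f} >= {c:.4f} >= {l:.4f}")
\end{verbatim}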
\begin{figure}
\centering
\includegraphics[width=0.37\textwidth]{figs/comparisonAmongCapacity.pdf}\\
\caption{Comparison of different channel capacities when $P=1$, $\sigma_n^2=1$}\label{comparisonOfChannelCapacity}
\end{figure}
\subsection{Vector Systems}
Consider the two dimensional LTI system \eqref{realJordanCanonicalForm} with
$
J=\left[
\begin{smallmatrix}
\lambda_1 & 0\\
0 & \lambda_2
\end{smallmatrix}
\right]
$, and the communication channel is \eqref{channel1} in which the fading follows the Bernoulli distribution with failure probability $\epsilon$. In view of Theorem \ref{theorem:VectorResult}, a sufficient condition to ensure mean square stabilizability is that $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ should lie in the region of
$ \mathrm{log}|\lambda_1|+ \mathrm{log} |\lambda_2 | < C_{\mathrm{MSC_{BD}}} $. The necessary requirement is given by the following region in $(\mathrm{log}|\lambda_1|,\mathrm{log}|\lambda_2|)$ plane
\begin{equation*}
\left\{
\begin{aligned}
&\log{|\lambda_1|} < C_{\mathrm{MSC_{BD}}}, \;\; \log { |\lambda_2 |} < C_{\mathrm{MSC_{BD}}} \\
& \log {|\lambda_1|} +\log{| \lambda_2 |} < - \log{{\big( \epsilon+(1-\epsilon) \big( \frac{\sigma_n^2}{\sigma_n^2+P} \big)^{\frac{1}{2}} \big) }}
\end{aligned}\right.
\end{equation*}
The necessary and sufficient condition to ensure mean square stability using linear encoders/decoders for this system is given in \cite{Xiao2011}, which states that $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ should be in the region constrained by
$ \mathrm{log}|\lambda_1|+ \mathrm{log} |\lambda_2 | < C_{\mathrm{MSL_{BD}}}$.
Selecting $P = 1$, $\sigma_n^2 = 1$ and $\epsilon = 0.8$, we plot in Fig. \ref{Fig.vectorSystemStabilityRegion} the regions for $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ indicated by the sufficiency and necessity conditions in Theorem \ref{theorem:VectorResult} and by Theorem 3.1 in \cite{Xiao2011}. We can observe that the region of $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ that can be stabilized with the causal encoders/decoders designed in Section IV is much larger than the region that can be stabilized by the linear encoders/decoders of \cite{Xiao2011}. Thus, by extending the encoders/decoders from linear to general causal mappings, we can tolerate more unstable systems.
\begin{figure}
\centering
\includegraphics[width=0.29\textwidth]{figs/vectorSystemStabilityRegion.pdf}\\
\caption{Stability region of $(\mathrm{log}|\lambda_1|, \mathrm{log}|\lambda_2|)$ indicated by Theorem 2 for a vector system}\label{Fig.vectorSystemStabilityRegion}
\end{figure}
\section{Conclusion}
This paper characterized the requirement for a power constrained fading channel to allow the existence of a causal encoder/decoder pair that can mean square stabilize a discrete-time LTI system. The mean square capacity of the power constrained fading channel with causal encoders/decoders was given. It was shown that this mean square capacity is no larger than the Shannon capacity, and that the two coincide in some special situations. Throughout the paper, the capacity was derived under the assumption that there exists a perfect feedback link from the channel output to the channel input. The capacity of power constrained fading channels without such a feedback link, or with only a noisy feedback link, remains under investigation.
\bibliographystyle{ieeetr}
\section{Introduction}
Basic justification stit (or jstit, for short) logic was
introduced in \cite{OLWA} as an environment for analysis of
doxastic actions related to proving activity within a somewhat
idealized community of agents, combining expressive means of stit
logic by N. Belnap et al. \cite{belnap2001facing} with those of
justification logic by S. Artemov et al. \cite{ArtemovN05}. This
logic, therefore, retains the full set of expressive means of the
two above-mentioned logics and introduces some new expressive
means on top of them. These new expressive means were called in
\cite{OLWA} proving modalities and they capture different modes in
which one can speak about proving activity of an agent. The
general idea behind jstit logic is that one gets a right
classification of such modes if one intersects the distinction
between agentive and factual (aka moment-determinate) events
developed in stit logic with the distinction between explicit and
implicit modes of knowledge which is central to justification
logic. The first distinction, when applied to proofs, corresponds
to a well-known philosophical discussion of proofs-as-objects vs
proofs-as-acts. One refers to a proof-as-act when one says that
agent $j$ proves some proposition $A$, but one refers to a
proof-as-object when saying that $A$ was proved. While doing that,
one can either simply say that $A$ was proved, or add that $A$ was
proved by some proof $t$; and the difference between these two
modes of speaking is exactly the difference between implicit and
explicit reference to proofs. All in all this gives us the
following classification of proving modalities:
\begin{center}
\begin{tabular}{|c||c|c|}
\hline
& Agentive & Moment-determinate \\
\hline\hline
Explicit & $j$ proves $A$ by $t$ & $A$ has been proven by $t$ \\
& $Prove(j, t, A)$& $Proven(t,A)$\\
\hline
Implicit & $j$ proves $A$ & $A$ has been proven \\
&$Prove(j, A)$& $Proven(A)$\\
\hline
\end{tabular}
\end{center}
In \cite{OLWA} the semantics of these modalities was presented and
informally motivated in some detail. However, in the present
paper, we are going to look into one fragment of basic jstit logic
rather than the full system. The reason for this is the relatively
high level of complexity of the full basic jstit logic. The
fragment in question is, in fact, the basic jstit logic without
the two explicit proving modalities given in the first row of
table above. The resulting restricted system, therefore, features
the full set of expressive means inherited from justification
logic and stit logic plus the two implicit modalities,
$Prove(j,A)$ and $Proven(A)$. For the same reason (i.e. keeping
the complexity down), we also use a slightly simplified version of
the semantics introduced in \cite{OLWA} to interpret this logic.
The resulting system, which we will call the implicit jstit logic,
still allows for an analysis of the interplay between
proofs-as-acts and proofs-as-objects, although it limits the
format of such an analysis to some extent and also zeros out the
interplay between implicit and explicit modes of speech. But even
this restricted logic, has, as will be shown below, a challenging
degree of complexity, which makes the problem of axiomatizing it
both interesting and non-trivial.
The present paper is devoted to solving this exact problem. Its
layout is as follows. In Section \ref{basic} we define the
language and the semantics of the logic at hand. We also show some
features of implicit jstit logic, which limit the power and the
scope of possible completeness results, namely, the failure of
compactness and finite model properties. The latter fails in a
rather strong form; as a result, one cannot impose any finite
bound not only on the overall size of a model satisfying a given
formula, but also on the length of histories in such a model. The
failure of compactness also means that one cannot have a strongly
complete axiomatization for this logic while retaining a finitary
notion of proof.
Despite all these challenges, however, it turns out that with
implicit jstit logic one can do much better than just weak
completeness; in fact, our main result is much closer to the
strong completeness and only differs from the latter in that some
restrictions are imposed on proof variables occurring in a given
set of formulas. The exact formulation of this result is given in
Section \ref{axioms}, where we also define the axiom system which
displays this exact degree of completeness w.r.t. implicit jstit
logic. We immediately show this system to be sound w.r.t. the
semantics introduced in Section \ref{basic}, and we end the
section by proving a number of theorems in the system.
Section \ref{canonicalmodel} then contains the bulk of technical
work necessary for the completeness theorem. It gives a stepwise
construction and adequacy check for all the numerous components of
the canonical model and ends with a proof of a truth lemma.
Section \ref{main} then reaps the fruits of the hard work done in
Section \ref{canonicalmodel}, giving a concise proof of the
completeness result and drawing some quick corollaries including
the weak completeness theorem and a restricted form of compactness
property. Then follows Section \ref{conclusion}, giving some
conclusions and drafting directions for future work.
In what follows we will be assuming, due to space limitations, a
basic acquaintance with both stit logic and justification logic.
We recommend perusing \cite[Ch. 2]{horty2001agency} for a quick
introduction into the basics of stit logic, and
\cite{sep-logic-justification} for the same w.r.t. justification logic.
\section{Basic definitions and notation}\label{basic}
We fix some preliminaries. First we choose a finite set $Ag$
disjoint from all the other sets to be defined below. Individual
agents from this set will be denoted by letters $i$ and $j$. Then
we fix countably infinite sets $PVar$ of proof variables (denoted
by $x,y,z,w,u$) and $PConst$ of proof constants (denoted by
$a,b,c,d$). When needed, subscripts and superscripts will be used
with the above notations or any other notations to be introduced
in this paper. Set $Pol$ of proof polynomials is then defined by
the following BNF:
$$
t := x \mid c \mid s + t \mid s \times t \mid !t,
$$
with $x \in PVar$, $c \in PConst$, and $s,t$ ranging over elements
of $Pol$. In the above definition $+$ stands for the \emph{sum} of
proofs, $\times$ denotes \emph{application} of its left argument
to the right one, and $!$ denotes the so-called
\emph{proof-checker}, so that $!t$ checks the correctness of proof
$t$.
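The grammar of $Pol$ can be transcribed directly as an inductive datatype. The following Python sketch (the encoding and all names are ours, purely for illustration) mirrors the BNF:
\begin{verbatim}
from dataclasses import dataclass

class Pol:                 # base class for proof polynomials
    pass

@dataclass(frozen=True)
class Var(Pol):            # proof variable x
    name: str

@dataclass(frozen=True)
class Const(Pol):          # proof constant c
    name: str

@dataclass(frozen=True)
class Sum(Pol):            # s + t, sum of proofs
    s: Pol
    t: Pol

@dataclass(frozen=True)
class App(Pol):            # s x t, application of s to t
    s: Pol
    t: Pol

@dataclass(frozen=True)
class Check(Pol):          # !t, proof checker applied to t
    t: Pol

# e.g. the polynomial !(c x x) + y:
example = Sum(Check(App(Const("c"), Var("x"))), Var("y"))
\end{verbatim}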
In order to define the set $Form$ of formulas we fix a countably
infinite set $Var$ of propositional variables to be denoted by
letters $p,q,r,s$. Formulas themselves will be denoted by letters
$A,B,C,D$, and the definition of $Form$ is supplied by the
following BNF:
\begin{align*}
A := p \mid A \wedge B \mid \neg A \mid [j]A \mid \Box A \mid t{\hspace{0.25mm}:\hspace{0.25mm}}
A \mid KA \mid Prove(j, A) \mid Proven(A),
\end{align*}
with $p \in Var$, $j \in Ag$ and $t \in Pol$.
It is clear from the above definition of $Form$ that we are
considering a version of modal propositional language. As for the
informal interpretations of modalities, $[j]A$ is the so-called
cstit action modality and $\Box$ is historical necessity modality,
both modalities are borrowed from stit logic. The next two
modalities, $KA$ and $t{\hspace{0.25mm}:\hspace{0.25mm}} A$, come from justification logic and
the latter is interpreted as ``$t$ proves $A$'', whereas the
former is the strong epistemic modality ``$A$ is
known''.\footnote{Perhaps, ``$A$ is provable'' will be an even
better reading.} The two remaining modalities, $Prove(j, A)$ and
$Proven(A)$ are implicit modalities related to the proving
activity of agents and their informal interpretation was
considered in Section 1.
We assume $\Diamond$, $\langle K \rangle$, and $\langle j \rangle$ for a $j \in Ag$
as notations for the dual modalities of $\Box$, $K$ and $[j]$, respectively.
For the language at hand, we assume the following semantics. A
jstit model is a structure
$$
\mathcal{M} = \langle Tree, \leq, Choice, Act, R, \mathcal{E},
V\rangle
$$
such that:
\begin{itemize}
\item $Tree$ is a non-empty set. Elements of $Tree$ are called
\emph{moments}.
\item $\leq$ is a partial order on $Tree$ for which a temporal
interpretation is assumed.
\item $Hist$ is the set of maximal chains in $Tree$ w.r.t. $\leq$.
Since $Hist$ is completely determined by $Tree$ and $\leq$, it is
not included into the structure of a model as a separate
component. Elements of $Hist$ are called \emph{histories}. The set
of histories containing a given moment $m$ will be denoted $H_m$.
The following set:
$$
MH(\mathcal{M}) = \{ (m,h)\mid m \in Tree,\, h \in H_m \},
$$
called the set of \emph{moment-history pairs}, will be used to
evaluate formulas of the above language.
\item $Choice$ is a function mapping $Tree \times Ag$ into
$2^{2^{Hist}}$ in such a way that for any given $j \in Ag$ and
$m \in Tree$ we have as $Choice(m,j)$ (to be denoted as
$Choice^m_j$ below) a partition of $H_m$. For a given $h \in H_m$
we will denote by $Choice^m_j(h)$ the element of partition
$Choice^m_j$ containing $h$.
\item $Act$ is a function mapping $MH(\mathcal{M})$ into
$2^{Pol}$.
\item $R$ is a pre-order on $Tree$ called epistemic
accessibility.
\item $\mathcal{E}$ is a function mapping $Tree \times Pol$ into
$2^{Form}$.
\item $V$ is an evaluation function, mapping the set $Var$ into
$2^{MH(\mathcal{M})}$.
\end{itemize}
However, not all structures of the above described type are
admitted as jstit models. A number of additional restrictions
need to be satisfied. More precisely, we assume satisfaction of
the following constraints:
\begin{enumerate}
\item Historical connection:
$$
(\forall m,m_1 \in Tree)(\exists m_2
\in Tree)(m_2 \leq m \wedge m_2 \leq m_1).
$$
\item No backward branching:
$$
(\forall m,m_1,m_2 \in Tree)((m_1 \leq m \wedge m_2 \leq m) \to
(m_1 \leq m_2 \vee m_2 \leq m_1)).
$$
\item No choice between undivided histories:
$$
(\forall m,m' \in Tree)(\forall h,h' \in H_m)(m < m' \wedge m' \in
h \cap h' \to Choice^m_j(h) = Choice^m_j(h'))
$$
for every $j \in Ag$.
\item Independence of agents:
$$
(\forall m\in Tree)(\forall f:Ag \to 2^{H_m})((\forall j \in
Ag)(f(j) \in Choice^m_j) \Rightarrow \bigcap_{j \in Ag}f(j)
\neq \emptyset).
$$
\item Monotonicity of evidence:
$$
(\forall t \in Pol)(\forall m,m' \in Tree)(R(m,m') \Rightarrow
\mathcal{E}(m,t) \subseteq \mathcal{E}(m',t)).
$$
\item Evidence closure properties. For arbitrary $m \in Tree$,
$s,t \in Pol$ and $A, B \in Form$ it is assumed that:
\begin{enumerate}
\item $A \to B \in \mathcal{E}(m,s) \wedge A \in \mathcal{E}(m,t)
\Rightarrow B \in \mathcal{E}(m,s\times t)$;
\item $\mathcal{E}(m,s) \cup \mathcal{E}(m,t) \subseteq
\mathcal{E}(m,s + t)$.
\item $A \in \mathcal{E}(m,t) \Rightarrow
t:A \in \mathcal{E}(m,!t)$;
\end{enumerate}
\item Expansion of presented proofs:
$$
(\forall m,m' \in Tree)(m' < m \Rightarrow \forall h \in H_m
(Act(m',h) \subseteq Act(m,h))).
$$
\item No new proofs guaranteed:
$$
(\forall m \in Tree)(\bigcap_{h \in H_m}(Act(m,h)) \subseteq
\bigcup_{m' < m, h \in H_m}(Act(m',h))).
$$
\item Presenting a new proof makes histories divide:
$$
(\forall m \in Tree)(\forall h,h' \in H_m)((\exists m' > m)(m' \in h
\cap h') \Rightarrow (Act(m,h) = Act(m,h'))).
$$
\item Future always matters:
$$
\leq \subseteq R.
$$
\item Presented proofs are epistemically transparent:
$$
(\forall m,m' \in Tree)(R(m,m') \Rightarrow (\bigcap_{h \in
H_m}(Act(m,h)) \subseteq \bigcap_{h' \in H_{m'}}(Act(m',h')))).
$$
\end{enumerate}
We offer some intuitive explanation for the above defined notion
of jstit model. Due to space limitations, we only explain the
intuitions behind jstit models very briefly, and we urge the
reader to consult \cite[Section 3]{OLWA} for a more comprehensive
explanation, whenever needed.
The components like $Tree$, $\leq$, $Choice$ and $V$ are inherited
from stit logic, whereas $R$ and $\mathcal{E}$ come from
justification logic. The only new component is $Act$. The
intuition behind the semantics is that $Ag$, our community of
agents, is engaged in proving activity and this proving activity
consists in making proof polynomials public within the community.
One can think of a group of researchers, assembled before a
whiteboard in a conference room and putting the proofs they
discover on this whiteboard. Function $Act$ gives out the current
state of this whiteboard at any given moment under any given
history. The whole situation is somewhat idealized in that we
assume that nothing ever gets erased from the whiteboard, that
there is always enough free space on it, and that the agents do
not send one another any private messages.
The numbered list of semantical constraints above then just builds
on these intuitions. Constraints $1$--$4$ are borrowed from stit
logic, constraints $5$ and $6$ are inherited from justification
logic. Constraint $7$ just says that nothing gets erased from the
whiteboard, constraint $8$ says a new proof cannot spring into
existence as a static (i.e. moment-determinate) feature of the
environment out of nothing, but rather has to come as a result (or
a by-product) of a previous activity. Constraint $9$ is just a
corollary to constraint $3$ in the richer environment of jstit
models, constraint $10$ says that the possible future of the given
moment is always epistemically relevant in this moment, and
constraint $11$ says that the community knows everything that has
firmly made its way onto the whiteboard.
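Constraints of this kind can be checked mechanically on finite candidate structures. Here is a minimal Python sketch (our own illustration) for the first two constraints; the remaining ones, involving $Choice$, $Act$, $R$ and $\mathcal{E}$, can be handled analogously:
\begin{verbatim}
from itertools import product

def check_frame(tree, leq):
    """Check historical connection and no backward branching for a
    finite candidate (tree, leq); leq is a set of ordered pairs."""
    hist_conn = all(any((m2, m) in leq and (m2, m1) in leq
                        for m2 in tree)
                    for m, m1 in product(tree, repeat=2))
    no_back = all((m1, m2) in leq or (m2, m1) in leq
                  for m, m1, m2 in product(tree, repeat=3)
                  if (m1, m) in leq and (m2, m) in leq)
    return hist_conn, no_back

# toy example: a three-moment tree branching at r
tree = {'r', 'a', 'b'}
leq = {(x, x) for x in tree} | {('r', 'a'), ('r', 'b')}
print(check_frame(tree, leq))   # (True, True)
\end{verbatim}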
For the members of $Form$, we will assume the following
inductively defined satisfaction relation. For every jstit model
$\mathcal{M} = \langle Tree, \leq, Choice, Act, R, \mathcal{E},
V\rangle$ and for every $(m,h) \in MH(\mathcal{M})$ we stipulate
that:
\begin{align*}
&\mathcal{M}, m, h \models p \Leftrightarrow (m,h) \in
V(p);\\
&\mathcal{M}, m, h \models [j]A \Leftrightarrow (\forall h'
\in Choice^m_j(h))(\mathcal{M}, m, h' \models A);\\
&\mathcal{M}, m, h \models \Box A \Leftrightarrow (\forall h'
\in H_m)(\mathcal{M}, m, h' \models A);\\
&\mathcal{M}, m, h \models KA \Leftrightarrow \forall m'\forall
h'(R(m,m') \& h' \in H_{m'} \Rightarrow \mathcal{M}, m', h'
\models A);\\
&\mathcal{M}, m, h \models t{\hspace{0.25mm}:\hspace{0.25mm}} A \Leftrightarrow A \in
\mathcal{E}(m,t) \& \mathcal{M}, m, h \models KA;\\
&\mathcal{M}, m, h \models Prove(j, A) \Leftrightarrow (\forall h'
\in Choice^m_j(h))(\exists t \in Act(m,h'))(\mathcal{M},m,h'
\models t{\hspace{0.25mm}:\hspace{0.25mm}} A)
\&\\
&\qquad\qquad\qquad\qquad\qquad \&(\forall s \in Pol)(\exists
h'' \in H_m) (\mathcal{M},m,h \models s{\hspace{0.25mm}:\hspace{0.25mm}} A \Rightarrow s \notin
Act(m,h''));\\
&\mathcal{M}, m, h \models Proven(A) \Leftrightarrow (\exists t
\in Pol)(\forall h' \in H_m) (t \in Act(m,h') \& \mathcal{M},
m, h \models t{\hspace{0.25mm}:\hspace{0.25mm}} A)
\end{align*}
In the above clauses we assume that $p \in Var$; we also assume
standard clauses for Boolean connectives. Note that the
satisfaction clause for $Prove(j,A)$ consists of two conjuncts,
one stating that some proof of $A$ must be presented at every
history in a given choice cell, and the other saying that no proof
of $A$ is presented in all histories through the given moment.
These conjuncts show some similarity to the conjuncts in the
satisfaction clause for the \emph{dstit}
operator, which are known in the existing literature under the names of \emph{positive} and \emph{negative}
condition, respectively. Following this usage, we will
name the first conjunct in the satisfaction clause for $Prove(j,A)$
the positive condition for $Prove(j,A)$, and the second one
the negative condition for $Prove(j,A)$. The intuitive motivation
for the satisfaction clauses of $Prove(j,A)$ and $Proven(A)$ was
worked out in detail in \cite{OLWA} and we do not dwell on it
here.
We further assume standard definitions for satisfiability and
validity of formulas and sets of formulas in the presented
semantics.
Before we proceed to proving things about the defined system, we
want to briefly comment on how the above semantics relates to the
semantics introduced in \cite{OLWA}. The main difference is that
the latter semantics uses two epistemic accessibility relations $R$ and
$R_e$ with the constraint that $R
\subseteq R_e$, whereas in the jstit models as defined above one
only finds one such relation $R$, and this relation serves the
functions of both $R$ and $R_e$. Thus the semantics defined above
arises from the more general semantics presented in \cite{OLWA} as
a particular case with $R$ and $R_e$ being identified with one
another.
The exact import of this additional restriction on the semantics
presented in \cite{OLWA} is not yet clear. It is known that on the
level of pure justification logic identifying $R$ and $R_e$ does
not change the set of validities (see, e.g. \cite[Comment
6.5]{ArtemovN05}). Our tentative hypothesis would be, then, that
imposing $R = R_e$ in the richer context of jstit logic might be
just as irrelevant as it is in justification logic. However, we
have no proof of this hypothesis at the moment, so it stands as an
open problem.
The semantics just defined admits of no finitary strongly complete
system since it is not compact. Indeed, the set
$$
\{ Proven(p) \} \cup \{ \neg t{\hspace{0.25mm}:\hspace{0.25mm}} p\mid t \in Pol \}
$$
is unsatisfiable, even though every finite subset of it can be
satisfied. Still, the main result of this paper shows that we can
do better than just weak completeness; in fact we can show that
also infinite consistent sets of formulas can be satisfied
provided that there is an infinite set of proof variables that do
not occur in those formulas. Thus we get something considerably
stronger than just weak completeness including also a restricted
form of the compactness theorem.
It is also worth noting that under the presented semantics some
satisfiable formulas cannot be satisfied over finite models. As an
example of this phenomenon, consider $K(\Diamond p \wedge
\Diamond\neg p)$. If $\mathcal{M}, m_1, h \models K(\Diamond p
\wedge \Diamond\neg p)$, then, by reflexivity of $R$, also
$\mathcal{M}, m_1, h \models \Diamond p \wedge \Diamond\neg p$,
which means that at least two different histories are running
through $m_1$ in $\mathcal{M}$. Therefore, $m_1$ cannot be a
$\leq$-maximal moment in $\mathcal{M}$, so that there is at least
one moment $m_2 \in h$ such that $m_1 < m_2$. By the future always
matters constraint we get then that $R(m_1, m_2)$, which, by
transitivity of $R$, means that we also have $\mathcal{M}, m_2, h
\models K(\Diamond p \wedge \Diamond\neg p)$. Iterating this
construction $\omega$ times, we get a countably infinite sequence
of moments along $h$:
$$
m_1 < m_2 <\ldots < m_n <\ldots,
$$
showing that the moments in this sequence are pairwise different
(by antisymmetry of $\leq$) and that $\mathcal{M}$ is consequently
an infinite model. Since $\mathcal{M}$ was chosen arbitrarily,
this shows that $K(\Diamond p \wedge \Diamond\neg p)$ cannot be
satisfied over finite jstit models. On the other hand, $K(\Diamond
p \wedge \Diamond\neg p)$ is clearly satisfiable when one allows
for infinite models. One can consider, for example, a jstit model
$\mathcal{M} = \langle Tree, \leq, Choice, Act, R, \mathcal{E},
V\rangle$ for a community $\{ j \}$ consisting of a single agent,
setting:
$$
Tree: = \{ (a_1,\ldots,a_n)\mid a_i \in \{ 0, 1 \}\textup{ for } i
\leq n \} \cup \{\Lambda \},
$$
where $\Lambda$ is the empty sequence;
$$
(a_1,\ldots,a_n) \leq (b_1,\ldots,b_k) \Leftrightarrow (n \leq k
\& (\forall i \leq n)(a_i = b_i)),\,\,Choice^m_j := H_m,\,\,
Act(m, h) = \emptyset
$$
for every $m \in Tree$ and $h \in H_m$;
$$
R := \leq, \mathcal{E}(m, t) := Form,
$$
for every $m \in Tree$ and $t \in Pol$;
$$
V(p) := \{ (m,h) \mid (m, 1) \in h \}, V(q) = \emptyset
$$
provided $q \in Var \setminus \{ p \}$. It is straightforward to
check then that with these settings we get that $\mathcal{M},
\Lambda, h \models K(\Diamond p \wedge \Diamond\neg p)$ for an
arbitrary history $h$ over $\mathcal{M}$.
Note also that the same example shows that one cannot put a
finite bound on the length of histories in the models satisfying a
given formula, so that what one might call a ``finite
history property'', which is satisfied, e.g., by the canonical
model of the logic of the dstit operator (see \cite[Section
17C]{belnap2001facing} for the definition), also fails for the
implicit jstit logic.
\section{Axiomatic system and soundness}\label{axioms}
We consider the following set of axiomatic schemes:
\begin{align}
&\textup{A full set of axioms for classical propositional
logic}\label{A0}\tag{\text{A0}}\\
&\textup{$S5$ axioms for $\Box$ and $[j]$ for every $j \in
Ag$}\label{A1}\tag{\text{A1}}\\
&\Box A \to [j]A \textup{ for every }j \in Ag\label{A2}\tag{\text{A2}}\\
&(\Diamond[j_1]A_1 \wedge\ldots \wedge \Diamond[j_n]A_n) \to
\Diamond([j_1]A_1 \wedge\ldots \wedge[j_n]A_n)\label{A3}\tag{\text{A3}}\\
&s{\hspace{0.25mm}:\hspace{0.25mm}}(A \to B) \to (t{\hspace{0.25mm}:\hspace{0.25mm}} A \to (s\times t){\hspace{0.25mm}:\hspace{0.25mm}}
B)\label{A4}\tag{\text{A4}}\\
&t{\hspace{0.25mm}:\hspace{0.25mm}} A \to (!t{\hspace{0.25mm}:\hspace{0.25mm}}(t{\hspace{0.25mm}:\hspace{0.25mm}} A) \wedge KA)\label{A5}\tag{\text{A5}}\\
&(s{\hspace{0.25mm}:\hspace{0.25mm}} A \vee t{\hspace{0.25mm}:\hspace{0.25mm}} A) \to (s+t){\hspace{0.25mm}:\hspace{0.25mm}} A\label{A6}\tag{\text{A6}}\\
&\textup{$S4$ axioms for $K$}\label{A7}\tag{\text{A7}}\\
&KA \to \Box K\Box A\label{A8}\tag{\text{A8}}\\
&Prove(j, A) \to (\neg Proven(A) \wedge [j]Prove(j, A) \wedge KA)\label{A9}\tag{\text{A9}}\\
&\Box Prove(j, A) \to \Box Prove(i, A)\label{A10}\tag{\text{A10}}\\
&Proven(A) \to (KProven(A) \wedge KA)\label{A11}\tag{\text{A11}}\\
&\neg K(\bigvee^{n}_{l = 1}\langle K \rangle\Diamond Prove(j_{l},
A_{l}))\label{A12}\tag{\text{A12}}\\
&\neg Prove(j,A) \to \langle j \rangle (\bigwedge_{i \in Ag}\neg Prove(i,A))\label{A13}\tag{\text{A13}}
\end{align}
The assumption is that in \eqref{A3} $j_1,\ldots, j_n$ are
pairwise different.
To this set of axiom schemes we add the following rules of
inference:
\begin{align}
&A, A \to B \Rightarrow B;\label{R1}\tag{\text{R1}}\\
&A\Rightarrow KA;\label{R2}\tag{\text{R2}}\\
&\textup{If $A$ is an instance of (A0)--(A13) and $c \in PConst$,
then infer $c{\hspace{0.25mm}:\hspace{0.25mm}} A$;}\label{R3}\tag{\text{R3}}\\
&KA \to (\neg Proven(B_1) \vee\ldots \vee\neg
Proven(B_n)) \Rightarrow\notag\\
&\qquad\qquad\Rightarrow KA \to (\bigwedge_{j \in Ag}\neg
Prove(j,B_1) \vee\ldots \vee \bigwedge_{j \in Ag}\neg
Prove(j,B_n)).\label{R4}\tag{\text{R4}}
\end{align}
We call a jstit model $\mathcal{M} = \langle Tree, \leq, Choice,
Act, R, \mathcal{E}, V\rangle$ \emph{normal} iff the following
condition is satisfied:
\begin{align*}
(\forall c \in PConst)(\forall m \in Tree)(\{ A \mid &A\text{ is a
substitution instance}\\
&\text{of one of the schemes among \eqref{A0}--\eqref{A13}}\}
\subseteq \mathcal{E}(m,c)).
\end{align*}
Our goal is now a restricted completeness theorem w.r.t. the class
of normal models. We start by establishing soundness, and we
precede the soundness theorem with the following rather
straightforward technical claim:
\begin{lemma}\label{determinate}
For every $A \in Form$ and every $t \in Pol$, all of the formulas
$\Box A$, $KA$, $t{\hspace{0.25mm}:\hspace{0.25mm}} A$ and $Proven(A)$ are moment-determinate,
that is to say, if $\alpha \in \{ \Box A, KA, t{\hspace{0.25mm}:\hspace{0.25mm}} A,Proven(A)
\}$, then for an arbitrary normal jstit model $\mathcal{M} =
\langle Tree, \leq, Choice, Act, R, \mathcal{E}, V\rangle$ and $m
\in Tree$, if $h, h' \in H_m$, then:
$$
\mathcal{M},m,h \models \alpha \Leftrightarrow \mathcal{M},m,h'
\models \alpha.
$$
Also, Boolean combinations of these formulas are
moment-determinate.
\end{lemma}
\begin{proof}
For $\alpha = \Box A$ and $\alpha = KA$ it suffices to note that
the semantical conditions for satisfaction of $KA$ and $\Box A$ at
a given $(m,h) \in MH(\mathcal{M})$ in a given $\mathcal{M}$ have
no free occurrences of $h$. When we turn, further, to the
corresponding condition for $t{\hspace{0.25mm}:\hspace{0.25mm}} A$, the only free occurrence of
$h$ will be within the context $\mathcal{M}, m, h \models KA$
which was shown to be moment-determinate. Similarly, in the
satisfaction condition for $Proven(A)$ the only free occurrence of
$h$ is within a moment determinate context $\mathcal{M}, m, h
\models t{\hspace{0.25mm}:\hspace{0.25mm}} A$.
Of course, Boolean combinations of moment-determinate formulas
must be moment-determinate, too.
\end{proof}
It follows from Lemma \ref{determinate}, that the truth of
moment-determinate formulas at a given moment-history pair only
depends on the moment, so that we might as well omit the histories
when discussing satisfaction of such formulas and write
$\mathcal{M}, m \models KA$ instead of $\mathcal{M}, m, h \models
KA$, etc.
Establishing soundness mostly reduces to a routine check that
every axiom is valid and that rules preserve validity. We treat
the less obvious cases in some detail:
\begin{theorem}\label{soundness}
Every instance of \eqref{A1}--\eqref{A13} is valid over the
class of normal jstit models. Every application of rules
\eqref{R1}--\eqref{R4} to formulas which are valid over the class of normal jstit models
yields a formula which is valid over the class of normal jstit models.
\end{theorem}
\begin{proof}
First, note that if $\mathcal{M} = \langle Tree, \leq, Choice,
Act, R, \mathcal{E}, V\rangle$ is a normal jstit model, then
$\langle Tree, \leq, Choice, V\rangle$ is a model of stit logic.
Therefore, axioms \eqref{A0}--\eqref{A3}, which were copy-pasted
from the standard axiomatization of \emph{dstit} logic (see, e.g.
\cite[Ch. 17]{belnap2001facing}) must be valid. Second, note that
if $\mathcal{M} = \langle Tree, \leq, Choice, Act, R, \mathcal{E},
V\rangle$ is a normal jstit model, then $\langle Tree, \leq, Act,
R, \mathcal{E}\rangle$ is what is called in \cite[p.
1067]{ArtemovN05} a frame for a Fitting justification model with
the form of constant specification defined by
\eqref{R3}\footnote{Note that in \cite{ArtemovN05}, $\mathcal{E}$ is
not included in justification frames; however, this
is of no consequence for the present setting.}. This means that
all of \eqref{A4}--\eqref{A7} must be valid, whereas
\eqref{R1}--\eqref{R3} must preserve validity. The validity of
other elements of the above-presented axiomatic system will be
motivated below in some detail. In what follows, $\mathcal{M} =
\langle Tree, \leq, Choice, Act, R, \mathcal{E}, V\rangle$ will
always stand for an arbitrary normal jstit model, and $(m,h)$ for
an arbitrary element of $MH(\mathcal{M})$.
As for \eqref{A8}, assume for \emph{reductio} that $\mathcal{M}, m
\models KA \wedge \neg\Box K\Box A$. Then $\mathcal{M}, m, h
\models KA$ and also $\mathcal{M}, m \not\models \Box K\Box A$.
The latter means that for some $h' \in H_m$ we have $\mathcal{M},
m \not\models K\Box A$. Therefore, there must be some $m' \in
Tree$ such that $R(m,m')$ and some $g \in H_{m'}$ such that
$\mathcal{M}, m', g \not\models \Box A$, whence for some $g' \in
H_{m'}$ we will have $\mathcal{M}, m', g' \not\models A$. Since
$R(m,m')$, this means that $KA$ must fail at $(m,h)$ in
$\mathcal{M}$, a contradiction.
We consider next \eqref{A9}. Assume that $Prove(j, A)$ is true at
$(m,h)$ in $\mathcal{M}$. Note that the negative condition for
$Prove(j, A)$ at $(m,h)$ is logically equivalent to the negation
of the satisfaction condition for $Proven(A)$, which means that
$\neg Proven(A)$ must be true at $(m,h)$ in $\mathcal{M}$.
Further, since clearly $h \in Choice^m_j(h)$ and thus
$Choice^m_j(h)$ cannot be empty, it follows from the positive
condition for $Prove(j, A)$ that for some $t \in Pol$ we will have
$\mathcal{M},m,h \models t{\hspace{0.25mm}:\hspace{0.25mm}} A$, and therefore, by validity of
\eqref{A5}, $\mathcal{M},m,h \models KA$. Finally, note that since
$Choice^m_j$ is a partition of $H_m$, then for any $h' \in
Choice^m_j(h)$, if $h'' \in Choice^m_j(h')$, then $h'' \in
Choice^m_j(h)$. Therefore, since the positive condition for
$Prove(j,A)$ is satisfied at $(m,h)$, there must be some $t \in
Pol$ such that both $t \in Act(m,h'')$ and $\mathcal{M},m\models
t{\hspace{0.25mm}:\hspace{0.25mm}} A$. Therefore, the positive condition for $Prove(j,A)$ will
be satisfied at $(m,h')$ for every $h' \in Choice^m_j(h)$. As for
the negative condition, recall that it is equivalent to the
negation of the satisfaction condition for $Proven(j,A)$ and the
latter is, by Lemma \ref{determinate}, moment-determinate.
Therefore, the negative condition for $Prove(j,A)$ must be
moment-determinate as well, and, once satisfied at a given
$(m,h)$, it will be satisfied at every history through $m$.
Therefore, once we have $Prove(j, A)$ true at $(m,h)$ in
$\mathcal{M}$, we must also have $\mathcal{M},m,h \models
[j]Prove(j,A)$.
The next axiom is \eqref{A10}. If $\Box Prove(j, A)$ is true at
$(m,h)$ in $\mathcal{M}$, this means that $Prove(j, A)$ is true at
$(m,h')$ in $\mathcal{M}$ for every $h' \in H_m$. Now, take an
arbitrary such $h'$. We know that the negative condition for
$Prove(i,A)$ is the same as for $Prove(j, A)$, and is therefore
satisfied at $(m,h')$. As for the positive condition, assume that
$h'' \in Choice^m_i(h')$. We know that $Prove(j, A)$ is true at
$(m,h'')$, therefore, since $h''$ is obviously in
$Choice^m_j(h'')$, for some $t \in Pol$ we must have both $t \in
Act(m,h'')$ and $\mathcal{M},m \models t{\hspace{0.25mm}:\hspace{0.25mm}} A$. Thus the positive
condition for $Prove(i,A)$ at $(m,h')$ is satisfied as well. Since
$h'$ was chosen as an arbitrary history through $m$, this means
that $\Box Prove(i, A)$ must be satisfied at $(m,h)$ in
$\mathcal{M}$.
We now take up \eqref{A11}. If $Proven(A)$ is true at $m$ in
$\mathcal{M}$, then there is a $t \in Pol$ such that $t \in
\bigcap_{h \in H_m}Act(m,h)$ and $t{\hspace{0.25mm}:\hspace{0.25mm}} A$ is true at $m$. By
validity of \eqref{A5}, we immediately get that $\mathcal{M},m
\models KA$. Further, the fact that $t{\hspace{0.25mm}:\hspace{0.25mm}} A$ is true at $m$ means
that $A \in \mathcal{E}(m,t)$. Now, assume that $m' \in Tree$ is
such that $R(m,m')$. By the epistemic transparency of presented
proofs constraint we know that $t \in \bigcap_{h' \in
H_{m'}}Act(m',h')$. By monotonicity of evidence, we know that $A
\in \mathcal{E}(m',t)$. By the S4 reasoning for $K$ we know that
$\mathcal{M},m' \models KA$. Summing up, we must have $Proven(A)$
true at $m'$, and since $m'$ was chosen as an arbitrary
$R$-successor of $m$, this means that we also have $\mathcal{M},m
\models KProven(A)$.
To prove the validity of \eqref{A12} over the class of normal
jstit models, we proceed by induction on $n \geq 1$.
\emph{Basis}. $n = 1$. Assume, for \emph{reductio}, that
$\mathcal{M},m \models K\langle K\rangle\Diamond Prove(j_1,A_1)$.
Then, by validity of \eqref{A7}, $\mathcal{M},m \models \langle
K\rangle\Diamond Prove(j_1,A_1)$. Therefore, for some $m' \in
Tree$ such that $R(m,m')$, we must have $\mathcal{M},m' \models
\Diamond Prove(j_1,A_1)$. The latter, in turn, means that for some
$h' \in H_{m'}$ we will have $\mathcal{M},m', h' \models
Prove(j_1,A_1)$. We know then that $m'$ must have some
$<$-successors, where $<$ is the irreflexive companion of $\leq$
in $\mathcal{M}$. Indeed, if $m'$ were a $\leq$-maximal moment,
then we would have $H_{m'} = \{ h' \}$, that is to say, $h'$ would
be the only history passing through $m'$. But then, of course $h'
\in Choice^{m'}_{j_1}(h')$, therefore, for some $t \in Pol$ we
would have then both $t \in Act(m', h')$ and $\mathcal{M},m'
\models t{\hspace{0.25mm}:\hspace{0.25mm}} A_1$ by the positive condition for $Prove(j_1,A_1)$ at
$(m',h')$. But then, given that $H_{m'} = \{ h' \}$, this would
mean that $t \in \bigcap_{g \in H_{m'}}Act(m',g)$ so that the
negative condition for $Prove(j_1,A_1)$ at $(m',h')$ would be
violated, contradicting our assumption that $\mathcal{M},m', h'
\models Prove(j_1,A_1)$.
Therefore, we can choose a moment $m''$ such that both $m'' > m'$
and $h'$ passes through $m''$; consider then $H_{m''}$. All the
histories passing through $m''$ are pairwise undivided at $m'$,
therefore, by the presenting a new proof makes histories divide
constraint we must have $Act(m', g) = Act(m', g')$ for any $g,g'
\in H_{m''}$. We also know that, since $\mathcal{M},m', h' \models
Prove(j_1,A_1)$, there must be a $t \in Pol$ such that $t \in
Act(m',h')$ and $\mathcal{M},m' \models t{\hspace{0.25mm}:\hspace{0.25mm}} A_1$. Since $h' \in
H_{m''}$, this further means that $t \in \bigcap_{g \in
H_{m''}}Act(m', g)$. By the expansion of presented proofs
constraint, we may infer from the latter that $t \in \bigcap_{g
\in H_{m''}}Act(m'', g)$. By the future always matters constraint,
we know that, since $m' < m''$, we must have $R(m',m'')$,
whence, given that $\mathcal{M},m' \models t{\hspace{0.25mm}:\hspace{0.25mm}} A_1$, we must also
have $\mathcal{M},m'' \models t{\hspace{0.25mm}:\hspace{0.25mm}} A_1$. Summing this up with $t \in
\bigcap_{g \in H_{m''}}Act(m'', g)$, we get that $\mathcal{M},m''
\models Proven(A_1)$, which, by \eqref{A11}, means that
$\mathcal{M},m'' \models KProven(A_1)$, whence further, by
\eqref{A8}, $\mathcal{M},m'' \models \Box K\Box Proven(A_1)$.
Validity of \eqref{A1} yields then $\mathcal{M},m'' \models K\Box
Proven(A_1)$. Note, further, that $Proven(A_1) \to \neg Prove(j_1,A_1)$
must be valid as a consequence of \eqref{A9}, and by S5 reasoning
for $\Box$ and S4 reasoning for $K$ we get from this the validity
of:
$$
K\Box Proven(A_1) \to K\Box\neg Prove(j_1,A_1).
$$
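In more detail, this step can be unfolded into the following chain:
\begin{align*}
&\vdash Proven(A_1) \to \neg Prove(j_1,A_1)&&\textup{(a consequence of \eqref{A9})}\\
&\vdash \Box Proven(A_1) \to \Box\neg Prove(j_1,A_1)&&\textup{(by S5 reasoning for $\Box$)}\\
&\vdash K\Box Proven(A_1) \to K\Box\neg Prove(j_1,A_1)&&\textup{(by S4 reasoning for $K$)}
\end{align*}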
The latter means that $\mathcal{M},m'' \models K\Box\neg
Prove(j_1,A_1)$, and, pushing the negation outside, $\mathcal{M},m''
\models \neg\langle K\rangle\Diamond Prove(j_1,A_1)$. It remains then
to note we already established both $R(m,m')$ and $R(m,m'')$ so
that by transitivity of $R$ we get that $R(m,m'')$. Therefore, the
consequence that $\mathcal{M},m'' \models \neg\langle
K\rangle\Diamond Prove(j_1,A_1)$ turns out to be in contradiction with
our initial hypothesis that $\mathcal{M},m \models K\langle
K\rangle\Diamond Prove(j_1,A_1)$. The obtained contradiction shows
that we must have $\neg K\langle K\rangle\Diamond Prove(j_1,A_1)$
true throughout any given normal jstit model for any $A_1 \in
Form$ and $j_1 \in Ag$.
\emph{Induction step}. Assume that for a $k \geq 1$ the validity
of all instances of the scheme $\neg K(\bigvee^{k}_{l = 1}\langle
K \rangle\Diamond Prove(j_{l}, A_{l}))$ has been successfully
shown and assume that $n = k + 1$. Assume, further, that:
$$
\mathcal{M}, m \models K(\bigvee^{k + 1}_{l = 1}\langle K
\rangle\Diamond Prove(j_{l}, A_{l})).
$$
Then, by S4 reasoning for $K$, we know that
$$
\mathcal{M}, m \models \bigvee^{k + 1}_{l = 1}\langle K
\rangle\Diamond Prove(j_{l}, A_{l}),
$$
so that at least one of the disjuncts $\langle K \rangle\Diamond Prove(j_{l},
A_{l})$ must be true at $m$; suppose, wlog, that $l = 1$. Then,
arguing as in the base case, we find a moment $m''$ such that
$R(m,m'')$ and $\mathcal{M},m'' \models K\Box\neg Prove(j_1,A_1)$.
Applying to this S4 reasoning for $K$, we get further that
$\mathcal{M},m'' \models KK\Box\neg Prove(j_1,A_1)$, and, pushing
out the negation, that $\mathcal{M},m'' \models K\neg\langle
K\rangle\Diamond Prove(j_1,A_1)$. Since we have $R(m,m'')$, it
follows that we also have:
$$
\mathcal{M}, m'' \models K(\bigvee^{k + 1}_{l = 1}\langle K
\rangle\Diamond Prove(j_{l}, A_{l})).
$$
From the latter two facts, S4 reasoning for $K$ yields that:
$$
\mathcal{M}, m'' \models K(\bigvee^{k + 1}_{l = 2}\langle K
\rangle\Diamond Prove(j_{l}, A_{l})),
$$
contradicting the induction hypothesis. The obtained contradiction
shows the validity of \eqref{A12} for $n = k + 1$.
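The final application of S4 reasoning for $K$ here deserves a brief
unpacking: writing $X_l$ for $\langle K\rangle\Diamond Prove(j_l,
A_l)$, it rests on the validity of
$$
(K\bigvee^{k + 1}_{l = 1} X_l \wedge K\neg X_1) \to K\bigvee^{k + 1}_{l = 2} X_l,
$$
which holds since $K$, being a normal modality, distributes over the
conjunction in the antecedent, and $(\bigvee^{k + 1}_{l = 1} X_l
\wedge \neg X_1) \to \bigvee^{k + 1}_{l = 2} X_l$ is a propositional
tautology.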
The last axiom is \eqref{A13}. So, assume that $\mathcal{M}, m, h
\models \neg Prove(j, A)$. We have to consider then two cases.
\emph{Case 1}. The negative condition for $Prove(j,A)$ fails at
$(m,h)$. Then we must have $\mathcal{M}, m, h \models Proven(A)$,
and by \eqref{A9} we know that $\mathcal{M}, m, h \models
\bigwedge_{i \in Ag}\neg Prove(i,A)$, thus also $\mathcal{M}, m, h
\models \langle j \rangle\bigwedge_{i \in Ag}\neg Prove(i,A)$ by
S5 reasoning for $[j]$.
\emph{Case 2}. The negative condition for $Prove(j,A)$ holds at
$(m,h)$. Then, since we have $\mathcal{M}, m, h \models \neg
Prove(j, A)$, the positive condition for $Prove(j,A)$ at $(m,h)$
must fail. Therefore, we can choose a $g \in Choice^m_j(h)$ such
that for no $t \in Pol$ do we have both $t \in Act(m,g)$ and
$\mathcal{M}, m \models t{\hspace{0.25mm}:\hspace{0.25mm}} A$. Note, further, that $g \in
Choice^m_i(g)$ for every $i \in Ag$, and therefore the positive
condition for every formula of the form $Prove(i, A)$ fails at
$(m,g)$. Therefore, we must have $\mathcal{M}, m, g \models
\bigwedge_{i \in Ag}\neg Prove(i,A)$, and, since $g \in
Choice^m_j(h)$, also $\mathcal{M}, m, h \models \langle j
\rangle\bigwedge_{i \in Ag}\neg Prove(i,A)$ as desired.
It only remains to show that \eqref{R4} preserves validity over
normal jstit models. Assume that $KA \to (\neg Proven(B_1)
\vee\ldots \vee\neg Proven(B_n))$ is valid over normal jstit
models, and assume also that we have:
$$
\mathcal{M}, m, h \models KA \wedge (\bigvee_{j \in Ag}
Prove(j,B_1) \wedge\ldots \wedge \bigvee_{j \in Ag} Prove(j,B_n)).
$$
This means that we can choose $j_{B_1},\ldots, j_{B_n} \in Ag$ in
such a way that we end up having:
$$
\mathcal{M}, m, h \models KA \wedge Prove(j_{B_1},B_1)
\wedge\ldots \wedge Prove(j_{B_n},B_n).
$$
We can now re-use the manner of reasoning employed above for the
base case of \eqref{A12}. More precisely, since $\mathcal{M}, m, h
\models Prove(j_{B_1},B_1)$ then $m$ must have some
$<$-successors; otherwise $h$ would be the unique history through
$m$, and in that case, if there existed $t \in Pol$ such that both $t \in
Act(m,h)$ and $\mathcal{M}, m \models t{\hspace{0.25mm}:\hspace{0.25mm}} B_1$, the negative
condition for $Prove(j_{B_1},B_1)$ at $(m,h)$ would be violated.
On the other hand, if there were no such $t$, then the positive
condition for $Prove(j_{B_1},B_1)$ at $(m,h)$ would be violated.
Since $m$ is not a $\leq$-maximal moment in $Tree$, then we can
choose an $m' \in Tree$ such that both $m' > m$ and $h \in
H_{m'}$. All the histories passing through $m'$ are pairwise
undivided at $m$, therefore, by the presenting a new proof makes
histories divide constraint we must have $Act(m, g) = Act(m, g')$
for any $g,g' \in H_{m'}$. We also know that, since
$$
\mathcal{M},m, h \models Prove(j_{B_1},B_1) \wedge\ldots\wedge
Prove(j_{B_n},B_n),
$$
there must be $t_1,\ldots, t_n \in Pol$ such that $t_1,\ldots, t_n
\in Act(m,h)$ and $\mathcal{M},m \models t_i{\hspace{0.25mm}:\hspace{0.25mm}} B_i$ for all $i$
such that $1\leq i \leq n$. Since $h \in H_{m'}$, this further
means that
$t_1,\ldots, t_n \in \bigcap_{g \in H_{m'}}Act(m, g)$. By
the expansion of presented proofs constraint, we may infer from
the latter that $t_1,\ldots, t_n \in \bigcap_{g \in H_{m'}}Act(m',
g)$. By the future always matters constraint, we know that, since
$m < m'$, we must have $R(m,m')$, whence, given that
$\mathcal{M},m \models t_i{\hspace{0.25mm}:\hspace{0.25mm}} B_i$ for all $i$ such that $1\leq i
\leq n$, we must also have $\mathcal{M},m' \models t_i{\hspace{0.25mm}:\hspace{0.25mm}} B_i$ for
all such $i$. Summing this up with $t_1,\ldots, t_n \in \bigcap_{g
\in H_{m'}}Act(m', g)$, we get that
$$
\mathcal{M},m' \models Proven(B_1) \wedge\ldots \wedge
Proven(B_n).
$$
Further, we know that $\mathcal{M}, m \models KA$, so that by
$R(m,m')$ and S4 properties of $K$ we must also have $\mathcal{M},
m' \models KA$. Thus we get that $KA \wedge Proven(B_1)
\wedge\ldots\wedge Proven(B_n)$ is satisfied at $m'$, which is in
contradiction with the assumed validity of
$KA \to (\neg Proven(B_1) \vee\ldots\vee\neg
Proven(B_n))$.
\end{proof}
We then define a \emph{proof} in the above-presented axiomatic
system as a finite sequence of formulas such that every formula in
it is either an axiom or is obtained from earlier elements of the
sequence by one of the inference rules. A proof is a proof of its
last formula. If an $A \in Form$ is provable in our system, we
will write $\vdash A$.
The presence in our system of rules like \eqref{R2} and
especially \eqref{R4} complicates the issue of finding the right
notion of inference from premises and the right format for the
Deduction Theorem. Given that these problems lie beyond the scope
of the present paper, we will take a little detour and will base
our definition of consistency of a set of formulas upon the notion
of provable formula, rather than just saying that a set $\Gamma
\subseteq Form$ is inconsistent iff $\bot$ is derivable from
$\Gamma$. Moreover, due to the form of our main result we need to
relativize our notions to sets of proof variables occurring in a
given set of formulas.
More precisely, assume that $Z \subseteq PVar$. Then we can define
$Pol_Z$ and $Form_Z$ as the sets of proof polynomials (resp.
formulas) containing proof variables from $Z$ only. Note that this
imposes no restrictions on proof constants, so that the set of
closed proof polynomials is contained in $Pol_Z$ for every $Z
\subseteq PVar$. Now, for a given $Z \subseteq PVar$ we say that
$\Gamma \subseteq Form_Z$ is a \emph{set of formulas in} $Z$. We
say that $\Gamma$ is \emph{inconsistent} iff for some
$A_1,\ldots,A_n \in \Gamma$ we have $\vdash (A_1 \wedge\ldots
\wedge A_n) \to \bot$, and we say that $\Gamma$ is consistent iff
it is not inconsistent. $\Gamma$ is \emph{maxiconsistent in} $Z$
iff $\Gamma \subseteq Form_Z$ and no consistent subset of $Form_Z$
properly extends $\Gamma$.
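To illustrate: for any $A \in Form_Z$, the set $\{ A, \neg A \}$ is
inconsistent in this sense, since we have, of course, $\vdash (A
\wedge \neg A) \to \bot$; consequently, no consistent set of formulas
in $Z$ contains a formula together with its negation.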
Even with this slightly non-standard definition of inconsistency,
we can still do many familiar things, e.g. extend consistent sets
with new formulas and eventually make them maxiconsistent. More
precisely, the following lemma holds:
\begin{lemma}\label{elementaryconsistency}
Let $Z \subseteq PVar$, let $\Gamma \subseteq Form_Z$ be
consistent, and let $A, B \in Form_Z$. Then:
\begin{enumerate}
\item There exists a $\Delta \subseteq Form_Z$ such that $\Delta$
is maxiconsistent in $Z$ and $\Gamma \subseteq \Delta$.
\item If $\Gamma$ is maxiconsistent in $Z$, then exactly one
element of $\{A, \neg A \}$ is in $\Gamma$.
\item If $\Gamma$ is maxiconsistent in $Z$, then $A \vee B \in
\Gamma$ iff $(A \in \Gamma$ or $B \in \Gamma)$.
\item If $\Gamma$ is maxiconsistent in $Z$ and $A, (A \to B) \in
\Gamma$, then $B \in \Gamma$.
\item If $\Gamma$ is maxiconsistent in $Z$, then $A \wedge B \in
\Gamma$ iff $(A \in \Gamma$ and $B \in \Gamma)$.
\end{enumerate}
\end{lemma}
\begin{proof} (Part 1) Just as in the standard case, we enumerate the
elements of $Form_Z$ as $A_1,\ldots, A_n,\ldots$ and form the
sequence of sets $\Gamma_1,\ldots, \Gamma_n,\ldots,$ such that
$\Gamma_1 := \Gamma$ and for every natural $i \geq 1$:
\begin{align*}
\Gamma_{i + 1} :=
\left\{%
\begin{array}{ll}
\Gamma_i, & \hbox{ if $\Gamma_i \cup \{ A_i \}$ is inconsistent;} \\
\Gamma_i \cup \{ A_i \}, & \hbox{ otherwise.} \\
\end{array}%
\right.
\end{align*}
We now define $\Delta := \bigcup_{i \geq 1}\Gamma_i$. Of course,
we have $\Gamma \subseteq \Delta$, and, moreover, $\Delta$ is
maxiconsistent in $Z$. To see this, note that for every $i \geq 1$
the set $\Gamma_i$ is consistent by construction. Now, if $\Delta$
is inconsistent, then there must be a provable implication from a
finite conjunction of formulas in $\Delta$ to $\bot$. These
formulas must be mentioned in our enumeration of $Form_Z$, so that
the implication in question can be presented as $\vdash
(A_{i_1} \wedge\ldots \wedge A_{i_n}) \to \bot$ for appropriate
natural $i_1,\ldots, i_n$. Since all of $A_{i_1}, \ldots, A_{i_n}$
are in $\Delta$, we must have, by the construction of
$\Gamma_1,\ldots, \Gamma_n,\ldots,$ that $A_{i_1}, \ldots, A_{i_n}
\in \Gamma_{\max(i_1,\ldots, i_n)}$. But then this latter set must
be inconsistent which contradicts our construction.
Further, if some consistent $\Xi \subseteq Form_Z$ is such that
$\Delta \subset \Xi$, then let $A_n \in \Xi \setminus \Delta$. We
must have then $\Gamma_n \cup \{ A_n \}$ inconsistent, but we also
have $\Gamma_n \cup \{ A_n \} \subseteq \Xi$, which implies
inconsistency of $\Xi$, in contradiction to our assumptions.
Therefore, $\Delta$ is not only consistent, but also
maxiconsistent in $Z$.
(Part 2) We cannot have both $A$ and $\neg A$ in $\Gamma$, since
we have, of course, $\vdash (A \wedge \neg A) \to \bot$. If, on
the other hand, neither $A$, nor $\neg A$ is in $\Gamma$, then
both $\Gamma \cup \{ A \}$ and $\Gamma \cup \{ \neg A \}$ must be
inconsistent, so that for some $B_1, \ldots, B_n \in \Gamma$ we
will have:
$$
\vdash (B_1\wedge \ldots\wedge B_n \wedge A) \to \bot,
$$
whereas for some $C_1, \ldots, C_k \in \Gamma$ we will have:
$$
\vdash (C_1\wedge \ldots\wedge C_k \wedge \neg A) \to \bot,
$$
whence we get, using \eqref{A0} and \eqref{R1}:
$$
\vdash (C_1\wedge \ldots\wedge C_k) \to A,
$$
and further:
$$
\vdash (B_1\wedge \ldots\wedge B_n \wedge C_1\wedge \ldots\wedge
C_k) \to \bot,
$$
so that $\Gamma$ turns out to be inconsistent, contrary to our
assumptions.
(Part 3) Assume $(A \vee B) \in \Gamma$. If neither $A$ nor $B$
are in $\Gamma$, then, by Part 2, both $\neg A$ and $\neg B$ are
in $\Gamma$. Using \eqref{A0} and \eqref{R1} we get that:
$$
\vdash ((A \vee B) \wedge \neg A \wedge \neg B) \to \bot,
$$
showing that $\Gamma$ is inconsistent, contrary to our
assumptions. In the other direction, if, say $A \in \Gamma$ and
$(A \vee B) \notin \Gamma$, then, by Part 2, we must have $\neg(A
\vee B) \in \Gamma$. Using \eqref{A0} and \eqref{R1} we get that:
$$
\vdash (\neg(A \vee B) \wedge A) \to \bot,
$$
showing, again, that $\Gamma$ is inconsistent, contrary to our
assumptions. The case when $B \in \Gamma$ is similar.
Parts 4 and 5 are similar to Part 3.
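For instance, for Part 4: if $A, (A \to B) \in \Gamma$ but $B \notin
\Gamma$, then, by Part 2, $\neg B \in \Gamma$, and, using \eqref{A0}
and \eqref{R1}, we get that:
$$
\vdash (A \wedge (A \to B) \wedge \neg B) \to \bot,
$$
showing that $\Gamma$ is inconsistent, contrary to our assumptions.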
\end{proof}
\textbf{Remark}. Note that one can recover the notion of
non-relativized maxiconsistent set and its properties just by
setting $Z := PVar$. But this will not be needed in the present
paper.
We are now prepared to formulate our main result:
\begin{theorem}\label{completeness}
Let $X \subseteq PVar$ be such that $PVar \setminus X$ is
countably infinite. Then an arbitrary $\Gamma \subseteq Form_X$ is
consistent iff it is satisfiable in a normal jstit model.
\end{theorem}
The rest of the paper is mainly concerned with proving Theorem
\ref{completeness}. One part of it we have, of course, right away,
as a consequence of Theorem \ref{soundness}:
\begin{corollary}\label{c-soundness}
Let $X \subseteq PVar$ be such that $PVar \setminus X$ is
countably infinite. If $\Gamma \subseteq Form_X$ is satisfiable in
a normal jstit model, then $\Gamma$ is consistent.
\end{corollary}
\begin{proof}
Let $\Gamma \subseteq Form_X$ be satisfiable in a normal jstit
model so that we have, say $\mathcal{M}, m, h \models \Gamma$ for
some $(m,h) \in MH(\mathcal{M})$. If $\Gamma$ were inconsistent
this would mean that for some $A_1,\ldots,A_n \in \Gamma$ we would
have $\vdash (A_1 \wedge\ldots \wedge A_n) \to \bot$. By Theorem
\ref{soundness}, this would mean that:
$$
\mathcal{M}, m, h \models (A_1 \wedge\ldots \wedge A_n) \to \bot,
$$
whence clearly $\mathcal{M}, m, h \models \bot$, which is
impossible. Therefore, $\Gamma$ must be consistent.
\end{proof}
Before we move further, we mention some theorems of the above
axiom system to be used later in the proof of the main result:
\begin{lemma}\label{theorems}
The following holds for every $A \in Form$, $t \in Pol$, $x \in
PVar$, and $j \in Ag$:
\begin{enumerate}
\item $\vdash t{\hspace{0.25mm}:\hspace{0.25mm}} A \to \Box t{\hspace{0.25mm}:\hspace{0.25mm}} A$;
\item $\vdash Proven(A) \to \Box Proven(A)$;
\item $\not\vdash x{\hspace{0.25mm}:\hspace{0.25mm}} A$;
\item $\vdash (Prove(j, A) \wedge \neg\Box Prove(j,A)) \to
[j](Prove(j, A) \wedge \neg\Box Prove(j,A))$;
\item $\vdash KA \to \Box KA$.
\end{enumerate}
\end{lemma}
\begin{proof}
(Part 1) We have:
\begin{align*}
t{\hspace{0.25mm}:\hspace{0.25mm}} A &\to !t{\hspace{0.25mm}:\hspace{0.25mm}} t{\hspace{0.25mm}:\hspace{0.25mm}} A&&\textup{(by \eqref{A5})}\\
&\to Kt{\hspace{0.25mm}:\hspace{0.25mm}} A&&\textup{(by \eqref{A5})}\\
&\to \Box K\Box t{\hspace{0.25mm}:\hspace{0.25mm}} A&&\textup{(by \eqref{A8})}\\
&\to K\Box t{\hspace{0.25mm}:\hspace{0.25mm}} A&&\textup{(by \eqref{A1})}\\
&\to \Box t{\hspace{0.25mm}:\hspace{0.25mm}} A&&\textup{(by \eqref{A7})}
\end{align*}
Our theorem follows then by transitivity of implication.
(Part 2) Again, we proceed by building a chain of implications:
\begin{align*}
Proven(A) &\to KProven(A)&&\textup{(by \eqref{A11})}\\
&\to \Box K\Box Proven(A)&&\textup{(by \eqref{A8})}\\
&\to K\Box Proven(A)&&\textup{(by \eqref{A1})}\\
&\to \Box Proven(A)&&\textup{(by \eqref{A7})}
\end{align*}
(Part 3). Take an arbitrary normal jstit model $\mathcal{M} =
\langle Tree, \leq, Choice, Act, R, \mathcal{E}, V\rangle$ and
consider another model $\mathcal{M}' = \langle Tree, \leq, Choice,
Act, R, \mathcal{E}', V\rangle$ such that:
\begin{align*}
\mathcal{E}'(m,t) = \left\{%
\begin{array}{ll}
\mathcal{E}(m,t), & \hbox{if $t \neq x$;} \\
\emptyset, & \hbox{if $t = x$.} \\
\end{array}%
\right.
\end{align*}
It is straightforward to verify that $\mathcal{M}'$ is again a
normal jstit model, and we obviously have $\mathcal{M}', m
\not\models x{\hspace{0.25mm}:\hspace{0.25mm}} A$ for every $m \in Tree$. Therefore, $x{\hspace{0.25mm}:\hspace{0.25mm}} A$ is
not valid, and, by Theorem \ref{soundness}, cannot be provable in
our system.
(Part 4). We chain the implications as follows:
\begin{align*}
(Prove(j, A) \wedge \neg\Box Prove(j,A)) &\to ([j]Prove(j, A) \wedge \Box\neg\Box Prove(j,A))&&\textup{(by \eqref{A1} and \eqref{A9})}\\
&\to ([j]Prove(j, A) \wedge [j]\neg\Box Prove(j,A))&&\textup{(by \eqref{A2})}\\
&\to [j](Prove(j, A) \wedge \neg\Box Prove(j,A))&&\textup{(by \eqref{A1})}
\end{align*}
(Part 5). By S5 properties of $\Box$ and S4 properties of $K$, we
clearly have $\vdash \Box K\Box A \to \Box KA$. Part 5 follows
then by \eqref{A8} and transitivity of implication:
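\begin{align*}
KA &\to \Box K\Box A&&\textup{(by \eqref{A8})}\\
&\to \Box KA&&\textup{(since $\vdash \Box K\Box A \to \Box KA$)}
\end{align*}
\end{proof}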
\section{The canonical model}\label{canonicalmodel}
The main aim of the present section is to prove the inverse of
Corollary \ref{c-soundness}. The method used is a variant of the
canonical model technique, but, due to the complexity of the case,
we do not define our model in one full sweep. Rather, we proceed
piecewise, defining the components of the model one by one and
checking the relevant constraints as soon as we have enough parts of
the model in place. The last subsection proves the truth lemma for
the defined model.
Throughout this section we fix an $X \subseteq PVar$ such that
$PVar \setminus X$ is countably infinite. We then
present\footnote{More precisely, we divide $PVar \setminus X$ into
three countably infinite subsets plus a single proof variable
which we will denote $z$. For the first of these three subsets
(denoted $Y$) we fix a bijection onto the Cartesian product of
$Ag$ and $Form_X$; for the other two (denoted $W$ and $U$) we fix
bijections onto $Form_X$.} the set of proof variables in the
following form:
$$
PVar = X \cup Y \cup W \cup U \cup \{ z \},
$$
where:
$$
Y := \{ y_{(i,A)} \mid i \in Ag, A \in Form_X \},
$$
$$
W := \{ w_A \mid A \in Form_X \},
$$
$$
U := \{ u_A \mid A \in Form_X \}.
$$
Since $Form_X$ is countably infinite and $Ag$ is finite, this
presentation of $PVar$ is well-defined. Also throughout this
section we will use $\mathcal{M} = \langle Tree, \leq, Choice,
Act, R, \mathcal{E}, V\rangle$ as a fixed notation for our
canonical model.
The ultimate building blocks of $\mathcal{M}$ we will call
\emph{elements}. Before going on with the definition of
$\mathcal{M}$, we define what these elements are and explore some
of their properties.
\begin{definition}\label{element}
An element is a sequence of the form $(\Gamma_1,\ldots,\Gamma_n)
\alpha$ for some natural $n \geq 1$ such that:
\begin{itemize}
\item $\alpha \in \{ \uparrow, \downarrow \}$;
\item For every $i \leq n$, $\Gamma_i$ is maxiconsistent in $X$;
\item For every $i < n$, for all $A \in Form_X$, if $KA \in
\Gamma_i$, then $KA \in \Gamma_{i + 1}$;
\item For every $i$ such that $1 < i \leq n$, for all $j \in Ag$
and $A \in Form_X$, if $Prove(j,A) \in \Gamma_1$, then $Proven(A)
\in \Gamma_i$;
\item For every $i$ such that $1 < i \leq n$, for all $j \in Ag$ and
$A \in Form_X$, it is true that $K\Box\neg Prove(j, A) \in \Gamma_i$.
\end{itemize}
\end{definition}
In other words, elements are sequences of subsets of $Form_X$ of a
rather special kind, which are signed by either $\downarrow$ or
$\uparrow$. The (purely technical) reason for including these
arrows in the structure of elements is that one normally needs at
least two copies of one element in order to get the truth
conditions for formulas like $\Box Prove(j,A)$ right. Both
$\downarrow$ and $\uparrow$ only become relevant after we define
$Act$; for most other purposes they can be safely ignored.
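For instance, for any $\Gamma \subseteq Form_X$ which is
maxiconsistent in $X$ and any $\alpha \in \{ \uparrow, \downarrow
\}$, the one-term sequence $(\Gamma)\alpha$ is an element of length
$1$, since the last three conditions of Definition \ref{element} are
vacuously satisfied when $n = 1$.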
We prove the following lemma:
\begin{lemma}\label{elementcontinuation}
Whenever $(\Gamma_1,\ldots,\Gamma_n)\alpha$ is an element, then,
for some $\Gamma_{n + 1} \subseteq Form_X$, the sequence
$(\Gamma_1,\ldots,\Gamma_{n + 1})\alpha$ is also an element.
\end{lemma}
\begin{proof}
Assume $(\Gamma_1,\ldots,\Gamma_n)\alpha$ is an element. We have
two cases to consider:
\emph{Case 1.} $n = 1$. Then consider the set:
\begin{align*}
\Delta := \{ KA \mid KA \in \Gamma_1 \} \cup \{ Proven(A) \mid
(\exists j \in Ag)&(Prove(j, A) \in \Gamma_1) \} \cup\\
&\cup \{ K\Box\neg Prove(j, A) \mid j \in Ag,\, A \in Form_X \}.
\end{align*}
We show that $\Delta$ is consistent. Of course, the set $\{ KA
\mid KA \in \Gamma_1 \}$ is consistent since it is a subset of
$\Gamma_1$ and the latter is assumed to be consistent.
Further, if the set
$$
\Delta' := \{ KA \mid KA \in \Gamma_1 \} \cup \{ Proven(A) \mid
(\exists j \in Ag)(Prove(j, A) \in \Gamma_1) \}
$$
is inconsistent, this would mean, wlog, that for some $B_1,\ldots,
B_k, C_ 1,\ldots, C_l \in Form$ and $j_1, \ldots, j_l \in Ag$ such
that $KB_1,\ldots, KB_k$ and $Prove(j_1, C_1),\ldots,
Prove(j_l,C_l)$ are in $\Gamma_1$, we have that:
$$
\vdash(KB_1\wedge\ldots \wedge KB_k) \to (\neg Proven(C_1)
\vee\ldots \vee \neg Proven(C_l)),
$$
whence, by \eqref{A7}:
$$
\vdash K(B_1\wedge\ldots \wedge B_k) \to (\neg Proven(C_1)
\vee\ldots \vee \neg Proven(C_l)),
$$
and further, by \eqref{R4}:
$$
\vdash K(B_1\wedge\ldots \wedge B_k) \to (\bigwedge_{j \in Ag}\neg
Prove(j, C_1) \vee\ldots \vee \bigwedge_{j \in Ag}\neg
Prove(j,C_l)).
$$
Since the latter formula is in $Form_X$, it is, of course, in $\Gamma_1$
by its maxiconsistency in $X$. Also, given Lemma
\ref{elementaryconsistency}, $K(B_1\wedge\ldots \wedge B_k)$ is in
$\Gamma_1$ by the fact that $KB_1,\ldots, KB_k \in \Gamma_1$,
\eqref{A7}, and the fact that $\Gamma_1$ is maxiconsistent in $X$.
Therefore, we get:
$$
\bigwedge_{j \in Ag}\neg Prove(j, C_1) \vee\ldots \vee
\bigwedge_{j \in Ag}\neg Prove(j,C_l) \in \Gamma_1
$$
by Lemma \ref{elementaryconsistency}.4. By Lemma
\ref{elementaryconsistency}.3, we further get that for some $r$
such that $1 \leq r \leq l$ all of the formulas $\neg Prove(j,
C_r)$, where $j \in Ag$ are in $\Gamma_1$. By the choice of
$C_1,\ldots, C_l$ this makes $\Gamma_1$ inconsistent and we get a
contradiction, which shows that $\Delta'$ is consistent.
Assume, further, that $\Delta$ is inconsistent. In view of
consistency of $\Delta'$ this will mean that for some
$KB_1,\ldots, KB_k$ in $\Gamma_1$, and some $Proven(C_1),\ldots,
Proven(C_l)$ from $\Delta' \setminus \Gamma_1$ and some
$Prove(j_1,D_1),\ldots, Prove(j_r,D_r) \in Form_X$, we will have:
\begin{align*}
\vdash(KB_1\wedge\ldots \wedge KB_k \wedge &Proven(C_1)
\wedge\ldots
\wedge Proven(C_l)) \to\\
&\to (\langle K \rangle\Diamond Prove(j_1,D_1)\vee\ldots
\vee\langle K \rangle\Diamond Prove(j_r,D_r)).
\end{align*}
From the latter validity, by \eqref{R2} and \eqref{A7} we get
that:
\begin{align*}
\vdash K(KB_1\wedge\ldots \wedge KB_k \wedge &Proven(C_1)
\wedge\ldots
\wedge Proven(C_l)) \to\\
&\to K(\langle K \rangle\Diamond Prove(j_1,D_1)\vee\ldots
\vee\langle K \rangle\Diamond Prove(j_r,D_r)),
\end{align*}
whence, by \eqref{A12}, we obtain:
\begin{align*}
\vdash K(KB_1\wedge\ldots \wedge KB_k \wedge Proven(C_1)
\wedge\ldots \wedge Proven(C_l)) \to \bot,
\end{align*}
and, by \eqref{A7}, and \eqref{A11} we further obtain:
\begin{align*}
\vdash (KB_1\wedge\ldots \wedge KB_k \wedge Proven(C_1)
\wedge\ldots \wedge Proven(C_l)) \to \bot,
\end{align*}
showing that $\Delta'$ must be inconsistent. This makes up a
contradiction showing that $\Delta$ must be consistent.
Since $\Delta$ is shown to be consistent, then, by Lemma
\ref{elementaryconsistency}.1, it is also extendable to a set
$\Gamma_2$ which is maxiconsistent in $X$. By the choice of
$\Delta$, this means that $(\Gamma_1, \Gamma_2)\alpha$ must be an
element.
\emph{Case 2}. $n > 1$. Then it is easy to see that
$(\Gamma_1,\ldots, \Gamma_n, \Gamma_n)\alpha$ is an element, since
repeating $\Gamma_n$ clearly preserves all the conditions of
Definition \ref{element}.
\end{proof}
The structure of elements will be important in what follows. If
$\xi = (\Gamma_1,\ldots, \Gamma_n)\alpha$ is an element, then its
\emph{initial segment} is any element $\tau$ of the form
$(\Gamma_1,\ldots, \Gamma_k)\alpha$ with $k \leq n$. If, moreover,
$k < n$, then $\tau$ is a \emph{proper} initial segment of $\xi$,
and if $k = n -1$, then $\tau$ is the \emph{greatest} proper
initial segment of $\xi$. Moreover, we define $n$ to be the
\emph{length} of $\xi$. Thus, any element of length $1$ has no
proper initial segments. Furthermore, we define that $\Gamma_n$ is
the \emph{end element} of $\xi$ and write $\Gamma_n = end(\xi)$.
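For instance, if $\xi = (\Gamma_1, \Gamma_2, \Gamma_3)\alpha$, then
the proper initial segments of $\xi$ are $(\Gamma_1)\alpha$ and
$(\Gamma_1, \Gamma_2)\alpha$, the latter being the greatest one; the
length of $\xi$ is $3$, and $end(\xi) = \Gamma_3$.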
We now define the canonical model using elements as our building
blocks. We start by defining the following relation $\equiv$
between elements of equal length. For the elements of length $1$
we set:
$$
(\Gamma)\alpha \equiv (\Delta)\beta \Leftrightarrow (\forall A \in
Form_X)(\Box A \in \Gamma \Rightarrow A \in \Delta);
$$
and for the elements of greater length we set:
\begin{align*}
(\Gamma_1,\ldots, \Gamma_n, \Gamma_{n + 1})\alpha \equiv
&(\Delta_1,\ldots, \Delta_n, \Delta_{n + 1})\beta \Leftrightarrow\\
&\Leftrightarrow (\Gamma_1 = \Delta_1 \wedge \ldots \wedge
\Gamma_n = \Delta_n \wedge \alpha = \beta \wedge (\Gamma_{n +
1})\alpha \equiv (\Delta_{n + 1})\beta).
\end{align*}
It is routine to check that $\equiv$ is an equivalence relation
given that $\Box$ is an S5 modality. We will denote the
equivalence class of element $(\Gamma_1,\ldots, \Gamma_n)\alpha$
generated by $\equiv$ by $[(\Gamma_1,\ldots,
\Gamma_n)\alpha]_\equiv$. Since all the elements inside a given
$\equiv$-equivalence class are of the same length, we may extend
the notion of length to these classes, setting that the length of
$[(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv$ also equals $n$.
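Spelled out, for elements of length $n + 1 > 1$ the definition
unfolds into the requirement that $\alpha = \beta$, that $\Gamma_i =
\Delta_i$ for all $i \leq n$, and that $(\forall A \in Form_X)(\Box A
\in \Gamma_{n + 1} \Rightarrow A \in \Delta_{n + 1})$. In particular,
equivalent elements of length greater than $1$ can differ at most in
their last coordinate, whereas equivalent elements of length $1$ may
also carry different arrows, since the base clause does not mention
$\alpha$ and $\beta$.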
We now proceed to definitions of components for the canonical
model.
\subsection{$Tree$, $\leq$, and $Hist$}
The first two components of the canonical model $\mathcal{M}$ are
as follows:
\begin{itemize}
\item $Tree$ is the set of $\equiv$-equivalence classes of
elements plus $\dag$ and $\ddag$ as additional moments;
\item We set that both $\dag < m$ and $m \not< \dag$ for every
$m \in Tree \setminus \{ \dag \}$. We further set that $\ddag$ is only
$<$-comparable to $\dag$ (in which case we already have $\dag < \ddag$), and for any two $\equiv$-equivalence classes of
elements $m$ and $m'$, we have that $m < m'$ iff
there is an element $\xi \in
m$ such that $\xi$ is a proper initial segment of every element $\tau \in
m'$. The relation $\leq$ is then defined as the reflexive
companion to $<$.
\end{itemize}
Before we move on to the choice- and justifications-related
components, let us pause to check that the constraints imposed by
our semantics on $Tree$ and $\leq$ are satisfied:
\begin{lemma}\label{leq}
The relation $\leq$, as defined above, is a partial order on
$Tree$, which satisfies both historical connection and no backward
branching constraints. Moreover, every element in $Tree$, except
for $\ddag$, has at least one immediate $<$-successor.
\end{lemma}
\begin{proof}
Transitivity and reflexivity of $\leq$ are obvious. As for
antisymmetry, assume that we have both $m < m'$ and $m' < m$. Then
$m$ and $m'$ must be equivalence classes of elements. Let $\xi \in
m$ be a proper initial segment of every element in $m'$ and let
$\tau \in m'$ be a proper initial segment of every element in $m$.
It follows that $\xi$ is a proper initial segment of $\tau$ and
also $\tau$ is a proper initial segment of $\xi$, a contradiction.
Historical connection is satisfied since $\dag$ is the
$\leq$-least element of $Tree$. Let us prove the absence of
backward branching. Assume that we have both $m \leq m''$ and $m'
\leq m''$ but neither $m \leq m'$ nor $m' \leq m$ holds. This
means that all the three moments are pairwise different and none
of them is either $\dag$ or $\ddag$, otherwise our assumptions
about them would be immediately falsified. Therefore, all the
three moments are some equivalence classes of elements and we also
have $m \neq m'$, $m < m''$ and $m' < m''$. So let $\xi \in m$ and
$\tau \in m'$ be such that both $\xi$ and $\tau$ are proper
initial segments of every element in $m''$. Then, since $m \neq
m'$, $\xi$ and $\tau$ must be different, hence either $\xi$ must
be a proper initial segment of $\tau$ or $\tau$ must be a proper
initial segment of $\xi$. Assume, wlog, that $\xi$ is a proper
initial segment of $\tau$. Then $\xi$ is included into the
greatest proper initial segment of $\tau$. Let $\tau'$ be any
element in $m'$. It follows from the definition of $\equiv$ that
all the elements within $m'$ share the same greatest proper
initial segment, therefore $\xi$ must be a proper initial segment
of $\tau'$ as well. It follows that $m < m'$, contrary to our
assumptions.
Consider the $<$-successors of a given $m \in Tree$. If $m \neq
\ddag$, then either $m = \dag$, or $m$ is an equivalence class of
elements. If $m = \dag$, then take any $\Gamma \subseteq Form_X$
which is maxiconsistent in $X$ and any $\alpha \in \{ \uparrow,
\downarrow \}$. Then $(\Gamma)\alpha$ is an element and we have
$\dag < [(\Gamma)\alpha]_\equiv$. Moreover, no other moment can be
in between them: this cannot be either $\dag$, or $\ddag$, or an
equivalence class of elements (since $(\Gamma)\alpha$, being of
length $1$, has no proper initial segments). If, on the other hand, $m$
is an equivalence class of elements, then assume that $m =
[(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv$. By Lemma
\ref{elementcontinuation}, we know that for some $\Delta \subseteq
Form_X$, the tuple $(\Gamma_1,\ldots, \Gamma_n, \Delta)\alpha$ must
be an element. But then we must have
$$
[(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv < [(\Gamma_1,\ldots,
\Gamma_n, \Delta)\alpha]_\equiv,
$$
and again, no moments are strictly in between them since
$(\Gamma_1,\ldots, \Gamma_n)\alpha$ is the greatest proper initial
segment of $(\Gamma_1,\ldots, \Gamma_n, \Delta)\alpha$.
\end{proof}
Moreover, it is easy to see that if $m, m' \in Tree$ are two
equivalence classes of elements, and $m < m'$, then the length of
$m$ is less than the length of $m'$, and if $m'$ is an immediate
$<$-successor of $m$, then the length of $m$ is the length of $m'$
minus one.
Before we move on, let us have a quick look into the structure of
histories induced by $Tree$ and $\leq$. Lemma \ref{leq} shows that
we must have the history $(\dag,\ddag)$ plus a family of infinite
histories of the form $(\dag, m_1,\ldots, m_n,\ldots)$, ordered in
the order type of $\omega$, where, for $n \geq 1$, $m_n$ is an
equivalence class of elements of length $n$ and every next element
is the immediate $<$-successor of the previous one. Every such
infinite history we can also represent in the form $(\dag,
\xi_1,\ldots, \xi_n,\ldots)$ such that for every $n \geq 1$:
\begin{itemize}
\item $\xi_n \in m_n$;
\item $\xi_n$ is the greatest proper initial segment of
$\xi_{n + 1}$.
\end{itemize}
Moreover, one can show that such a representation, for a given
history of the form $(\dag, m_1,\ldots, m_n,\ldots)$, is unique.
Indeed, suppose that $(\dag, \xi_1,\ldots, \xi_n,\ldots)$ and
$(\dag, \xi'_1,\ldots, \xi'_n,\ldots)$ are two different
representations for $(\dag, m_1,\ldots, m_n,\ldots)$. Then let
$i\geq 1$ be the first natural number such that $\xi_i \neq
\xi'_i$. Consider $m_{i + 1}$. We have $\xi_{i + 1}, \xi'_{i + 1}
\in m_{i + 1}$ so that $\xi_{i + 1} \equiv \xi'_{i + 1}$. Since
$\xi_i$ and $\xi'_i$ are the greatest proper initial segments of
$\xi_{i + 1}$, $\xi'_{i + 1}$, respectively, the greatest proper
initial segments of $\xi_{i + 1}$ and $\xi'_{i + 1}$ are non-empty
and, by $\xi_{i + 1} \equiv \xi'_{i + 1}$, must coincide, which
cannot be the case since $\xi_i \neq \xi'_i$.
Therefore, if $h = (\dag, m_1,\ldots, m_n,\ldots)$ is a history in
$Tree$ and $(\dag, \xi_1,\ldots, \xi_n,\ldots)$ is the unique
representation of $h$ as a sequence of elements, we define $m_n
\cap h$ to be $\xi_n$.
All the above statements admit of an inversion. Not only can every
history be uniquely represented as a sequence of elements, but
also every sequence of elements of an appropriate form represents
a unique history in $\mathcal{M}$. Not only is every intersection
of an equivalence class of elements and a history an element in
this class, but also, conversely, every element defines the
intersection of at least one history with the equivalence class
induced by this element. More precisely:
\begin{lemma}\label{hist}
The following statements are true:
\begin{enumerate}
\item Fix a sequence $(\dag, \xi_1,\ldots, \xi_n,\ldots)$ where
all of $\xi_1,\ldots, \xi_n,\ldots $ are elements and, for every
natural $n$, $\xi_n$ is the greatest proper initial segment of
$\xi_{n + 1}$. Then there is a unique history $h = (\dag, m_1,\ldots,
m_n,\ldots)$ in $\mathcal{M}$ such that for all natural $n$ it
is true that $\xi_n \in m_n$ (thus $\xi_n = m_n \cap h$).
\item Let $\xi$ be an element. Then there is at least one history
$h \in H_{[\xi]_\equiv}$ such that $[\xi]_\equiv \cap h = \xi$.
\end{enumerate}
\end{lemma}
\begin{proof}
As for Part 1, consider $(\dag, [\xi_1]_\equiv,\ldots,
[\xi_n]_\equiv,\ldots)$; it is obviously a history in
$\mathcal{M}$ and we also have $\xi_n \in [\xi_n]_\equiv$ for all
natural $n$.
As for Part 2, we have to consider two cases.
\emph{Case 1}. The length of $\xi$ equals $1$, so that $\xi =
(\Gamma)\alpha$ for appropriate $\Gamma$ and $\alpha$. Then we
know, by the proof of Lemma \ref{elementcontinuation} above, that
for some $\Delta \subseteq Form_X$ the sequence:
\begin{align*}
\xi_1 &:= \xi = (\Gamma)\alpha;\\
\xi_2 &:= (\Gamma, \Delta)\alpha;\\
&\ldots;\\
\xi_{n + 1} &:= (\Gamma, \underbrace{\Delta, \ldots, \Delta}_{n\textup{ times}})\alpha;\\
&\ldots;
\end{align*}
is a sequence of elements in which every element is the greatest
proper initial segment of the next one. Therefore, by Part 1,
there must be a history $h$ in $\mathcal{M}$ such that $h = (\dag,
[\xi_1]_\equiv,\ldots, [\xi_n]_\equiv,\ldots)$ and also $\xi =
\xi_1 = [\xi_1]_\equiv \cap h$.
\emph{Case 2}. The length of $\xi$ is greater than $1$, so that
$\xi = (\Gamma_1,\ldots, \Gamma_n)\alpha$ for appropriate $n > 1$,
$\Gamma_1,\ldots, \Gamma_n$ and $\alpha$. Then we define the
following sequence of elements:
\begin{align*}
\xi_1 &:= (\Gamma_1)\alpha;\\
\xi_2 &:= (\Gamma_1, \Gamma_2)\alpha;\\
&\ldots;\\
\xi_n &:= \xi = (\Gamma_1,\ldots, \Gamma_n)\alpha;\\
\xi_{n + 1} &:= (\Gamma_1,\ldots, \Gamma_n, \Gamma_n)\alpha;\\
&\ldots;\\
\xi_{n + k} &:= (\Gamma_1,\ldots, \Gamma_n, \underbrace{\Gamma_n, \ldots, \Gamma_n}_{k\textup{ times}})\alpha;\\
&\ldots.
\end{align*}
Again, it is easy to see that every element in this sequence is
the greatest proper initial segment of the next one, so that,
arguing as in the previous case, we get that $h = (\dag,
[\xi_1]_\equiv,\ldots, [\xi_n]_\equiv,\ldots, [\xi_{n +
k}]_\equiv,\ldots)$ is a history in $\mathcal{M}$ and $\xi = \xi_n
= [\xi_n]_\equiv \cap h$.
\end{proof}
\subsection{$Choice$}
We now define the choice structures of our canonical model:
\begin{itemize}
\item $Choice^m_j = \{ H_m \}$, if $m \in \{ \dag, \ddag \}$;
\item $Choice^m_j(h) = \{ h' \mid h' \in H_m,\,
(\forall A \in Form)([j]A \in end(h \cap m) \Rightarrow A \in end(h' \cap
m))\}$, if $m$ is an equivalence class of elements.
\end{itemize}
Since for every $j \in Ag$, $[j]$ is an S5-modality, $Choice$
induces a partition on $H_m$ for every given $m \in Tree$. We
check that the choice function verifies the relevant semantic
constraints:
\begin{lemma}\label{choice}
The tuple $\langle Tree, \leq, Choice\rangle$, as defined above,
verifies both the independence of agents and the no choice between
undivided histories constraints.
\end{lemma}
\begin{proof}
We first tackle no choice between undivided histories. Consider a
moment $m$ and two histories $h, h' \in H_m$ such that $h$ and
$h'$ are undivided at $m$. Since the agents' choices are only
non-vacuous at moments represented by equivalence classes of
elements, we may safely assume that $m$ is such a class. Since $h$
and $h'$ are undivided at $m$, this means that there is a moment
$m'$ such that $m < m'$ and $m'$ is shared by $h$ and $h'$. Hence
we know that also $m'$ is some equivalence class of elements.
Suppose the length of $m$ is $n$ and the length of $m'$ is $n'$.
Then $n < n'$, also $h \cap m$ is the initial segment of length
$n$ of $h \cap m'$, and similarly, $h' \cap m$ is the initial
segment of length $n$ of $h' \cap m'$. But both $h \cap m'$ and
$h' \cap m'$ are, by definition, in $m'$, therefore, they must
share the greatest proper initial segment. Hence, their initial
segments of length $n$ must coincide as well, and we must have $h
\cap m = h' \cap m$, whence $end(h \cap m) = end(h' \cap m)$. Now,
if $j \in Ag$ and $[j]A \in end(h \cap m)$, then, by \eqref{A1}
and maxiconsistency of $end(h \cap m)$ in $X$, we will have also
$A \in end(h \cap m) = end(h' \cap m)$, and thus $h' \in
Choice^m_j(h)$, so that $Choice^m_j(h) = Choice^m_j(h')$ since
$Choice$ is a partition of $H_m$.
Consider, next, the independence of agents. Let $m \in Tree$ and
let $f$ be a function on $Ag$ such that $\forall j \in Ag(f(j) \in
Choice^m_j)$. We are going to show that in this case $\bigcap_{j
\in Ag}f(j) \neq \emptyset$. If $m \in \{ \dag, \ddag \}$, then
this is obvious, since every agent will have a vacuous choice. We
treat the case when $m$ is an equivalence class of elements.
Assume that $m = [(\Gamma_1,\ldots, \Gamma_{n +
1})\alpha]_\equiv$. We have two cases to consider:
\emph{Case 1}. $n = 0$. By \eqref{A1} we know that there is a set
$\Delta$ of formulas of the form $\Box A$ which is shared by all
sets of the form $end(\xi)$ with $\xi \in m$ in the sense that if
$\xi \in m$, then $\Box A \in end(\xi)$ iff $\Box A \in \Delta$.
By the same axiom scheme and Lemma \ref{hist}.2, we also know that
for every $j \in Ag$ there is a set $\Delta_j$ of formulas of the
form $[j]A$ which is shared by all sets of the form $end(\xi)$
such that $\exists h(h \in f(j) \wedge \xi = m \cap h)$. More
precisely:
$$
\xi \in m \Rightarrow (\exists h(h \in f(j) \wedge \xi = m \cap h)
\Leftrightarrow (\forall A \in Form)([j]A \in end(\xi)
\Leftrightarrow [j]A \in \Delta_j)).
$$
We now consider the set $\Delta \cup \bigcup\{ \Delta_j\mid j \in
Ag \}$ and show its consistency. Indeed, if this set is
inconsistent, then, wlog, we would have a provable formula of the
following form:
\begin{equation}\label{E:c1}
\vdash (\Box A \wedge \bigwedge_{j \in Ag}[j]A_j) \to \bot.
\end{equation}
But then, choose for every $j \in Ag$ an element $\xi_j \in m$
such that
$(\forall A \in Form)([j]A \in end(\xi_j) \Leftrightarrow
[j]A \in \Delta_j)$. This is possible, since we may simply choose
an arbitrary $h_j \in f(j)$ and set $\xi_j: = m \cap h_j$. Then we
will have $[j]A_j \in end(\xi_j)$ for every $j \in Ag$. Next, consider
$\Gamma_1$. Since $m = [(\Gamma_1)\alpha]_\equiv$ and $\Box$ is an
S5-modality, we must have:
$$
\{ \Diamond[j]A_j \mid j \in Ag \} \subseteq \Gamma_1,
$$
whence, by Lemma \ref{elementaryconsistency}.5:
$$
\bigwedge_{j \in Ag}\Diamond[j]A_j \in \Gamma_1,
$$
and further, by \eqref{A3} and Lemma
\ref{elementaryconsistency}.4:
$$
\Diamond\bigwedge_{j \in Ag}[j]A_j \in \Gamma_1.
$$
Also, by definition of $\Delta$ and the fact that $m =
[(\Gamma_1)\alpha]_\equiv$, we get successively:
$$
\Box A \in \Gamma_1,
$$
then, by Lemma \ref{elementaryconsistency}.5:
$$
\Box A \wedge \Diamond\bigwedge_{j \in Ag}[j]A_j \in \Gamma_1,
$$
and finally, by the fact that $\Box$ is an S5-modality:
\begin{equation}\label{E:c2}
\Diamond(\Box A \wedge \bigwedge_{j \in Ag}[j]A_j) \in \Gamma_1.
\end{equation}
From \eqref{E:c1}, together with \eqref{E:c2}, it follows by S5
reasoning for $\Box$ that $\Diamond\bot \in \Gamma_1$, so that,
again by S5 properties of $\Box$ and Lemma
\ref{elementaryconsistency}.4, it follows that $\bot \in
\Gamma_1$, which is in contradiction with maxiconsistency of
$\Gamma_1$ in $X$.
Hence $\Delta \cup \bigcup\{ \Delta_j\mid j \in Ag \}$ is
consistent, and since it is included in $Form_X$, we can extend it to a set
$\Xi$ which is maxiconsistent in $X$. We now consider
$(\Xi)\alpha$ which is obviously an element, and since, moreover
$\Delta \subseteq \Xi$, then also $(\Xi)\alpha \in m$. By Lemma
\ref{hist}.2, we can choose a history $g$ such that $(\Xi)\alpha =
g \cap m$. We also know that for every $j \in Ag$, there is a
history $h_j \in f(j)$ such that $h_j \cap m = \xi_j$ by the
choice of $\xi_j$. Therefore, for every $j \in Ag$,
$Choice^m_j(h_j) = f(j)$. Also, if $[j]A \in end(\xi_j) = end(h_j
\cap m)$, then $[j]A \in \Delta_j$, hence $[j]A \in \Xi = end(g
\cap m)$, therefore, by \eqref{A1}, $A \in end(g \cap m)$. Thus we
get that $g \in \bigcap_{j \in Ag}Choice^m_j(h_j) = \bigcap_{j \in
Ag}f(j)$ so that the independence of agents is verified.
\emph{Case 2}. $n > 0$. For the most part, we can re-use our
reasoning from Case 1. We again form the sets $\Delta$, $\{
\Delta_j\mid j \in Ag \}$ and $\Xi$, and consider element
$(\Gamma_1,\ldots, \Gamma_n, \Xi)\alpha \in m$. We then choose a
history $g \in H_m$ for which we have $(\Gamma_1,\ldots, \Gamma_n,
\Xi)\alpha = m \cap g$ and show that $g \in \bigcap_{j \in
Ag}Choice^m_j(h_j) = \bigcap_{j \in Ag}f(j)$.
The only new ingredient is that it is now much less trivial to see
that $(\Gamma_1,\ldots, \Gamma_n, \Xi)\alpha$ is in fact an element,
so this has to be argued separately. We show this as follows. If $KA
\in \Gamma_n$, then $KA \in \Gamma_{n + 1}$ by definition of an
element. But then $\Box KA \in \Gamma_{n + 1}$ by Lemma
\ref{theorems}.5 and maxiconsistency of $\Gamma_{n + 1}$ in $X$,
whence $\Box KA \in \Delta$ and, therefore, $\Box KA \in \Xi$. By
\eqref{A1} and maxiconsistency of $\Xi$ we get then $KA \in \Xi$.
Similarly, if $Prove(j,A) \in \Gamma_1$, then $Proven(A) \in
\Gamma_{n + 1}$ by definition of an element. But then $\Box
Proven(A) \in \Gamma_{n + 1}$, by Lemma \ref{theorems}.2 and
maxiconsistency of $\Gamma_{n + 1}$ in $X$, whence $\Box
Proven(A) \in \Delta$ and, therefore, $\Box Proven(A) \in \Xi$. By
\eqref{A1} and maxiconsistency of $\Xi$ in $X$, we get then
$Proven(A) \in \Xi$. Finally, if $A \in Form_X$ and $j \in Ag$,
then, since $n > 0$, we must have $K\Box\neg Prove(j,A) \in
\Gamma_{n + 1}$ by definition of an element, whence $\Box
K\Box\neg Prove(j,A) \in \Gamma_{n + 1}$ by \eqref{A8} and
maxiconsistency of $\Gamma_{n + 1}$ in $X$, so that $\Box
K\Box\neg Prove(j,A) \in \Delta$ and, further, $\Box K\Box\neg
Prove(j,A) \in \Xi$. By \eqref{A1} and maxiconsistency of $\Xi$ in
$X$, we get then that $K\Box\neg Prove(j,A) \in \Xi$. Thus
$(\Gamma_1,\ldots, \Gamma_n, \Xi)\alpha$ is an element, and the
rest is shown exactly as in Case 1.
\end{proof}
\subsection{$R$ and $\mathcal{E}$}
We now define the justifications-related components of our
canonical model. We first define $R$ as follows:
\begin{itemize}
\item $R([(\Gamma)\alpha]_\equiv, m')\Leftrightarrow (m' \in Tree \setminus \{ \dag,\ddag \})\&$
$\qquad\qquad\qquad\qquad\qquad\quad\&(\forall
\tau \in m')(\forall A \in Form_X)(KA \in \Gamma \Rightarrow KA \in
end(\tau))$;
\item If $n > 1$, then\begin{align*}
R([(\Gamma_1,\ldots, \Gamma_n)&\alpha]_\equiv, m')
\Leftrightarrow\\
&\Leftrightarrow (\exists \Delta_1,\ldots, \Delta_k \subseteq
Form_X)(k > 0 \& m' =
[(\Gamma_ 1, \Delta_1,\ldots, \Delta_k)\alpha]_\equiv \&\\
&\qquad\qquad\qquad\qquad\qquad\&(\forall A \in Form_X)(KA \in \Gamma_n \Rightarrow KA \in
\Delta_k));
\end{align*}
\item $R(\dag,m)$, for all $m \in Tree$;
\item $R(\ddag,m) \Leftrightarrow m = \ddag$.
\end{itemize}
Now, for the definition of $\mathcal{E}$:
\begin{itemize}
\item For
all $t \in Pol$: $\mathcal{E}(\dag, t) = \mathcal{E}(\ddag, t) = \{ A \in Form \mid
\vdash t{\hspace{0.25mm}:\hspace{0.25mm}} A \}$;
\item For all $t \in Pol_X$ and $m \in Tree \setminus \{ \dag,
\ddag \}$:
\begin{align*}
(\forall A \in Form)(&A \in \mathcal{E}(m, t)
\Leftrightarrow\\
&\Leftrightarrow (\exists
t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1)\ldots(\exists t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n)((\forall \xi \in m)(t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n \in
end(\xi)) \&\\
&\qquad\qquad\qquad\qquad\qquad\qquad\quad\& \vdash (t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1\wedge\ldots \wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n) \to t{\hspace{0.25mm}:\hspace{0.25mm}} A));
\end{align*}
\item $(\forall A \in Form_X)(\{ A \} = \mathcal{E}(m, y_{(j,
A)})= \mathcal{E}(m, w_A) = \mathcal{E}(m, u_A))$, for all $m \in Tree \setminus \{ \dag, \ddag \}$ and $j \in Ag$;
\item $(\forall A \in Form_X)(A \in \mathcal{E}(m, z)
\Leftrightarrow (\forall \xi \in m)(Proven(A) \in end(\xi)))$, for all $m \in Tree \setminus \{ \dag, \ddag \}$;
\item $\mathcal{E}(m, t) = Form$, if $m \in Tree \setminus \{ \dag, \ddag \}$ and $t
\in Pol \setminus(Pol_X \cup Y \cup W \cup U \cup \{ z \})$.
\end{itemize}
We start by mentioning a straightforward corollary to the above
definition:
\begin{lemma}\label{proven}
For all $m \in Tree$ and $t \in Pol$ it is true that $\{ A \in
Form \mid \vdash t{\hspace{0.25mm}:\hspace{0.25mm}} A \} \subseteq \mathcal{E}(m,t)$.
\end{lemma}
\begin{proof}
This holds simply by the definition of $\mathcal{E}$ when $m \in
\{ \dag, \ddag \}$. If $m \in Tree \setminus \{ \dag, \ddag \}$,
then we have another obvious case for $t \in Pol \setminus(Pol_X
\cup Y \cup W \cup U \cup \{ z \})$.
If $t \in Pol_X$ and $\vdash t{\hspace{0.25mm}:\hspace{0.25mm}} A$, then we have just a degenerate
case in the definition of $\mathcal{E}(m,t)$, with $t{\hspace{0.25mm}:\hspace{0.25mm}} A$
following from the empty conjunction of members of
$end(\xi)$ for every $\xi \in m$.
Finally, if $t \in Y \cup W \cup U \cup \{ z \}$, then $t \in
PVar$, therefore, by Lemma \ref{theorems}.3, we must have:
$$
\{ A \in Form \mid \vdash t{\hspace{0.25mm}:\hspace{0.25mm}} A \} = \emptyset \subseteq
\mathcal{E}(m,t).
$$
\end{proof}
Note that since we know that for every $c \in PConst$ and every
instance $A$ of one of the axiom schemes in the list
\eqref{A0}--\eqref{A13}, it is true that $\vdash c{\hspace{0.25mm}:\hspace{0.25mm}} A$ (by
\eqref{R3}), it follows, among other things, that the
above-defined function $\mathcal{E}$ satisfies the additional
normality condition on jstit models.
It is straightforward to check that $R$, as defined above, is a
preorder on $Tree$, using \eqref{A7} and \eqref{A8}. Let us
briefly look into why the future always matters constraint is
verified as well. Assume $m \in Tree$. If $m = \dag$, then it is
connected to all the elements in $Tree$ by both $\leq$ and $R$,
and if $m = \ddag$, then it is connected only to itself by both
$\leq$ and $R$, so these moments cannot falsify the constraint. So
let us assume that $m$ is an equivalence class generated by some
element, say $m = [(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv$. If
$m \leq m'$, then $m'$ must be an equivalence class as well, and
$(\Gamma_1,\ldots, \Gamma_n)\alpha$ must be an initial segment of
every element in $m'$, so that we may assume, wlog, that $m' =
[(\Gamma_1,\ldots, \Gamma_k)\alpha]_\equiv$ for some $k \geq n$.
In particular, if $n
> 1$, then $k - 1 > 0$. But then take an arbitrary $A \in Form_X$. If $KA \in \Gamma_n$, then, since
$(\Gamma_1,\ldots, \Gamma_k)\alpha$ is an element, $KA \in
\Gamma_k$. By Lemma \ref{theorems}.5 and maxiconsistency of
$\Gamma_k$ in $X$ we must have then $\Box KA \in \Gamma_k$. Now,
by definition of $\equiv$, we get $KA \in end(\tau)$ for any given
$\tau \in m'$. It follows then that $R(m, m')$ as desired.
We further check that the semantical constraints for $\mathcal{E}$
are verified:
\begin{lemma}\label{e}
The function $\mathcal{E}$, as defined above, satisfies both the
monotonicity of evidence and the evidence closure properties.
\end{lemma}
\begin{proof}
We start with the monotonicity of evidence. Assume $R(m,m')$ and
$t \in Pol$. If $m \in \{ \dag, \ddag \}$ then, by Lemma
\ref{proven}, $\mathcal{E}(m,t) = \{ A \in Form \mid \vdash t{\hspace{0.25mm}:\hspace{0.25mm}} A
\} \subseteq \mathcal{E}(m',t)$ for any $m' \in Tree$.
Assume, further, that $m$ is an equivalence class of elements.
Then, since we have $R(m,m')$, $m'$ must be an equivalence class
of elements as well. Also, we are done if $m = m'$. On the other
hand, if $m \neq m'$, then consider $t$. If $t \in Pol \setminus
(Pol_X \cup \{ z \})$, then we must have $\mathcal{E}(m,t) =
\mathcal{E}(m',t)$ by definition, since $m, m' \in Tree \setminus
\{ \dag, \ddag \}$. If $t = z$, then take an arbitrary $A \in
\mathcal{E}(m,z)$. By the above definition of $\mathcal{E}$, this
means that $Proven(A) \in end(\xi)$ for every element $\xi$ of
$m$. By maxiconsistency of $end(\xi)$ in $X$ and \eqref{A11}, this
further means that $KProven(A) \in end(\xi)$ for every element
$\xi$ of $m$. Therefore, by $R(m,m')$, and the fact that $m, m'
\in Tree \setminus \{ \dag, \ddag \}$, we get that $KProven(A) \in
end(\tau)$ for every element $\tau$ of $m'$, whence, by
\eqref{A7}, it follows that $Proven(A) \in end(\tau)$ for every
element $\tau$ of $m'$. Therefore $A \in \mathcal{E}(m',z)$. Since
$A$ was arbitrary, this means that $\mathcal{E}(m,z) \subseteq
\mathcal{E}(m',z)$, as desired.
Finally, assume that $t \in Pol_X$ and take an arbitrary $A \in
\mathcal{E}(m,t)$. Then we can choose $t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}}
B_n$ in such a way that for all $\xi \in m$ we have $t_1{\hspace{0.25mm}:\hspace{0.25mm}}
B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n \in end(\xi)$, and, moreover, $\vdash
(t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1\wedge\ldots \wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n) \to t{\hspace{0.25mm}:\hspace{0.25mm}} A$. Since
$t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n \in end(\xi)$, we know that $\{
t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n \} \subseteq Form_X$. We also know that,
for every $\xi \in m$, $end(\xi)$ is maxiconsistent in $X$.
Therefore, using Lemma \ref{elementaryconsistency}, we obtain,
successively:
\begin{align*}
&(\forall \xi \in m)(Kt_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1,\ldots,Kt_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n \in end(\xi))
&&\textup{(by
Lemma \ref{theorems}.1)}\\
&(\forall \xi \in m)((Kt_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1\wedge\ldots\wedge Kt_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n)
\in end(\xi)) &&\textup{(by
Lemma \ref{elementaryconsistency}.5)}\\
&(\forall \xi \in m)(K(t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1\wedge\ldots\wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n)
\in end(\xi)) &&\textup{(by \eqref{A7})}
\end{align*}
From the latter it follows by $R(m,m')$ that $K(t_1{\hspace{0.25mm}:\hspace{0.25mm}}
B_1\wedge\ldots\wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n) \in end(\tau)$ for all $\tau \in
m'$. We also know that for every $\tau \in m'$, $end(\tau)$ is
maxiconsistent in $X$ so that, applying Lemma
\ref{elementaryconsistency}, and \eqref{A7}, we get that $t_1{\hspace{0.25mm}:\hspace{0.25mm}}
B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n \in end(\tau)$ for all $\tau \in m'$. Adding
this to our initial assumption that $\vdash (t_1{\hspace{0.25mm}:\hspace{0.25mm}}
B_1\wedge\ldots \wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n) \to t{\hspace{0.25mm}:\hspace{0.25mm}} A$, we obtain that $A
\in \mathcal{E}(m',t)$.
We turn now to the closure conditions. We verify the first two
conditions, and the third one can be verified in a similar way,
restricting attention to $t$ rather than considering both $s$ and
$t$. Let $s, t \in Pol$. We need to consider two cases:
\emph{Case 1}. $m \in \{ \dag, \ddag \}$. If $A \in
\mathcal{E}(m,s)$, then $\vdash s{\hspace{0.25mm}:\hspace{0.25mm}} A$. Therefore, by \eqref{A6},
we must also have $\vdash (s + t){\hspace{0.25mm}:\hspace{0.25mm}} A$ so that $A \in
\mathcal{E}(m,s + t)$. Similarly, if $A \in \mathcal{E}(m,t)$,
then also $A \in \mathcal{E}(m,s + t)$ and the closure constraint
(b) is verified. If, on the other hand, it is true that for some
$A, B \in Form$ we have both $A \to B \in \mathcal{E}(m,s)$ and $A
\in \mathcal{E}(m,t)$, then, again, this means that both $\vdash
s{\hspace{0.25mm}:\hspace{0.25mm}} A \to B$ and $\vdash t{\hspace{0.25mm}:\hspace{0.25mm}} A$. By \eqref{A4}, it follows that
$\vdash s\times t{\hspace{0.25mm}:\hspace{0.25mm}} B$ and, therefore, also $B \in
\mathcal{E}(m,s\times t)$, so that the closure condition (a) is
also verified.
\emph{Case 2}. $m \in Tree \setminus \{ \dag, \ddag \}$. If $s +
t, s\times t \notin Pol_X$, then we have:
$$
\mathcal{E}(m, s + t) = \mathcal{E}(m, s \times t) = Form,
$$
so that all the closure conditions are verified trivially.
Therefore, assume that $s + t, s\times t \in Pol_X$. If $A \in Form$ and $A \in
\mathcal{E}(m,s)$, then we can choose $t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}}
B_n$ such that for all $\xi \in m$ we have both $t_1{\hspace{0.25mm}:\hspace{0.25mm}}
B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n \in end(\xi)$ and
$$
\vdash (t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1\wedge\ldots \wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n) \to s{\hspace{0.25mm}:\hspace{0.25mm}} A.
$$
By \eqref{A0} and \eqref{A6} we get then that $\vdash (t_1{\hspace{0.25mm}:\hspace{0.25mm}}
B_1\wedge\ldots \wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n) \to (s + t){\hspace{0.25mm}:\hspace{0.25mm}} A$, which means
that $A \in \mathcal{E}(m,s + t)$. Similarly, if $A \in
\mathcal{E}(m,t)$, then $A \in \mathcal{E}(m,s + t)$ as well, and
closure condition (b) is verified.
On the other hand, if $A, B \in Form$ and we have both $A \to
B \in \mathcal{E}(m,s)$ and $A \in \mathcal{E}(m,t)$, then we can
choose $t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n$ and also $s_1{\hspace{0.25mm}:\hspace{0.25mm}}
C_1,\ldots,s_k{\hspace{0.25mm}:\hspace{0.25mm}} C_k$ such that for every $\xi \in m$ we have all
of the following:
\begin{align*}
&t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1,\ldots,t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n, s_1{\hspace{0.25mm}:\hspace{0.25mm}}
C_1,\ldots,s_k{\hspace{0.25mm}:\hspace{0.25mm}} C_k \in end(\xi);\\
&\vdash (t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1\wedge\ldots \wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n) \to t{\hspace{0.25mm}:\hspace{0.25mm}} A;\\
&\vdash (s_1{\hspace{0.25mm}:\hspace{0.25mm}} C_1\wedge\ldots \wedge s_k{\hspace{0.25mm}:\hspace{0.25mm}} C_k) \to s{\hspace{0.25mm}:\hspace{0.25mm}} (A \to
B);
\end{align*}
It follows then by \eqref{A0} and \eqref{A4} that:
\begin{align*}
&\vdash (t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1\wedge\ldots \wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n \wedge s_1{\hspace{0.25mm}:\hspace{0.25mm}} C_1\wedge\ldots \wedge s_k{\hspace{0.25mm}:\hspace{0.25mm}}
C_k)\to s\times t{\hspace{0.25mm}:\hspace{0.25mm}} B,
\end{align*}
so that $B \in \mathcal{E}(m,s \times t)$ follows and closure
condition (a) is verified.
\end{proof}
\subsection{$Act$ and $V$}
It only remains to define $Act$ and $V$ for our canonical model,
and we define them as follows:
\begin{itemize}
\item $(m,h) \in V(p) \Leftrightarrow p \in end(m \cap h)$,
for all $p \in Var$;
\item $Act(\dag, (\dag,\ddag)) = Act(\ddag, (\dag,\ddag)) = \emptyset$;
\item $Act(\dag, h) = \{ z \}$, if $h \neq (\dag,\ddag)$;
\item $Act(m,h) = \{ z \} \cup \{ y_{(j, A)} \mid Prove(j, A) \wedge \neg\Box Prove(j, A)
\in \Gamma_1 \} \cup$
$\qquad\qquad\qquad\qquad\qquad\qquad\cup\{ u_A \mid \Box Prove(j, A) \in \Gamma_1
\}$, if $m \cap h = (\Gamma_1,\ldots, \Gamma_n)\uparrow$;
\item $Act(m,h) = \{ z \} \cup \{ y_{(j, A)} \mid Prove(j, A) \wedge \neg\Box Prove(j, A)
\in \Gamma_1 \} \cup$
$\qquad\qquad\qquad\qquad\qquad\qquad\cup \{ w_A \mid \Box Prove(j, A) \in \Gamma_1
\}$, if $m \cap h = (\Gamma_1,\ldots, \Gamma_n)\downarrow$.
\end{itemize}
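To illustrate the definition, let $m = [(\Gamma)\alpha]_\equiv$ be a moment of
length $1$, and, using Lemma \ref{hist}.2, choose $h, h' \in H_m$ such that
$m \cap h = (\Gamma)\uparrow$ and $m \cap h' = (\Gamma)\downarrow$. The clauses
above then instantiate to:
\begin{align*}
Act(m,h) &= \{ z \} \cup \{ y_{(j, A)} \mid Prove(j, A) \wedge \neg\Box Prove(j, A) \in \Gamma \} \cup \{ u_A \mid \Box Prove(j, A) \in \Gamma \},\\
Act(m,h') &= \{ z \} \cup \{ y_{(j, A)} \mid Prove(j, A) \wedge \neg\Box Prove(j, A) \in \Gamma \} \cup \{ w_A \mid \Box Prove(j, A) \in \Gamma \},
\end{align*}
so that the $\uparrow$/$\downarrow$ tag of $m \cap h$ only decides whether the
$u$- or the $w$-variables get presented, while the first component of $m \cap
h$ decides for which formulas they are presented.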
We begin by establishing some consequences of the above
definition:
\begin{lemma}\label{intersections}
The following statements are true:
\begin{enumerate}
\item If $(\Gamma)\alpha$ is an element, then:
$$
\bigcap_{h \in H_{[(\Gamma)\alpha]_\equiv}}(Act([(\Gamma)\alpha]_\equiv, h)) = \{ z \}.
$$
\item If $n > 1$ and $(\Gamma_1,\ldots, \Gamma_n)\alpha$ is an
element and $g \in H_{[(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv}$
is arbitrary, then:
$$
\bigcap_{h \in H_{[(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv}}(Act([(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv, h)) = Act([(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv, g).
$$
\end{enumerate}
\end{lemma}
\begin{proof}
(Part 1). Set $m := [(\Gamma)\alpha]_\equiv$. It is clear from the
definition of $Act$ that $z \in \bigcap_{h \in H_m}(Act(m,h))$, so that we only
need to show that $z$ is the only member in this intersection. The
other elements of $Act$, according to the definition, can have one
of the following forms: either $y_{(j, A)}$, or $u_A$, or $w_A$,
for some $A \in Form_X$ and $j \in Ag$. We know, further, that
both $(\Gamma)\alpha \equiv (\Gamma)\uparrow$ and $(\Gamma)\alpha
\equiv (\Gamma)\downarrow$.\footnote{One of these two elements
even coincides with $(\Gamma)\alpha$, but we cannot tell which
one.} Then, using Lemma \ref{hist}.2, take any $h, h' \in H_m$ for
which $h \cap m = (\Gamma)\uparrow$ and $h' \cap m =
(\Gamma)\downarrow$. By definition, $Act(m, h)$ is disjoint from
$\{ w_A \mid A \in Form_X \}$ whereas $Act(m, h')$ is disjoint
from $\{ u_A \mid A \in Form_X \}$, therefore, $\bigcap_{h \in
H_m}(Act(m,h))$ must be disjoint from $\{ w_A \mid A \in Form_X \}
\cup \{ u_A \mid A \in Form_X \}$. Finally, consider a variable of
the form $y_{(j, A)}$ for arbitrary $A \in Form_X$ and $j \in Ag$.
If $y_{(j, A)} \in \bigcap_{h \in H_m}(Act(m,h))$, then recall
that for every $(\Delta)\alpha \in m$ there exists, by Lemma
\ref{hist}.2, a history $h_\Delta \in H_m$ such that
$(\Delta)\alpha = m \cap h_\Delta$. This means that $Prove(j, A)
\wedge \neg\Box Prove(j, A) \in \Delta$ for every $(\Delta)\alpha
\in m$, and thus, by maxiconsistency of $\Delta$ in $X$ and Lemma
\ref{elementaryconsistency}.5, that $Prove(j, A), \neg\Box
Prove(j, A) \in \Delta$ for every $(\Delta)\alpha \in m$. In
particular, we will have $Prove(j, A), \neg\Box Prove(j, A) \in
\Gamma$. Consider then the following set of formulas in $X$:
$$
\Xi = \{ B \mid \Box B \in \Gamma \} \cup \{ \neg Prove(j,A) \}.
$$
$\Xi$ is consistent, for otherwise we would have:
$$
\vdash (B_1 \wedge\ldots \wedge B_k) \to Prove(j,A),
$$
for some $B_1,\ldots,B_k$ such that $\Box B_1,\ldots,\Box B_k$ are
all in $\Gamma$. Since $\Box$ is an S5-modality, we would obtain
that
$$
\vdash (\Box B_1 \wedge\ldots \wedge \Box B_k) \to \Box
Prove(j,A),
$$
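(Spelling this standard step out once, since it recurs throughout the paper:
necessitation applied to the previous implication gives $\vdash \Box((B_1
\wedge\ldots \wedge B_k) \to Prove(j,A))$, the normality of $\Box$ then gives
$\vdash \Box(B_1 \wedge\ldots \wedge B_k) \to \Box Prove(j,A)$, and the
distribution of $\Box$ over conjunctions gives $\vdash (\Box B_1 \wedge\ldots
\wedge \Box B_k) \to \Box(B_1 \wedge\ldots \wedge B_k)$; chaining these yields
the displayed implication.)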
whence, by maxiconsistency of $\Gamma$ in $X$, it would follow
that $\Box Prove(j,A) \in \Gamma$, and the latter, given that also
$\neg\Box Prove(j, A) \in \Gamma$, would contradict $\Gamma$'s
maxiconsistency. Therefore, $\Xi$ is consistent and we can extend
$\Xi$ to a set $\Theta \subseteq Form_X$, which is maxiconsistent
in $X$. By definition, we will have $(\Gamma)\alpha \equiv
(\Theta)\alpha$, and thus $(\Theta)\alpha \in m$. But we will also
have $\neg Prove(j,A) \in \Theta$ which contradicts our assumption
that $Prove(j, A) \in \Delta$ for every $(\Delta)\alpha \in m$.
This contradiction shows that no proof variable of the form
$y_{(j, A)}$ is in $\bigcap_{h \in H_m}(Act(m,h))$. Therefore,
finally, we get our claim that $\bigcap_{h \in H_m}(Act(m,h)) = \{
z \}$ verified.
(Part 2). We set $m := [(\Gamma_1,\ldots,
\Gamma_n)\alpha]_\equiv$. It will suffice to show that, for all
$h,h' \in H_m$, we have $Act(m,h) = Act(m,h')$. We know that for
some appropriate $\Delta_1,\ldots, \Delta_n, \Theta_1,\ldots,
\Theta_n$ we will have:
$$
m \cap h = (\Delta_1,\ldots, \Delta_n)\alpha,
$$
and:
$$
m \cap h' = (\Theta_1,\ldots, \Theta_n)\alpha.
$$
Since the length of $m$ is greater than $1$, we know that all
elements in $m$ share the same greatest proper initial segment, so
that we have:
$$
\Gamma_i = \Delta_i = \Theta_i
$$
for all $i < n$, and, in particular:
$$
\Gamma_1 = \Delta_1 = \Theta_1.
$$
Now it is clear from the definition of $Act$ that $Act(m,h)$ and
$Act(m,h')$ are completely determined by $\alpha$, $\Delta_1$ and
$\Theta_1$, respectively, therefore, it follows that $Act(m,h) =
Act(m,h')$.
\end{proof}
We now check the remaining semantic constraints on normal jstit models:
\begin{lemma}\label{act}
The canonical model, as defined above, satisfies the constraints
as to the expansion of presented proofs, no new proofs
guaranteed, presenting a new proof makes histories divide, and
epistemic transparency of presented proofs.
\end{lemma}
\begin{proof}
We consider the expansion of presented proofs first. Let $m' < m$
and let $h \in H_m$. Then $m' \neq \ddag$, since $\ddag$ has no
$<$-successors. If $m' = \dag$ and $m = \ddag$, then $h$ must be
$(\dag,\ddag)$ and we have $Act(\dag, (\dag,\ddag)) = Act(\ddag,
(\dag,\ddag)) = \emptyset$, so that the expansion of presented
proofs holds. If $m' = \dag$ and $m$ is an equivalence class of
elements, then $h \neq (\dag,\ddag)$, and we have $Act(\dag, h) =
\{ z \}$ and $z \in Act(m,h)$. Finally, if $m'$ is an equivalence
class of elements, then $m$ is also an equivalence class of
elements. In this case, $m \cap h$ must be of the form
$(\Gamma_1,\ldots, \Gamma_n)\alpha$ for the respective
$\Gamma_1,\ldots, \Gamma_n \subseteq Form_X$ and $\alpha \in \{
\uparrow, \downarrow \}$. But then, for some $k \leq n$, $m' \cap
h$ must be of the form $(\Gamma_1,\ldots, \Gamma_k)\alpha$. Since
the extension of both $Act(m,h)$ and $Act(m',h)$ is determined by
$\alpha$ and $\Gamma_1$, and these are shared by both $m \cap h$
and $m' \cap h$, it follows that $Act(m,h) = Act(m',h)$ and thus
$Act(m',h) \subseteq Act(m,h)$.
We consider next the no new proofs guaranteed constraint. Let $m
\in Tree$. If $m \in \{ \dag,\ddag \}$, then $\bigcap_{h \in
H_m}(Act(m,h)) = \bigcup_{m' < m, h \in H_m}(Act(m',h)) =
\emptyset$ and the constraint is trivially satisfied. If $m \in
Tree \setminus \{ \dag, \ddag \}$, then we need to distinguish
between two cases:
\emph{Case 1}. The length of $m$ equals $1$. Then $m$ is of the
form $[(\Gamma)\alpha]_\equiv$ for the respective $\Gamma
\subseteq Form_X$ and $\alpha \in \{ \uparrow, \downarrow \}$. By
Lemma \ref{intersections}.1, we have then that $\bigcap_{h \in
H_m}(Act(m,h)) = \{ z \}$. On the other hand, note that the only
$<$-predecessor of $[(\Gamma)\alpha]_\equiv = m$ must be $\dag$
and therefore, by definition of $Act$, we get that $\bigcup_{h \in
H_m}(Act(\dag,h)) = \{ z \}$ so that the no new proofs guaranteed
constraint is verified for $m$.
\emph{Case 2}. The length of $m$ is greater than $1$. Then $m$
must be of the form $[(\Gamma_1,\ldots, \Gamma_n, \Gamma_{n +
1})\alpha]_\equiv$ for the respective $\Gamma_1,\ldots, \Gamma_n,
\Gamma_{n + 1} \subseteq Form_X$, $n > 0$, and $\alpha \in \{
\uparrow, \downarrow \}$. We assume for definiteness that $\alpha =
\downarrow$; the case $\alpha = \uparrow$ is similar, with $u_A$ in place of
$w_A$ below. Then we choose, by Lemma \ref{hist}.2,
an arbitrary $g \in H_m$ such that $m \cap g = (\Gamma_1,\ldots,
\Gamma_n, \Gamma_{n + 1})\alpha$. For this $g$ we get, using Lemma
\ref{intersections}.2:
\begin{align*}
&\bigcap_{h \in
H_m}(Act([(\Gamma_1,\ldots, \Gamma_n, \Gamma_{n +
1})\alpha]_\equiv,h)) = Act([(\Gamma_1,\ldots, \Gamma_n, \Gamma_{n
+
1})\alpha]_\equiv,g)\\
&=\{ z \} \cup \{ y_{(j, A)} \mid Prove(j, A) \wedge \neg\Box
Prove(j, A) \in \Gamma_1 \} \cup \{ w_A \mid \Box Prove(j, A) \in
\Gamma_1 \}\\
&\qquad\qquad\qquad\qquad = Act([(\Gamma_1)\alpha]_\equiv,g)\\
&\qquad\qquad\qquad\qquad\subseteq \bigcup_{m' < m, h \in
H_m}(Act(m',h)),
\end{align*}
since $[(\Gamma_1)\alpha]_\equiv < [(\Gamma_1,\ldots, \Gamma_n,
\Gamma_{n + 1})\alpha]_\equiv$.
We turn next to the presenting a new proof makes histories divide
constraint. Consider $m, m' \in Tree$ such that $m < m'$ and
arbitrary $h, h' \in H_{m'}$. If $m = \ddag$, then the constraint
is verified trivially since $\ddag$ has no $<$-successors. If $m =
\dag$ and $m' = \ddag$, then we must have $h = h' = (\dag, \ddag)$
and the constraint is verified trivially. If $m = \dag$ and $m'
\neq \ddag$, then both $h$ and $h'$ are different from $(\dag,
\ddag)$, which means that $Act(\dag, h) = Act(\dag, h') = \{ z
\}$, and the constraint is again verified. Finally, if $m \in Tree
\setminus \{ \dag, \ddag \}$, then we must have $m =
[(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv$ for some appropriate
$\Gamma_1,\ldots, \Gamma_n,\alpha$. But then, since $m' > m$, it
must be that $m' = [(\Gamma_1,\ldots, \Gamma_k)\alpha]_\equiv$ for
some $k > n$ (so that, among other things, we know that $k > 1$).
Now, given that $h, h' \in H_{m'}$, this means that for
appropriate $\Delta, \Delta' \subseteq Form_X$ we must have $h
\cap m' = (\Gamma_1,\ldots, \Gamma_{k - 1}, \Delta)\alpha$ and $h'
\cap m' = (\Gamma_1,\ldots, \Gamma_{k - 1}, \Delta')\alpha$,
which, in turn, means that:
$$
h \cap m = h' \cap m = (\Gamma_1,\ldots, \Gamma_n)\alpha.
$$
It follows, by definition of $Act$, that in this case $Act(m,h) =
Act(m,h')$, and the constraint is verified.
It remains to check the epistemic transparency of presented proofs
constraint. Assume that $m, m' \in Tree$ are such that $R(m,m')$.
If we have $m \in \{ \dag,\ddag \}$, then, by definition, we must
have $\bigcap_{h \in H_m}(Act(m,h)) = \emptyset$, and the
constraint is verified in a trivial way. If, on the other hand, $m
\in Tree \setminus \{ \dag, \ddag \}$, then, by $R(m,m')$, we must
also have $m' \in Tree \setminus \{ \dag, \ddag \}$. We have then
two cases to consider:
\emph{Case 1}. The length of $m$ equals $1$. Then, by Lemma
\ref{intersections}.1, we know that $\bigcap_{h \in H_m}(Act(m,h)) = \{ z \}$. It is also
obvious, by the fact that $m' \in Tree \setminus \{ \dag, \ddag
\}$, that $z \in \bigcap_{h \in H_{m'}}(Act(m',h))$ and thus the
constraint is satisfied.
\emph{Case 2}. The length of $m$ is greater than $1$. Then $m =
[(\Gamma_1,\ldots, \Gamma_n)\alpha]_\equiv$ for appropriate
$\Gamma_1,\ldots, \Gamma_n,\alpha$, and, since we have $R(m,m')$,
we must also have $m' = [(\Gamma_1,\Delta_1,\ldots,
\Delta_k)\alpha]_\equiv$ for appropriate $\Delta_1,\ldots,
\Delta_k$. We assume that in fact $\alpha = \downarrow$; the other
subcase is similar. Using Lemma \ref{hist}.2, we choose $g \in
H_m$ and $g' \in H_{m'}$ in such a way that $m \cap g =
(\Gamma_1,\ldots, \Gamma_n)\downarrow$ and $m' \cap g' =
(\Gamma_1,\Delta_1,\ldots, \Delta_k)\downarrow$. We get then, by
Lemma \ref{intersections}.2:
\begin{align*}
&\bigcap_{h \in H_m}(Act(m,h)) = Act(m,g)\\
&=\{ z \} \cup \{ y_{(j, A)} \mid Prove(j, A) \wedge \neg\Box
Prove(j, A) \in \Gamma_1 \} \cup \{ w_A \mid \Box Prove(j, A) \in
\Gamma_1 \}\\
&\qquad\qquad\qquad\qquad\qquad = Act(m',g')\\
&\qquad\qquad\qquad\qquad\qquad = \bigcap_{h' \in
H_{m'}}(Act(m',h')).
\end{align*}
\end{proof}
\subsection{The truth lemma}
It follows from Lemmas \ref{leq}--\ref{act} that our above-defined
canonical model is in fact a normal jstit model. Now we need to
supply a truth lemma:
\begin{lemma}\label{truth}
Let $A \in Form_X$, let $m \in Tree \setminus \{ \dag,\ddag \}$,
and let $h \in H_m$. Then:
$$
\mathcal{M},m,h \models A \Leftrightarrow A \in end(m \cap h).
$$
\end{lemma}
\begin{proof}
As is usual, we prove the lemma by induction on the construction
of $A$. The basis of induction, with $A = p \in Var$, holds by the
definition of $V$, whereas the Boolean cases of the induction step
are trivial. We treat the modal cases:
\emph{Case 1}. $A = \Box B$. If $\Box B \in end(m \cap h)$, then
note that for every $h' \in H_m$ we must have $m \cap h' \in m$ so
that $m \cap h' \equiv m \cap h$. By definition of $\equiv$ and
the fact that $m \in Tree \setminus \{ \dag,\ddag \}$, we must
have then $B \in end(m \cap h')$ for all $h' \in H_m$ and thus, by
induction hypothesis, we obtain that $\mathcal{M},m,h \models \Box
B$. If, on the other hand, $\Box B \notin end(m \cap h)$, we need
to consider then two subcases:
\emph{Case 1.1}. The length of $m$ equals $1$. We must have then
$m \cap h = (\Gamma)\alpha$ for some appropriate $\Gamma$ and
$\alpha$ so that $\Gamma = end(m \cap h)$. Then the set
$$
\Xi = \{ \Box C \mid \Box C \in \Gamma \} \cup \{ \neg B \}
$$
must be consistent, since otherwise we would have
$$
\vdash (\Box C_1\wedge\ldots\wedge\Box C_n) \to B
$$
for some $\Box C_1,\ldots,\Box C_n \in \Gamma$, whence, since
$\Box$ is an S5-modality, we would get
$$
\vdash (\Box C_1\wedge\ldots\wedge\Box C_n) \to \Box B,
$$
which would mean that $\Box B \in \Gamma$, contrary to our
assumption. Therefore, $\Xi$ is consistent and we can extend $\Xi$
to a set $\Delta \subseteq Form_X$ which is maxiconsistent in $X$. Of
course, in this case $B \notin \Delta$. We will have then that
$(\Delta)\alpha$ is an element, and, by definition of $\equiv$,
that $(\Gamma)\alpha \equiv (\Delta)\alpha$. By Lemma
\ref{hist}.2, for some $h' \in H_m$ we will have $(\Delta)\alpha =
m \cap h'$ and, therefore, $\Delta = end(m \cap h')$. Since $B
\notin \Delta$, it follows, by induction hypothesis, that
$\mathcal{M},m,h' \not\models B$, hence $\mathcal{M},m,h
\not\models \Box B$ as desired.
\emph{Case 1.2}. The length of $m$ is greater than $1$. We must
have then $m \cap h = (\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ for
some appropriate $n > 0$, $\Gamma_1,\ldots,\Gamma_n,\Gamma$ and
$\alpha$ so that $\Gamma = end(m \cap h)$. We then define $\Delta$
as in Case 1.1 so that we have
$(\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha \equiv
(\Gamma_1,\ldots,\Gamma_n,\Delta)\alpha$ and show that for any $h'
\in H_m$ such that $(\Gamma_1,\ldots,\Gamma_n,\Delta)\alpha = m
\cap h'$ and, \emph{eo ipso}, $\Delta = end(m \cap h')$, we will
have $\mathcal{M},m,h' \not\models B$, whence $\mathcal{M},m,h
\not\models \Box B$ as desired. The only new ingredient is that
now we need to supply a proof that
$(\Gamma_1,\ldots,\Gamma_n,\Delta)\alpha$ is actually an element.
Well, if for any $C \in Form_X$ we have that $KC \in \Gamma_n$,
then, since $(\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ is an
element, we will have $KC \in \Gamma$, whence, by maxiconsistency
of $\Gamma$ in $X$ and Lemma \ref{theorems}.5, $\Box KC \in
\Gamma$, and since every boxed formula from $\Gamma$ is also in
$\Delta$, we get that $\Box KC \in \Delta$, whence $KC \in \Delta$
by maxiconsistency of $\Delta$ in $X$ and S5 reasoning for $\Box$.
Further, if we have $Prove(j, C) \in \Gamma_1$ for $j \in Ag$,
then, since $(\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ is an
element, we will have $Proven(C) \in \Gamma$, whence, by
maxiconsistency of $\Gamma$ in $X$ and Lemma \ref{theorems}.2,
$\Box Proven(C) \in \Gamma$, and since every boxed formula from
$\Gamma$ is also in $\Delta$, we get that $\Box Proven(C) \in
\Delta$, whence $Proven(C) \in \Delta$ by maxiconsistency of
$\Delta$ in $X$ and S5 reasoning for $\Box$. Finally, if $C \in
Form_X$ and $j \in Ag$, then $K\Box\neg Prove(j, C) \in \Gamma$,
because $(\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ is an element,
whence $\Box K\Box\neg Prove(j, C) \in \Gamma$ by \eqref{A8} and
maxiconsistency of $\Gamma$ in $X$. And since every boxed formula
from $\Gamma$ is also in $\Delta$, we get that $\Box K\Box\neg
Prove(j, C) \in \Delta$ as well, hence $K\Box\neg Prove(j, C) \in
\Delta$ by \eqref{A1} and maxiconsistency of $\Delta$ in $X$.
\emph{Case 2}. $A = [j]B$ for some $j \in Ag$. Then, if $[j]B \in
end(m \cap h)$, by definition of $Choice$ and the fact that $m \in
Tree \setminus \{ \dag,\ddag \}$ we must have:
$$
Choice^m_j(h) = \{ h' \mid h' \in H_m,\,
(\forall C \in Form_X)([j]C \in end(h \cap m) \Rightarrow C \in end(h' \cap
m))\}.
$$
Therefore, if $h' \in Choice^m_j(h)$, then we must have $B \in
end(h' \cap m)$, and further, by induction hypothesis, that
$\mathcal{M},m,h' \models B$, so that we get $\mathcal{M},m,h
\models [j]B$. On the other hand, if $[j]B \notin end(m \cap h)$,
we need to consider then two subcases:
\emph{Case 2.1}. The length of $m$ equals $1$. We must have then
$m \cap h = (\Gamma)\alpha$ for some appropriate $\Gamma$ and
$\alpha$ so that $\Gamma = end(m \cap h)$. Then the set
$$
\Xi = \{ [j]C \mid [j]C \in \Gamma \} \cup \{ \neg B \}
$$
must be consistent, since otherwise we would have
$$
\vdash ([j]C_1\wedge\ldots\wedge[j]C_n) \to B
$$
for some $[j]C_1,\ldots,[j]C_n \in \Gamma$, whence, since $[j]$ is
an S5-modality, we would get
$$
\vdash ([j]C_1\wedge\ldots\wedge[j]C_n) \to [j]B,
$$
which would mean that $[j]B \in \Gamma$, contrary to our
assumption. Therefore, $\Xi$ is consistent and we can extend $\Xi$
to a set $\Delta \subseteq Form_X$ which is maxiconsistent in $X$.
Of course, in this case $B \notin \Delta$. We will have then that
$(\Delta)\alpha$ is an element.
Now, if $D \in Form_X$ is such that $\Box D \in \Gamma$, then, by
\eqref{A2} and maxiconsistency of $\Gamma$ in $X$, we know that
$[j]D \in \Gamma$, so that also $[j]D \in \Delta$, and hence, by
\eqref{A1} and maxiconsistency of $\Delta$ in $X$, $D \in \Delta$.
We have thus shown that:
\begin{equation}\label{E:t1}
(\forall D \in Form_X)(\Box D \in \Gamma \Rightarrow D \in
\Delta),
\end{equation}
and it follows that $(\Gamma)\alpha
\equiv (\Delta)\alpha$ by definition of $\equiv$. By Lemma
\ref{hist}.2, for some $h' \in H_m$ we will have $(\Delta)\alpha =
m \cap h'$ and, therefore, $\Delta = end(m \cap h')$. Also, since
$\Delta$ contains all the $[j]$-modalized formulas from $\Gamma$,
we know that for any such $h'$ we will have $h' \in
Choice^m_j(h)$. Since $B \notin \Delta$, it follows, by induction
hypothesis, that $\mathcal{M},m,h' \not\models B$, hence
$\mathcal{M},m,h \not\models [j]B$ as desired.
\emph{Case 2.2}. The length of $m$ is greater than $1$. We must
have then $m \cap h = (\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ for
some appropriate $n > 0$, $\Gamma_1,\ldots,\Gamma_n,\Gamma$ and
$\alpha$ so that $\Gamma = end(m \cap h)$. We then define $\Delta$
as in Case 2.1 so that we have
$(\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha \equiv
(\Gamma_1,\ldots,\Gamma_n,\Delta)\alpha$ and show that for any $h'
\in H_m$ such that $(\Gamma_1,\ldots,\Gamma_n,\Delta)\alpha = m
\cap h'$ and, \emph{eo ipso}, $\Delta = end(m \cap h')$, we will
have both $h' \in Choice^m_j(h)$ and $\mathcal{M},m,h' \not\models
B$, whence $\mathcal{M},m,h \not\models [j]B$ as desired. Again, a
separate argument for $(\Gamma_1,\ldots,\Gamma_n,\Delta)\alpha$
being an element needs to be supplied, and it can be done in the
same way as in Case 1.2, given that by \eqref{E:t1} and S5
properties of $\Box$ we know that every boxed formula from
$\Gamma$ is also in $\Delta$.
\emph{Case 3}. $A = KB$. Assume that $KB \in end(m \cap h)$. We
clearly have then $m = [(m \cap h)]_\equiv$. Hence, by definition
of $R$ and the fact that $m \in Tree \setminus \{ \dag,\ddag \}$
we must have for every $m' \in Tree$:
$$
R(m,m') \Rightarrow (\forall \tau \in m')(\forall C \in Form_X)(KC
\in end(m \cap h) \Rightarrow KC \in end(\tau)).
$$
Therefore, if $R(m,m')$ and $h' \in H_{m'}$ is arbitrary, then, of
course, $(h' \cap m') \in m'$ so that $KB \in end(h' \cap m')$,
and, further, $B \in end(h' \cap m')$ by S4 reasoning for $K$.
Therefore, by induction hypothesis, we get that $\mathcal{M},m',h'
\models B$, whence $\mathcal{M},m,h \models KB$. On the other
hand, if $KB \notin end(m \cap h)$, we need to consider then two
subcases:
\emph{Case 3.1}. The length of $m$ equals $1$. We must have then
$m \cap h = (\Gamma)\alpha$ for some appropriate $\Gamma$ and
$\alpha$ so that $\Gamma = end(m \cap h)$. Then the set
$$
\Xi = \{ KC \mid KC \in \Gamma \} \cup \{ \neg\Box B \}
$$
must be consistent, since otherwise we would have
$$
\vdash (KC_1\wedge\ldots\wedge KC_n) \to \Box B
$$
for some $KC_1,\ldots,KC_n \in \Gamma$, whence, since $K$ is an
S4-modality, we would get
$$
\vdash (KC_1\wedge\ldots\wedge KC_n) \to K\Box B,
$$
which would mean that $K\Box B \in \Gamma$, hence, by \eqref{A1},
\eqref{A7} and maxiconsistency of $\Gamma$ in $X$, that $KB \in
\Gamma$, contrary to our assumption. Therefore, $\Xi$ is
consistent and we can extend $\Xi$ to a set $\Delta \subseteq
Form_X$ which is maxiconsistent in $X$. Of course, in this case
$\Box B \notin \Delta$. We will have then that $(\Delta)\alpha$ is
an element. So we set $m' = [(\Delta)\alpha]_\equiv$. Assume that
$(\Delta')\alpha' \equiv (\Delta)\alpha$. Then every boxed formula
from $\Delta$ will be in $\Delta'$. In particular, whenever $KC
\in \Delta$, then also $\Box KC \in \Delta$ and thus $KC \in
\Delta'$, by Lemma \ref{theorems}.5 and maxiconsistency of
$\Delta$ in $X$. Therefore, whenever $KC \in \Gamma$ and $\tau \in
m' = [(\Delta)\alpha]_\equiv$, we have that $KC \in end(\tau)$ so
that we must have $R(m,m')$. On the other hand, since $\Box B
\notin \Delta$, by Case 1 there must be a $\tau \in m'$
such that $B \notin end(\tau)$. But then, by Lemma \ref{hist}.2,
we can choose $h' \in H_{m'}$ in such a way that $\tau = m' \cap
h'$, and we get that $B \notin end(m' \cap h')$. Therefore, by
induction hypothesis, we get $\mathcal{M},m',h' \not\models B$. In
view of the fact that also $R(m,m')$, this means that
$\mathcal{M},m,h \not\models KB$ as desired.
\emph{Case 3.2}. The length of $m$ is greater than $1$. We must
have then $m \cap h = (\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ for
some appropriate $n > 0$, $\Gamma_1,\ldots,\Gamma_n,\Gamma$ and
$\alpha$ so that $\Gamma = end(m \cap h)$. We then define $\Delta$
as in Case 3.1 and consider $m' =
[(\Gamma_1,\Delta)\alpha]_\equiv$. We get then $R(m,m')$
immediately by definition of $R$. Just as in Case 3.1, we will use
the fact that $\Box B \notin \Delta$ to find $\tau \in m'$ and $h'
\in H_{m'}$ so that $\tau = m' \cap h'$ and $B \notin end(\tau)$.
It will follow by induction hypothesis that $\mathcal{M},m',h'
\not\models B$, hence, given that $R(m,m')$, that $\mathcal{M},m,h
\not\models KB$.
The only new ingredient is that now we need to supply a proof that
$(\Gamma_1,\Delta)\alpha$ is actually an element. Well, if for any
$C \in Form_X$ we have that $KC \in \Gamma_1$, then, since
$(\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ is an element, we will
have $KC \in \Gamma$, whence $KC \in \Delta$. Further, if we have
$Prove(j, C) \in \Gamma_1$ for $j \in Ag$, then, since
$(\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ is an element, we will
have $Proven(C) \in \Gamma$, whence, by maxiconsistency of
$\Gamma$ in $X$ and \eqref{A11}, $KProven(C) \in \Gamma$, and
since every $K$-modalized formula from $\Gamma$ is also in
$\Delta$, we get that $KProven(C) \in \Delta$, whence $Proven(C)
\in \Delta$ by maxiconsistency of $\Delta$ in $X$ and S4 reasoning
for $K$. Finally, if $C \in Form_X$ and $j \in Ag$, then
$K\Box\neg Prove(j, C) \in \Gamma$, because
$(\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ is an element. And since
every $K$-modalized formula from $\Gamma$ is also in $\Delta$, we
get that $K\Box\neg Prove(j, C) \in \Delta$ as well.
\emph{Case 4}. $A = t{\hspace{0.25mm}:\hspace{0.25mm}} B$ for some $t \in Pol_X$. Note that by
\eqref{A0} we know that $\vdash t{\hspace{0.25mm}:\hspace{0.25mm}} B \to t{\hspace{0.25mm}:\hspace{0.25mm}} B$. Therefore, if
$t{\hspace{0.25mm}:\hspace{0.25mm}} B \in end(m \cap h)$, we will have $A \in \mathcal{E}(m,t)$
by definition. Also, by \eqref{A5} and maxiconsistency of $end(m
\cap h)$ in $X$, we will have $KB \in end(m \cap h)$. Therefore,
by Case 3, we will have that $\mathcal{M},m,h \models KB$ and
further, by $A \in \mathcal{E}(m,t)$, that $\mathcal{M},m,h
\models t{\hspace{0.25mm}:\hspace{0.25mm}} B$. On the other hand, if $t{\hspace{0.25mm}:\hspace{0.25mm}} B \notin end(m \cap
h)$, then for no
$t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1,\ldots, t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n \in end(m \cap h)$ can it
be that:
$$
\vdash (t_1{\hspace{0.25mm}:\hspace{0.25mm}} B_1 \wedge\ldots\wedge t_n{\hspace{0.25mm}:\hspace{0.25mm}} B_n) \to t{\hspace{0.25mm}:\hspace{0.25mm}} B,
$$
for in this case we would also have $t{\hspace{0.25mm}:\hspace{0.25mm}} B \in end(m \cap h)$ by
maxiconsistency of $end(m \cap h)$ in $X$. Therefore, we must have
$A \notin \mathcal{E}(m,t)$ so that $\mathcal{M},m,h \not\models
t{\hspace{0.25mm}:\hspace{0.25mm}} B$.
\emph{Case 5}. $A = Proven(B)$. Assume that $Proven(B) \in end(m
\cap h)$. Then, by Lemma \ref{theorems}.1 and maxiconsistency of
$end(m \cap h)$, we will also have $\Box Proven(B) \in end(m \cap
h)$. Now, choose an arbitrary $\xi \in m$. We know that $end(\xi)
\equiv end(m \cap h)$, therefore, we must have $Proven(B) \in
end(\xi)$ by definition of $\equiv$, which means that $B \in
\mathcal{E}(m,z)$. We also have $z \in Act(m,h')$ for all $h' \in
H_m$ and we will have $KB \in end(m \cap h)$ by \eqref{A11} so
that we have $\mathcal{M},m,h \models z{\hspace{0.25mm}:\hspace{0.25mm}} B$ by Case 3 and
induction hypothesis.\footnote{Note that sentences like $z{\hspace{0.25mm}:\hspace{0.25mm}} B$
are not covered by our induction since $z \notin Pol_X$; but
sentences like $KB$ are covered since $B \in Form_X$. This is the
reason why our argument invokes Case 3 rather than Case 4.} It
follows then that $\mathcal{M},m,h \models Proven(B)$. On the
other hand, assume that $Proven(B) \notin end(m \cap h)$. Then we
have to consider two subcases:
\emph{Case 5.1}. The length of $m$ equals 1. Then, since
$Proven(B) \notin end(m \cap h)$, we will have $B \notin
\mathcal{E}(m,z)$ by definition. Also, by Lemma
\ref{intersections}.1, we know that $\bigcap_{h' \in H_m}Act(m,h') = \{ z \}$. It follows
that $\mathcal{M},m,h \not\models Proven(B)$.
\emph{Case 5.2}. The length of $m$ is greater than 1. We must
have then $m \cap h = (\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ for
some appropriate $n > 0$, $\Gamma_1,\ldots,\Gamma_n,\Gamma$ and
$\alpha$ so that $\Gamma = end(m \cap h)$. We will assume that in
fact $\alpha = \uparrow$; the reasoning for the case $\alpha =
\downarrow$ is similar. It follows then, by Lemma
\ref{intersections}.2:
\begin{align*}
\bigcap_{h' \in H_m}Act(m,h') = \{ z \} \cup \{ y_{(j, C)} \mid
Prove(j, C) \wedge \neg\Box &Prove(j, C)
\in \Gamma_1 \} \cup\\
&\cup \{ u_C \mid \Box Prove(j, C) \in \Gamma_1
\}.
\end{align*}
We know that $B \notin \mathcal{E}(m,z)$ since $Proven(B) \notin
\Gamma$. If, for some $j \in Ag$, we would have $Prove(j, B) \in
\Gamma_1$, it would follow that $Proven(B) \in \Gamma$, since
$(\Gamma_1,\ldots,\Gamma_n,\Gamma)\alpha$ is an element.
Therefore, if $u_C, y_{(j,C)} \in \bigcap_{h' \in H_m}Act(m,h')$
for any $j \in Ag$, then $C \neq B$ and therefore $B \notin
\mathcal{E}(m, u_C) = \mathcal{E}(m, y_{(j,C)}) = \{ C \}$. It
follows then that no proof presented under all
histories through $m$ is acceptable for $B$,
hence we get $\mathcal{M},m,h \not\models Proven(B)$.
\emph{Case 6}. $A = Prove(j,B)$ for some $j \in Ag$. Assume that
$Prove(j,B) \in end(m \cap h)$. Then we know that the length of
$m$ must be $1$. Indeed, if the length of $m$ were greater than $1$,
then we would have $K\Box\neg Prove(j,B) \in end(m \cap h)$,
whence, by S4 reasoning for $K$, S5 reasoning for $\Box$, and
maxiconsistency of $end(m \cap h)$ in $X$ we would have $\neg
Prove(j,B) \in end(m \cap h)$, so that $Prove(j,B) \in end(m \cap
h)$ would be impossible.
So, for some appropriate $\Gamma$ and $\alpha$ we will have both
$m = [(\Gamma)\alpha]_\equiv$ and $m \cap h = (\Gamma)\alpha$. We
need to consider two subcases:
\emph{Case 6.1}. $\Box Prove(j,B) \in \Gamma$. Then, for all $h'
\in Choice^m_j(h)$, we will have, of course, $m \cap h' \equiv m
\cap h$ which means, by maxiconsistency and S5 reasoning for
$\Box$, that we will also have $\Box Prove(j, B) \in end(m \cap
h')$. This will mean that for all $h' \in Choice^m_j(h)$ we will
have either $u_B$ or $w_B$ in $Act(m,h')$ and we will have, of
course, $B \in \mathcal{E}(m, u_B) = \mathcal{E}(m, w_B)$. Further,
by \eqref{A9} and maxiconsistency in $X$ of every $end(m \cap h')$
with $h' \in Choice^m_j(h)$ we know that also $KB \in end(m \cap h')$
for every such $h'$. Therefore, we know by Case 3 above that
either $\mathcal{M},m,h' \models u_B{\hspace{0.25mm}:\hspace{0.25mm}} B$, or
$\mathcal{M},m,h'\models w_B{\hspace{0.25mm}:\hspace{0.25mm}} B$ for every $h' \in
Choice^m_j(h)$.
Assume, further, that for some $s \in Pol$ we have
$\mathcal{M},m,h \models s{\hspace{0.25mm}:\hspace{0.25mm}} B$. Then, in particular, we must
have $B \in \mathcal{E}(m,s)$. By definition of $\mathcal{E}$, $s$
cannot then be a proof variable of the form $u_C$, $w_C$, or
$y_{(j,C)}$ for any $j \in Ag$ and any formula $C$ different from
$B$. Moreover, $s$ cannot be $z$, since we have $Prove(j,B) \in
end(m \cap h)$ whence by maxiconsistency of $end(m \cap h)$ in
$X$ and \eqref{A9}, $\neg Proven(B) \in end(m \cap h)$, so that,
again by maxiconsistency, $Proven(B) \notin end(m \cap h)$; since $m \cap h \in m$,
this means, by the above definition of $\mathcal{E}$, that $B
\notin \mathcal{E}(m,z)$. Therefore, assuming that
$\mathcal{M},m,h \models s{\hspace{0.25mm}:\hspace{0.25mm}} B$, $s$ can be either in $Pol_X$ or
in $Pol \setminus (Pol_X \cup Y \cup W \cup U \cup \{ z \})$, or
else in $\{ w_B, u_B \} \cup \{ y_{(j,B)} \mid j \in Ag \}$. Well, if $s$ is
either in $Pol_X$ or in $Pol \setminus (Pol_X \cup Y \cup W \cup U
\cup \{ z \})$, then it is immediate from the definition of $Act$
that $s \notin Act(m,h)$. On the other hand, if $s \in \{
y_{(j,B)} \mid j \in Ag \}$, then note that by maxiconsistency of
$end(m \cap h)$ in $X$ we must have $Prove(j,B) \wedge \neg\Box
Prove(j, B) \notin \Gamma$ whence it immediately follows that,
again $s \notin Act(m,h)$. Finally, consider two elements
$(\Gamma)\uparrow$ and $(\Gamma)\downarrow$. One of these elements
is actually
$(\Gamma)\alpha$, both elements are in $m$, and, by Lemma \ref{hist}.2, we can choose $h', h'' \in H_m$ in such a way that we have both
$ (\Gamma)\uparrow = m \cap h'$ and $ (\Gamma)\downarrow = m \cap h''$ . It clearly follows then from the definition of $Act$ that
$w_B \notin Act(m, h')$, whereas $u_B \notin Act(m, h'')$.
\emph{Case 6.2}. $\Box Prove(j,B) \notin \Gamma$. Then, by maxiconsistency of $\Gamma = end(m \cap h)$ in $X$,
we must have $Prove(j,B) \wedge \neg\Box Prove(j,B) \in \Gamma$ as well as (again, by maxiconsistency of $\Gamma$ in $X$ and Lemma \ref{theorems}.4) $[j](Prove(j,B) \wedge \neg\Box Prove(j,B)) \in \Gamma$.
Therefore, for every $h' \in Choice^m_j(h)$ we will have $Prove(j,B) \wedge \neg\Box Prove(j,B) \in end(m \cap h')$ simply by definition of $Choice$.
This further means that for every such $h'$, the proof variable $y_{(j,B)}$ will be in $Act(m,h')$. Besides, it is immediate
from the definition of $\mathcal{E}$ that $B \in \mathcal{E}(m, y_{(j,B)})$. Finally, note that by \eqref{A9} and maxiconsistency of the
respective $end(m \cap h')$ in $X$, we will have $KB \in end(m \cap h')$ for every $h' \in Choice^m_j(h)$. Therefore, by Case 3 above, we will have
$\mathcal{M},m,h'\models y_{(j,B)}{\hspace{0.25mm}:\hspace{0.25mm}} B$ for every $h' \in Choice^m_j(h)$.
Assume, further, that for some $s \in Pol$ we have $\mathcal{M},m,h \models s{\hspace{0.25mm}:\hspace{0.25mm}} B$. Just as
in Case 6.1, we can show that $s$ cannot be of the form $z$, $u_C$, $w_C$, or $y_{(j,C)}$ for any $j \in Ag$ and any formula $C$ different from $B$.
Then, again borrowing our reasoning from the Case 6.1 above, we can show that
if $s \in Pol_X$ or $s \in Pol \setminus (Pol_X \cup Y \cup W \cup U \cup \{ z \})$, then we must have
$s \notin Act(m,h)$. If $s$ is $u_B$ or $w_B$ then we must have $s \notin Act(m,h)$ since $\Box Prove(j,B) \notin \Gamma = end(m \cap h)$,
and therefore, by maxiconsistency of $\Gamma$ in $X$ and \eqref{A10} we must have $\Box Prove(i,B) \notin \Gamma = end(m \cap h)$ for all $i \in Ag$.
Assume then that $s$ is $y_{(i,B)}$ for some $i \in Ag$. If $y_{(i,B)} \notin Act(m,h)$, then we are done. If, on the other hand,
$y_{(i,B)} \in Act(m,h)$, then, by definition of $Act$, we must have $Prove(i,B) \wedge \neg\Box Prove(i,B) \in \Gamma = end(m \cap h)$, hence, by
Lemma \ref{elementaryconsistency}.5, $\neg\Box Prove(i,B) \in end(m \cap h)$. Then the set
$$
\Xi = \{ \Box C \mid \Box C \in \Gamma \} \cup \{ \neg Prove(i,B) \}
$$
must be consistent, since otherwise we would have
$$
\vdash (\Box C_1\wedge\ldots\wedge\Box C_n) \to Prove(i,B)
$$
for some $\Box C_1,\ldots,\Box C_n \in \Gamma$, whence, since
$\Box$ is an S5-modality, we would get
$$
\vdash (\Box C_1\wedge\ldots\wedge\Box C_n) \to \Box Prove(i,B),
$$
which would mean that $\Box Prove(i,B) \in \Gamma$, contrary to
our assumption. Therefore, $\Xi$ is consistent and we can extend
$\Xi$ to a set $\Delta \subseteq Form_X$ which is maxiconsistent
in $X$. Of course, in this case $Prove(i,B) \notin \Delta$. We
will have then that $(\Delta)\alpha$ is an element, and, by
definition of $\equiv$, that $(\Gamma)\alpha \equiv
(\Delta)\alpha$. By Lemma \ref{hist}.2, for some $h' \in H_m$ we
will have $(\Delta)\alpha = m \cap h'$ and, therefore, $\Delta =
end(m \cap h')$. Since $Prove(i,B) \notin \Delta$, it follows that
$y_{(i,B)} \notin Act(m,h')$.
Thus we have shown that if $Prove(j,B) \in end(m \cap h)$, then
$\mathcal{M},m,h\models Prove(j,B)$. For the converse direction,
assume that $Prove(j,B) \notin end(m \cap h)$. Again, we have to
consider two further subcases:
\emph{Case 6.3}. The length of $m$ equals $1$ so that, for some
appropriate $\Gamma$ and $\alpha$ we have both $m =
[(\Gamma)\alpha]_\equiv$ and $m \cap h = (\Gamma)\alpha$. If
$\mathcal{M},m,h \models Proven(B)$, then by \eqref{A9} we will
have $\mathcal{M},m,h \not\models Prove(j,B)$, and thus we will be
done. Therefore, assume that $\mathcal{M},m,h \not\models
Proven(B)$. Moreover, if $\mathcal{M},m,h \not\models KB$ then we
will again have, by \eqref{A9}, that $\mathcal{M},m,h \not\models
Prove(j,B)$, so that we may also safely assume that
$\mathcal{M},m,h \models KB$. Under these assumptions, in order to
show that $\mathcal{M},m,h \not\models Prove(j,B)$ we have to show
that the positive condition fails in that there is an $h' \in
Choice^m_j(h)$ such that no acceptable proof of $B$ is present in
$Act(m,h')$. To this end, we consider the set
$$
\Xi = \{ [j]C \mid [j]C \in \Gamma \} \cup \{ \bigwedge_{i \in Ag}\neg Prove(i,B) \}.
$$
This set must be consistent, since otherwise we would have
$$
\vdash ([j]C_1\wedge\ldots\wedge[j] C_n) \to \bigvee_{i \in Ag}Prove(i,B)
$$
for some $[j] C_1,\ldots,[j]C_n \in \Gamma$, whence, since $[j]$
is an S5 modality, we would get
$$
\vdash ([j]C_1\wedge\ldots\wedge[j]C_n) \to [j](\bigvee_{i \in Ag}Prove(i,B)),
$$
which would mean that $[j](\bigvee_{i \in Ag}Prove(i,B)) \in \Gamma$. On the other hand,
since $Prove(j,B) \notin \Gamma$, this means, by
maxiconsistency of $\Gamma$ in $X$, that $\neg Prove(j,B) \in
\Gamma$, whence, again by maxiconsistency and \eqref{A13}, we
obtain that $\langle j\rangle(\bigwedge_{i \in Ag}\neg Prove(i,B))
\in \Gamma$. Therefore, by maxiconsistency of $\Gamma$ in $X$, we
must have $\neg [j](\bigvee_{i \in Ag}Prove(i,B)) \in \Gamma$, a
contradiction.
Therefore, $\Xi$ is consistent and we can extend $\Xi$ to a set
$\Delta \subseteq Form_X$ which is maxiconsistent in $X$. Of
course, in this case we will have $Prove(i,B) \notin \Delta$ for
all $i \in Ag$. We will have then that $(\Delta)\alpha$ is an
element, and, arguing as in Case 2.1 we can show \eqref{E:t1} so
that $\Delta$ contains all boxed formulas from $\Gamma$.
Therefore, by definition of $\equiv$, we know that $(\Gamma)\alpha
\equiv (\Delta)\alpha$. By Lemma \ref{hist}.2, we know that for
some $h' \in H_m$ we will have $(\Delta)\alpha = m \cap h'$ and,
therefore, $\Delta = end(m \cap h')$. Also, since $\Delta$
contains all the $[j]$-modalized formulas from $\Gamma$, we know
that for any such $h'$ we will have $h' \in Choice^m_j(h)$. We
also know that $Proven(B) \notin \Delta$, for otherwise we would
have, by maxiconsistency of $\Delta$ and Lemma \ref{theorems}.2,
that $\Box Proven(B) \in \Delta$, whence, by the fact that
$(\Gamma)\alpha \equiv (\Delta)\alpha$ we would have that
$Proven(B) \in \Gamma$, contradicting our assumptions.
Consider then $Act(m,h')$. We may assume that
$\alpha = \uparrow$; the reasoning for the case when $\alpha = \downarrow$
is similar. By the definition of $Act$, we have:
$$
Act(m,h') = \{ z \} \cup \{ y_{(i, C)} \mid Prove(i, C) \wedge \neg\Box Prove(i, C)
\in \Delta \} \cup \{ u_C \mid \Box Prove(i, C) \in \Delta
\}.
$$
We know that $B \notin \mathcal{E}(m, z)$, since we have
established that $Proven(B) \notin \Delta$; we also know that if
$u_C, y_{(i, C)} \in Act(m,h')$ for any $i \in Ag$, then $C \neq
B$ since for all $i \in Ag$ we have $Prove(i,B) \notin \Delta$,
and this means that if $u_C, y_{(i, C)} \in Act(m,h')$ for any $i
\in Ag$, then both $B \notin \mathcal{E}(m, u_C)$ and $B \notin
\mathcal{E}(m, y_{(i, C)})$. Therefore, at $(m,h')$ there exists
no presented proof which would be acceptable for $B$, and since
$h' \in Choice^m_j(h)$, this means that the positive condition for
$Prove(j,B)$ at $(m,h)$ is violated, so that we get
$\mathcal{M},m,h \not\models Prove(j,B)$ as desired.
\emph{Case 6.4}. The length of $m$ is greater than $1$. Then, by
Lemma \ref{intersections}.2, for all $h' \in H_m$ we have that
$$
Act(m,h') = \bigcap_{h'' \in H_m}Act(m,h'').
$$
Assume then, that we have both $s \in Act(m,h)$ and $\mathcal{M},m,h \models s{\hspace{0.25mm}:\hspace{0.25mm}} B$ for some $s \in Pol$. Then $s \in \bigcap_{h'' \in H_m}Act(m,h'')$,
which means that the negative condition for $Prove(j,B)$ at $(m,h)$ is violated and we must have
$\mathcal{M},m,h \not\models Prove(j,B)$. Assume, on the other hand, that there is no
$s \in Pol$ for which both $s \in Act(m,h)$ and $\mathcal{M},m,h \models s{\hspace{0.25mm}:\hspace{0.25mm}} B$.
Then, since $h$ is of course in $Choice^m_j(h)$, it turns out that the positive condition
for $Prove(j,B)$ at $(m,h)$ is violated and we again have
$\mathcal{M},m,h \not\models Prove(j,B)$. So, in any case $\mathcal{M},m,h \not\models Prove(j,B)$,
as desired.
This finishes the list of the modal induction cases at hand, and
thus the proof of our truth lemma is complete.
\end{proof}
\section{The main result}\label{main}
We are now in a position to prove Theorem \ref{completeness}. The
proof proceeds as follows. One direction of the theorem was proved
as Corollary \ref{c-soundness}. In the other direction, assume
that $\Gamma \subseteq Form_X$ is consistent. Then, by Lemma
\ref{elementaryconsistency}.1, $\Gamma$ can be extended to a
$\Delta$ which is maxiconsistent in $X$. But then choose an
arbitrary $\alpha \in \{ \uparrow, \downarrow \}$ and consider
$\mathcal{M} = \langle Tree, \leq, Choice, Act, R, \mathcal{E},
V\rangle$, the canonical model defined in Section
\ref{canonicalmodel}. The structure $(\Delta)\alpha$ is an
element, therefore $[(\Delta)\alpha]_\equiv \in Tree$. By Lemma
\ref{hist}.2, there is a history $h \in H_{[(\Delta)\alpha]_\equiv}$ such that
$(\Delta)\alpha = [(\Delta)\alpha]_\equiv \cap h$. For this $h$,
we will also have $\Delta = end([(\Delta)\alpha]_\equiv \cap h)$.
By Lemma \ref{truth}, we therefore get that:
$$
\mathcal{M}, [(\Delta)\alpha]_\equiv, h \models \Delta,
$$
and since $\Delta \supseteq \Gamma$, this shows $\Gamma$ to be satisfiable in a normal jstit
model.
\textbf{Remark}. Note that the canonical model used in this proof
is $X$-universal in the sense that it satisfies every consistent
subset of $Form_X$.
As an obvious corollary of Theorem \ref{completeness} we get the
following weak completeness result:
\begin{corollary}\label{weakcompleteness}
For every $A \in Form$, $\vdash A$ iff $A$ is valid over normal
jstit models.
\end{corollary}
\begin{proof}
One direction follows from Theorem \ref{soundness}. In the other
direction, if $\not\vdash A$, then $\{ \neg A \}$ is consistent.
Setting $X$ to be the set of proof variables occurring in $A$, we
see that $PVar \setminus X$ must be countably infinite. Therefore,
Theorem \ref{completeness} applies, $\{ \neg A \}$ must be
satisfied in some normal jstit model, and $A$ cannot be valid.
\end{proof}
As a further corollary, we deduce a restricted form of the compactness
property:
\begin{corollary}\label{compactness}
Let $X \subseteq PVar$ be such that $PVar \setminus X$ is
countably infinite. Then an arbitrary $\Gamma \subseteq Form_X$ is
satisfiable iff every finite $\Gamma_0 \subseteq \Gamma$ is
satisfiable.
\end{corollary}
\begin{proof}
If $\Gamma$ is satisfiable, then clearly every finite $\Gamma_0
\subseteq \Gamma$ is satisfiable. On the other hand, if every
finite $\Gamma_0 \subseteq \Gamma$ is satisfiable, then for no
$A_1,\ldots, A_n \in \Gamma$ can we have that $\vdash (A_1
\wedge\ldots \wedge A_n) \to \bot$, for otherwise, by Theorem
\ref{soundness}, the finite set $\{ A_1,\ldots, A_n \}$ would be
unsatisfiable. Therefore, $\Gamma$ must be consistent, and, by
Theorem \ref{completeness}, also satisfiable.
\end{proof}
\section{Conclusions and future research}\label{conclusion}
Theorem \ref{completeness}, the main result of this paper, proves
what might be called a restricted strong completeness theorem for
the implicit jstit logic. As we have shown in Section \ref{main},
this means, among other things, that this logic allows for a
finitary proof system and enjoys a restricted form of the compactness
property. Taken together, these results show that, given the rich
variety of expressive means present in the implicit jstit logic
and non-trivial semantic constraints imposed on its models, this
logic displays a surprising degree of regularity.
Of course, the results of the present paper leave room for some
generalizations. One obvious observation would be that the rule
\eqref{R3} gives but one variant out of the infinite family of the
so-called \emph{constant specifications} allowed for in
justification logic; and it is straightforward to see that the
above completeness proof can be easily adapted for the systems
with other versions of constant specification. The other obvious
direction of generalizing the results above would be to lift
the restriction that $R = R_e$ and consider the semantics of
\cite{OLWA} in its full generality, although, as we have already
mentioned, it is not so clear whether this generalization will
affect the set of validities.
In the broader perspective, Theorem \ref{completeness} is a step
towards axiomatization of the full basic justification stit logic
in case such an axiomatization is possible. Viewing Theorem
\ref{completeness} as a partial success in axiomatizing the full
basic jstit logic, it is easy to see which steps should come next.
First, one needs to understand the mechanics behind the proving
modalities omitted from the implicit jstit logic and axiomatize
the logic of $Prove(j,t,A)$ and $Proven(t,A)$ placed on top of
stit and justification modalities; then an axiomatization of a
system combining both explicit and implicit proving modalities and
their interplay may turn out to be possible. As a promising
further step in this direction, one can consider, for example, the
logic of the so-called $E$-notions, introduced in \cite{OLWA2}. It
allows one to define a combination of implicit and explicit
proving modalities, even though this combination is but a subset
of the variety of proving modalities definable within the full
basic jstit logic, and can, therefore, provide a preview of
the problems to be encountered in an attempt to explore the
properties of the full system.
\section{Acknowledgements}
To be inserted.
\section{Challenges and Directions}\label{sec:challenges}
Now we highlight various directions that pose challenges in research on DNS: we call for more data-driven analysis for security (\ref{sec:datadriven}), privacy as a plug-in (\ref{sec:privacyplugin}), modeling adversaries (\ref{sec:advmodel}), attack surface analyses (\ref{sec:attacksurf}), and addressing the open resolver phenomenon (\ref{sec:openres}).
\subsection{More Data-driven Analysis for Security}\label{sec:datadriven}
There has been an abundance of work on DNS data analysis for security, covering DNS behavior tracking, encryption, blocking, query name minimization, DNSSEC, DNS over TLS, DTLS for DNS exchanges, etc. However, DNS query traffic has been increasing, and both the behavior of DNS usage and the DNS ecosystem have been changing over time. DNS queries can reveal a great deal of information. For example, an attacker can build highly accurate profiles of what users do on the Internet by eavesdropping on query streams, ultimately breaching a user's privacy. Moreover, some companies target individual users and build profiles for them based on the browsing seen in their DNS traffic, assembling such profiles as part of their own commercial activities. Although efforts to prevent this leakage have been ongoing, many problems are still open. An understanding of the problem, coupled with the previous developments in DNS, is necessary. In addition, many functions must be modified for the new DNS ecosystem, and further research using DNS data must be done for security.
\subsection{Privacy as a Plug-in}\label{sec:privacyplugin}
With the increase in Internet usage, malicious invasion of privacy through DNS operation is on the rise. Techniques such as query name minimization, DNSSEC, and DNS over TLS exist to address these problems, but a solution that satisfies the requirements of privacy as a plug-in is lacking: such a solution should neither require major modifications to nor interfere with the existing DNS standards. For example, DNSSEC allows users to verify that DNS responses are correct, but does not protect privacy. Encryption provided by TLS eliminates opportunities for eavesdropping, but it is unclear which notions of privacy it provides. DNSSEC and DNS over TLS are independent and compatible protocols, each solving a different problem. Thus, it is necessary to understand the notions of privacy under various security models, and to realize privacy as a plug-in to the existing DNS infrastructure without requiring major modifications.
\subsection{Modeling Adversaries}\label{sec:advmodel}
Although previous studies have concentrated on DNS security and privacy, including data-driven modeling and the infrastructure and investments made by the DNS providers, little work has been done on formalizing and understanding the notion of pervasive adversaries. These pervasive adversaries have been widely considered a potential threat to the privacy and security of communication on the Internet.
Modeling such adversaries would be the first challenge in improving DNS privacy. Adversaries can be viewed as either passive or active. A passive adversary does not interfere with the resolution and is interested in associating queries with a user or a set of users; such an adversary can eavesdrop on the links between the stub resolvers and recursive resolvers, and on the links between the recursive resolvers and authoritative resolvers. An active adversary can take control of a recursive resolver, for example by compromising that resolver's software, or by operating the resolver in the first place, as with a rogue open resolver. Formalizing the advantage of the adversaries would be the second challenge, since the goal of the adversaries is to breach the privacy of users: quantifying their advantage in breaching DNS privacy is a meaningful step toward addressing the privacy issue. Extending the formalization of the capabilities of the adversaries using real-world DNS resolution topologies and DNS query data would be yet another challenge. Some stub resolvers may generate more queries than others; if such information is known to an adversary, the adversary may use that distribution to associate a query with one user more often than with another. A sketch of one possible formalization follows.
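As a minimal sketch of what such a formalization might look like (the notation here is purely illustrative and ours, not an established definition from the literature), let $\mathcal{A}$ be a passive adversary that observes a query trace $\tau$ generated by a user $u$ drawn from a population $U$ and outputs a guess $\mathcal{A}(\tau)$ of the originating user. The adversary's linkability advantage could then be defined as
$$
\mathbf{Adv}_{\mathcal{A}} \;=\; \Pr[\mathcal{A}(\tau) = u] \;-\; \max_{u' \in U}\Pr[U = u'],
$$
i.e., the improvement over the best blind guess, which is simply $1/|U|$ under a uniform prior; the skewed per-user query volumes mentioned above raise that baseline. Quantifying how encryption, query name minimization, and blocking drive this advantage toward zero would make the challenges above concrete.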
The last challenge is understanding the advantages of such adversaries in light of various ongoing activities in the DNS research community, which include encryption, query name minimization, and blocking for DNS privacy. For example, blocking mechanisms at the browser, the recursive resolver, or the authoritative server can improve DNS privacy--but blocking must be deployed everywhere at once in order to ensure privacy. In reality, it is almost impossible for all browsers and recursive resolvers on the Internet to perform blocking at the same time. The diversity of browser and recursive-resolver software (even by the same vendor) on the Internet today makes it difficult to implement timely and synchronized blocking. Thus, it is reasonable to assume only a partial deployment of such recommendations. It should also be noted that while a user can maintain their privacy through DNS blocking, non-blocking users who share the same DNS infrastructure may be inadvertently affected.
\subsection{Attack Surface Analyses}\label{sec:attacksurf}
The attack surface of the DNS resolution system is the entire public Internet between the end user's connection and the public DNS service. Attack surface analysis is concerned with enumerating potential and confirmed vulnerabilities, the attacks those vulnerabilities can be used to launch, and the implications of those attacks.
In the DNS resolution system, there are several potential attacks for disrupting the resolution operation. There has been a long history of attacks on the DNS, ranging from Denial of Service (DoS) attacks to targeted attacks requiring specialized software. For example, an attacker can attack DNS resolvers by exploiting vulnerabilities such as buffer overflows, making the resolvers misbehave or crash. Moreover, an attacker can modify DNS resolver configuration files and replace the name server IP addresses with malicious IP addresses to cause DoS attacks. These high-profile attacks have affected various commercial companies, software vendors, websites, content distribution services, and ISPs.
DNS amplification attacks utilize DNS servers for performing bandwidth-consumption DoS attacks. An attacker can ``spoof'' look-up requests to DNS servers to hide the source of the exploit and direct the response to the target. Essentially, the attacker turns a small DNS query into a much larger payload directed at the target network.
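As a minimal sketch of how this asymmetry can be measured (using the third-party \texttt{dnspython} library; the resolver address below is a documentation-range placeholder, and such probes should only be run against infrastructure one operates):
\begin{verbatim}
# Measure how many response bytes a single small query elicits.
import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "192.0.2.1"   # placeholder; substitute a test resolver you run
QNAME = "example.com"

# A small query for ANY records with a large EDNS0 buffer: the classic
# recipe for eliciting a response far larger than the request itself.
query = dns.message.make_query(QNAME, dns.rdatatype.ANY,
                               use_edns=0, payload=4096)
response = dns.query.udp(query, RESOLVER, timeout=2)

sent = len(query.to_wire())
received = len(response.to_wire())
print("amplification factor: %.1f" % (received / sent))
\end{verbatim}
An attacker exploits exactly this ratio by spoofing the victim's address as the source of the query, so that the amplified response lands on the target instead.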
With cache poisoning, an attacker can attempt to insert a fake address record for an Internet domain into the DNS. If the server accepts the fake record, the cache is ``poisoned'' and subsequent requests for the address of the domain are answered with the address of a server controlled by the attacker. For as long as the fake entry is cached by the server, all subscribers' browsers or e-mail servers will automatically go to the address provided by the compromised DNS server. DNS cache poisoning attacks do not require substantial bandwidth or processing, nor do they require sophisticated techniques.
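The last point is easy to make concrete with a rough back-of-the-envelope estimate. Assuming, as was the case in older resolvers, a fixed source port, only the 16-bit DNS transaction ID protects the exchange, so an off-path attacker who races the genuine reply with $n$ forged responses carrying distinct guessed IDs succeeds in a single race with probability
$$
p \approx \frac{n}{2^{16}},
$$
and since the race can be repeated for every fresh query, a modest stream of small packets eventually suffices.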
Quantifying the attack surface of DNS is important in understanding and managing the DNS resolution system, and thereby in improving DNS privacy: it identifies the critical pieces of the DNS system that need to be modified to withstand security threats. As aforementioned, the challenge would be to evaluate the advantages of various adversaries under blocking (at either the browser or the recursive resolver) and to examine how the adversary's success probabilities differ between the entities observed and the entities controlled by the adversary at the blocking point.
\subsection{Addressing Open Resolvers}\label{sec:openres}
While open resolvers provide various benefits, such as answering DNS requests from external sources for any name, they currently pose a significant threat to the stability and security of the Internet. Just recently, open resolvers have been utilized for launching amplification attacks, calling for a systematic study of their population, use, and distribution, and raising awareness of their potential roles. For example, the open resolver project~\cite{openres} reported 32 million open resolvers, 28 million of which pose a significant threat, as of October 2013. However, little has been done on understanding the role each of those millions of resolvers plays, whether they are open intentionally or accidentally, and other aspects of their behavior.
One open problem today is to understand those resolvers, perhaps by analyzing their role and understanding how they contribute to the good and bad use of the DNS as a service. Some of the open questions that are worth exploring---which may shed light on the role each of those resolvers plays---include, among others, the following. 1) How well-represented are the open resolvers in typical DNS resolution systems, e.g., in popular TLDs? 2) How persistent are open resolvers over time in both the DNS resolution and open resolver ecosystems? 3) Is there any correlation between the volumes of DNS queries generated by those resolvers in the DNS resolution system and their actual size in the open resolver ecosystem? 4) Do open resolvers ``lie'' about responses for queries initiated by other clients? 5) Are open resolvers consistent in answering various clients for the same type of query? A sketch of a probe for the last two questions follows this paragraph.
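As a minimal probe sketch for questions 4) and 5) (again with the third-party \texttt{dnspython} library; the addresses are illustrative placeholders):
\begin{verbatim}
# Compare a candidate open resolver's A records for a name against a
# trusted resolver's; divergence flags the candidate for closer study.
import dns.resolver

def a_records(nameserver, qname):
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [nameserver]
    try:
        return {r.address for r in res.resolve(qname, "A")}
    except Exception:
        return set()  # timeouts and refusals count as "no answer"

candidate = "192.0.2.53"   # open resolver under test (placeholder)
trusted = "9.9.9.9"        # a well-known public resolver
qname = "example.com"

if a_records(candidate, qname) != a_records(trusted, qname):
    print("divergent answer for %s from %s" % (qname, candidate))
\end{verbatim}
Divergence alone does not prove that a resolver ``lies'': CDN-style load balancing legitimately returns different records to different clients, which is why repeated probes over time and from multiple vantage points, as question 5) suggests, are needed to separate benign churn from manipulation.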
Other open questions concerning open resolvers could potentially be answered through a characterization of those resolvers, such as their geographical distribution and their persistence over longer periods between consecutive scans, along with the implications of the findings for the main questions mentioned earlier. Ultimately, findings systematically obtained by answering those questions could inform a reputation system for the open resolver ecosystem that guides benign users in their use of those resolvers.
\section{Concluding Remarks}\label{sec:concluding}
In this paper, we review various works on the DNS ecosystem, its security issues, and its privacy concerns. We point out the open research problems related to DNS data-driven analysis, privacy as a plug-in, modeling adversaries, attack surface analyses, and addressing open resolvers. We expect that these challenges and directions will continue to be useful for improving DNS security and privacy.
\section{Introduction}
Since the inception of the domain name system (DNS) in 1983~\cite{mockapetris1987domain}, there has been a large body of work on understanding its operation, security, and privacy. Issues such as understanding the DNS ecosystem~\cite{AppelbaumGWW12,SSAC10,rfc5395,MockapertisD88}, resolver behavior~\cite{SchompRA16,GaoYCPGJD13,CallahanAR13}, security issues of resolution~\cite{GoldbergNPRVZ15,ShulmanW14,HerzbergS13,AtenieseM01,Conrad01}, applications for detecting and profiling malicious actors~\cite{PaxsonCJRSSSTVW13,AntonakakisPLVD11,WeaverKP11,AntonakakisPDLF10,KrishnanM10,ChangMWC15,WangMCC15}, and DNS privacy~\cite{ZhuHHWMS15,Bortzmeyer13a,MohaisenM15,Shulman14,Bortzmeyer13,KrishnanM10,Castillo-PerezG09} have been widely researched.
The large body of literature on DNS operation, security, and privacy suggests that the area is mature and its problems are well understood. However, reality is contrary to this suggestion. Recently, adversaries have become very ``creative'' about the way they use DNS for launching attacks, moving from simple to sophisticated usage~\cite{HerzbergS14,Rossow14}. The DNS ecosystem has evolved to include many players, such as open DNS resolvers, which include trusted, untrusted, and semi-trusted ones, making it very difficult to reason about its resolution and operation. The rise of nation-state adversaries, with their unique capabilities compared to typical adversaries (ISP-level or individual malicious actors), calls for further understanding of how they can affect the operation of the Internet in general, and DNS in particular~\cite{BarnesSJHTTHB15}.
Despite the large body of work on DNS, the rise of new attacks suggests that DNS operation, security, and privacy remain significant areas to explore, with various issues to address. Those issues are not necessarily new ones, like addressing pervasive adversaries, privacy, and new forms of attacks, but may also concern problems already explored in the past: as the behavior of DNS users (benign and malicious) evolves over time, further exploration is called for to incorporate such behavior in characterizing, identifying, and detecting misuse. As new entities, operational realities, and functions get incorporated into the DNS system, their role, and how they affect end-to-end guarantees and the services built on top of DNS, need to be understood.
Believing in the important role that DNS plays today and will play in the future, we set out to present its current opportunities and challenges. We summarize the major thrusts of research on the topic, explore some of our ongoing research activities, highlight open challenges, and call on the community to help address them.
\BfPara{Organization} In Section~\ref{sec:research} we introduce the research opportunities. In Section~\ref{sec:challenges} we discuss various challenges and directions. Section~\ref{sec:concluding} offers concluding remarks.
\section{A Review of the Research Opportunities}\label{sec:research}
We review various avenues of research uncovered in the rich literature on DNS. The main objective of this review is to highlight DNS security and privacy; a secondary objective is the operation of DNS. We focus on research for understanding the DNS ecosystem (Section~\ref{sec:ecosystem}), DNS security (Section~\ref{sec:security}), and DNS privacy (Section~\ref{sec:privacy}).
\vspace{-2mm}
\subsection{DNS Economics and System Analysis}\label{sec:ecosystem}
The roles that the various entities play, including clients and resolvers, and how these roles interact, are the determining factors for understanding the DNS ecosystem and its pieces of complex infrastructure. In the following, we review two crucial areas that have been explored on that front: understanding DNS behavior and DNS blocking.
\BfPara{Understanding DNS Behavior} Callahan {\em et al.}\xspace~\cite{CallahanAR13} passively monitored DNS and related traffic within a local network to understand server behavior. Schomp {\em et al.}\xspace~\cite{SchompRA16} presented a characterization of DNS clients towards an analytical model of client interactions with the larger DNS ecosystem. Banse {\em et al.}\xspace~\cite{BanseHF12} studied the feasibility of behavior-based tracking in a real-world setting.
Schomp {\em et al.}\xspace~\cite{SchompCRA13} presented methodologies for efficiently discovering the complex client-side DNS infrastructure. They further developed measurement techniques for isolating the behavior of the distinct actors in the infrastructure. Shulman and Waidner~\cite{ShulmanW15} explored name servers that use server-side caching and characterized the operators of server-side caching resolvers and their motivations. Hao {\em et al.}\xspace~\cite{HaoFP11} explored the behavioral properties of domains using the DNS infrastructure associated with each domain and the DNS lookup patterns from the networks that initially look up the domain. Behavior-based tracking is a threat that allows attackers to track users passively: multiple sessions of a user are linked by exploiting characteristic patterns gathered from network traffic. For users' privacy, daily-changing IP addresses offer limited protection against behavior-based tracking. Thus, lightweight methods that help prevent profiling and tracking of users without their consent are needed.
\BfPara{DNS Blocking}
Thomas {\em et al.}\xspace~\cite{ThomasLS14} examined the NXD (Non-eXistent Domain) request patterns observed at the root and recursive name servers to gauge the effectiveness of collision blocking techniques. Scaife {\em et al.}\xspace~\cite{ScaifeCT15} presented an anonymous domain registrar. Appelbaum and Muffett~\cite{AppelbaumM15} proposed blocking special queries ({\em i.e.,}\xspace .onion) to improve Tor's privacy. DNS blocking has a limitation: to ensure DNS privacy, blocking would have to be deployed everywhere at once, yet it is practically impossible for all browsers and recursive resolvers to perform blocking simultaneously. Understanding how blocking affects users who do not perform blocking themselves but share the same DNS infrastructure is therefore required.
\vspace{-2mm}
\subsection{DNS Security}\label{sec:security}
DNS security is one of the well-explored areas in the literature, where work has been focused on analyzing and detecting DNS vulnerabilities and malicious domains. We review some of the outstanding work on each of these topics.
\BfPara{DNS Vulnerability} Schomp {\em et al.}\xspace~\cite{SchompCRA14} measured vulnerability to DNS record injection attacks and found that record injection vulnerabilities are fairly common even years after some of them were first uncovered. Dagon {\em et al.}\xspace~\cite{DagonPLL08} documented how attackers are using ``rogue'' DNS servers to create malicious DNS resolution paths, showing dozens of viruses that corrupt resolution paths and noting hundreds of URLs discovered per week that performed drive-by alterations of host DNS settings. Xu {\em et al.}\xspace~\cite{XuBSY13} quantitatively analyzed several techniques for hiding malicious DNS activities. Jackson {\em et al.}\xspace~\cite{JacksonBBSB09} evaluated the cost-effectiveness of mounting DNS rebinding attacks. Schomp {\em et al.}\xspace~\cite{SchompAR14} addressed vulnerabilities in shared DNS resolvers by removing them entirely and leaving recursive resolution to clients, showing that the cost of this approach is modest and arguing that it strengthens the DNS by reducing the attack surface. Dagon {\em et al.}\xspace~\cite{DagonAVJL08} proposed a technique to make DNS queries more resistant to poisoning attacks. Chen {\em et al.}\xspace \cite{ChenMP15} proposed a lightweight optimization of DNS record TTLs for consistency. Despite such work, extending the literature with user-intention-based anomaly detection methods to identify anomalous DNS traffic remains an open challenge.
\BfPara{Detecting Malicious Domain Names} Various works have been proposed for detecting malicious domain names using DNS behavioral profiling~\cite{AntonakakisPLVD11,AntonakakisPNVALD12,RahbariniaPA15,JonesFPWA16,HaoFP11,BilgeKKB11,LuoTZSLNM15,PerdisciCG12,DagonPLL08}. For example, Antonakakis {\em et al.}\xspace~\cite{AntonakakisPLVD11} proposed a novel detection system called \textit{Kopis} for detecting malware-related domain names by passively monitoring DNS traffic at the upper levels of the DNS hierarchy. Jones {\em et al.}\xspace~\cite{JonesFPWA16} presented techniques for detecting unauthorized DNS root servers on the Internet using primarily endpoint-based measurements. Yadav {\em et al.}\xspace~\cite{YadavRRR10} developed a method to detect domain fluxes in DNS traffic by looking for patterns inherent to domain names that are generated algorithmically. Antonakakis {\em et al.}\xspace~\cite{AntonakakisPDLF10} suggested a dynamic reputation system for DNS called \textit{Notos}, which uses passive DNS query data and analyzes the network and zone features of domains to indicate whether a new domain is malicious or legitimate. Gao {\em et al.}\xspace \cite{GaoYCPGJD13} presented an innovative approach to detect previously unknown malicious domains by simply using temporal correlation in DNS queries. Szurdi {\em et al.}\xspace \cite{SzurdiKCSFK14} performed a comprehensive study of ``typosquatting'' ({\em i.e.,}\xspace \textit{the deliberate registration of domains containing typos}) within the \texttt{.com} TLD, showing that typo domains identified by lexical analysis are truly typographical variants of their target domain names. Despite such measures, the integrity and availability of Internet communication rely on replies from the DNS root name servers; thus, it is important to detect DNS root manipulation when it does occur, even though it is rare.
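As a simple illustration of the lexical approach in~\cite{YadavRRR10}, algorithmically generated labels tend to exhibit higher character-level entropy than human-chosen ones. The following sketch computes the Shannon entropy of a label as one crude feature; an actual detector would combine many such features and learn thresholds from labeled data.
\begin{verbatim}
# Crude lexical feature for domain-flux detection: character-level
# Shannon entropy of a domain label. Algorithmically generated labels
# tend to score higher than human-chosen ones.
import math
from collections import Counter

def label_entropy(label):
    counts = Counter(label.lower())
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

for name in ("google", "wikipedia", "xq7kd93bfa2lw"):
    print("%-16s entropy = %.2f" % (name, label_entropy(name)))
\end{verbatim}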
\BfPara{Modeling adversaries} There have been some works modeling DNSSEC (Domain Name System Security Extensions) adversaries, such as Bau and Mitchell~\cite{BauM10}, who formally modeled the cryptographic operations in DNSSEC and discovered a forgery vulnerability. Herzberg {\em et al.}\xspace~\cite{HerzbergS13} presented a comprehensive overview of the challenges and pitfalls of DNSSEC, including vulnerable configurations, interoperability of incremental deployment, and challenges due to super-sized DNS responses. Although DNSSEC deployment is still very limited, it has already been abused in several of the largest DoS attacks; these attacks often deter domains from deploying DNSSEC. Goldberg {\em et al.}\xspace~\cite{GoldbergNPRVZ15} demonstrated that current DNSSEC deployments with support for NSEC and/or NSEC3 are vulnerable to zone enumeration attacks, and proposed a new cryptographic construction called \textit{NSEC5}, which they proved to be a secure DNSSEC denial-of-existence mechanism. DNSSEC does not protect against denial-of-service attacks; in fact, it makes DNS vulnerable to a new type of denial-of-service attack based on cryptographic operations against security-aware resolvers and name servers, as an attacker can attempt to use DNSSEC mechanisms to consume a victim's resources.
\subsection{DNS Privacy}\label{sec:privacy}
DNS privacy is quickly becoming one of the most pressing issues in the DNS research community. Despite the large body of literature on this problem, including but not limited to:
\begin{enumerate*}[label=(\roman*)]
\item quantification of DNS privacy leakage
\item designs to improve privacy
\item DNS encryption as a vehicle to improve privacy, and
\item various standard body activities,
\end{enumerate*}
many in the academic research community are still doubtful about the privacy risks in DNS~\cite{priv}. Notwithstanding such doubts, we review prior work on DNS privacy and open directions.
\BfPara{Quantifying DNS Leakage} Konings {\em et al.}\xspace~\cite{KoningsBSW13} collected a one-week dataset of multicast domain name system (mDNS) announcements at a university and showed that queries and device names leak plenty of information about users. Krishnan {\em et al.}\xspace~\cite{KrishnanM10} demonstrated privacy leakage through prefetching and showed that it is possible to infer the likelihood of search terms issued by clients by analyzing the context obtained from prefetching queries. Zhao {\em et al.}\xspace~\cite{ZhaoHS07} analyzed the complete DNS query process and discussed privacy disclosure problems in each step: the client side, the query transmission process, and the DNS server side. They proposed a privacy-preserving query scheme called ``Range Query'', which decreases privacy disclosure throughout the DNS query process. Paxson {\em et al.}\xspace~\cite{PaxsonCJRSSSTVW13} developed a measurement procedure to limit the amount of information a domain can receive surreptitiously through DNS queries to an upper bound specified by a security policy, with the exact setting representing a tradeoff between the scope of potential leakage and the quantity of possible detections. Castillo-Perez and Garcia-Alfaro~\cite{Castillo-PerezG09} evaluated DNS privacy-preserving approaches and pointed out the necessity of additional measures to enhance their security. When mobile devices are operated in public wireless networks, current implementations pose several privacy risks. The default naming practices for device names need to be revised, and users need to be able to limit service discovery to a selected set of networks.
\BfPara{Designs for Improving Privacy}
Due to the ubiquity of privacy risks, efforts are constantly being made in both academia and industry to preserve privacy in DNS communications. Zhao {\em et al.}\xspace~\cite{ZhaoHS07} proposed to ensure DNS privacy by concealing the actual queries with noisy traffic; the noisy traffic, however, increases latency and bandwidth consumption during the execution and resolution of queries. Castillo-Perez {\em et al.}\xspace~\cite{Castillo-PerezG09} evaluated this approach and demonstrated that the privacy ensured by the added queries is difficult to analyze, and that the technique introduces additional latency and overhead, making it less practical. An extended algorithm to ensure privacy while improving performance, using both noisy traffic and private information retrieval (PIR) techniques, was also introduced by Castillo-Perez {\em et al.}\xspace~\cite{Castillo-PerezG09}. They pointed out serious security flaws in both proposals when active attackers target those mechanisms; even the improved method thus requires further hardening to be effective.
Techniques that employ certain flavors of encryption have also been studied. For example, Herrmann {\em et al.}\xspace~\cite{HerrmannFLF14} proposed a lightweight privacy-preserving implementation called \textit{EncDNS}, which essentially replaces third-party resolvers and provides client software that forwards queries to it through conventional DNS forwarders. Since EncDNS provides end-to-end encryption, forwarders will not know the contents of the queries. Lu and Tsudik~\cite{LuT10} proposed a Privacy-Preserving DNS (PPDNS) built on top of distributed hash tables (DHTs) and computational PIR to obtain a reasonably high level of privacy for name resolution queries.
\BfPara{DNS Encryption}
As mentioned previously, Herrmann {\em et al.}\xspace~\cite{HerrmannFLF14} presented a novel lightweight privacy-preserving name resolution service called EncDNS to serve as a replacement for conventional third-party resolvers. The EncDNS protocol, which is based on \textit{DNSCurve}, encapsulates encrypted messages in standards-compliant DNS messages. Zhu {\em et al.}\xspace~\cite{ZhuHHWMS15} proposed T-DNS to address privacy leakage problems using transport-layer security (TLS) to protect users' privacy against their DNS resolvers and {\em optionally} the authoritative servers. Shulman~\cite{Shulman14} extensively explored dependencies in DNS and showed that an attacker can learn the requested domain in an encrypted DNS packet when information leakage via transitive trust is exploited in tandem with other side channels. Ateniese {\em et al.}\xspace~\cite{AtenieseM01} introduced a new strategy to build chains of trust from root servers to authoritative servers. End-to-end encryption has a high overhead that needs to be mitigated. Moreover, DNSSEC is not yet widely deployed, even though DNS names are used for authentication. Thus, protections need to combine encryption with other methods such as DNSSEC, query name minimization, etc.
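To make the encrypted-transport idea concrete, DNS over TLS carries ordinary DNS messages (with the usual two-byte length prefix of DNS over TCP) inside a TLS session on TCP port 853. A minimal sketch, again assuming \texttt{dnspython} and a placeholder resolver address:
\begin{verbatim}
# DNS over TLS (RFC 7858): an ordinary DNS message carried over a
# TLS session on TCP port 853. Assumes dnspython; the server address
# is a placeholder for any DoT-capable resolver.
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")
# dns.query.tls performs the TLS handshake and the two-byte
# length-prefixed DNS-over-TCP framing internally.
response = dns.query.tls(query, "203.0.113.53", port=853, timeout=3.0)
for rrset in response.answer:
    print(rrset)
\end{verbatim}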
\BfPara{Modeling DNS Adversaries} While much of the previous work has focused on various aspects of DNS security and privacy, including (data-driven) modeling and informal description~\cite{MohaisenM15} and {\em informal adversary modeling} of confidentiality~\cite{BarnesSJHTTHB15}, there is no study that formalizes adversaries against confidentiality, let alone concretely evaluates them.
\BfPara{Standards} The Internet Engineering Task Force (IETF) has recently established a working group dedicated solely to addressing DNS privacy concerns (DNS PRIVate Exchange, DPRIVE). This working group has proposed various techniques that are currently under consideration~\cite{MohaisenM15}. Zhu {\em et al.}\xspace~\cite{HuZHMWH15a} (based on \cite{ZhuHHWMS15}) proposed a connection-oriented DNS transport over TCP, which uses TLS for privacy. The authors argue that, with careful implementation, the overhead of their approach is modest. Reddy {\em et al.}\xspace~\cite{ReddyWP15} proposed to use Datagram Transport Layer Security (DTLS) for DNS exchange. They add a protection layer for the sensitive information in DNS queries, which withstands a passive listener and certain active attackers. To address side-channel attacks on encrypted DNS, Mayrhofer~\cite{Mayrhofer15} proposed a padding scheme where servers pad requests and responses by a variable number of octets. DNS over TLS does not address known attacks on TLS, such as man-in-the-middle and protocol downgrade attacks. The use of simple padding schemes alone is not sufficient to mitigate traffic analysis attacks; however, padding is likely to form part of the more complex mitigations for traffic analysis that will be developed over time.
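As an illustration of the padding idea, the sketch below pads a message to the next multiple of a fixed block size, so that observed lengths reveal less about the query. The block size of 128 octets is an illustrative choice, not a value mandated by~\cite{Mayrhofer15}, and in deployed schemes the padding travels in an EDNS(0) option rather than as raw trailing octets.
\begin{verbatim}
# Block padding for encrypted DNS: extend every message to the next
# multiple of BLOCK octets so that observed lengths leak less
# information. BLOCK = 128 is an illustrative choice only.
BLOCK = 128

def pad_message(wire, block=BLOCK):
    """Append zero octets until len(wire) is a multiple of block."""
    shortfall = (-len(wire)) % block
    return wire + b"\x00" * shortfall

for size in (31, 128, 300):
    print(size, "->", len(pad_message(b"\x00" * size)))
    # 31 -> 128, 128 -> 128, 300 -> 384
\end{verbatim}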
Table~\ref{tab:lit} summarizes the work in the literature as categorized above, with example works, contributions, and open directions.
\begin{table*}
\begin{center}
\caption{Summary of research directions in the literature, main contributions, and follow up.}\label{tab:lit}
{\scriptsize
\begin{tabular}{|p{1.3cm}|p{2.8cm}|p{1.9cm}|p{6.2cm}|p{3.5cm}|}
\hline
Area & Topic & Example work & Main contribution & Open direction \\
\hline
Ecosystem analysis & DNS resolver behavior & \cite{CallahanAR13, SchompRA16, BanseHF12, SchompCRA13, ShulmanW15, HaoFP11} & $\bullet$ DNS behavior tracking in a real-world setting $\bullet$ development of an analytical model for client interactions $\bullet$ exploring the behavioral properties of domains using the DNS infrastructure associated with the domain and the DNS lookup patterns & Behavior under DNS changes, open resolvers\\
&DNS blocking& \cite{ThomasLS14, ScaifeCT15, AppelbaumM15} & $\bullet$ NXD request patterns analysis, blocking special queries &How blocking affects privacy \\
\hline
DNS security &Vulnerability analysis& \cite{SchompCRA14, DagonPLL08, XuBSY13, JacksonBBSB09, SchompAR14, DagonAVJL08, ChenMP15} & $\bullet$ measurement of the Internet's vulnerability to DNS record injection attacks $\bullet$ analysis of rogue DNS servers to create malicious DNS resolution paths $\bullet$ analysis techniques for hiding malicious DNS activities $\bullet$ evaluation of the cost-effectiveness of mounting DNS rebinding attacks $\bullet$ removing shared DNS resolvers $\bullet$ making DNS queries more resistant to poisoning attacks & Best practices and incentives for fixing vulnerabilities \\
&Malicious domain detection& \cite{AntonakakisPLVD11,AntonakakisPNVALD12,RahbariniaPA15,JonesFPWA16,HaoFP11,BilgeKKB11,LuoTZSLNM15,PerdisciCG12,DagonPLL08, YadavRRR10,AntonakakisPDLF10, GaoYCPGJD13, SzurdiKCSFK14} & $\bullet$ detecting malware-related domain names by passively monitoring DNS traffic $\bullet$ detecting unauthorized DNS root servers using endpoint-based measurements $\bullet$ detecting domain fluxes in DNS traffic by looking for patterns inherent to domain names $\bullet$ dynamic reputation system using passive DNS query data and zone features of domains $\bullet$ detecting unknown malicious domain using temporal correlation in DNS queries & Addressing evasion and stealthiness\\
&Modeling adversaries& \cite{BauM10, HerzbergS13, GoldbergNPRVZ15} & $\bullet$ modeling the cryptographic operations in DNSSEC $\bullet$ discovering a forgery vulnerability & Mathematical modeling with standard adversaries\\
\hline
DNS Privacy&Quantifying leakage& \cite{KoningsBSW13, KrishnanM10, ZhaoHS07, PaxsonCJRSSSTVW13, Castillo-PerezG09} & $\bullet$ quantifying how queries and device names leak information about users $\bullet$ analyzing the context obtained from the prefetching queries, privacy-preserving query scheme (Range Query) $\bullet$ exact setting representing a tradeoff between the scope of potential leakage versus the number of possible detections &Gap analysis of policy vs. reality\\
&Privacy-preserving designs& \cite{ZhaoHS07, Castillo-PerezG09, HerrmannFLF14, LuT10} & $\bullet$ concealing the actual queries using noisy traffic $\bullet$ PIR techniques $\bullet$ PPDNS built on top of DHTs and computational PIR & Rigorous analysis and evaluation \\
&DNS encryption& \cite{HerrmannFLF14, ZhuHHWMS15, Shulman14, AtenieseM01} & $\bullet$ lightweight privacy-preserving name resolution service (EncDNS) $\bullet$ end-to-end encryption $\bullet$ T-DNS using TLS $\bullet$ chains of trust from root servers to authoritative servers &How encryption meets privacy (and how not) \\
&Modeling adversaries& \cite{MohaisenM15, BarnesSJHTTHB15} & $\bullet$ data-driven modeling and informal description $\bullet$ modeling of adversaries against confidentiality & Modeling pervasive capabilities (no prior work) \\
&Standards&\cite{MohaisenM15, HuZHMWH15a, ZhuHHWMS15, ReddyWP15, Mayrhofer15} & $\bullet$ connection-oriented DNS transport over TCP $\bullet$ DTLS for DNS exchange $\bullet$ protection layer for the sensitive information in DNS queries $\bullet$ padding scheme &Rigorous analysis of security, privacy, and trust \\
\hline
\end{tabular}\vspace{-8mm}
}
\end{center}
\end{table*}
\section{Introduction}
Bose--Einstein condensation (BEC) has been a field of growing interest in the last two decades and has meanwhile been demonstrated in various systems including atoms, molecules, magnons, polaritons, etc. Establishing BEC of excitons in a bulk semiconductor (proposed already in the 1960s \cite{moskalenko1962,blatt1962}), however, has turned out to be a long-standing problem that is still to be solved. Despite many experiments, even at extremely low temperatures (for an overview, see \cite{snoke2014}), conclusive evidence matching all required criteria \cite{snoke2003} has so far not been provided.
A promising candidate for the realisation of a BEC in a bulk material is cuprous oxide (Cu$_2$O) due to the long lifetime and the high binding energy of the excitons. In this material, some very promising signatures of an excitonic condensate have been found experimentally \cite{stolz2012}.
For the analysis of the reasons for the persisting discrepancy between the expectations and the experimental results, a critical revision of previous conceptions of the exciton gas physics is required.
In order to model the trapped exciton gas, the assumption of a global or local thermodynamic equilibrium, necessarily applied in previous analyses of the thermodynamics of the excitons \cite{stolz2010,sobkowiak2010,sobkowiak2011}, is not sufficient. During the measurements, the excitons undergo dynamic processes like spatial propagation towards the trap centre, conversion between ortho- and paraexcitons, and cooling by interaction with phonons. Therefore, a description of the kinetics of the excitonic system is required.
There exists a vast number of works on the kinetics of ultracold bosonic gases, for an overview see \cite{griffin2009}. Within the formalism of the well-known Zaremba--Nikuni--Griffin (ZNG) equations \cite{zaremba1999,griffin2009}, the dynamics of the condensed particles obey a generalised Gross--Pitaevskii equation containing the exchange between normal and condensed phases, while the evolution of the thermal particles is described by a quantum Boltzmann equation. To apply the formalism to the trapped exciton gas, semiconductor specific effects have to be taken into account by including collision terms between excitons and phonons as well as source and loss terms: laser excitation of excitons, their recombination, i.e., radiative decay, and a special two-body process usually referred to as Auger decay.
The first stage of the time evolution of the exciton gas is characterised by the local thermalisation of the excitonic momentum distribution. This process is very fast, within about 1 ns local equilibrium is reached \cite{sobkowiak2014}. The thermalisation turns out to be essentially determined by exciton-exciton collisions.
The cooling process of the excitons can be investigated by looking at the ``effective'' temperature defined by the averaged kinetic energy per particle \cite{sobkowiak2014}. Again, the exciton-exciton collisions are crucial for the cooling efficiency. However, at very low temperatures of the helium bath below 0.1 K, the cooling is not efficient enough to ensure that the excitons reach the bath temperature within their lifetime \cite{sobkowiak2014}. This does not yet rule out the possibility that there are zones within the trap where the local temperature is less than the global effective temperature. The analysis of the local temperature distribution in the trap \cite{sobkowiak2015}, however, confirms the principal trend -- for temperatures above 0.1 K, the excitons reach bath temperature at least in the trap centre, while that is nowhere the case below that value. The temperature minimum is then outside the trap centre \cite{sobkowiak2015}. This behaviour is, on the one hand, caused by the drastically reduced cooling efficiency. On the other hand, the aforementioned Auger decay, a two-body process where one exciton recombines and the other one takes over the released energy, depends quadratically on the density and is thus a source of temperature increase in the trap centre where the exciton density has its highest value.
Thermodynamic as well as kinetic approaches to the exciton gas in the trap result in spatial (and temporal) profiles of exciton density and temperature. In order to compare with experimental findings, one has to translate the density distribution into the decay luminescence of excitons. Earlier investigations of the latter subject \cite{haug1983,shi1994} were based on an exciton-photon coupling Hamiltonian containing only ``normal'' terms (creating a photon while annihilating an exciton and vice versa). A recently published approach \cite{koch2016,koch2016a} based on the full minimal coupling Hamiltonian provides the excitation spectrum of the new quasiparticles composed of weakly interacting excitons (``bogolons'') and photons. However, it represents only a first step on the way to a general theory of excitonic decay luminescence.
The present analysis will focus mainly on the extension of the approach presented in \cite{sobkowiak2015} in two different aspects. First, the hydrodynamics will be reformulated for a multicomponent exciton gas. Second, the problem of the Auger-like two-body decay will be investigated in more detail, aiming both at a more elaborate calculation of the Auger coefficient and at comparing the impact of the Auger process on the hydrodynamic evolution with an alternative mechanism based on transient biexciton formation \cite{wolfe2014}.
\section{Experimental Background}
The excitons under investigation are composed of holes in the $\Gamma_7^+$ valence band and electrons in the $\Gamma_6^+$ conduction band of Cu$_2$O (so-called yellow series). The ground state of this series consists of the non-degenerate paraexciton and the triply degenerate orthoexciton. The latter are labeled according to their spin projection ($+$), ($0$), and ($-$). The paraexcitons are the energetically lowest state lying $\unit{12.12}{\milli\electronvolt}$ below the orthoexcitons, due to electron-hole exchange interaction. Their long lifetime $\tau_\mathrm{P}=\unit{650}{\nano\second}$ \cite{schwartz2012} and high binding energy $E_\mathrm{B}^\mathrm{P}=\unit{151.36}{\milli\electronvolt}$ \cite{schwartz2012} make the paraexciton a promising candidate for a BEC of excitons in a bulk material.
Typical experiments in the field \cite{schwartz2012} are set up in the following way. Excitons are created using a pump laser and collected inside a stress-induced potential trap. The crystal specimen is cooled via a helium bath inside a cryostat. Helium bath temperatures as low as $\unit{37}{\milli\kelvin}$ have been reached using a $^3$He/$^4$He dilution cryostat \cite{stolz2012} while optically pumping the crystal. The excitons themselves are cooled via interaction with the crystal lattice (phonons). The luminescence spectrum of the excitons is recorded using a CCD camera.
There are different ways to create excitons experimentally. In a strain field one can excite ortho- or paraexcitons directly or create orthoexcitons indirectly under the involvement of a $\Gamma_3^{-}$-phonon. Due to its large oscillator strength the latter process is usually used \cite{schwartz2012,stolz2012}. The created orthoexcitons convert rapidly to paraexcitons at rates of $\unit{0.2}{\nano\second^{-1}}$ \cite{denev2002} to $\unit{0.29}{\nano\second^{-1}}$ \cite{wolfe2000}. The pump laser may be run in pulsed or continuous wave (cw) mode. Under pulsed excitation the system will eventually reach a quasi-equilibrium state which decays over time, while cw excitation results in a stationary state in which the creation and the decay of excitons balance out.
The stress applied on the crystal has different effects on the exciton species. The potential for ortho($+$)-, ortho($-$)-, and paraexcitons is attractive, while being strongly repulsive for ortho($0$)excitons. However, for all species it has cylindrical symmetry and can be calculated from experimental data using contact mechanics.
\section{Model}
For a theoretical description we extend the ansatz presented in Ref.\ \cite{sobkowiak2015} to incorporate multiple components. The model introduced in Ref.\ \cite{sobkowiak2015} is based on the ZNG equations \cite{griffin2009}, which describe ultracold atomic gases in nonequilibrium. In order to derive a set of equations for a multicomponent system we use similar assumptions as in the ZNG formalism. Analogous to the one-component case the Bose-field operator for each component $\hat{\psi}_i(\mathbf{r},t)$ is split into $\hat{\psi}_i(\mathbf{r},t) = \Phi_i(\mathbf{r},t) + \tilde{\psi}_i(\mathbf{r},t)$ with condensate wavefunction $\Phi_i(\mathbf{r},t)$ and fluctuation operator $\tilde{\psi}_i(\mathbf{r},t)$. The exciton-exciton (X-X) scattering is assumed to be $s$-wave. Hence, the interaction strength $g_{ij}$ is given by
\begin{equation}
\label{eq:gij}
g_{ij}=2\pi\hbar^2\left( \frac{1}{m_i}+ \frac{1}{m_j} \right) a_{ij}^s\,,
\end{equation}
with the $s$-wave scattering length $a_{ij}^s$ and the exciton mass $m_i$. Furthermore, we neglect all nondiagonal densities $\tilde{n}_{ij}(\mathbf{r},t)=\langle \tilde{\psi}_i^\dagger(\mathbf{r},t) \tilde{\psi}_j(\mathbf{r},t)\rangle$ with $i \neq j$ and all anomalous densities $\tilde{m}_{ij}(\mathbf{r},t)=\langle \tilde{\psi}_i(\mathbf{r},t) \tilde{\psi}_j(\mathbf{r},t)\rangle$. The mean-field potentials are therefore given by
\begin{eqnarray}
\label{eq:MeanPotential}
U_i(\mathbf{r},t) &=& V^i_\mathrm{ext}(\mathbf{r})+2g_{ii} [n^c_i(\mathbf{r},t) +\tilde{n}_i(\mathbf{r},t)] \nonumber \\ &&+ \sum_{j \neq i} g_{ij} [n^c_j(\mathbf{r},t) +\tilde{n}_j(\mathbf{r},t)]\,,
\end{eqnarray}
with the condensate density $n^c_i(\mathbf{r},t)=|\Phi_i(\mathbf{r},t)|^2$ and the density of the thermal excitons $\tilde{n}_{i}(\mathbf{r},t)=\langle \tilde{\psi}_i^\dagger(\mathbf{r},t) \tilde{\psi}_i(\mathbf{r},t)\rangle$. Under these assumptions, the dynamics of the condensates are governed by generalised Gross--Pitaevskii equations (GGPE) of the form
\begin{eqnarray}
\label{eq:GGPE}
\mathrm{i}\hbar\frac{\partial \Phi_i(\mathbf{r},t)}{\partial t} &=& \bigg[-\frac{\hbar^2\nabla^2}{2m_i}+ U_i(\mathbf{r},t) - g_{ii}n^c_i(\mathbf{r},t) \nonumber \\ &&-\mathrm{i} R_i(\mathbf{r},t) \bigg] \Phi_i(\mathbf{r},t)\,,
\end{eqnarray}
with the coupling terms $R_i(\mathbf{r},t)$, which transfer excitons into and out of the condensate.
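For orientation, Eq.\ (\ref{eq:gij}) can be evaluated numerically. The following short Python sketch uses the parameter values quoted later in the Results section ($a^s_\mathrm{PP}=2.1\,a_X$, $a_X=\unit{0.7}{\nano\meter}$, $m=2.6\,m_0$); it is an illustration, not part of the actual simulation code.
\begin{verbatim}
# Numerical evaluation of the interaction strength of Eq. (eq:gij)
# for para-para scattering with equal masses, using a_PP = 2.1 a_X,
# a_X = 0.7 nm, m = 2.6 m_0 (values quoted in the Results section).
import math

hbar = 1.054571817e-34    # J s
m0 = 9.1093837015e-31     # kg, free electron mass
eV = 1.602176634e-19      # J

m_exc = 2.6 * m0          # exciton mass used for all species
a_PP = 2.1 * 0.7e-9       # para-para s-wave scattering length, m

g_PP = 2 * math.pi * hbar**2 * (1/m_exc + 1/m_exc) * a_PP
print("g_PP = %.3e J m^3 = %.3e eV nm^3" % (g_PP, g_PP / eV * 1e27))
\end{verbatim}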
Assuming the mean field potentials $U_i(\mathbf{r},t)$ to be only slowly varying, one can transform the equation of motion for $\tilde{\psi}_i(\mathbf{r},t)$ into a quantum Boltzmann equation
\begin{eqnarray}
\label{eq:BoltzmannEq}
\frac{\partial f_\mathbf{k}^i(\mathbf{r},t)}{\partial t}+\frac{\hbar\mathbf{k}}{m_i}\cdot\nabla_{\mathbf{r}} f_\mathbf{k}^i(\mathbf{r},t) && \nonumber \\ - \frac{1}{\hbar} \nabla_{\mathbf{r}} U^i(\mathbf{r},t) \cdot \nabla_{\mathbf{k}}f_\mathbf{k}^i(\mathbf{r},t)
&=& \frac{\partial f_\mathbf{k}^i(\mathbf{r},t)}{\partial t}\bigg|_\text{coll.}
\end{eqnarray}
for the Wigner distribution function $f_\mathbf{k}^i(\mathbf{r},t)$.
In the original ZNG equations only particle-particle scattering terms are included in Eq.\ (\ref{eq:BoltzmannEq}). Similar to Ref.\ \cite{sobkowiak2015} we extend the collision term to incorporate semiconductor specific effects, e.g. exciton-phonon (X-Ph) interaction. Then, the collision term reads
\begin{eqnarray}
\label{eq:collision}
\frac{\partial f_\mathbf{k}^i(\mathbf{r},t)}{\partial t}\bigg|_\text{coll.} &=& \sum_j \left[C_\mathrm{X-X}^{ij} + C_\mathrm{X_c-X}^{ij} \right] \nonumber \\ &+& C_\mathrm{X-Ph}^i + C_\mathrm{Conv}^i + C_\mathrm{C-D}^i \, ,
\end{eqnarray}
where $C_\mathrm{X-X}^{ij}$ stands for inter- ($i \neq j$) and intra-species ($i=j$) X-X scattering involving only thermal excitons. $C_\mathrm{X_c-X}^{ij}$ is the corresponding term if condensed and non-condensed excitons are involved. The interaction with phonons is contained in $C_\mathrm{X-Ph}^i$ and the conversion of different exciton species into each other in $C_\mathrm{Conv}^i$. All processes that can create (e.g. pump laser) or destroy (e.g. finite lifetime) excitons are grouped in $C_\mathrm{C-D}^i$. The experimentally observed effective two-body loss mechanism is also contained in this collision term.
In the ZNG equations, the energy dispersion for the non-condensed excitons is taken to be Hartree--Fock-like,
\begin{equation}
\label{eq:HFDisp}
\varepsilon_\mathbf{k}^i(\mathbf{r},t)=\frac{\hbar^2k^2}{2m_i}+ U_i(\mathbf{r},t) \, .
\end{equation}
This is a good approximation as long as no or only small condensates occur \cite{griffin2009}. The ZNG equations using a Bogoliubov quasiparticle spectrum can be found in \cite{imamovic2001}.
Due to the experimental background some additional simplifications are justified. A condensate of excitons is extremely unlikely to occur in any other species than the paraexcitons. Therefore, we only consider the GGPE for the paraexcitons. Since the ortho($0$)excitons are pushed out of the trap by the repulsive potential, we do not directly consider them in our model. The trap potentials for ortho($+$)- and ortho($-$)excitons are almost identical. The interaction with paraexcitons and phonons is also the same for both species. Therefore, we combine the ortho($+$)- and ortho($-$)excitons into one component, arriving at an effective two-component system of para- and orthoexcitons. Furthermore, we neglect their small mass differences and use $m=\unit{2.6}{m_0}$ for all excitons ($m_0$ -- free electron mass). In typical experiments the number of paraexcitons is much higher than the number of orthoexcitons. Therefore, we expect the influence of para-ortho X-X scattering on the paraexcitons to be very small, and neglect inter-species X-X scattering. Then, the coupling term $R(\mathbf{r},t)$ remains the same as in Ref.\ \cite{sobkowiak2015} and the GGPE is only modified via the mean-field potential compared to the one-component case.
\subsection{Hydrodynamic equations}
Like in the one-component case, we rewrite the quantum Boltzmann equations (\ref{eq:BoltzmannEq}) in terms of the first three moments. This leads to two sets of hydrodynamic equations governing the evolution of particle density, momentum density, and kinetic energy density of the thermal excitons of each species. These quantities are given by
\begin{eqnarray}
\label{eq:nvepsDef}
\tilde{n}_i(\mathbf{r},t)&=&\int \frac{d\mathbf{k}}{(2\pi)^3} f^i_\mathbf{k}(\mathbf{r},t) \, , \nonumber \\
m\tilde{n}_i(\mathbf{r},t)\mathbf{v}_i(\mathbf{r},t)&=&\int \frac{d\mathbf{k}}{(2\pi)^3} \hbar\mathbf{k} f_\mathbf{k}^i(\mathbf{r},t) \, , \nonumber \\
E_i(\mathbf{r},t) &=& \int \frac{d\mathbf{k}}{(2\pi)^3} \frac{\hbar^2k^2}{2m} f_\mathbf{k}^i(\mathbf{r},t) \, ,
\end{eqnarray}
with the (normal) velocity $\mathbf{v}_i(\mathbf{r},t)$ of the non-condensed excitons. In order to derive the hydrodynamic equations one has to multiply Eq.\ (\ref{eq:BoltzmannEq}) with ($\varphi_0=1$, $\varphi_1=\hbar\mathbf{k}$, $\varphi_2=\hbar^2k^2/2m$) and integrate over the whole $\mathbf{k}$-space.
Each set of hydrodynamic equations needs to be closed, since the next higher moment appears in the equation for the energy, respectively. This can be achieved by assuming the form of a partial local equilibrium for the distribution function $f_\mathbf{k}^i(\mathbf{r},t)= \tilde{f}_\mathbf{k}^i(\mathbf{r},t)$ given by
\begin{eqnarray}
\label{eq:LocalPartialEquil}
\tilde{f}^i_\mathbf{k}(\mathbf{r},t) &=& [e^{[(\hbar \mathbf{k}-m\mathbf{v}_i)^2/2m +U_i-\tilde{\mu}_i(\mathbf{r},t)]/k_\mathrm{B}T_i(\mathbf{r},t)}-1]^{-1} \nonumber \\
&=& [e^{[\hbar^2\tilde{k}^2/2m - \tilde{\mu}_\mathrm{eff}^i (\mathbf{r},t)]/k_\mathrm{B}T_i(\mathbf{r},t)}-1]^{-1} \, ,
\end{eqnarray}
with Boltzmann's constant $k_\mathrm{B}$ and the space- and time-dependent temperature and chemical potential $T_i(\mathbf{r},t)$ and $\tilde{\mu}_i(\mathbf{r},t)$. As already shown in Refs.\ \cite{sobkowiak2014,sobkowiak2015} the relaxation in $\mathbf{k}$-space for the experimentally relevant parameters is very rapid. Starting with a nonequilibrium distribution, the form (\ref{eq:LocalPartialEquil}) is typically reached in less than $\unit{1}{\nano\second}$. This is very fast compared to the lifetime of the paraexcitons of $\unit{650}{\nano\second}$. Therefore, the excitons can be described using a hydrodynamic model, if the relaxation into partial local equilibrium is split off and treated separately. This has to be done for all newly created excitons and will be explained in detail when the corresponding collision terms are discussed.
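With the closure (\ref{eq:LocalPartialEquil}), the moments (\ref{eq:nvepsDef}) reduce to Bose integrals; e.g., in the frame comoving with $\mathbf{v}_i$ the density becomes a one-dimensional radial integral. The following Python sketch evaluates it numerically for illustrative values of $T$ and $\tilde{\mu}_\mathrm{eff}$ (both assumptions, not computed results).
\begin{verbatim}
# Density moment of the partial local equilibrium distribution:
# n = (1/2 pi^2) * integral dk k^2 f_k, with the Bose function f_k
# of Eq. (eq:LocalPartialEquil); T and mu_eff are illustrative.
import math
from scipy.integrate import quad

hbar = 1.054571817e-34       # J s
kB = 1.380649e-23            # J/K
m = 2.6 * 9.1093837015e-31   # exciton mass, kg

def density(T, mu_eff):
    def integrand(k):
        x = (hbar**2 * k**2 / (2*m) - mu_eff) / (kB * T)
        if x > 700.0:        # tail is negligible; avoid overflow
            return 0.0
        return k**2 / (math.exp(x) - 1.0)
    k_th = math.sqrt(2 * m * kB * T) / hbar   # thermal momentum scale
    val, _ = quad(integrand, 0.0, 30 * k_th)
    return val / (2 * math.pi**2)             # m^-3

T = 0.5                      # K, illustrative
mu = -0.1 * kB * T           # illustrative mu_eff < 0
print("n = %.3e cm^-3" % (density(T, mu) * 1e-6))
\end{verbatim}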
\section{Collision terms}\label{sec:collision}
\subsection{Pump laser ($C_\mathrm{Laser}$)}
The pump laser creates orthoexcitons either directly or under involvement of a $\Gamma^\mathrm{-}_3$-phonon. While both processes may occur simultaneously, we only consider the latter due to its much higher oscillator strength. The energy balance for this indirect process reads
\begin{equation}
\label{eq:LaserEnergy}
E_\mathrm{Laser}- \left[E_\mathrm{G}-E_\mathrm{B}^\mathrm{O} \right] = \varepsilon_\mathbf{k}^\mathrm{O}+ E_{\Gamma^-_3}\,,
\end{equation}
with the laser energy $E_\mathrm{Laser}=h c/\lambda$, the band gap $E_\mathrm{G}=\unit{2.17208}{\electronvolt}$ \cite{kazimierczuk2014}, the orthoexciton energy (\ref{eq:HFDisp}), their binding energy $E_\mathrm{B}^\mathrm{O}=\unit{139.24}{\milli\electronvolt}$ \cite{uihlein1981}, and the phonon energy $E_{\Gamma^-_3}=\unit{13.49}{\milli\electronvolt}$ \cite{hoeger2006}. The laser wavelength used in the experiments is $\lambda=\unit{605.9}{\nano\meter}$ \cite{stolz2012,schwartz2012}. For these parameters, Eq.\ (\ref{eq:LaserEnergy}) can only be fulfilled inside an attractive potential; thus, the pump laser only creates ortho($+$)- and ortho($-$)excitons, but not ortho($0$)excitons, since their potential is strongly repulsive.
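This statement can be checked directly from the quoted numbers: for vanishing potential, the energy left for the orthoexciton in Eq.\ (\ref{eq:LaserEnergy}) is slightly negative, so exciton creation indeed requires $U<0$. A short arithmetic check in Python:
\begin{verbatim}
# Energy balance of Eq. (eq:LaserEnergy), rearranged as
# hbar^2 k^2 / 2m = E_Laser - (E_G - E_B^O) - E_phonon - U.
hc_eVnm = 1239.842                 # h*c in eV nm

E_laser = hc_eVnm / 605.9          # eV, photon energy
E_gap = 2.17208                    # eV
E_B_O = 139.24e-3                  # eV, orthoexciton binding energy
E_ph = 13.49e-3                    # eV, Gamma_3^- phonon

lhs = E_laser - (E_gap - E_B_O) - E_ph   # = kinetic energy + U
print("kinetic energy + U = %+.2f meV" % (lhs * 1e3))
# about -0.05 meV: creation is possible only where U < -0.05 meV,
# i.e. inside the attractive traps of the ortho(+/-)excitons.
\end{verbatim}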
To calculate the relaxation into partial local equilibrium we solve a homogeneous Boltzmann equation with the collision terms given by Eq.\ (\ref{eq:collision}). At $t=0$ there are no paraexcitons and the distribution function for the orthoexcitons is Gaussian. Its maximum is given by the $\mathbf{k}$-value corresponding to the kinetic energy of the orthoexcitons determined by Eq.\ (\ref{eq:LaserEnergy}). The width follows from the spectral width of the pump laser. The initial distribution function is normalised to the density of the newly created excitons. From these calculations we can determine the density of ortho- and paraexcitons entering the hydrodynamic equations and their respective energy. It also follows that both components have reached a partial local equilibrium after $\unit{1}{\nano\second}$, which is consistent with earlier results obtained by using simpler models \cite{sobkowiak2015}.
Spatially, the laser spot is placed $\unit{100}{\micro\meter}$ along the $z$-axis below the trap minimum. It has a Gaussian shape in $z$- and $\rho$-direction with a width of $\unit{3}{\micro\meter}$. The normalisation is chosen to resemble the exciton generation rate of a pump laser with power $P_L$, assuming that half of the emitted photons create excitons \cite{schwartz2012}. The moments of $C_\mathrm{Laser}$ are (symbolically) given by
\begin{eqnarray}
\Gamma^{(0,i)}_\mathrm{Laser}&=&n_\mathrm{Laser}^i(\mathbf{r},t) \, , \nonumber \\
\mathbf{\Gamma}_\mathrm{Laser}^{(1,i)} &=& \mathbf{0} \, , \nonumber \\
\Gamma^{(2,i)}_\mathrm{Laser}&=&E_\mathrm{Laser}^i(\mathbf{r},t) \, .
\end{eqnarray}
\subsection{Exciton-phonon collisions ($C_\mathrm{Ph}$)}
The main cooling mechanism in the system is the interaction of excitons with acoustic phonons. Orthoexcitons may interact with both transversal acoustic (TA) and longitudinal acoustic (LA) phonons. Paraexcitons in a stress-free crystal may only interact with LA phonons. However, due to the applied stress, interaction with TA$_1$-phonons becomes possible \cite{sobkowiak2014}. All these processes can be modeled using a deformation potential interaction. The moments of the corresponding collision term are \cite{haug1997,sobkowiak2015}
\begin{eqnarray}
\label{eq:PhononMoments}
\Gamma_\mathrm{X-APh}^{(n,i)} &=&- \frac{\pi D^2_i}{\varrho v_s}\int \frac{d \mathbf{k}d \mathbf{k}'}{(2\pi)^6} |\mathbf{k}'-\mathbf{k}| [\varphi_n(\mathbf{k})-\varphi_n(\mathbf{k}')] \nonumber \\
&\times& \bigg[f^i_{\mathbf{k}}f_{\mathbf{k}'-\mathbf{k}}^\mathrm{Ph}(1+f^i_{\mathbf{k}'}) -(1+f^i_{\mathbf{k}}) (1+f_{\mathbf{k}'-\mathbf{k}}^\mathrm{Ph})f^i_{\mathbf{k}'} \bigg] \nonumber \\
&\times& \delta \left(\varepsilon_\mathbf{k}^i -\varepsilon_{\mathbf{k}'}^i +\hbar\omega_{\mathbf{k}'-\mathbf{k}} \right) \, ,
\end{eqnarray}
with the speed of sound $v_s$, the crystal density $\varrho = \unit{6.11\times10^3}{\kilogram/\meter^3}$ \cite{haug1997}, and the deformation potential $D_i$. The values for the speeds of sound are $v_s^\mathrm{LA} =\unit{4.5\times10^3}{\meter/\second}$ and $v_s^\mathrm{TA}=\unit{1.3\times 10^3}{\meter/\second}$ \cite{trauernicht1986}. The deformation potential for the interaction of paraexcitons with LA-phonons is $D^\mathrm{LA}_\mathrm{P}=\unit{1.68}{\electronvolt}$ \cite{reimann89}. The respective value for the interaction with TA-phonons depends on the applied stress \cite{sobkowiak2014}. For our calculations, we use $D^\mathrm{TA}_\mathrm{P}= \unit{0.235} {\electronvolt}$, which corresponds to a trap depth of about $\unit{2}{\milli\electronvolt}$. The deformation potentials for the orthoexcitons are $D^\mathrm{TA}_\mathrm{O}= \unit{0.247} {\electronvolt}$ \cite{sobkowiak2014} and $D^\mathrm{LA}_\mathrm{O}= \unit{1.7} {\electronvolt}$ \cite{ohara1999a}.
For our model we assume the phonons to be in equilibrium at the lattice temperature $T_\mathrm{Ph}$ at all times. Hence, the phonon distribution function is given by $f_{\mathbf{k}'-\mathbf{k}}^\mathrm{Ph} =[\exp(\hbar\omega_{\mathbf{k}'-\mathbf{k}}/ k_BT_\mathrm{Ph})-1]^{-1}$ with the phonon energy $\hbar\omega_{\mathbf{k}'-\mathbf{k}}=\hbar v_s |\mathbf{k}'-\mathbf{k}|$.
In addition to acoustic phonons, we also consider the interaction of excitons with optical phonons. Assuming a Fr\"ohlich-type interaction, the moments are given by \cite{haug1997}
\begin{eqnarray}
\label{eq:OptPhononMoments}
\Gamma_\mathrm{X-OPh}^{(n,i)} &=&- \frac{2 \pi}{\hbar}\int \frac{d \mathbf{k}d \mathbf{k}'}{(2\pi)^6} \Xi^2(\mathbf{k}'-\mathbf{k}) [\varphi_n(\mathbf{k})-\varphi_n(\mathbf{k}')] \nonumber \\
&\times& \bigg[f^i_{\mathbf{k}}f_{\mathbf{k}'-\mathbf{k}}^\mathrm{Ph}(1+f^i_{\mathbf{k}'}) -(1+f^i_{\mathbf{k}}) (1+f_{\mathbf{k}'-\mathbf{k}}^\mathrm{Ph})f^i_{\mathbf{k}'} \bigg] \nonumber \\
&\times& \delta \left(\varepsilon_\mathbf{k}^i -\varepsilon_{\mathbf{k}'}^i +E_\mathrm{Ph} \right)\,,
\end{eqnarray}
with the squared matrix element
\begin{equation}
\Xi^2(\mathbf{k}'-\mathbf{k})=\frac{2\pi E_\mathrm{Ph}e^2}{4\pi\varepsilon_0}\left(\frac{1}{\varepsilon_\infty}-\frac{1}{\varepsilon_{\omega=0}} \right)\frac{(q_e-q_h)^2}{|\mathbf{k}-\mathbf{k}'|^2} \, .
\end{equation}
The terms $q_e$ and $q_h$ enter the matrix element due to central-cell corrections and are given by
\begin{eqnarray}
q_e &=& \left[ 1 + a_X \alpha_h |\mathbf{k}-\mathbf{k}'|/2 \right]^{-2}\,, \nonumber \\
q_h &=& \left[ 1 + a_X \alpha_e |\mathbf{k}-\mathbf{k}'|/2 \right]^{-2}\,,
\end{eqnarray}
with the mass factors $\alpha_e=m_e/(m_e+m_h)$ and $\alpha_h=m_h/(m_e+m_h)$. For the electron mass we use $m_e=\unit{1.0}{m_0}$ and for the hole mass $m_h=\unit{0.7}{m_0}$.
There are two relevant optical phonons, the LO$_1$- and LO$_2$-phonon. Their energies are $E_\mathrm{Ph,1}=\unit{18.9}{\milli\electronvolt}$ and $E_\mathrm{Ph,2}=\unit{82.5}{\milli\electronvolt}$, respectively.
\subsection{Conversion ($C_\mathrm{Conv}$)}
Ortho- and paraexcitons can convert into each other under involvement of a TA-phonon. This can be modeled using a deformation potential interaction \cite{wolfe2000}. The energetic splitting of the para- and orthoexcitons is an important quantity in the conversion process.
In a strain-free crystal the orthoexcitons lie $\unit{12.12}{\milli\electronvolt}$ above the paraexcitons. Under strain this splitting $\Delta$ depends on space and the applied stress. Given the potential traps for ortho- $V_\mathrm{ext}^\mathrm{O}(\mathbf{r})$ and paraexcitons $V_\mathrm{ext}^\mathrm{P}(\mathbf{r})$, the splitting is simply $\Delta=V_\mathrm{ext}^\mathrm{O}(\mathbf{r})-V_\mathrm{ext}^\mathrm{P}(\mathbf{r})$. In the experiments under consideration it varies between $\unit{7}{\milli\electronvolt}$ and $\unit{9}{\milli\electronvolt}$. This corresponds to temperatures between $\unit{80}{\kelvin}$ and $\unit{100}{\kelvin}$. The effect of this splitting is twofold. It acts as a barrier for the conversion from para- to orthoexcitons, allowing only high-energy paraexcitons to convert. For the inverse process, it results in highly energetic paraexcitons being created by the conversion. Since typical exciton temperatures in the experiments are below $\unit{2}{\kelvin}$, we neglect the para-ortho conversion process. The $n$-th moments of the collision terms for the ortho-para conversion are given by \cite{wolfe2000}
\begin{align}
\label{eq:OPnteMomentAllg}
&\Gamma_\mathrm{Conv}^{(n,\mathrm{P})}=\frac{\pi L^2}{\rho v_\mathrm{TA}} \int \frac{\mathrm{d}^3k}{(2\pi)^3} \int \frac{\mathrm{d}^3k'}{(2\pi)^3} |\mathbf{k}-\mathbf{k}'| \Phi_n(\mathbf{k}) \\
&\times f_{\mathbf{k}'}^\mathrm{O}(1+f_{\mathbf{k}}^\mathrm{P})\Big\{(1+f_{|\mathbf{k}-\mathbf{k}'|}^\mathrm{Ph})\delta(\varepsilon_{\mathbf{k}'}^\mathrm{O}-\varepsilon^\mathrm{Ph}_{|\mathbf{k}-\mathbf{k}'|} -\varepsilon_{\mathbf{k}}^\mathrm{P}) \nonumber \\
& + f_{|\mathbf{k}-\mathbf{k}'|}^\mathrm{Ph} \delta(\varepsilon_{\mathbf{k}'}^\mathrm{O} +\varepsilon^\mathrm{Ph}_{|\mathbf{k}-\mathbf{k}'|} -\varepsilon_{\mathbf{k}}^\mathrm{P}) \Big\} \nonumber \\
&\qquad\;\;= \frac{\pi L^2}{\rho v_\mathrm{TA}} \int \frac{\mathrm{d}^3k}{(2\pi)^3} \int \frac{\mathrm{d}^3k'}{(2\pi)^3} \Phi_n(\mathbf{k}) F(\mathbf{k},\mathbf{k}')\,, \nonumber \\
&\Gamma_\mathrm{Conv}^{(n,\mathrm{O})}=-\frac{\pi L^2}{\rho v_\mathrm{TA}} \int \frac{\mathrm{d}^3k}{(2\pi)^3} \int \frac{\mathrm{d}^3k'}{(2\pi)^3} \Phi_n(\mathbf{k}') F(\mathbf{k},\mathbf{k}')\,, \nonumber
\end{align}
with deformation potential $L=\unit{50}{\milli\electronvolt}$ \cite{wolfe2000}.
\subsection{Lifetime ($C_\tau$)}
Both species of excitons have a finite lifetime. We model this via the collision term $C_\tau^i=-\tilde{f}_\mathbf{k}^i(\mathbf{r},t)/\tau_i$ with a constant lifetime $\tau_i$. The corresponding moments are
\begin{eqnarray}
\label{eq:Gammatau}
\Gamma_\mathrm{\tau}^{(0,i)}&=&-\tilde{n}_i(\mathbf{r},t)/\tau_i\,, \nonumber \\
\mathbf{\Gamma}_\mathrm{\tau}^{(1,i)}&=&-m\tilde{n}_i(\mathbf{r},t)\mathbf{v}_i(\mathbf{r},t)/\tau_i\,, \nonumber \\
\Gamma_\mathrm{\tau}^{(2,i)}&=&-E_i(\mathbf{r},t)/\tau_i \, .
\end{eqnarray}
For the calculations we assume a paraexciton lifetime of $\tau_\mathrm{P}=\unit{650}{\nano\second}$ \cite{stolz2012} and an orthoexciton lifetime of $\tau_\mathrm{O}=\unit{150}{\nano\second}$ \cite{schwartz2012}. However, the effective lifetime of the orthoexcitons is limited by the rapid conversion to paraexcitons rather than by this process.
\subsection{Two-body decay}
In experiments, an effective loss mechanism that scales with the square of the density has been observed. Two different explanations have been put forward for this effect. One possibility is an Auger-like two-body decay of the excitons: in this process, one exciton recombines while ionising another. The second explanation attributes the loss mechanism to the formation of biexcitons, which in turn undergo an Auger-like decay themselves. In both cases, two excitons are destroyed and an electron-hole pair is created; the latter rebinds to form a high-energy exciton.
To model the Auger-like two-body decay we use a $\mathbf{k}$-averaged Auger rate $A$ as commonly employed to explain experimental data \cite{ohara1999,yoshioka2010,schwartz2012}. The loss of excitons of the $i$-th species can then be described by the collision term
\begin{equation}
C_\mathrm{Auger}^{i}=-2A_{ii}\tilde{n}_i(\mathbf{r},t)\tilde{f}^i_\mathbf{k}(\mathbf{r},t)- A_{ij}\tilde{n}_j(\mathbf{r},t)\tilde{f}^i_\mathbf{k}(\mathbf{r},t)\,,
\end{equation}
with constant Auger rates $A_{ij}$.
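To gauge the relative weight of the linear and quadratic loss channels, one can integrate the homogeneous single-species rate equation $\dot{\tilde{n}} = -\tilde{n}/\tau - 2A\tilde{n}^2$. A minimal forward-Euler sketch in Python, with the paraexciton parameters quoted in the text and an illustrative initial density:
\begin{verbatim}
# Homogeneous single-species decay with linear (lifetime) and
# quadratic (Auger) losses: dn/dt = -n/tau - 2*A*n^2, integrated
# with a simple forward-Euler step; n(0) is illustrative.
tau = 650.0          # ns, paraexciton lifetime
A = 2e-18            # cm^3/ns, A_PP as quoted in the text
n = 1e17             # cm^-3, illustrative initial density

dt, t = 0.01, 0.0    # ns
while t < 100.0:
    n += dt * (-n / tau - 2.0 * A * n * n)
    t += dt
print("n(100 ns) = %.3e cm^-3" % n)
# At n = 1e17 cm^-3 the Auger rate 2*A*n = 0.4/ns exceeds the
# lifetime rate 1/tau = 0.0015/ns by more than two orders of
# magnitude, so the quadratic channel dominates at high density.
\end{verbatim}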
\subsubsection{Estimation of the Auger coefficient}
The Auger coefficient for the nonstressed crystal can be calculated based on \cite{baym1996}. Two possible decay channels are of particular interest: direct and phonon-assisted Auger scattering. In an unstrained crystal both processes require the recombining exciton to be an orthoexciton. While symmetry allows the transition to orthoexcitons via quadrupole coupling, the direct transition of paraexcitons is strictly forbidden. In the case of the phonon-assisted transition, the creation of orthoexcitons under the involvement of a $\Gamma_3^-$-phonon is the dominant transition channel, while paraexcitons solely couple to the weak $\Gamma_5^-$-phonon mode, thus greatly diminishing the recombination rate of paraexcitons. Indeed, the oscillator strength of the $\Gamma_3^-$-phonon assisted transition is predominant, which justifies restricting the treatment to it.
The phonon-assisted scattering matrix element can be expressed in second-order perturbation theory, effectively splitting it into two separate processes: the phonon transition of the later recombining orthoexciton into an intermediate state, and the subsequent direct scattering of the intermediate-state exciton with a second exciton of arbitrary species
\begin{eqnarray}\label{eq:fs01}
&&M_\lambda^\mathrm{(pa)} =\\
&&\frac{\langle \Phi_\mathrm{1Sy}(\mathbf{K})|h_{\nu,\mathbf{Q}} |\Phi_\lambda (\mathbf{K-Q})\rangle \;M_\lambda^\mathrm{(d)}(\mathbf{K-Q},\mathbf{P},\mathbf{k_e},\mathbf{k_h})}{E_{\mathrm{1Sy}}(\mathbf{K}) - E_\lambda (\mathbf{K}-\mathbf{Q})} \,.\nonumber
\end{eqnarray}
In this case, the phonon mode $\nu$ belongs to the $\Gamma_3^-$ phonon, and the intermediate state $\lambda$ is in principle any S-exciton state of the blue series of Cu$_2$O. In \cite{schoene2017} it was shown that, for the intermediate blue series, states with principal quantum numbers beyond the 1S state effectively do not participate; thus the treatment of the blue 1S state suffices. The phonon transition element for nonpolar optical phonons is generally expressed via an optical deformation potential
\begin{eqnarray}\label{eq:ds02}
&&\langle \Phi_\mathrm{1Sy}(\mathbf{K})|h_{3,\mathbf{Q}} |\Phi_\mathrm{1Sb} (\mathbf{K-Q})\rangle \nonumber \\
&&\simeq \mathcal{S}^\mathrm{(1Sb,1Sy)}(Q)\, D_{3;68} (Q) \sqrt{\frac{\hbar}{2\rho\,\Omega\, \omega_{3}}}\,.
\end{eqnarray}
The first term $\mathcal{S}^\mathrm{(1Sb,1Sy)}(Q)$ is a convolution of the yellow and blue exciton 1S wave functions in momentum space. The second term $D_{3;68} (Q)$ is the optical deformation potential between the $\Gamma_6^+$ and $\Gamma_8^-$ conduction bands via the $\Gamma_3^-$ phonon. The square root contains the material density $\rho$, crystal volume $\Omega$ and the phonon energy $\hbar\omega_{3}$. For small phonon momenta $\mathbf{Q}$ the deformation potential can be expanded into a Taylor series as
\begin{equation}\label{eq:fs03}
D_{\lambda,ij}(Q) = \; D_{\lambda,ij}^{(0)} \;+\; D_{\lambda,ij}^{(2)}\,Q^2\; + \ldots \;.
\end{equation}
This approach differs fundamentally from the treatment of \cite{baym1996} since intermediate states are assumed to be exciton states and the deformation potential is momentum dependent.
Similarly, the direct scattering matrix element $M_\mathrm{1Sb}^\mathrm{(d)}$ features the wave function of the intermediate 1S blue state. The direct scattering can be separated into two processes: in one, the energy of the recombined exciton is transferred via Coulomb interaction to the electron; in the other, the energy is transferred to the hole. Both terms are additive and yield the collective direct scattering matrix element
\begin{eqnarray} \label{eq:fs04}
M^\mathrm{(d)}_\mathrm{1Sb}
&=& \frac{\hbar}{m_0}\,V_\mathrm{eff}(\mathbf{K\!-\!Q})\\
&&\times\sum_\mathbf{q} \varphi^{1\mathrm{Sb}}_{\mathbf{q}-(\mathbf{K}-\mathbf{Q})/2}
\frac{\langle u_{\mathrm{7v},\mathbf{q}}|(\mathbf{K}-\mathbf{Q})\cdot \mathbf{p}| u_{\mathrm{8c},\mathbf{q}}\rangle}{E_\mathrm{8c}(\mathbf{q}) - E_\mathrm{7v}(\mathbf{q})} \nonumber \\
&&\times \left[ \varphi^{\mathrm{1Sy}}_{\mathbf{k}_\mathrm{e}-\mathbf{P}/2 - \mathbf{K}+\mathbf{Q}} - \varphi^{\mathrm{1Sy}}_{\mathbf{k}_\mathrm{e}-\mathbf{P}/2} \right] \,\delta_{\mathbf{k}_\mathrm{e}+\mathbf{k}_\mathrm{h};\mathbf{K-Q+P}}\,.\nonumber
\end{eqnarray}
Here, $\varphi_\mathbf{k}^i$ are the exciton envelope functions and $V_\mathrm{eff}(\mathbf{K})$ is the effective Coulomb interaction, both in momentum space. The term $\langle u_{\mathrm{7v},\mathbf{q}}|(\mathbf{K}-\mathbf{Q})\cdot \mathbf{p}| u_{\mathrm{8c},\mathbf{q}}\rangle$ denotes the dipole transition matrix element between the $\Gamma_8^-$ conduction and the $\Gamma_7^+$ valence band.
Expressing the transition probability for the phonon-assisted (pa) process in Fermi's golden rule yields
\begin{eqnarray}\label{eq:fs05}
&& \Gamma_\mathrm{Auger}^\mathrm{(pa)} = \frac{2\pi}{\hbar} \sum_{\mathbf{Q},\mathbf{K},\mathbf{P},\mathbf{k_e},\mathbf{k_h}} \left| M_\mathrm{1Sb}^\mathrm{(pa)}(\mathbf{Q},\mathbf{K},\mathbf{P},\mathbf{k_e},\mathbf{k_h}) \right|^2 \\
&&\times\delta \left( E_\mathrm{1Sy}(\mathbf{K})\! +\! E_\mathrm{1Sy}(\mathbf{P})\! -\! E_\mathrm{6c}(\mathbf{k_e})\! -\! E_\mathrm{7v}(\mathbf{k_h})\! -\! \hbar\omega_\lambda (\mathbf{Q}) \right). \nonumber
\end{eqnarray}
At low temperatures, the exciton momenta are considerably smaller than the momenta of the ionised particles, since the transferred energy is on the order of the gap energy; this justifies the approximation $\mathbf{K},\mathbf{P}\rightarrow 0$. Eq.\ (\ref{eq:fs05}) then simplifies to three sums, one of which is eliminated by the Kronecker delta of Eq.\ (\ref{eq:fs04}). The remaining sums are solved numerically. The transition matrix element as well as the momentum-dependent deformation potential are known from fits of the $\Gamma_3^-$ phonon assisted absorption band edge \cite{schoene2017}. The resulting Auger coefficient is
\begin{align}\label{eq:fs06}
A_\mathrm{Auger}^\mathrm{(pa)} \; = \; \Omega\,\Gamma_\mathrm{Auger}^\mathrm{(pa)} \; = \; 8.62\times 10^{-20} \,\mathrm{cm^3 ns^{-1}} \,,
\end{align}
which is closer to experimental values than the one calculated in \cite{baym1996}, but still several orders of magnitude below the expected value. It should be noted, however, that this derivation is made for an unstressed crystal, which, e.g., drastically inhibits the paraexciton recombination channel. For excitons in a trapped system, the result (\ref{eq:fs06}) should be regarded as a lower bound for the possible Auger coefficient.
\subsubsection{Rate equations}
We assume that all electron-hole pairs rebind to form new excitons, which are randomly distributed among the four possible exciton states (one para- and three orthoexciton states). Therefore, one quarter of the newly formed excitons is fed into the paraexciton component and half into the orthoexciton component, since the latter stands for ortho($+$)- and ortho($-$)excitons. In this setup the pump laser cannot create ortho($0$)excitons; therefore, the rebinding of electron-hole pairs is the only source of ortho($0$)excitons in the system. Due to the repulsive potential they are forced out of the trap. From the gradient of the potential one can estimate that it takes approximately $\unit{15}{\nano\second}$ for the ortho($0$)excitons to reach the fringe of the trap if they start from its centre. The conversion lifetime, on the other hand, is about $\unit{4-5}{\nano\second}$. Hence, the ortho($0$)excitons are assumed to almost completely convert to paraexcitons on their way out of the trap. To take this into account, we refeed one quarter of the newly created excitons into the paraexciton component, smeared out over the whole trap, with an energy corresponding to the splitting.
The newly formed excitons are assumed to be at rest and are relaxed into partial local equilibrium first before being introduced into the hydrodynamic equations. Their initial energy corresponds to their binding energy of $E_\mathrm{B}^\mathrm{P}=\unit{151.36}{\milli\electronvolt}$ for para- and $E_\mathrm{B}^\mathrm{O}=\unit{139.24}{\milli\electronvolt}$ for orthoexcitons. The densities ($\tilde{n}_\mathrm{Auger}^i$) and energies ($E_\mathrm{Auger}^i$) after the initial relaxation are calculated in the same fashion as in the case of excitons created by the pump laser (by solving a homogeneous Boltzmann equation). Hence, the moments are given by
\begin{eqnarray}
\label{eq:AugerMoments}
\Gamma^{(0,i)}_\mathrm{Auger} &=& -2 A_{ii}\tilde{n}_i^2(\mathbf{r},t) - A_{ij} \tilde{n}_i(\mathbf{r},t) \tilde{n}_j(\mathbf{r},t) \nonumber \\ &&+ \tilde{n}^i_\mathrm{Auger}(\mathbf{r},t) \, , \nonumber \\
\mathbf{\Gamma}^{(1,i)}_\mathrm{Auger} &=& -m\tilde{n}_i(\mathbf{r},t)\mathbf{v}_i(\mathbf{r},t) [2A_{ii}\tilde{n}_i(\mathbf{r},t) + A_{ij} \tilde{n}_j(\mathbf{r},t)] \, , \nonumber \\
\Gamma^{(2,i)}_\mathrm{Auger} &=& -2A_{ii}\tilde{n}_i(\mathbf{r},t) E_i(\mathbf{r},t) -A_{ij}\tilde{n}_j(\mathbf{r},t) E_i(\mathbf{r},t) \nonumber \\ &&+ E_\mathrm{Auger}^i \, .
\end{eqnarray}
For the Auger rates $A_{ij}$ we use the values reported in Ref.\ \cite{stolz2012}: $A_\mathrm{PP}=\unit{2 \times 10^{-18}}{\centi\meter^3/\nano\second}$, $A_\mathrm{OO}=\unit{4.9 \times 10^{-17}}{\centi\meter^3/\nano\second}$, and $A_\mathrm{PO}=(A_\mathrm{PP}+A_\mathrm{OO})/2$.
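As an illustration, the source terms of Eqs.~(\ref{eq:AugerMoments}) can be evaluated on a spatial grid as in the following sketch; the arrays are hypothetical, and the mixed-species term in the momentum moment is written with the same relative sign as in the density and energy moments.
\begin{verbatim}
import numpy as np

A_PP, A_OO = 2.0e-18, 4.9e-17        # cm^3/ns, values quoted above
A_PO = 0.5 * (A_PP + A_OO)

def auger_moments(n_i, n_j, v_i, E_i, A_ii, A_ij, m, n_src, E_src):
    # Source terms of Eqs. (eq:AugerMoments) for species i with partner j;
    # n_src and E_src are the density and energy fed back by rebound pairs.
    g0 = -2.0 * A_ii * n_i**2 - A_ij * n_i * n_j + n_src
    g1 = -m * n_i * v_i * (2.0 * A_ii * n_i + A_ij * n_j)
    g2 = -2.0 * A_ii * n_i * E_i - A_ij * n_j * E_i + E_src
    return g0, g1, g2
\end{verbatim}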
When the two-body decay is attributed to the formation of biexcitons, Eqs.\ (\ref{eq:AugerMoments}) have to be modified. Instead of Auger rates, temperature-dependent capture coefficients $C_{ij}$ are used to model the process \cite{wolfe2014}:
\begin{eqnarray}
C_\mathrm{PP}&=&\frac{\unit{1.4 \times 10^{-14}}{\centi\meter^3\kelvin/\nano\second}}{T_\mathrm{P}}\,, \nonumber \\
C_\mathrm{OO}&=&\frac{\unit{4.7 \times 10^{-15}}{\centi\meter^3\kelvin/\nano\second}}{T_\mathrm{O}}\,, \quad
C_\mathrm{PO}=0\,.
\end{eqnarray}
Excitons created by rebinding of electron-hole pairs are treated the same way as before. Therefore, the moments take the form
\begin{eqnarray}
\label{eq:BiExcMoments}
\Gamma^{(0,i)}_\mathrm{Biexc} &=& -2 C_{ii}\tilde{n}_i^2(\mathbf{r},t) + \tilde{n}^i_\mathrm{Biexc}(\mathbf{r},t) \, , \nonumber \\
\mathbf{\Gamma}^{(1,i)}_\mathrm{Biexc} &=& -2C_{ii}m\tilde{n}^2_i(\mathbf{r},t) \mathbf{v}_i(\mathbf{r},t) \, , \nonumber \\
\Gamma^{(2,i)}_\mathrm{Biexc} &=& -2C_{ii}\tilde{n}_i(\mathbf{r},t) E_i(\mathbf{r},t)+ E_\mathrm{Biexc}^i \, .
\end{eqnarray}
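A corresponding sketch for the biexciton channel, using the temperature-dependent capture coefficients quoted above (temperatures in kelvin, coefficients in $\centi\meter^3/\nano\second$; arrays again hypothetical):
\begin{verbatim}
def capture_coefficients(T_para, T_ortho):
    # C_PP, C_OO, C_PO in cm^3/ns with temperatures in K; C_PO vanishes.
    return 1.4e-14 / T_para, 4.7e-15 / T_ortho, 0.0

def biexc_moments(n_i, v_i, E_i, C_ii, m, n_src, E_src):
    # Source terms of Eqs. (eq:BiExcMoments).
    g0 = -2.0 * C_ii * n_i**2 + n_src
    g1 = -2.0 * C_ii * m * n_i**2 * v_i
    g2 = -2.0 * C_ii * n_i * E_i + E_src
    return g0, g1, g2
\end{verbatim}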
\section{Results}
For the calculations we use cw excitation and $a^s_\mathrm{PP}=2.1 \, a_X$, $a^s_\mathrm{OO} =a^s_\mathrm{PO} =(2/3)\,a^s_\mathrm{PP}$ \cite{shumway01} with a Bohr radius of $a_X=\unit{0.7}{\nano\meter}$.
For the discussion it is useful to introduce the exciton temperature in the trap centre $T^0_i$ and the mean exciton temperature
\begin{equation}
\langle T_i \rangle = \frac{1}{N_i} \int d\mathbf{r}\, T_i(\mathbf{r},t) \, \tilde{n}_i(\mathbf{r},t) \, .
\end{equation}
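Numerically, this is a density-weighted average over the simulation grid; a minimal sketch assuming a uniform volume element per cell:
\begin{verbatim}
import numpy as np

def mean_temperature(T, n, dV):
    # <T_i> = (1/N_i) * integral dr T_i(r) n_i(r), discretised on a grid
    # with volume element dV per cell.
    N = np.sum(n) * dV
    return np.sum(T * n) * dV / N
\end{verbatim}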
Other important temperatures are that of the helium bath, $T_\mathrm{B}$, that of the phonons (i.e., of the crystal lattice), $T_\mathrm{Ph}$, and the one extracted from fitting the experimental spectra, $T_\mathrm{S}$.
In the following we first present and discuss some general results using the Auger effect as the two-body decay mechanism. Afterwards we compare results using Auger effect, biexciton formation and no two-body loss mechanism. Last, we compare the theoretical results of our model with experimental data.
\subsection{Stationary state}
A typical result for the temperature and density of the ortho- and paraexcitons is shown in Fig.\ \ref{fig:DichteTemp2D}. The two exciton species behave quite differently. The situation for the paraexcitons is straightforward. Hot excitons with temperatures of up to $\unit{3.5}{\kelvin}$ are created around $z=\unit{100}{\micro\meter}$ due to conversion of laser-excited orthoexcitons. From there, they drift towards the trap centre, while gradually cooling down. They accumulate inside the trap and reach temperatures of around $\unit{0.5}{\kelvin}$. The highest density of the paraexcitons is found in the trap centre. Since the effective lifetime of the orthoexcitons is much shorter, their situation is different. Most of the orthoexcitons created by the laser do not reach the trap centre; the highest density is found in the centre of the laser spot. The orthoexcitons inside the trap mainly originate from recombining electron-hole pairs and are therefore very hot. The orthoexciton temperature inside the trap is around $\unit{1.5}{\kelvin}$, a factor of 3 higher than the paraexciton temperature.
\begin{figure}
\vspace*{0.2cm}
\includegraphics[width=\linewidth]{DichtePara.pdf}
\vspace*{-0.25cm}
\includegraphics[width=\linewidth]{DichteOrtho.pdf}
\caption{Density in $\centi\meter^{-3}$ (left column) and temperature in $\kelvin$ (right column) of the para- (top row) and orthoexcitons (bottom row) in the stationary state with $T_\mathrm{Ph}=\unit{0.25}{\kelvin}$ and $P_\mathrm{L}=\unit{69.52}{\micro\watt}$ as a function of $\rho$ (radial direction perpendicular to applied strain) and $z$ (direction of the strain).}
\label{fig:DichteTemp2D}
\end{figure}
As shown in Fig.\ \ref{fig:NTvonP}, there are many more paraexcitons in the system than orthoexcitons for all pump powers considered here. However, the ratio of para- to orthoexcitons declines as the pump power increases. This is due to the growing influence of the Auger effect: the destroyed excitons are mainly paraexcitons, but half of the rebinding electron-hole pairs become orthoexcitons, hence the balance between ortho- and paraexcitons in the system shifts. The growing influence of the Auger effect is also visible in the development of the temperatures. For the paraexcitons the mean temperature as well as the temperature in the trap centre increase strongly, while their difference shrinks. This is due to the creation of high-energy excitons by the Auger effect, which acts as local heating where the density is high. The orthoexciton temperatures do not change drastically, since the orthoexcitons are already quite hot and the cooling by phonons is much more efficient at high temperatures.
\begin{figure}
\includegraphics[width=\linewidth]{bild_sfb_2.pdf}
\caption{Various quantities in the stationary state as function of the pump power $P_\mathrm{L}$ at $T_\mathrm{Ph}=\unit{0.25}{\kelvin}$ (black dashed line). Left: particle number of para- (blue solid) and orthoexcitons (red dashed), and ratio of para- to orthoexciton particle numbers (black dotted). Right: mean exciton temperature $\langle T_i \rangle$ (crosses) and exciton temperature in the trap centre $T^0_i$ (circles and ovals) for para- (blue, solid and short-dashed) and orthoexcitons (red, dash-dotted and long-dashed).}
\label{fig:NTvonP}
\end{figure}
\subsection{Two-body loss mechanism}
In this section we analyse how our results depend on the implementation of the two-body loss mechanism. We consider the Auger effect and biexciton formation using the parameters given in Sec.\ \ref{sec:collision} and for reference the case of no two-body loss mechanism.
\begin{figure}
\includegraphics[width=\linewidth]{NVervonP.pdf}
\vspace*{-0.25cm}
\includegraphics[width=\linewidth]{FrakvonP.pdf}
\caption{Particle number of para- (top left) and orthoexcitons (top right) and the ratio of para- to orthoexciton numbers (bottom) in the stationary state as a function of pump power $P_\mathrm{L}$, using Auger decay (blue circles), biexciton formation (red diamonds), and no two-body loss mechanism (black crosses), with $T_\mathrm{Ph}=\unit{1.0}{\kelvin}$.}
\label{fig:NVervonP}
\end{figure}
The exciton numbers and the ratio of para- to orthoexcitons are given in Fig.\ \ref{fig:NVervonP}. Evidently, the loss due to the Auger effect is marginal over a wide range of pumping powers. Only at high laser powers does the number of paraexcitons in the Auger-effect model drop significantly. Using the biexciton model leads to paraexciton numbers one order of magnitude lower compared to the other cases, while the orthoexciton numbers only differ at higher pumping powers. The para-to-ortho ratio behaves differently in all three cases. While the ratio stays almost constant without a two-body loss mechanism, it drops significantly with the Auger effect. For the biexciton model the ratio starts out much lower and drops only slightly. This qualitatively different behaviour is also reflected in the mean temperature of the excitons depicted in Fig.\ \ref{fig:TvervonP}. The discrepancy between the bath temperature and the exciton temperature without a two-body loss mechanism is due to the heat introduced into the system by the laser and by the ortho-para conversion. The increase in temperature towards higher laser powers in the Auger-effect model is due to the hot excitons created by this process. The temperatures in the biexciton model are considerably higher over the whole range of pumping powers. These strong differences between the two results should be observable in actual experiments. However, a comparison with experimental results is difficult, since only the luminescence spectrum is available.
Assuming a linear relation between particle number and luminescence intensity, one can normalise the intensities to the value at the lowest pumping power. Doing the same for the paraexciton numbers results in the left graph of Fig.\ \ref{fig:VerNorm}. Again the three cases behave quite differently. Another quantity that should be observable in actual experiments is the extension of the thermal cloud. We define it as the full width between the two points of half maximum along the $\rho$- or $z$-direction, $\sigma^{\rho/z}$. The normalised result for the paraexcitons, $\sigma^\rho/\sigma^\rho_0$, is shown in Fig.\ \ref{fig:VerNorm}. The extension of the thermal cloud also displays a completely different behaviour for the three cases. Hence, comparing the theoretical results of our model with experimental data should help to determine how realistic the parameters are.
\begin{figure}
\includegraphics[width=\linewidth]{TVervonP.pdf}
\caption{Mean exciton temperature $\langle T_i \rangle$ for para- (left) and orthoexcitons (right) for the same situations as in Fig.\ \ref{fig:NVervonP}.}
\label{fig:TvervonP}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\linewidth]{VerNorm.pdf}
\caption{Normalised number of paraexcitons and extension of the thermal cloud in $\rho$-direction (full width at half maximum) for the same situations as in Fig.\ \ref{fig:NVervonP}.}
\label{fig:VerNorm}
\end{figure}
\subsection{Comparison with experiments}
Comparing the experimentally determined luminescence intensities of the para- and orthoexcitons \cite{stolz2012}, one might be able to differentiate between these qualitatively different behaviours depicted in Fig.\ \ref{fig:NVervonP}.
The trap depth for the paraexcitons in the experiments was $\unit{3.5}{\milli\electronvolt}$. The helium bath temperature was measured and ranged between 0.26 and $\unit{0.31}{\kelvin}$, depending on the pumping power. For our calculations we assumed the crystal to be in equilibrium with the bath, $T_\mathrm{Ph}=T_\mathrm{B}$. In the top part of Fig.\ \ref{fig:VergExp} we compare the mean temperature of the paraexcitons according to theory with the spectral temperature obtained from experimental data. The temperatures predicted by the biexciton model are considerably higher than the spectral temperatures. The results using the Auger effect, on the other hand, agree quite well with the experimental data. However, a growing discrepancy appears at higher pump powers. This could be due to a heating of the crystal that is stronger than the rise in the bath temperature suggests.
A comparison of the ratio of para- and orthoexcitons between experiment and theory is difficult for two reasons. First, the luminescence intensity of the orthoexcitons is indistinguishable from the noise at the lowest pumping powers. Second, there is an unknown factor between the intensities of the different species. To account for this, we subtract the first value from the theoretical data and multiply the result by a factor chosen to match the theoretical to the experimental data. Note that this is the only fit parameter. The results in the lower part of Fig.\ \ref{fig:VergExp} clearly show a strong discrepancy between the experimental data and the results using the biexciton formation model. The latter show a qualitatively different behaviour at high pumping powers than the experimental results. The results using the Auger effect provide the correct qualitative behaviour, even though there are some deviations at low and high pumping powers.
\begin{figure}
\includegraphics[width=\linewidth]{TvonPExp.pdf}
\includegraphics[width=\linewidth]{NFrakvonPExp.pdf}
\caption{Top: mean paraexciton temperature $\langle T_P \rangle$ using biexciton formation (black diamonds) and Auger effect (blue crosses) compared to the spectral temperature determined from experimental data. The black crosses represent the helium bath temperature. Bottom: ratio of ortho- to paraexciton numbers (symbols as above) compared to the ratio of the integrated luminescence intensity of the ortho- and paraexcitons (experimental data).}
\label{fig:VergExp}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{NPvonPExp.pdf}
\caption{Normalised paraexciton numbers using biexciton formation (black diamonds) and Auger effect (blue crosses) compared to the normalised integrated luminescence intensity (red circles; experimental data). The black dashed line represents the expected behaviour without two-body loss mechanism.}
\label{fig:VergExp1}
\end{figure}
In Fig.\ \ref{fig:VergExp1} we compare the normalised paraexciton number with the normalised integrated luminescence intensity from the experimental data. In the first half of the plot the experimental data follow the dashed line quite well, indicating a very weak or no influence of the two-body loss mechanism. For these pumping powers both theoretical models deviate from the experimental data, with the Auger-effect results being closer. In the second half, however, the experimental values agree quite well with the Auger-effect model. The results for the biexciton model are slightly lower compared to the other two. In conclusion, the results using the Auger effect display a reasonable agreement with the experimental data, while the biexciton model seems to be quantitatively, and in one case even qualitatively, off. However, we want to emphasise that it cannot be concluded from this result that the proposed mechanism is wrong. It simply means that the suggested capture coefficients seem to be too high to explain the experimental results at these low temperatures.
\subsection{Auger effect and BEC}
Although modelling the two-body loss mechanism via the Auger effect reproduces the experimental results qualitatively quite well, a quantitative description of the Auger effect in a strained crystal is still pending. In particular, the possibility of reaching the conditions for BEC, i.e., high enough densities and low enough temperatures, strongly depends on the absolute strength of the Auger coefficient. This is illustrated by Fig.\ \ref{fig:myeff} \cite{sobkowiak2014a}. The vanishing of the effective chemical potential of the paraexcitons, $\mu_\mathrm{eff}\equiv\mu-U=0$, marks the condensation boundary. There is evidently a critical value of the Auger coefficient of about $\unit{5\times 10^{-19}}{\centi\meter^3/\nano\second}$ above which no BEC is possible at reasonable laser powers.
\begin{figure}
\includegraphics[width=\linewidth]{bild_sfb_6.pdf}
\caption{Effective chemical potential $\mu_\mathrm{eff}$ vs.\ laser power $P_\mathrm{L}$ for various Auger coefficients $A$. $A_0=\unit{2.0\times10^{-18}}{\centi\meter^3/\nano\second}$ and $A/A_0=1$ (blue solid line), 0.5 (green dashed), 0.25 (red dotted), 0.1 (magenta dash-dotted), and 0.05 (dark blue long-dashed). The bath temperature is $T_\mathrm{B}=\unit{0.037}{\kelvin}$.}
\label{fig:myeff}
\end{figure}
\section{Conclusion and Outlook}
The present theoretical approach is capable of describing the dynamics of a trapped multicomponent exciton gas in Cu$_2$O.
For an effective two-component system we analysed the different behaviour of para- and orthoexcitons in the course of their drift towards the trap centre. We showed in particular that the number of paraexcitons always exceeds that of the orthoexcitons substantially, while the temperature of the latter species is higher by a factor of 3 than that of the former. Particle number as well as temperature show the growing influence of the two-body loss mechanism with increasing pump power.
The comparison of two models for the two-body decay process -- Auger effect vs.\ transient biexciton formation -- shows reasonable agreement between theoretical and experimental data for the first model, while there are significant deviations for the second. This analysis does not, however, rule out the biexciton model completely, but only the high capture coefficients suggested for it.
Since theoretical and experimental results are compared on the basis of the light emission by the excitons, a comprehensive theory of the excitonic decay luminescence is still needed.
\begin{acknowledgments}
We would like to thank G.\ Manzke, W.-D.\ Kraeft, and Th.\ Bornath for many fruitful discussions. This work was supported by the Deutsche Forschungsgemeinschaft via Collaborative Research Center SFB 652 (project B14).
\end{acknowledgments}
\section{Introduction}
While the idea of cosmic inflation was introduced about 40~years ago to solve inherent problems with the canonical hot big-bang model \citep{Brout77, Starobinsky80, Kazanas80, Sato81,Guth80, Linde81, Albrecht82, Linde83}, attention quickly focused on using it as a means to generate cosmological perturbations from quantum fluctuations \citep{Mukhanov81,Mukhanov82,Hawking82,Guth82,Starobinsky82,Bardeen83,Mukhanov85}. These perturbations include a tensor component (i.e., gravitational waves) as well as the scalar component (i.e., density variations). Inflationary gravitational waves entering the horizon between the epoch of recombination and the present day generate a tensor contribution to the large-scale cosmic microwave background (CMB) anisotropy. Hence, primordial tensor fluctuations contribute to the CMB anisotropies, both in temperature ($T$) and in polarization ($E$ and $B$ modes; \citealt{Seljak97a,Kamionkowski97,Seljak97b}).
As described in \citet{planck2016-l06} and \citet{planck2016-l10}, the comoving wavenumbers of tensor modes probed by the CMB temperature anisotropy power spectrum have $k \la 0.008\,\mathrm{Mpc}^{-1}$, with very little sensitivity to higher wavenumbers because gravitational waves decay on sub-horizon scales. The corresponding multipoles in the harmonic domain are $\ell \la 100$, for which the scalar perturbations dominate with respect to tensor modes in temperature. The tensor component can be fitted together with the scalar one, and the precision of the \textit{Planck}\ constraint is limited by the cosmic variance of the large-scale anisotropies.
In polarization, the $EE$ and $TE$ spectra also contain a tensor signal coming from the last-scattering and reionization epochs. The $BB$ power spectrum, however, is treated differently when determining the tensor contribution, since the model does not predict any primordial scalar fluctuations in $BB$. As a consequence, a primordial $B$-mode signal would be a direct signature of tensor modes. However, depending on the amplitude of the tensor-to-scalar ratio, such a signal may be masked by $E$-mode power that is transformed to $B$-mode power through lensing by gravitational potentials along the line of sight \citep[so-called `$BB$ lensing,'][]{Zaldarriaga98}. $BB$ lensing has been measured with high accuracy by \textit{Planck}\ in both harmonic \citep{planck2016-l08} and map \citep{planck2015-XLI} domains, as well as by ground-based observatories POLARBEAR \citep{polarbear17}, SPTpol \citep{Sayre20}, and ACTPol \citep{Choi20}. But a primordial $BB$ tensor signal has not been detected yet.
The scalar and tensor CMB angular power spectra are plotted in Fig.~\ref{fig:cl_tensor} for the \textit{Planck}\ 2018 cosmology and for two values of the tensor-to-scalar ratio, namely $r = 0.1$ and $r=0.01$. For a further discussion of the tensor-to-scalar ratio and its implications for inflationary models, see \citet{planck2013-p17}, \citet{planck2014-a24}, and \citet{planck2016-l10}. We note that the signal from tensor modes in $EE$ is similar to that in $BB$ modes, which makes $EE$ (in particular at low multipoles) an important data set for tensor constraints. Indeed, limits set by cosmic variance alone for full-sky spectra are $\sigma_r(TT)=0.072$, $\sigma_r(EE)=0.023$, and $\sigma_r(BB)=0.000057$ for $r=0$.
In this paper, we make use of a polarized $E$-$B$ likelihood, which consistently includes the correlated polarization fields $E$ and $B$, and covers the range of multipoles where tensor modes can be constrained using \textit{Planck}\ data (i.e., from $\ell=2$ to $\ell=150$).
\begin{figure}[!htbp]
\centering
\includegraphics[width=\columnwidth]{cl_scalar_tensors.pdf}
\caption{Scalar (thick solid lines) versus tensor spectra for $r=0.1$ (dashed lines) and $r=0.01$ (dotted lines). Spectra for $TT$ are in black, $EE$ in blue, and $BB$ in red. The red solid line corresponds to the signal from $BB$ lensing.}
\label{fig:cl_tensor}
\end{figure}
At present the tightest $B$-mode constraints on $r$ come from the BICEP/Keck measurements (BK15; \citealt{Bicep2018limit}), which cover approximately $400\,{\rm deg}^2$ centred on ${\rm RA}=0^{\rm h}$, ${\rm Dec}=-57\pdeg5$. These measurements probe the peak of the $B$-mode power spectrum at around $\ell=100$, corresponding to gravitational waves with $k\approx 0.01\,\mathrm{Mpc}^{-1}$ that enter the horizon during recombination (i.e., somewhat smaller than the scales contributing to the \textit{Planck}\ temperature constraints on $r$). The results of BK15 give a limit of $r < 0.07$ at 95\,\% confidence, which tightens to $r < 0.06$ in combination with \textit{Planck}\ temperature and other data sets.
\citet{planck2016-l05} presented \textit{Planck}\ $B$-mode constraints from the 100- and 143-GHz HFI channels with a 95\,\% upper limit of $r < 0.41$ (at a somewhat larger pivot scale, as described in the next section), using only a limited number of multipoles around the so-called `reionization bump' ($2\leq\ell\leq29$).
Using \textit{Planck}\ {\tt NPIPE}\ maps \citep{planck2020-LVII}, called \textit{Planck}\ Release 4 (PR4), we are now able to constrain the $BB$ power spectrum for a much larger number of modes, including both the reionization bump at large angular scales ($\ell \la 30$) and the so-called `recombination bump' at intermediate scales ($50\la\ell\la 150$).
In this paper, we first describe, in Sect.~\ref{sec:model}, the cosmological model used throughout the analysis. We then detail the data and the likelihoods in Sect.~\ref{sec:data_lik}. Section~\ref{sec:tt} focuses on constraints from $TT$ and in particular the impact of the \mbox{low-$\ell$}\ data in temperature. Section~\ref{sec:bb} gives constraints from the $BB$ angular power spectrum using \textit{Planck}\ data, while results from the full set of polarization power spectra are given in Sect.~\ref{sec:pol}. In Sect.~\ref{sec:combined}, we combine all data sets to provide the most robust constraints on $BB$ coming from \textit{Planck}\ and in combination with other CMB data sets, such as the results from the BICEP/Keck Collaboration. Finally, we provide details of several parts of our analysis in a set of appendices, specifically describing the transfer function for $BB$, the {\tt HiLLiPoP}\ likelihood, large-scale polarized power spectra, the cross-spectrum correlation matrix, comparison between PR3 and PR4, robustness tests, triangle plots for {$\rm{\Lambda CDM}$}+r parameters, and comparison with other $BB$ spectrum measurements.
\section{Cosmological model}
\label{sec:model}
We use the base-{$\rm{\Lambda CDM}$}\ model, which has been established over the last couple of decades to be the simplest viable cosmological model, in particular with the \textit{Planck}\ results \citep[e.g.,][]{planck2016-l06}.
In this model, we assume purely adiabatic, nearly scale-invariant perturbations at very early times, with curvature-mode (scalar) and tensor-mode power spectra parameterized by
\begin{eqnarray}
\mathcal{P}_{\rm s}(k) &=& A_{\rm s} \left(\frac{k}{k_0}\right)^{n_{\rm s}-1}, \label{eq:Ps} \\
\mathcal{P}_{\rm t}(k) &=& A_{\rm t} \left(\frac{k}{k_0}\right)^{n_{\rm t}}, \label{eq:Pt}
\end{eqnarray}
where $A_{\rm s}$ and $A_{\rm t}$ are the initial super-horizon amplitudes for curvature and tensor perturbations, respectively. The primordial spectral indexes for scalar ($n_{\rm s}$) and tensor ($n_{\rm t}$) perturbations are taken to be constant. This means that we assume no `running,' i.e., a pure power-law spectrum with $d n_{\rm s} / d\ln k = 0$.
We set the pivot scale at $k_0 = 0.05\,\mathrm{Mpc}^{-1}$, which roughly corresponds to approximately the middle of the logarithmic range of scales probed by \textit{Planck};
with this choice, $n_{\rm s}$ is not strongly degenerate with the amplitude parameter $A_{\rm s}$.
Note that for historical reasons, the definitions of $n_{\rm s}$ and $n_{\rm t}$ differ, so that a scale-invariant scalar spectrum corresponds to $n_{\rm s} = 1$, while a scale-invariant tensor spectrum corresponds to $n_{\rm t} = 0$.
The late-time parameters, on the other hand, determine the linear evolution of perturbations after they re-enter the Hubble radius. We use the basis ($\Omega_{\rm b}h^2$, $\Omega_{\rm c}h^2$, $\theta_{\ast}$, $\tau$) following the approach in \textit{Planck}\ cosmological studies \citep{planck2016-l06}, where $\Omega_{\rm b}h^2$ is the baryon density today, $\Omega_{\rm c}h^2$ is the cold dark matter density today, $\theta_{\ast}$ is the observed angular size of the sound horizon at recombination, and $\tau$ is the reionization optical depth.
The amplitude of the small-scale linear CMB power spectrum is proportional to $A_{\rm s}e^{-2\tau}$. Because \textit{Planck}\ measures this amplitude very accurately, there is a tight linear constraint between $\tau$ and $\ln A_{\rm s}$.
For this reason, we usually adopt $\ln A_{\rm s}$ as a base parameter with a flat prior; $\ln A_{\rm s}$ has a significantly more Gaussian posterior than $A_{\rm s}$. A linear parameter redefinition then allows the degeneracy between $\tau$ and $A_{\rm s}$ to be explored efficiently. Note that the degeneracy between $\tau$ and $A_{\rm s}$ is broken by the relative amplitudes of large-scale temperature and polarization CMB anisotropies and by the effect of CMB lensing.
We define $r \equiv A_{\rm t}/A_{\rm s}$, the primordial tensor-to-scalar ratio, explicitly at the scale $k_0=0.05\,\mathrm{Mpc}^{-1}$. Our constraints are only weakly sensitive to the tensor spectral index, $n_{\rm t}$. We adopt the single-field-inflation consistency relation $n_{\rm t}=-r/8$. Note that the Planck Collaboration also discussed $r$ constraints for $k_0=0.002\,\mathrm{Mpc}^{-1}$ \citep{planck2016-l05}. Given the definitions in Eqs.~\eqref{eq:Ps} and \eqref{eq:Pt}, the tensor-to-scalar ratio scales with $(0.05/0.002)^{-r/8}$, which means that $r_{0.002}$ is lower by 4\,\% at $r\simeq0.1$ compared to $r_{0.05}$, and less than 0.4\,\% lower for $r<0.01$.
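The quoted numbers are simple arithmetic; the following lines reproduce them under the consistency relation $n_{\rm t}=-r/8$:
\begin{verbatim}
for r in (0.1, 0.01):
    r_0002 = r * (0.05 / 0.002)**(-r / 8.0)       # n_t = -r/8
    print(r, r_0002, 100.0 * (1.0 - r_0002 / r))  # about 4 % and 0.4 % lower
\end{verbatim}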
In this work, we use an effective tensor-to-scalar ratio $r_{\rm eff}$, which we extend into the negative domain by modifying the Boltzmann-solver code \texttt{CLASS}~\citep{Blas2011}. While negative tensor amplitudes are unphysical, this approach will allow us to derive posteriors without boundaries, facilitating detection of potential biases, and enabling us to determine a more accurate statistical definition of the constraints on $r$. With $r_{\rm eff}$ we are able to independently discuss both the uncertainty of $r$ ($\sigma_r$) and corresponding upper limits (depending on the maximum a posteriori probability).
In the rest of this paper, we simply write $r$ as the effective tensor-to-scalar ratio, and report upper limits for positive tensor amplitudes, for which $r_{\rm eff} = r$.
We use 95\,\% confidence levels when reporting upper limits, and a 68\,\% confidence interval with the maximum a posteriori probability.
\section{Data and likelihoods}
\label{sec:data_lik}
\subsection{Data and simulations \label{sec:data}}
The sky measurements used in this analysis are the PR4 maps available from the Planck Legacy Archive\footnote{\myurl{https://pla.esac.esa.int}} (PLA) and from the National Energy Research Scientific Computing Center (NERSC).\footnote{\myurl{https://portal.nersc.gov/project/cmb/planck2020}} They have been produced with the {\tt NPIPE}\ processing pipeline, which creates calibrated frequency maps in temperature and polarization from the \textit{Planck}\ Low Frequency Instrument (LFI) and High Frequency Instrument (HFI) data. As described in \citet{planck2020-LVII}, {\tt NPIPE}\ processing includes several improvements, resulting in lower levels of noise and systematics in both frequency and component-separated maps at essentially all angular scales, as well as notably improved internal consistency between the various frequencies.
{\tt NPIPE}\ achieves an overall lower noise level in part by incorporating the data acquired during the 4-minute spacecraft repointing manoeuvres that take place between the 30-to-70-min stable science scans. Residual systematics are suppressed using a mitigation strategy that combines aspects of both LFI and HFI processing pipelines. Most importantly, gain fluctuations, bandpass mismatch, and other systematics are formulated into time-domain templates that are fitted and subtracted as a part of the mapmaking process. Degeneracies between sky polarization and systematic templates are broken by constructing a prior of the polarized foreground sky using the extreme polarization-sensitive frequencies (30, 217, and 353\,GHz).
Moreover, the PR4 release comes with 400 simulations of signal, noise, and systematics, component-separated into CMB maps, which allow for an accurate characterization of the noise and systematic residuals in the \textit{Planck}\ maps. This is important because \textit{Planck}\ polarization data are cosmic-variance-dominated only for a few multipoles at very large scales in $EE$ ($\ell < 8$, as shown in Fig.~\ref{fig:cl_var}). These simulations, even though limited in number, represent a huge effort in terms of CPU time. They are essential in order to compute the following two additional quantities.\\
First, the end-to-end transfer function from the data reduction (including TOI processing, mapmaking and template fitting for mitigation of systematics, component separation, and power-spectrum estimation). The transfer function is defined as the ratio between the estimated output power spectrum and the input one, averaged over all the simulations \citep[see section~4.3 of][for the details]{planck2020-LVII}.\\
Second, the covariance of the data (here we use the cross-power spectra), which is the only way to propagate uncertainties when those are dominated by systematics (from the instrument or from foregrounds).\\
Note that these two quantities estimated from the simulations are directly related to two different characteristics of the final parameter posteriors: the bias of the mean (the transfer function); and the width of the posterior (as propagated into parameter constraints by the covariance matrix in the likelihood). They can be separated from each other, meaning that one systematic effect can easily produce a significant bias without any strong impact on the variance, while another effect can produce a large increase of the variance with no associated bias.
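Schematically, both quantities follow directly from the simulation set. The following minimal sketch, with hypothetical arrays \texttt{cl\_in} and \texttt{cl\_out} of shape [$n_{\rm sims}$, $n_\ell$] holding input and recovered spectra, illustrates the two estimators:
\begin{verbatim}
import numpy as np

def transfer_function(cl_out, cl_in):
    # Mean ratio of recovered to input spectra over the Monte Carlo set.
    return np.mean(cl_out / cl_in, axis=0)

def cl_covariance(cl_out):
    # Empirical ell-by-ell covariance of the simulated cross-spectra.
    return np.cov(cl_out, rowvar=False)
\end{verbatim}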
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth]{clvar_NPIPE_20pc.pdf}
\caption{Variances for cross-spectra in $EE$ and $BB$ based on PR4 simulations, including: cosmic (sample) variance (black); analytic statistical noise (red); and PR4 noise from Monte Carlo simulations (green), including noise and systematics with (solid line) or without (dashed line) correction for the transfer function. The sky fraction used here is 80\,\%, as it illustrates well the effect of both systematics and transfer-function corrections \citep[][]{planck2020-LVII}.}
\label{fig:cl_var}
\end{figure}
The {\tt NPIPE}\ simulations include the systematic effects relevant for polarization studies, specifically analogue-to-digital-converter nonlinearities, gain fluctuations, bandpass mismatch between detectors, correlated noise (including 4-K line residuals), and full-beam convolutions for each detector.
The use of a polarization prior in {\tt NPIPE}\ processing causes a suppression of large-scale ($\ell\,{<}\,20$) CMB polarization, which needs to be corrected.
As explained in \citet{planck2020-LVII}, allowing for a non-trivial transfer function is a compromise between measuring very noisy but unbiased large-scale polarization from all low-$\ell$ modes, and filtering out the modes that are most affected by the calibration uncertainties left in the data by the \textit{Planck}\ scan strategy.
As detailed in \citet{planck2020-LVII}, the transfer function to correct for this bias is determined from simulations. It is then used to correct the power spectrum estimates, just as instrumental beam and pixel effects must be deconvolved.
Because $E$ modes dominate the CMB polarization, the simulations do not yield a definitive measurement of the $B$-mode transfer function. We have therefore chosen to conservatively deconvolve the $E$-mode transfer function from the $B$-mode spectrum, in order to provide a robust upper limit on the true $B$-mode spectrum.
Indeed, when regressing the templates fitted during the map-making process with pure $E$ and $B$ CMB maps, we found a similar impact on the $EE$ and $BB$ power spectra (see Appendix~\ref{ann:bbtf}). Moreover, in the situation where primordial $B$-mode power is not detected, the transfer function correction essentially increases the variance estimate at low multipoles, which propagates the uncertainty induced by the degeneracy between the sky and the systematic templates used in {\tt NPIPE}. Note that this uncertainty is small compared to the impact of systematics in the error budget (see Fig.~\ref{fig:cl_var}).
To compute unbiased estimates of the angular power spectra, we perform cross-correlations of two independent splits of the data. As shown in \citet{planck2020-LVII}, the most appropriate split for the \textit{Planck}\ data is represented by the detector-set (hereafter `detset') maps, comprising two subsets of maps at each frequency, with nearly independent noise characteristics, made by combining half of the detectors. This was obtained by processing each split independently, in contrast to the detset maps produced in the previous \textit{Planck}\ releases. Note that time-split maps (made from, e.g., `odd-even rings' or `half-mission data') share the same instrumental detectors, and therefore exhibit noise correlations due to identical spectral bandpasses and optical responses. The use of time-split maps is subject to systematic biases in the cross-power spectra \citep[see section~3.3.3 in][]{planck2016-l05}, as well as underestimation of the noise properties in computing the half-differences (which must be compensated by a rescaling of the noise in the PR3 as described in Appendix~A.7 of \citealt{planck2016-l03}). Hence we use detset splits here.
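As a toy illustration of why cross-spectra of independent splits carry no noise bias ($\langle (s+n_1)(s+n_2)\rangle = \langle s\,s\rangle$ for uncorrelated noise), consider the following sketch with synthetic maps; it uses {\tt healpy} for brevity and is not the actual \textit{Planck}\ pipeline:
\begin{verbatim}
import numpy as np
import healpy as hp

nside, lmax = 64, 128
cl_in = 1.0e-3 * np.ones(lmax + 1)          # flat toy signal spectrum
signal = hp.synfast(cl_in, nside)
npix = hp.nside2npix(nside)
split_a = signal + np.random.normal(0.0, 0.05, npix)   # independent noise
split_b = signal + np.random.normal(0.0, 0.05, npix)

cl_auto  = hp.anafast(split_a, lmax=lmax)              # biased high by noise
cl_cross = hp.anafast(split_a, split_b, lmax=lmax)     # unbiased on average
\end{verbatim}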
Uncertainties at the power-spectrum level are dominated by noise and systematics, as illustrated in Fig.~\ref{fig:cl_var}. Thanks to the {\tt NPIPE}\ processing, we are now able to show the impact of the systematics at low $\ell$. This is illustrated by comparing the PR4 end-to-end noise (based on the Monte Carlo simulations, including instrumental noise, systematics, and foreground uncertainties, and corrected for the transfer function both in $EE$ and $BB$) with the propagation of the statistical noise coming from the analytic pixel-pixel covariance matrix. The systematic uncertainties dominate at $\ell \la 15$, then slowly decrease so that the effective uncertainties converge towards the analytic estimate at higher multipoles.
\subsection{Polarized sky masks \label{sec:masks}}
Foreground residuals in the foreground-cleaned maps dominate the polarized CMB signal near the Galactic plane.
To avoid contamination from these residuals in the cosmological analysis, we mask the Galactic plane. We use a series of different retained sky fractions (from 30\,\% to 70\,\%) to check the consistency of our results with respect to foreground residuals (Fig.~\ref{fig:masks}).
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth]{masks_lollipop.pdf}
\caption{Galactic masks used for the \textit{Planck}\ likelihoods. The mask shown in dark blue indicates the sky rejected in order to retain a 70\,\% sky fraction for analysis. The masks shown in light blue, green, orange and red incrementally omit further parts of the sky, corresponding in turn to 60, 50, 40 and 30\,\% retained sky fractions, the latter shown in white.}
\label{fig:masks}
\end{figure}
The masks used in this analysis are a combination of a mask for polarization intensity (to avoid polarized foreground residuals), a mask for total intensity (to avoid potential temperature-to-polarization leakage residuals), and the confidence mask for component separation provided by the \textit{Planck}\ Collaboration. The intensity mask is obtained by thresholding the combination of the 353-GHz intensity map (which traces dust) scaled to 143\,GHz, and the 30-GHz intensity map (which traces synchrotron) scaled to 100\,GHz. The polarization mask is constructed similarly. Both foreground tracers are smoothed beforehand with a 10\ifmmode^\circ\else$^\circ$\fi\ Gaussian window function.
The impact of the emission of extragalactic polarized sources on the power spectra is negligible, given the \textit{Planck}\ resolution and noise level. The confidence mask for component separation ensures the masking of the strongest sources, which could also produce residuals through temperature-to-polarization leakage.
\subsection{Likelihoods}
Table~\ref{tab:lik} summarizes the likelihoods used in this analysis, which are described below.
\begin{table*}[htbp!]
\begingroup
\caption{Summary of the likelihoods used in this paper.}
\label{tab:lik}
\nointerlineskip
\vskip -3mm
\setbox\tablebox=\vbox{
\newdimen\digitwidth
\setbox0=\hbox{\rm 0}
\digitwidth=\wd0
\catcode`*=\active
\def*{\kern\digitwidth}
\newdimen\signwidth
\setbox0=\hbox{+}
\signwidth=\wd0
\catcode`!=\active
\def!{\kern\signwidth}
\halign{\hbox to 0.9in{#\leaders\hbox to 5pt{\hss.\hss}\hfil}\tabskip=2.0em&
\hfil#\hfil\tabskip=2em&
\hfil#\hfil\tabskip=2em&
\hfil#\hfil\tabskip=2em&
#\hfil\tabskip=0pt\cr
\noalign{\vskip 3pt\hrule \vskip 1.5pt \hrule \vskip 5pt}
\omit\hfil Name\hfil&Mode&$\ell$ range&\textit{Planck}\ release&\omit\hfil Description\hfil\cr
\noalign{\vskip 3pt\hrule\vskip 4pt}
{lowT}$^{\rm a}$& TT& *2--30**& PR3& {\tt Commander}\ likelihood for Temperature\cr
{hlp} TT$^{\rm b}$& TT& 30--2500& PR4& {\tt HiLLiPoP}\ likelihood for \mbox{high-$\ell$}\ TT\cr
{hlp} TTTE$^{\rm b}$& TT+TE& 30--2500& PR4& {\tt HiLLiPoP}\ likelihood for \mbox{high-$\ell$}\ TT+TE\cr
\noalign{\vskip 5pt}
{lowlE}$^{\rm b}$& EE& *2--150*& PR4& {\tt LoLLiPoP}\ likelihood for \mbox{low-$\ell$}\ EE\cr
{lowlB}$^{\rm b}$& BB& *2--150*& PR4& {\tt LoLLiPoP}\ likelihood for \mbox{low-$\ell$}\ BB\cr
{lowlEB}$^{\rm b}$& EE+BB+EB& *2--150*& PR4& {\tt LoLLiPoP}\ likelihood for \mbox{low-$\ell$}\ EE+BB+EB\cr
\noalign{\vskip 4pt\hrule\vskip 3pt}}}
\endPlancktablewide
\tablenote{{\rm a}} {\tiny available from \myurl{https://pla.esac.esa.int}}\par
\tablenote{{\rm b}} {\tiny available from \myurl{https://github.com/planck-npipe}}\par
\endgroup
\end{table*}
\subsubsection{Low-$\ell$ temperature likelihood}
We use the \textit{Planck}\ public \mbox{low-$\ell$}\ temperature-only likelihood based on the PR3 CMB map recovered from the component-separation procedure (specifically {\tt Commander}) described in detail in \citet{planck2016-l05}. At large angular scales, \textit{Planck}\ temperature maps are strongly signal-dominated, and there is no expected gain in updating this likelihood with the PR4 data.
As discussed in \cite{planck2014-a24}, the \mbox{low-$\ell$}\ temperature data from \textit{Planck}\ have a strong impact on the $r$ posterior and the derivation of the corresponding constraints. This is because the deficit of power in the measured $C_\ell$s at \mbox{low-$\ell$}\ in temperature (see the discussions in \citealt{planck2013-p11} and \citealt{planck2016-LI}) lowers the probability of tensor models, which `add' power at low multipoles. This shifts the maximum in the posterior of $r$ towards low values (or even negative values when using $r_{\rm eff}$, as we show in Sect.~\ref{sec:tt}).
\subsubsection{High-$\ell$ likelihood}
\label{sec:lik_highl}
At small angular scales ($\ell > 30$), we use the {\tt HiLLiPoP}\ likelihood, which can include the $TT$, $TE$, and/or $EE$ power spectra. {\tt HiLLiPoP}\ has been used as an alternative to the public \textit{Planck}\ likelihood in the 2013 and 2015 \textit{Planck}\ releases \citep{planck2013-p08,planck2014-a13}, and is described in detail in~\citet{couchot2017}. In this paper, the {\tt HiLLiPoP}\ likelihood is applied to the PR4 detset maps at 100, 143, and 217\ifmmode $\,GHz$\else \,GHz\fi. We focus on the $TT$ spectra, since there is marginal additional information at small scales in $TE$ or $EE$ for tensor modes, due to \textit{Planck}\ noise. We only make use of $TE$ in Sect.~\ref{sec:combined} in order to help constrain the spectral index $n_{\rm s}$.
The likelihood is a spectrum-based Gaussian approximation, with semi-analytic estimates of the $C_\ell$ covariance matrix based on the data. The cross-spectra are debiased from the effects of the mask and the beam leakage using {\tt Xpol}\ (a generalization to polarization of the algorithm presented in \citealt{tristram2005}\footnote{\myurl{https://gitlab.in2p3.fr/tristram/Xpol}}) before being compared to the model, which includes CMB and foreground residuals. The beam window functions are evaluated using {\sc QuickPol} \citep{hivon17}, adapted to the PR4 data. These adaptations include an evaluation of the beam-leakage effect, which couples temperature and polarization modes due to the beam mismatch between individual detectors.
The model consists of a linear combination of the CMB power spectrum and several foregrounds residuals. These are:
\begin{itemize}
\item Galactic dust (estimated directly from the 353-GHz channel);
\item the cosmic infrared background \citep[as measured in][]{planck2013-pip56};
\item thermal Sunyaev-Zeldovich emission \citep[based on the \textit{Planck}\ measurement reported in][]{planck2013-p05b};
\item kinetic Sunyaev-Zeldovich emission, including homogeneous and patchy reionization components from \cite{shaw12} and \cite{battaglia13};
\item a tSZ-CIB correlation consistent with both models above; and
\item unresolved point sources as a Poisson-like power spectrum with two components (extragalactic radio galaxies and infrared dusty galaxies).
\end{itemize}
On top of the cosmological parameters associated with the computation of the CMB spectrum, with {\tt HiLLiPoP}\ we sample seven foreground amplitudes (one per emission source, the spectral energy density rescaling the amplitude for each cross-frequency being fixed) and six nuisance parameters (one overall calibration factor plus intercalibrations for each map). See Appendix~\ref{ann:hillipop} for more details.
\subsubsection{Large-scale polarized likelihood}
\label{sec:lik:lol}
We construct a polarized $E$-$B$ likelihood based on power spectra, focusing on the large scales where the tensor signal is dominant. Because it carries very little information about the tensor modes, we do not include the $TE$ spectrum in this analysis.
In polarization, especially at large angular scales, foregrounds are stronger relative to the CMB than in temperature, and cleaning the \textit{Planck}\ frequencies using $C_\ell$ templates in the likelihood (as done in temperature) is not accurate enough. In order to clean sky maps of polarized foregrounds, we use the {\tt Commander}\ component-separation code \citep{eriksen2008}, with a model that includes three polarized components, namely the CMB, synchrotron, and thermal dust emission. {\tt Commander}\ was run on each detset map independently, as well as on each realization from the PR4 Monte Carlo simulations. Maps are available on the PLA in {\tt HEALPix}\footnote{\myurl{http://healpix.sourceforge.net}} format~\citep{gorski2005} at a resolution $\ifmmode {N_{\rm side}} \else $N_{\rm side}$ \fi=2048$.
To compute unbiased estimates of the angular power spectra, we calculate the cross-correlation of the two detset maps. We make use of two different angular cross-power spectra estimators (described below), which are then concatenated to produce a full-multipole-range power spectrum. There is no information loss in this process, since the covariances are deduced using Monte Carlo simulations including the correlations over the entire multipole range.
\begin{itemize}
\item For multipoles $2\leq \ell \leq 35$, we compute power spectra using an extension of the quadratic maximum likelihood estimator \citep{tegmark01} adapted for cross-spectra in \citet{vanneste18}.\footnote{\myurl{https://gitlab.in2p3.fr/xQML/xQML}} At multipoles below 40, it has been shown to produce unbiased polarized power spectra with almost optimal errors. We use downgraded $\ifmmode {N_{\rm side}} \else $N_{\rm side}$ \fi\,{=}\,16$ maps after convolution with a cosine apodizing kernel $b_\ell = \frac{1}{2}\left\{1+\cos\left[\pi(\ell-1)/(3\ifmmode {N_{\rm side}} \else $N_{\rm side}$ \fi-1)\right]\right\}$ (tabulated in the sketch after this list).
The signal is then corrected with the PR4 transfer function, to compensate for the filtering induced by the degeneracies between the signal and the templates for systematics in the mapmaking procedure (see Sect.~\ref{sec:data}).
\item For multipoles $35<\ell<300$, we compute power spectra with the classical pseudo-$C_\ell$ estimator {\tt Xpol}\ (Sect.~\ref{sec:lik_highl}). We use $\ifmmode {N_{\rm side}} \else $N_{\rm side}$ \fi=1024$ maps and the native beam of the {\tt Commander}\ maps (i.e., 5\ifmmode {^{\scriptstyle\prime}}\else $^{\scriptstyle\prime}$\fi). In this case, we apodize the mask (see Sect.~\ref{sec:masks}) with a 1\ifmmode^\circ\else$^\circ$\fi\ Gaussian taper. Given the low signal-to-noise ratio in polarization, we bin the spectra with $\Delta\ell = 10$.
\end{itemize}
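The cosine apodizing kernel quoted in the first item above can be tabulated directly; a minimal sketch for $\ifmmode {N_{\rm side}} \else $N_{\rm side}$ \fi=16$:
\begin{verbatim}
import numpy as np

nside = 16
ell = np.arange(1, 3 * nside)   # kernel support: 1 <= ell <= 3*Nside - 1
b_ell = 0.5 * (1.0 + np.cos(np.pi * (ell - 1) / (3 * nside - 1)))
# b_ell decreases smoothly from 1 at ell = 1 to ~0 at ell = 3*Nside - 1
\end{verbatim}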
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\columnwidth]{cl_lowl_commander.pdf}
\includegraphics[width=\columnwidth]{cl_intl_commander.pdf}
\caption{$EE$, $BB$, and $EB$ power spectra of the CMB computed on 50\,\% of the sky with the PR4 maps at low (left panels) and intermediate multipoles (right panels). The \textit{Planck}\ 2018 {$\rm{\Lambda CDM}$}\ model is plotted in black. Grey bands represent the associated cosmic variance. Error bars are deduced from the PR4 Monte Carlo simulations. Correlations between data points are given in Appendix~\ref{ann:corrmat}. A simple $\chi^2$ test shows no significant departure from the model for any of these spectra.}
\label{fig:cl_EE_BB_EB}
\end{figure*}
The $EE$, $BB$, and $EB$ power spectra estimates are presented in Fig.~\ref{fig:cl_EE_BB_EB} for 50\,\% of the sky, which provides the best combination of sensitivity and freedom from foreground residuals. Power spectra computed on different sky fractions (using masks from Sect.~\ref{sec:masks}) are compared in Fig.~\ref{fig:cl_galcut}.
A simple $\chi^2$ test on the first 34 multipoles shows no significant departure from the \textit{Planck}\ 2018 {$\rm{\Lambda CDM}$}\ model for any of these spectra. The `probability to exceed' values (PTE) for the $EE$, $BB$, and $EB$ spectra on the first 34 multipoles are 0.27, 0.21, and 0.26, respectively.
The most extreme multipole in the $BB$ spectrum is $\ell=6$, which, for a Gaussian distribution, would correspond conditionally to a 3.4$\,\sigma$ outlier (reducing to 2.3$\,\sigma$ after taking into account the look-elsewhere effect, including the first 34 multipoles). However, at such low multipoles, the distribution is not Gaussian and the PTE are certainly higher than the numbers of $\sigma$ would suggest. In $EE$, the largest deviation from the model is for $\ell=17$ at 3.1$\,\sigma$ and in $EB$ it is $\ell=19$ at 2.7$\,\sigma$.
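The quoted PTE values follow from a Gaussian $\chi^2$ test of the measured band powers against the model, given the simulation-based covariance; a minimal sketch with hypothetical arrays:
\begin{verbatim}
import numpy as np
from scipy import stats

def pte(cl_data, cl_model, cov):
    # Probability to exceed the observed chi^2, assuming Gaussian band powers.
    resid = cl_data - cl_model
    chi2 = resid @ np.linalg.solve(cov, resid)
    return stats.chi2.sf(chi2, df=resid.size)
\end{verbatim}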
The $C_\ell$ covariance matrix is computed from the PR4 Monte Carlos. For each simulation, we compute the power spectra using both estimators. The statistical distribution of the recovered $C_\ell$ then naturally includes the effect of the components included in the Monte Carlo, namely the CMB signal, instrumental noise, \textit{Planck}\ systematic effects incorporated in the PR4 simulations (see Sect.~\ref{sec:data}), component-separation uncertainties, and foreground residuals. The residual power spectra (both for the simulations and the data) are shown in Fig.~\ref{fig:cl_residuals}.
Given the \textit{Planck}\ noise level in polarization, we focus on multipoles below $\ell=150$, which contain essentially all the information on tensor modes in the \textit{Planck}\ CMB angular power spectra. At those scales, and given \textit{Planck}\ noise levels, the likelihood function needs to consistently take into account the two polarization fields $E$ and $B$, as well as all correlations between multipoles and modes ($EE$, $BB$, and $EB$).
{\tt LoLLiPoP}\ (LOw-$\ell$ LIkelihood on POlarized Power-spectra) is a \textit{Planck}\ \mbox{low-$\ell$}\ polarization likelihood based on cross-spectra, and was previously applied to \textit{Planck}\ $EE$ data for investigating the reionization history in \citet{planck2014-a25}. The version used here is updated to use cross-spectra calculated on component-separated CMB detset maps processed by {\tt Commander}\ from the PR4 frequency maps. Systematic effects are considerably reduced in cross-correlation compared to auto-correlation, and {\tt LoLLiPoP}\ is based on cross-power spectra for which the bias is zero when the noise is uncorrelated between maps. It uses the approximation presented in \citet{hamimeche08}, modified as described in \citet{mangilli15} to apply to cross-power spectra. The idea is to apply a change of variable $C_\ell \rightarrow X_\ell$ so that the new variable $X_\ell$ is nearly Gaussian-distributed. Similarly to \citet{hamimeche08}, we define
\begin{equation}
X_\ell = \sqrt{ C_\ell^{\rm f} + O_\ell} \,\, g{\left(\frac{\widetilde{C}_\ell + O_\ell}{C_\ell + O_\ell}\right)} \,\, \sqrt{ C_\ell^{\rm f} + O_\ell} ,
\label{eq:xell}
\end{equation}
where $g(x)=\sqrt{2(x-\ln(x)-1)}$, $\widetilde{C}_\ell$ are the measured cross-power spectra, $C_\ell$ are the power spectra of the model to be evaluated, $C_\ell^{\rm f}$ is a fiducial model, and $O_\ell$ are the offsets needed in the case of cross-spectra.
For multi-dimensional CMB modes (here we restrict ourselves to $E$ and $B$ fields only), the $C_\ell$ generalise to $\tens{C}_\ell$, a $2\times2$ matrix of power spectra,
\begin{equation}
\tens{C}_\ell =
\left(
\begin{array}{cc}
C_\ell^{EE} +O_\ell^{EE} & C_\ell^{EB} \\
C_\ell^{BE} & C_\ell^{BB} + O_\ell^{BB}
\end{array}
\right) \,,
\end{equation}
and the $g$ function is applied to the eigenvalues of $\tens{C}^{-1/2}_\ell \widetilde{\tens{C}}_\ell \tens{C}^{-1/2}_\ell$ (with $\tens{C}^{-1/2}$ the square root of the positive-definite matrix $\tens{C}$). In the case of auto-spectra, the offsets $O_\ell$ are given by the noise bias effectively present in the measured power spectra. For cross-power spectra, the noise bias is zero, and we use effective offsets defined from the $C_\ell$ noise variance:
\begin{equation}
\Delta C_\ell \equiv \sqrt{ \frac{2}{2\ell+1}} O_\ell .
\end{equation}
The distribution of the new variable $X_\ell \equiv \text{vecp}(\tens{X}_\ell)$, the vector of distinct elements of $\tens{X}_\ell$, can be approximated as Gaussian, with a covariance given by the covariance of the $C_\ell$s.
The likelihood function of the $C_\ell$ given the data $\widetilde{C}_\ell$ is then
\begin{equation}
-2\ln P(C_\ell|\widetilde{C}_\ell)=\sum_{\ell \ell'} X^{\sf T}_\ell \tens{M}^{-1}_{\ell \ell'} X_{\ell'}.
\end{equation}
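A compact sketch of this construction for the $2\times2$ $(E,B)$ block at a single multipole is given below; the $\mathrm{sign}(x-1)$ factor in $g$ follows the convention of \citet{hamimeche08}, and all inputs are hypothetical:
\begin{verbatim}
import numpy as np

def g(x):
    return np.sign(x - 1.0) * np.sqrt(2.0 * (x - np.log(x) - 1.0))

def mat_pow(C, p):
    # Real power of a symmetric positive-definite matrix via its eigensystem.
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w**p) @ V.T

def x_ell(C_data, C_model, C_fid, O):
    # vecp(X_ell) for one multipole; inputs are 2x2 (E,B) matrices and
    # O = diag(O_EE, O_BB) holds the effective cross-spectrum offsets.
    Cm = mat_pow(C_model + O, -0.5)
    w, V = np.linalg.eigh(Cm @ (C_data + O) @ Cm)
    Cf = mat_pow(C_fid + O, 0.5)
    X = Cf @ V @ np.diag(g(w)) @ V.T @ Cf
    return np.array([X[0, 0], X[1, 1], X[0, 1]])   # EE, BB, EB components

def minus2lnL(X_stack, M_inv):
    # Quadratic form over all multipoles with the simulation-based covariance.
    x = np.concatenate(X_stack)
    return x @ M_inv @ x
\end{verbatim}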
Uncertainties are incorporated into the $C_\ell$-covariance matrix $\tens{M}_{\ell\ell'}$, which is evaluated after applying the same pipeline (including {\tt Commander}\ component separation and cross-spectrum estimation on each simulation) to the Monte Carlo simulations provided in PR4.
While foreground emission and the cleaning procedure are kept fixed in the simulations (so that we cannot include uncertainties arising from an imperfect foreground model), the resulting $C_\ell$ covariance consistently includes CMB sample variance, statistical noise, and systematic residuals, as well as foreground-cleaning uncertainties, together with the correlations induced by masking. These uncertainties are then propagated through the likelihood up to the level of cosmological parameters. Figures of the correlation matrices are given in Appendix~\ref{ann:corrmat}.
Using this approach, we are able to derive three different likelihoods, one using only information from $E$ modes ({lowlE}), one using only information from $B$ modes ({lowlB}), and one using $EE$+$BB$+$EB$ spectra ({lowlEB}). We have used these likelihoods from $\ell=2$ up to $\ell=300$ with a nominal range of $\ell=[2,150]$, since multipoles above $\ell \simeq 150$ do not contribute to the result due to the \textit{Planck}\ noise (see Sect.~\ref{sec:bb}).
The approach used in this paper is different from the one used for the \textit{Planck}\ 2018 results. Indeed, in \cite{planck2016-l05}, the probability density of the polarized spectra at low multipoles was modelled with a polynomial function adjusted on simulations in which only $\tau$ is varied, with all other cosmological parameters in a {$\rm{\Lambda CDM}$}\ model fixed to the \textit{Planck}\ 2018 best-fit values. As a consequence, the probability density is not proportional to the likelihood $\mathcal{L}(\Omega^{\rm model} | C_\ell^{\rm data})$ when the model is not {$\rm{\Lambda CDM}$}\ (and in particular for our case {$\rm{\Lambda CDM}$}+$r$), and even in the {$\rm{\Lambda CDM}$}\ case it neglects correlations with other parameters that affect the posterior on $\tau$.
In addition, the simulations used in \cite{planck2016-l05} were generated with the same CMB realization for the mapmaking solution. Cosmic variance was included afterwards by adding CMB realizations on top of noise-only maps, neglecting correlations between foregrounds or systematic templates and the CMB. The information in polarization at \mbox{low-$\ell$}\ was then extracted using a polynomial function fitted to the distribution from simulations. While this is supposed to empirically take into account the effects of systematics on the likelihood shape, it does not include $\ell$-by-$\ell$ correlations, and is limited in the $C_\ell$ power that one can test (for example imposing a strong prior on the $EE$ power at $\ell = 3$). As a consequence, the combination of those two effects reduces the covariance, especially at low multipoles, leading to error bars (especially on $\tau$) that are underestimated.
\section{Constraints from \textit{TT}}
\label{sec:tt}
To derive constraints on the tensor-to-scalar ratio from the temperature power spectrum, we use the \mbox{high-$\ell$}\ {\tt HiLLiPoP}\ likelihood for $30\leq\ell\leq2500$, and the {\tt Commander}\ likelihood ({lowT}) in temperature for $\ell < 30$, with a prior on the reionization optical depth to break the degeneracy with the scalar amplitude $A_{\rm s}$. We use a Gaussian prior $\tau = 0.055 \pm 0.009$. For the base-{$\rm{\Lambda CDM}$}\ model, using PR4 data, we obtain the same results as presented in~\citet{planck2016-l06}.
We now describe the results obtained when fitting the tensor-to-scalar ratio $r$ in addition to the six {$\rm{\Lambda CDM}$}\ parameters ($\Omega_{\rm b}h^2$, $\Omega_{\rm c}h^2$, $\theta_\ast$, $A_{\rm s}$, $n_{\rm s}$, $\tau$). In \citet{planck2016-l10}, the constraint from $TT$ is reported as $r_{0.002} < 0.10$ (\CL{95}) using PR3 data. This is much lower than the expected 2$\,\sigma$ upper bound on $r$. Indeed, when we calculate $r_{\rm eff}$ as proposed in Sect.~\ref{sec:model}, we find that the maximum of the posterior is in the negative region by about 1.7$\,\sigma$.
That the maximum happens to fall at negative values is the major reason for the apparently strong constraint on $r$.
With PR4 data, after marginalizing over the other cosmological parameters and the nuisance parameters, we find that the maximum of the posterior is negative by less than 1.2$\,\sigma$ when using {\tt HiLLiPoP}\ in temperature ({hlp} TT) along with {lowT}. As discussed in \citet{planck2016-l10}, this result is related to the \mbox{low-$\ell$}\ deficit in the temperature power spectrum. Indeed, removing {lowT}\ from the likelihood moves the maximum of the posterior closer to zero, as illustrated in Fig.~\ref{fig:lik_r_TT_NPIPE}. The corresponding posterior maximum and 68\,\% confidence interval are
\begin{eqs}
r_{0.05} &=& +0.031 \pm 0.120 \quad \text{({hlp} TT+$\tau$-prior)}, \\
r_{0.05} &=& -0.131 \pm 0.093 \quad \text{({hlp} TT+{lowT})},\\
r_{0.05} &=& -0.101 \pm 0.094 \quad \text{({hlp} TT+{lowT}+$\tau$-prior)}.
\end{eqs}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[draft=false,width=\columnwidth]{lik_r_TT}
\caption{Constraints on the tensor-to-scalar ratio $r_{0.05}$ based on high-$\ell$ temperature data from \textit{Planck}\ PR4 ({hlp} TT), in combination with {lowT}, and with a prior on $\tau$.}
\label{fig:lik_r_TT_NPIPE}
\end{center}
\end{figure}
Using the temperature power spectrum from PR4, we recover the same constraints on other parameters, in particular the scalar spectral tilt $n_{\rm s}$, as found using PR3 data (see Appendix~\ref{ann:PR3vsPR4}). With the full posterior distribution on $r$, we are able to derive accurately the posterior maximum and the uncertainty $\sigma_r$. The width of the posterior is consistent with the PR3 results. Using only \mbox{high-$\ell$}\ data, with a prior on the reionization optical depth $\tau$, we find $\sigma_r = 0.12$ for $TT$ (consistent with the cosmic-variance limit). Note that we find $\sigma_r = 0.43$ for $TE$, indicating that $TE$ is much less constraining for $r$ than $TT$. When adding information from low multipoles in temperature, $\sigma_r$ reduces to 0.094, but at the price of pushing the maximum of the posterior towards negative values. The posterior maximum is slightly shifted towards zero thanks to the small differences in {\tt HiLLiPoP}\ compared to the public \textit{Planck}\ likelihood (see Appendix~\ref{ann:hillipop}). The fact that the distribution peaks in the non-physical domain can be considered a statistical fluctuation (with a significance between 1 and 2$\,\sigma$, depending on the data set used), which on its own is not a serious problem. However, it is worth noting that this behaviour is strongly related to the deficit of power at \mbox{low-$\ell$}\ in temperature.
After integrating the positive part of the $r$-posterior, the final upper limits from the \textit{Planck}\ temperature power spectrum using PR4 are
\begin{eqs}
r_{0.05} &<& 0.13 \quad \text{(\CL{95}, {hlp} TT+{lowT})},\\
r_{0.05} &<& 0.12 \quad \text{(\CL{95}, {hlp} TT+{lowT}+$\tau$-prior)}.
\end{eqs}
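The integration described above is straightforward to reproduce. The following minimal Python sketch (our own illustrative code, not the analysis pipeline; the helper name {\tt upper\_limit} is ours) computes such an upper limit assuming a Gaussian posterior; since the actual posteriors are slightly non-Gaussian, the resulting numbers can differ at the 0.01 level:
\begin{verbatim}
import numpy as np
from scipy import stats

def upper_limit(r_hat, sigma_r, cl=0.95):
    """CL upper limit from a Gaussian posterior truncated at r > 0."""
    r = np.linspace(0.0, r_hat + 10 * sigma_r, 20001)
    post = stats.norm.pdf(r, loc=r_hat, scale=sigma_r)
    cdf = np.cumsum(post)
    cdf /= cdf[-1]            # normalize over the physical region r > 0
    return np.interp(cl, cdf, r)

# hlp TT+lowT+tau-prior: posterior maximum -0.101, sigma_r = 0.094
print(round(upper_limit(-0.101, 0.094), 2))  # ~0.13 under the Gaussian assumption
\end{verbatim}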
\section{Constraints from \textit{BB}}
\label{sec:bb}
To derive constraints on the tensor-to-scalar ratio from $BB$ using the PR4 maps, we sample the likelihood with a fixed {$\rm{\Lambda CDM}$}\ model based on the \textit{Planck}\ 2018 best fit, to which we add tensor fluctuations with a free amplitude parametrized by the tensor-to-scalar ratio $r$.
We use the {\tt LoLLiPoP}\ likelihood described in Sect.~\ref{sec:lik:lol}, restricted to $BB$ only (referred to as `{lowlB}'). As discussed in Sect.~\ref{sec:lik:lol}, we construct the $C_\ell$ covariance matrix using the PR4 Monte Carlo simulations, which include CMB signal, foreground emission, realistic noise, and systematic effects.
Before giving the final constraints from the \textit{Planck}\ $BB$ spectra, we should distinguish between two different regimes, corresponding to large scales (the reionization bump) and intermediate scales (the recombination bump). Across the reionization bump, uncertainties are dominated by systematic residuals, as discussed in Sect.~\ref{sec:data}, while foreground residuals may bias the results. Across the recombination bump, uncertainties are dominated by statistical noise; however, systematic effects, as well as foreground residuals, can still bias constraints on $r$. In order to test the effects of potential foreground residuals, we calculate the posterior distributions of $r$ using various Galactic masks, as described in Sect.~\ref{sec:masks}. While large sky fractions ($f_{\rm sky} > 60\,\%$) show deviations from $r=0$, the posteriors for 40, 50, and 60\,\% of the sky are consistent with zero (Fig.~\ref{fig:lolR_galcuts}). As a robustness test, we also calculate the posterior distribution when changing the range of multipoles (Fig.~\ref{fig:lolR_lrange}) and find consistent results, with posteriors compatible with $r=0$. Multipoles above $\ell \simeq 150$ do not contribute to the result, since the noise in $BB$ is too high. For the rest of this paper, unless otherwise noted, we use a sky fraction of 50\,\% and compute the likelihood over the range of multipoles from $\ell=2$ to $\ell=150$.
For the reionization and recombination bumps we find
\begin{eqnarray}
r_{0.05} &=& -0.014_{-0.111}^{+0.108} \quad \text{({lowlB}, reionization bump)}, \\
r_{0.05} &=& \phantom{+}0.069_{-0.113}^{+0.114} \quad \text{({lowlB}, recombination bump)}.
\end{eqnarray}
Both results are obtained over 50\,\% of the sky, with multipoles in the range $\ell = [2,35]$ for the former and $\ell=[50,150]$ for the latter.
With these ranges of multipoles, and given the statistics of the PR4 maps, we can see that the reionization bump ($\sigma_r=0.110$) and the recombination bump ($\sigma_r=0.113$) contribute equally to the overall \textit{Planck}\ sensitivity to the tensor-to-scalar ratio.
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{lik_r_NPIPE_BB.pdf}
\caption{Posterior distribution of $r$ from PR4 data, using {\tt LoLLiPoP}\ and the $BB$ spectrum on 50\,\% of the sky, for the full multipole range $\ell=[2,150]$ (black). Constraints from the reionization bump and the recombination bump are plotted in red and blue, respectively.}
\label{fig:lik_r_BB}
\end{figure}
We can combine the results from the two bumps in order to give the overall constraints on the tensor-to-scalar ratio from the \textit{Planck}\ $BB$ spectrum (Fig.~\ref{fig:lik_r_BB}).
The full constraint on $r$ from the PR4 $BB$ spectrum over 50\,\% of the sky, including correlations between all multipoles between $\ell=2$ and $\ell=150$, is
\begin{eqnarray}
r_{0.05} = 0.033 \pm 0.069 &&\quad \text{({lowlB})}.
\end{eqnarray}
This is fully compatible with no tensor signal, and we can derive an upper limit by integrating the posterior distribution out to 95\,\%, after applying the physical prior $r>0$, which yields
\begin{eqnarray}
r_{0.05} < 0.158 &&\quad \text{(\CL{95}, {lowlB})}.
\end{eqnarray}
This result can be compared with the BICEP2/Keck Array constraints \citep{Bicep2018limit} of
\begin{eqnarray}
r_{0.05} < 0.072 &&\quad \text{(\CL{95}, BK15)},
\end{eqnarray}
with $\sigma_r=0.02$, compared to $\sigma_r = 0.069$ for the \textit{Planck}\ result presented in this analysis.
\section{Additional constraints from polarization}
\label{sec:pol}
As shown in Fig.~\ref{fig:cl_tensor}, the $EE$ tensor spectrum is similar in amplitude to the $BB$ tensor spectrum, even though the scalar mode in $EE$ is stronger. Given that noise dominates the tensor signal at all multipoles in both $EE$ and $BB$, we expect the likelihood for $EE$ to give useful constraints on $r$. We thus present the constraints from polarized \mbox{low-$\ell$}\ data ($\ell < 150$) using different combinations of the {\tt LoLLiPoP}\ likelihood (specifically EE, BB, and EE+BB+EB) in Fig.~\ref{fig:lol_r_EB}. We emphasize that EE+BB+EB is a likelihood of the correlated polarization fields $E$ and $B$ and not the combination of individual likelihoods (see Sect.~\ref{sec:lik:lol}).
The first thing to notice is that the posterior distribution for $EE$ peaks at $r = 0.098 \pm 0.097$, while the other modes give results compatible with zero within 1$\,\sigma$.
Because {lowlE}\ is less sensitive to $r$ ($\sigma_r \simeq 0.10$) than {lowlB}\ ($\sigma_r \simeq 0.07$), this mild excess is diluted when the information from the other modes is added.
The posterior distributions for $r$ give
\begin{eqnarray}
r_{0.05} = \phantom{+}0.033 \pm 0.069 &&\quad \text{({lowlB})},\\
r_{0.05} = -0.031 \pm 0.046 &&\quad \text{({lowlEB})}.
\end{eqnarray}
As a consistency check, Fig.~\ref{fig:lol_r_EB} also shows the constraints when fitting the $BB$ tensor model on the $EB$ data power spectrum, which is compatible with zero ($r=-0.012 \pm 0.068$) as expected.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[draft=false,width=\columnwidth]{lik_r_EEBBEB_2-145.pdf}
\caption{Posterior distributions for $r$ from \textit{Planck}\ polarized \mbox{low-$\ell$}\ data ($\ell < 150$) using {\tt LoLLiPoP}\ and the $EE$, $BB$, and $EE$+$BB$+$EB$ spectra. The dashed black line is obtained from $EB$ data by fitting a $BB$ tensor model. The sky fraction used here is $f_{\rm sky} = 50\,\%$.}
\label{fig:lol_r_EB}
\end{center}
\end{figure}
Using polarization data, \textit{Planck}'s sensitivity to the tensor-to-scalar ratio reaches $\sigma_r = 0.046$. Combining all \textit{Planck}\ polarization modes ($EE$, $BB$, and $EB$) out to $\ell=150$ leads to the following upper limit:
\begin{eqnarray}
r_{0.05} &<& 0.069 \quad \text{(\CL{95}, {lowlEB})}.
\end{eqnarray}
Note that this constraint is almost independent of the other {$\rm{\Lambda CDM}$}\ parameters, and in particular the reionization optical depth $\tau$.
To demonstrate this, using the same data set ({lowlB}\ and {lowlEB}), we derive 2-dimensional constraints for $\tau$ and $r$ and plot them in Fig.~\ref{fig:lol_tau-r}.
The constraint is stable when $\tau$ is also sampled. Indeed, in this case, we obtain
\begin{eqnarray}
r_{0.05} = \phantom{+}0.025 \pm 0.064 &&\quad \text{({lowlB})},\\
r_{0.05} = -0.015 \pm 0.045 &&\quad \text{({lowlEB})},
\end{eqnarray}
and for the reionization optical depth
\begin{eqnarray}
\tau = 0.0577 \pm 0.0056 &&\quad \text{({lowlEB})},
\end{eqnarray}
compatible with {lowlE}\ results, while {lowlB}\ shows no detection of $\tau$, since $BB$ is dominated by noise.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[draft=false,width=\columnwidth]{lol_r-tau_commander_EE_BB_EB.pdf}
\caption{{\tt LoLLiPoP}\ posterior distribution in the $\tau$--$r$ plane using {lowlE}\ (blue), {lowlB}\ (red), and {lowlEB}\ (black). The sky fraction here is $f_{\rm sky} = 50\,\%$.}
\label{fig:lol_tau-r}
\end{center}
\end{figure}
\section{Combined results}
\label{sec:combined}
Up to this point, the constraints on $r$ have been derived relative to a fixed fiducial {$\rm{\Lambda CDM}$}\ spectrum based on the \textit{Planck}\ 2018 results. Including the \textit{Planck}\ temperature likelihoods (both {lowT}\ and {hlp} TT) in a combined analysis of the \textit{Planck}\ CMB spectra allows us to properly propagate uncertainties from other cosmological parameters to $r$, as well as to self-consistently derive constraints in the $n_{\rm s}$--$r$ plane.
In this section, we combine the {lowT}\ and {hlp} TT with the \mbox{low-$\ell$}\ polarized likelihood {lowlEB}\ to sample the parameter space of the {$\rm{\Lambda CDM}$}+$r$ model.
The comparison of contours at 68\,\% and 95\,\% confidence levels between PR3 and PR4 data is presented in Fig.~\ref{fig:triangle_lcdm} of Appendix~\ref{ann:lcdm}.
We also include the BK15 constraints from \citet{Bicep2018limit}. When combining \textit{Planck}\ and BK15, we neglect the correlation between the two data sets and simply multiply the likelihood distributions. This is justified because the BK15 spectra are estimated on 1\,\% of the sky, while the \textit{Planck}\ analysis is derived from 50\,\% of the sky.
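Operationally, this combination amounts to multiplying the two marginalized posteriors on a common grid in $r$ and renormalizing. The Python sketch below illustrates the operation with Gaussian stand-ins for the actual posteriors (the shapes and widths here are illustrative assumptions, not the published chains):
\begin{verbatim}
import numpy as np

r = np.linspace(-0.5, 0.5, 2001)
# Gaussian stand-ins for the marginalized posteriors (illustrative only).
post_planck = np.exp(-0.5 * ((r - 0.033) / 0.069) ** 2)
post_bk15   = np.exp(-0.5 * (r / 0.02) ** 2)

# Independence assumed (disjoint sky coverage): multiply and renormalize.
combined = post_planck * post_bk15
combined /= np.trapz(combined, r)
\end{verbatim}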
Figure~\ref{fig:likr_combined} gives posteriors on $r$ after marginalization over the nuisance and the other {$\rm{\Lambda CDM}$}\ cosmological parameters. We obtain the following \CL{95} upper limits:
\begin{eqs}
r_{0.05} &<& 0.060 \quad \text{(\CL{95}, {hlp} TT+{lowT}+BK15)};\\
r_{0.05} &<& 0.056 \quad \text{(\CL{95}, {hlp} TT+{lowT}+{lowlEB})};\\
r_{0.05} &<& 0.044 \quad \text{(\CL{95}, {hlp} TT+{lowT}+{lowlEB}+BK15)}.
\end{eqs}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[draft=false,width=\columnwidth]{lik_r_Planck_BK15.pdf}
\caption{Posterior distributions for $r$ after marginalization over the nuisance parameters and the other {$\rm{\Lambda CDM}$}\ parameters, for the \textit{Planck}\ temperature data ({hlp} TT+{lowT}) in combination with BK15 and the large-scale polarized \textit{Planck}\ likelihood ({lowlEB}).}
\label{fig:likr_combined}
\end{center}
\end{figure}
Figure~\ref{fig:ns_r_inflation} shows the constraints in the $r$--$n_{\rm s}$ plane for \textit{Planck}\ data in combination with BK15. The constraints from the full combination of \textit{Planck}\ data are comparable to those from BK15. The addition of the \mbox{high-$\ell$}\ $TE$ likelihood produces tighter constraints on the spectral index $n_{\rm s}$ \citep[as already reported in][]{planck2016-l06}.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[draft=false,width=\columnwidth]{planck_ns_r_ext}
\caption{Marginalized joint 68\,\% and 95\,\% CL regions for $n_{\rm s}$ and $r_{0.05}$ from \textit{Planck}\ alone ({hlp}+{lowT}+{lowlEB}) and in combination with BK15. The solid lines correspond to {hlp} TT+{lowT}+{lowlEB}, while the filled regions include TE and correspond to {hlp} TTTE+{lowT}+{lowlEB}.}
\label{fig:ns_r_inflation}
\end{center}
\end{figure}
There have been several other attempts to constrain the value of $r$, particularly through measurements of the $BB$ power spectrum. As we have already stressed, there is a weak limit from the $TT$ spectrum and at the current sensitivity level for $r$, the constraints from $EE$ are about as powerful as those from $BB$; hence the tensor constraints in this paper are derived from a combination of $BB$ limits with those coming from $TT$ and $EE$. We show in Appendix~\ref{ann:BBplot} a comparison of our $BB$ limits with those of other experiments.
\section{Conclusions}
In this paper, we have derived constraints on the amplitude of tensor perturbations using \textit{Planck}\ PR4 data.
We investigated the intrinsic sensitivity of the $TT$ spectrum, which is cosmic-variance limited, and found $\sigma_r = 0.094$ using the full range of multipoles. We noted the impact of the \mbox{low-$\ell$}\ anomaly, which pushes the maximum posterior distribution towards negative values of $r_{\rm eff}$ at roughly the 1$\,\sigma$ level.
For the first time, we analysed the \textit{Planck}\ $BB$ spectrum for $r$ and obtained $\sigma_r = 0.069$, which is lower than in temperature.
The \textit{Planck}\ $B$-mode spectrum, being dominated by noise, gives a constraint on $r$ that is fully compatible with zero from both low and intermediate multipoles, in other words from both the reionization and recombination peaks. Multipoles above $\ell \simeq 150$ do not contribute to the result, since the noise in $BB$ is too high.
Using an appropriate likelihood in polarization, we showed that the \textit{Planck}\ $EE$ spectrum is also sensitive to the amplitude of the tensor-to-scalar ratio $r$.
The combined constraints from \textit{Planck}\ $EE$ and $BB$, including $EB$ correlations, lead to a sensitivity on $r$ of $\sigma_r = 0.046$, a factor of 2 better than in temperature.
We also investigated the impact of foreground residuals using different Galactic cuts and by varying the range of multipoles used in the polarized likelihood.
Finally, by combining temperature and polarization constraints, we derived the posterior distribution on $r$ marginalized over the {$\rm{\Lambda CDM}$}\ cosmological parameters and nuisance parameters, including uncertainties from systematics (both instrumental and astrophysical). The result gives an upper limit of $r < 0.056$ at the 95\,\% confidence level using \textit{Planck}\ data only.
In combination with the BICEP/Keck measurements from 2015, this constraint is further reduced to $r < 0.044$ (\CL{95}), the tightest limit on $r$ to date.
\begin{acknowledgements}
\textit{Planck}\ is a project of the European Space Agency (ESA) with instruments provided by two scientific consortia funded by ESA member states and led by Principal Investigators from France and Italy, telescope reflectors provided through a collaboration between ESA and a scientific consortium led and funded by Denmark, and additional contributions from NASA (USA).
Some of the results in this paper have been derived using the {\tt HEALPix} package.
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231.
We gratefully acknowledge support from the CNRS/IN2P3 Computing Center for providing computing and data-processing resources needed for this work.\end{acknowledgements}
\bibliographystyle{aat}
\section{Introduction}
The work of Bent J\o{}rgensen is rich and deep. Working on the foundations of generalized linear models (GLMs), he took the main steps in the construction of the theory of exponential dispersion models (EDMs), a class of parametric families of one-dimensional distributions. The need to expand the EDMs to include further parametric families of distributions ({\it e.g.}, the von Mises and the simplex distributions) led to the construction of the class of proper dispersion models (PDMs), in a process which culminated in the de\-ve\-lopment of the general theory of dispersion models. The class of dispersion models encompasses both the EDMs and the PDMs under the same umbrella. In this article, we briefly expose some of the main aspects of the theoretical development referred to above, in which Bent J\o{}rgensen played a crucial role as a driving force and a source of inspiration.
The path from the construction of EDMs to the development of the general theory of dispersion models presents an increasing level of abstraction. While EDMs rely on relatively strong assumptions (inherited from exponential families of distributions), the general dispersion models are based on a minimalistic mathematical structure while keeping the desirable distributional and statistical properties. This theory-generation process was inductive in the sense that it represented a movement from several particular cases (disparate collections of distributions) to a general setup (large classes of families of probability measures with common properties). In this article, our exposition follows a reversed order, starting from the general theory of dispersion models (Section \ref{SS.DM}) involving a weak mathematical structure and specializing in two different statistical scenarios: EDMs and PDMs (Sections \ref {SS.EDM} and \ref{SS.PDM}, respectively).
We anticipate that, when working in the general scenario, we meet some difficulties even in solving basic questions, such as finding a mathematical procedure to generate the families of probability measures involved there. For instance, the generation of dispersion models involves the solution of Fredholm integral equations, which are known to be mathematically hard; in many instances it is difficult even to establish whether the equation has a solution or not. When introducing more mathematical structure, as expected, those difficulties gradually disappear. Interestingly, arguments of a probabilistic and statistical nature ({\it e.g.}, the coincidence of an integral with the moment ge\-ne\-ra\-ting function, probabilistic properties of the characteristic function, and the exactness of the so-called $p^{\star}$ approximation) render the hard mathematical problems tractable. The general integral equation becomes a convolution equation (solvable using standard methods of generalized functions or other methods of deconvolution) or involves a simple calculation of a Riemann--Stieltjes integral in some of the more specific scenarios.
We stress that the deductive path used in our exposition (starting from the general and moving to more particular cases) was only feasible because we knew the entire inductive path referred to above, and we then essentially reversed the order of the construction. We believe that this disposition of arguments is easier to follow and illuminates the theoretical construction of the theory of dispersion models, but we invite the reader to imagine the reversed order while reading the text, in order to get a feeling for the difficulties that she/he would face in constructing the general theory of dispersion models. In our view, the most significant contribution that Bent J\o{}rgensen gave to the theory of dispersion models was to envision the structure that we briefly present here.
Bent J\o{}rgensen's professional trajectory was circular: starting in Denmark (Aarhus and Odense), he moved to several countries (England, Brazil and Canada) and eventually returned to Denmark (Aarhus-Foulum and finally Odense, where he was born). He obtained a master's degree from Aarhus University (1979) with a thesis on the inverse Gaussian distribution, under the supervision of Ole Barndorff-Nielsen. His master's thesis was later published in a book \citep{Jorgensen1982}. Bent J\o{}rgensen received a Ph.D. title from the University of Southern Denmark (1987), also under the supervision of Ole Barndorff-Nielsen, and a Doctor of Science degree from Aalborg University (1997). He also studied at the Imperial College, in London, from 1981 to 1983. His famous book ``The Theory of Dispersion Models'' \citep{Jorgensen1997} was primarily derived from the results of his Ph.D. thesis and his doctoral dissertation. A significant part of this work was published as lecture notes after he taught a course at the First School of Regression Models in S\~ao Paulo in 1989.
Bent J\o{}rgensen came to the ``Instituto de Matem\'atica Pura e Aplicada'' (IMPA, in Rio de Janeiro, Brazil) through the influence of Gauss Cordeiro, who had met him as a student at the Imperial College (London) in 1981. During a congress in York in 1986, a few months before coming to Brazil, Bent expressed the desire to work in Brazil because of his Brazilian wife, Vera Botelho. Gauss Cordeiro took steps to arrange for Bent to come to IMPA, since there he would have an environment suitable for research without an overwhelming teaching and administrative load. He arrived at IMPA in the middle of the process of dissolution of its statistics group, but he fought to keep statistics alive there. From 1990 to 1992, Gauss Cordeiro stayed as a visiting professor at IMPA, giving support to Bent. Together, they organized an international event on asymptotic theory that attracted a large number of famous international statisticians.
Bent J\o{}rgensen visited the main statistical groups in Brazil during his stay in the country from 1986 to 1992. In particular, he worked intensively with the group of statisticians at ESALQ-USP through Clarice Dem\'etrio, with whom Bent kept a fruitful collaboration until his last days. This collaboration, which some years later included C{\'e}lestin C. Kokonendji and John P. Hinde (among others), generated two main research lines: one on models for counts, see \cite{Bonat-etal2018}, and one on the relations between the so-called Taylor's law
(according to which the variances of responses related to some natural phenomena tend to be proportional to a power of the mean) and the Tweedie models, see \cite{Jorgensen-etal2011}.
He recruited many students to work with him at IMPA (including Rodrigo Labouriau and Jos\'e Ra\'ul Martinez, among others). Bent attracted many international researchers in statistics as guests at IMPA, including Ole Barn\-dorff-\-Nielsen, Preben Bl\ae{}sild, Michael S\o{}rensen, Jesper M\o{}ller (from Aarhus University), Ib Skovgaard (University of Copenhagen), Steffen Lauritzen, S\o{}ren Lundbye-Christensen (Aalborg University) and Gerard Letac (from Paul Sabatier University). The departure of Bent J\o{}rgensen from IMPA to the University of British Columbia (UBC) in 1992 represented a significant loss for Brazilian statistics.
Bent J\o{}rgensen quickly gathered a group of statisticians at UBC ({\it e.g.}, S\o{}ren Lundbye-Christensen, as a recurrent visitor from Denmark, and Peter Song, among others). They worked on a new research line in which EDMs were used to represent a latent stochastic process governing the temporal development of a phenomenon of interest, see \cite{Jorgensen-etal1996A, Jorgensen-etal1996B, Jorgensen-etal1996C, JorgensenSong1997} and Section \ref{SS.latentProcesses}. In this period (around 1995--1997), Bent J\o{}rgensen also worked with Rinaldo Artes (currently at Insper - Instituto de Ensino e Pesquisa, S\~ao Paulo), supervising part of his thesis entitled ``Extensions of generalized estimation equation theory to circular data and dispersion models'', which was approved in 1997 at the University of S\~ao Paulo; see also \cite{Artes-Jorgensen}.
Bent J\o{}rgensen left UBC and came back to Denmark in 1996, to a temporary position shared between Aarhus University (at the Department for Theoretical Statistics) and the Biometry Research Unit at the Foulum Research Centre. In this period, he worked with Ole Barndorff-Nielsen (on simplex distributions and inferential separation techniques via concepts of sufficiency, ancillarity and non-formation). The presence of Bent J\o{}rgensen brought much life to the statistical discussion in the Foulum group of statisticians, led at that time by Rodrigo Labouriau. He attracted Gordon Smyth, who was working on dispersion parameter modelling and compound Poisson distributions; later Antonieta Peres (who had met him at IMPA and worked with him on state space models, but died a few years later), see \cite{Botter-etal-2002}; and Renjun Ma, who was working on models for repeated measures, see \cite{Ma-Jorgensen2007,Ma-etal2009}. The last of Bent J\o{}rgensen's Ph.D. students was Wagner Hugo Bonat (currently at the Universidade Federal do Paran\'a), who worked on multivariate extensions of dispersion models and Tweedie models, see \cite{BonatJorgensen2016} and \cite{BonatKokonendjib}. All in all, Bent J\o{}rgensen worked in many different academic environments, spread over several countries, and always gathered many collaborators around himself. In the following sections, we review and discuss some of his main contributions.
\section{Dispersion Models}
\subsection{Introduction} \label{SS.DM}
The notion of dispersion models can be seen as a generalization of the normal distribution as we expose below. Consider the density of a univariate normal distribution with expectation $\mu\in\mathbb{R}$ and variance $\tau\in\mathbb{R}_+$,
\begin{equation}\nonumber
p(y; \mu, \tau) = (2\pi \tau)^{-1/2} \exp \left\{ -\frac{1}{2 \tau} (y-\mu)^2 \right\},
\mbox{ for } y\in C =\mathbb{R} \, .
\end{equation}
Setting $d(y;\mu) = (y-\mu)^2$ and $a(y;\tau) = (2\pi \tau)^{-1/2}$, we can express the density above in the form
\begin{equation}\label{eq.2.1.01}
p(y; \mu, \tau) = a(y;\tau) \exp \left\{ -\frac{1}{2 \tau} d(y;\mu) \right\},
\mbox{ for } y\in C,
\end{equation}
where $d(y;\mu)$ is the squared Euclidean distance between the observation $y$ and the location parameter $\mu$.
The idea of dispersion models \citep[p. 4]{Jorgensen1997} is to replace the squared distance $d$ in (\ref{eq.2.1.01}) by another
sui\-table function, called the unit deviance, measuring how far an observation $y$ is from a central reference point $\mu$ of the distribution.
This idea turned out to be rather fruitful, since it generates a rich class of families of distributions, the dispersion models (to be defined precisely below), including many classic continuous, discrete and mixed-type distributions. Dispersion models are often used in applications since they are the families of distributions underlying important statistical models such as generalized linear models, exponential family nonlinear
mo\-dels (see Section \ref{SS.EFNLR}), generalized additive models and generalized linear mixed models, among others. Moreover, the dispersion models have common mathematical properties that allow one to build an elegant and coherent theory of statistical inference and to construct models involving observations of well-behaved stochastic processes ({\it e.g.}, processes with stationary and independent increments, {\it i.e.}, L\'{e}vy processes; see Section \ref{SS.latentProcesses}). In this article, we study the notion of univariate dispersion models and refer the interested reader to \cite{JorgensenLaurizen2000} and \cite{Jorgensen2013} for the multivariate case; see also \cite{BonatJorgensen2016}.
\subsubsection{Basic Definitions} \label{SS.DM.01}
A dispersion model is a family of probability distributions parameterized by two parameters as follows. The starting point for the construction of the dispersion models is to define the concept of unit deviance. Let $S \subseteq \mathbb{R}$ be the set of the realizable values of the probability distributions contained in the family that we will construct (assumed to be the same for each element of the family). Denote the convex support of $S$ ({\it i.e.}, the smallest interval containing $S$) by $C$ and set $\Omega = int (C)$. Here $\Omega$ will be the parameter space of the referential parameter $\mu$. A \emph{unit deviance} is a function $d:C\times \Omega \rightarrow \mathbb{R}_+$ such that $d(y;y) = 0$ for all $y\in\Omega$, and $d(y;\mu ) > 0$ for all $(y,\mu) \in C\times \Omega$ such that $y\ne \mu$. A unit deviance $d$ is said to be \emph{regular} when $d$ is continuously twice differentiable in $C\times \Omega$ and $\partial^2 d(\mu; \mu) / \partial \mu^2 >0$ for all $\mu$ in $\Omega$.
A {\it dispersion model} generated by a unit deviance $d$ is a parametric family of real probability measures with support contained in an interval $C\subseteq \mathbb{R}$ with density with respect to a suitable common dominating measure, $\upsilon$, taking the form (\ref{eq.2.1.01}). The dominating measure $\upsilon$ is typically the Lebesgue measure on $\mathbb{R}$ or the counting measure. Here the function $a:C\times \mathbb{R}_+ \rightarrow \mathbb{R}_+$ is such that the integral of the density $p$ is $1$. The parameters $\mu$ and $\tau$ are called the \emph{position parameter} and the \emph{dispersion parameter}, respectively. Classic examples of dispersion models are the normal ($d(y;\mu) = (y-\mu)^2$),
gamma ($d(y;\mu) = 2\{ y/\mu - \log(y/\mu) - 1\}$, for $y\in\mathbb{R}_+$),
von Mises ($d(y;\mu) = 2\{ 1-\cos(y-\mu) \}$, for $y\in [0, 2\pi )$),
simplex ($d(y;\mu) = (y-\mu)^2/\{ y(1-y)\mu^2(1-\mu)^2 \}$)
and Poisson ($d(y;\mu) = 2\{ y\log (y/\mu) - y + \mu \}$) distributions, see \citep[p. 13-23]{Jorgensen1997} for further examples.
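For concreteness, the unit deviances listed above can be coded directly and checked against the two defining properties. A minimal Python sketch (the evaluation points are chosen arbitrarily within the respective supports):
\begin{verbatim}
import numpy as np

unit_deviances = {
    "normal":    lambda y, m: (y - m) ** 2,
    "gamma":     lambda y, m: 2 * (y / m - np.log(y / m) - 1),
    "von Mises": lambda y, m: 2 * (1 - np.cos(y - m)),
    "Poisson":   lambda y, m: 2 * (y * np.log(y / m) - y + m),
    "simplex":   lambda y, m: (y - m) ** 2
                              / (y * (1 - y) * m ** 2 * (1 - m) ** 2),
}

y, mu = 0.4, 0.7   # points lying in all five supports
for name, d in unit_deviances.items():
    assert abs(d(y, y)) < 1e-12   # d(y; y) = 0
    assert d(y, mu) > 0           # d(y; mu) > 0 for y != mu
\end{verbatim}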
An immediate consequence of the above definition is that, when the dispersion parameter $\tau$ is fixed, the exponential factor in (\ref{eq.2.1.01}) is maximized at $y=\mu$, since $d(\,\cdot\,;\mu)$ attains its minimum, zero, at $y=\mu$; in particular, $\mu$ is the mode of the density whenever $a(\,\cdot\,;\tau)$ does not depend on $y$. A unit deviance can also be viewed as a generalization of the Kullback-Leibler information divergence \citep{JorgensenLaurizen2000}.
We define for each regular unit deviance $d:C\times\Omega \rightarrow \mathbb{R}_+$ the associated \emph{unit variance function} $V:\Omega \rightarrow \mathbb{R}_+$ given by $V(\mu)= 2/\{ \partial^2 d(\mu;\mu)/ \partial\mu^2 \}$, for each $\mu \in \Omega$. The unit variance function plays an important role in the theory of dispersion models, since it expresses the dependence of the variance on the expectation
under dispersion models and it uniquely characterizes the elements of some important classes of these models. Moreover, the variance function is useful for characterizing certain forms of convergence of dispersion models, {\it c.f.}, \cite{Jorgensen1987A, Jorgensen1987B,Jorgensen1997}.
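The defining relation $V(\mu)= 2/\{ \partial^2 d(\mu;\mu)/ \partial\mu^2 \}$ can be verified symbolically. The following Python sketch (using the {\tt sympy} library) recovers the variance function $V(\mu)=\mu^2$ of the gamma model from its unit deviance:
\begin{verbatim}
import sympy as sp

y, mu = sp.symbols("y mu", positive=True)
d_gamma = 2 * (y / mu - sp.log(y / mu) - 1)

# V(mu) = 2 / (second mu-derivative of d, evaluated at y = mu).
V = sp.simplify(2 / sp.diff(d_gamma, mu, 2).subs(y, mu))
print(V)   # mu**2
\end{verbatim}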
Two major classes of dispersion models will be studied in details: the proper dispersion models (PDMs) and the exponential dispersion models (EDMs). A dispersion model with density (\ref{eq.2.1.01}) is said to be a \emph{proper dispersion model} generated by a unit deviance $d:C\times\Omega \rightarrow \mathbb{R}_+$ when the function $a:C\times\mathbb{R}_+ \rightarrow \mathbb{R}_+$ factorizes as follows,
\begin{equation}\nonumber
a(y;\tau) = a_0(\tau)\,b(y),
\mbox{ for all } y\in C \mbox{ and } \mu\in\Omega
\, ,
\end{equation}
for a suitable choice of the functions $a_0$ and $b$. A PDM is said to be a \emph{regular proper dispersion model} when $C = \Omega$, the unit deviance $d$ is regular and $b(\cdot) = V^{-1/2} (\cdot)$, where $V$ is the unit variance function associated to the regular unit deviance $d$. There is essentially no loss of generality in assuming that $b(\cdot) = V^{-1/2}(\cdot)$ as we shall argue in Section \ref{SS.DM.03}.
A dispersion model generated by a unit deviance $d$ is said to be a \emph{reproductive exponential dispersion model} when the unit deviance has the form
\begin{equation}\label{eq.2.1.02}
d(y;\mu) = y f(\mu) + g(\mu) + h(y) \, , \mbox{ for all } y\in C \mbox{ and } \mu\in\Omega
\, ,
\end{equation}
for suitable functions $f,g$ and $h$. The special form of the unit deviance in (\ref{eq.2.1.02}) will be absorbed in the density
(\ref{eq.2.1.01}) to obtain (in Section \ref{SS.EDM}, equation (\ref{exp})) an alternative representation of the probability density of the reproductive EDM which is the standard used in most of the literature on the subject.
Examples of reproductive EDMs are the normal, gamma, inverse Gaussian and Poisson distributions. The von Mises, simplex, normal, gamma and inverse Gaussian distributions are classic examples of PDMs. Dispersion models that are neither a regular PDM nor a reproductive EDM ``are still not well understood, mainly for lack of examples of this kind'' and because methods for ge\-ne\-ra\-ting those models are currently non-existent \citep[p. 8, last paragraph]{Jorgensen1997}. However, we give below an example and describe a general method for obtaining dispersion models of this type. There are only three PDMs that are also reproductive EDMs: the normal, the gamma and the inverse Gaussian distributions \citep[Theorem 5.6, p. 188]{Jorgensen1997}; see also \cite{Daniels1980}.
\subsubsection{Construction of Dispersion Models} \label{SS.DM.02}
We turn now to the problem of constructing dispersion models. Given a unit deviance $d$ defined on $C\times\Omega$, for a given real interval $C$ and $\Omega = int(C)$, a dispersion model can be obtained by finding a function $a:C\times\mathbb{R}_+\rightarrow\mathbb{R}_+$ such that the integral of the right side of (\ref{eq.2.1.01}) is $1$, {\it i.e.}, the function $a$ is the solution of the following Fredholm integral equation of first kind
\begin{equation}\label{eq.2.1.03}
\int_C a(y;\tau) \exp \left\{ -\frac{1}{2\tau}d(y;\mu) \right\}\upsilon (dy) = 1,
\mbox{for all } (\mu, \tau) \in \Omega\times\mathbb{R}_+ .
\end{equation}
The generation of a dispersion model involves then the construction of a real function from $C\times\Omega$ that is a unit deviance and finding
a solution of the related integral equation (\ref{eq.2.1.03}). Not all unit deviances generate a dispersion model since this equation
might not have a solution. Typically, solving this equation or even just establishing the existence and unicity for a solution is a hard problem. However, the integral equation (\ref{eq.2.1.03}) takes a simpler form for PDMs as we shall see in Section \ref{SS.PDMconstruction}.
Regarding the construction of a unit deviance, note that any distance on $C\times\Omega$, or any increasing function of a distance vanishing at zero, is a unit deviance. The following construction, based on basic properties of characteristic functions, yields unit deviances with a tractable related integral equation. Let $P$ be a probability measure on $\mathbb{R}$ with characteristic function $\varphi$. If $P$ is symmetric around zero, then $\varphi (t) \in \mathbb{R}$ for all $t\in \mathbb{R}$. Moreover, $\varphi (0) = 1$ and $\vert \varphi (t) \vert \le 1$ for all $t\in \mathbb{R}$ \citep[p. 15]{Lucaks-1970}. Assuming further that $P$ is not a lattice distribution ({\it i.e.}, $P$ is not concentrated on a set of the form $\{ a + nh, n = 0, \pm 1, \pm 2, \dots \}$ for some $a, h\in\mathbb{R}$ and $h>0$), then $\vert \varphi (t) \vert < 1$ for every $t\ne 0$ \citep[Theorem 1.1.3, p.2]{Ushakov1999}. Therefore, defining $d:C\times\Omega\rightarrow\mathbb{R}_+$ by $d(y;\mu)= 1 - \varphi (y- \mu ) = 1 - \varphi (\mu -y)$, for each $(y,\mu )\in C\times\Omega$ (the last equality follows from the symmetry of any characteristic function taking real values), yields a unit deviance. If we further require that the first two moments of $P$ are finite, then $d$ is twice continuously differentiable and $\partial^2 d(\mu ; \mu ) / \partial \mu^2 = m_2 > 0$ (where $m_2$ is the second moment of $P$), so $d$ is a regular unit deviance. For instance, the unit deviance given by $d(y; \mu ) = 1 - \exp(-\vert y - \mu \vert)$, constructed with the characteristic function of the Cauchy distribution, is not a regular unit deviance.
The integral equations (\ref{eq.2.1.01}) related to unit deviances constructed with characteristic functions as above, when $C=\mathbb{R}$, for a fixed $\tau = \tau_0$, becomes
\begin{eqnarray}\label{eq.2.1.06}
1 = \int_\mathbb{R} a(y; \tau_0) \exp \left\{ -\frac{1 - \varphi (\mu - y) }{2\tau_0} \right\} \upsilon (dy)
\! = \! \left [ a_{\tau_0} * K_{\tau_0} \right ] \! (\mu ),
\end{eqnarray}
for all $\mu\in\mathbb{R}$. Here the convolution operator $``*"$ refers to the convolution between functions. We want to solve the equation for $a_{\tau_0} (\,\cdot\,) = a (\,\cdot\,;\tau_0)$, where $K_{\tau_0} (\,\cdot\, ) = \exp \{ -1/(2\tau_0)[ 1 - \varphi (\,\cdot\,) ] \}$ is the kernel of the convolution equation. It is remarkable that the kernel $K_{\tau_0}$ is itself a characteristic function of a probability measure, as proved in Corollary 1.3.4 of \cite{Ushakov1999}, page 8; indeed, $\exp\{\lambda [\varphi (\,\cdot\,) - 1]\}$ with $\lambda = 1/(2\tau_0)$ is the characteristic function of a compound Poisson sum with jump distribution $P$. Therefore $K_{\tau_0}$ has a well-defined Fourier transform. The fact that the solution $a_{\tau_0}$ of the convolution equation (\ref{eq.2.1.06}) does not depend on $\mu$ is a consequence of Lemma 5.2 in \cite{Jorgensen1997}.
A calculation involving the formalism of tempered distributions (see \cite{Rudin-1973}, Chapters 7 and 9) and the Dirac delta distribution (in the sense of generalized functions) yields a general (formal weak) solution of the type
\begin{equation}\nonumber
a_{\tau_0} (\,\cdot\, )
=
\mathcal{F}^{-1} \left [ \delta (\,\cdot\, ) / \mathcal{F} \{ K_{\tau_0} \} (\,\cdot\, ) \right ] .
\end{equation}
Here $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and its inverse, respectively. Note that the function $a(\,\cdot\, ; \tau) = a_{\tau} (\,\cdot\, )$ cannot be factorized as a product of a function of the observation $y$ and a function of the parameter $\tau$; therefore the model generated is not a PDM. It is easy to see that the dispersion model generated by the unit deviance $d$ is not an EDM either. Let $\mathcal{P}_0$ be the class of Borel probability measures on $\mathbb{R}$ that are symmetric around zero and not lattice distributions. The discussion above allows us to claim that the cardinality of the class of dispersion models that are neither a PDM nor a reproductive EDM is at least the cardinality of $\mathcal{P}_0$. Moreover, the cardinality of those models that are regular is at least the cardinality of the set of elements of $\mathcal{P}_0$ with finite first two central moments.
To the best of our knowledge, this result has never appeared in the literature before; the idea will be further explored in a future publication.
\subsubsection{Some General Properties of Dispersion Models} \label{SS.DM.03}
Even though PDMs and reproductive EDMs have somewhat different distributional features, these two classes share some fundamental statistical properties, which are common to all regular dispersion models. We summarize these common properties below. First, we note that for any regular unit deviance $d$ it holds that
\begin{equation} \label{eq.2.1.07}
\frac{\partial^2d(\mu;\mu)}{\partial y^2} =
\frac{\partial^2d(\mu;\mu)}{\partial \mu^2}=
- \frac{\partial^2d(\mu;\mu)}{\partial \mu\partial y}, \mbox{ for all } \mu\in\Omega \, ,
\end{equation}
\cite[Lemma 1.1, p. 24]{Jorgensen1997}. This general result has two immediate consequences: it gives alternative ways to calculate the unit variance function and it implies that the unit deviance behaves similarly to the unit deviance of the normal family near its minimum,
$\mu_0,$ since it follows from (\ref{eq.2.1.07}) that
\begin{equation} \nonumber
d(\mu_0 + x\delta; \mu_0 + m\delta ) = \frac{\delta^2}{V(\mu_0)} (x-m)^2 + o(\delta^2) .
\end{equation}
This approximation sends us back to the initial idea of viewing dispersion models as a form of generalization of the normal distribution.
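This local quadratic behaviour is easy to check numerically; for instance, for the gamma unit deviance (for which $V(\mu)=\mu^2$), the following Python sketch shows both sides of the expansion agreeing up to $o(\delta^2)$:
\begin{verbatim}
import numpy as np

d = lambda y, m: 2 * (y / m - np.log(y / m) - 1)  # gamma unit deviance
mu0, delta, x, m = 2.0, 1e-3, 0.3, -0.4
lhs = d(mu0 + x * delta, mu0 + m * delta)
rhs = delta ** 2 / mu0 ** 2 * (x - m) ** 2        # V(mu0) = mu0**2
print(lhs, rhs)   # equal up to o(delta^2)
\end{verbatim}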
A useful characteristic of dispersion models is a duality property for (well behaved) transformations, as we explain below.
Given a unit deviance $d$ on $C\times\Omega$ and a one-to-one function $f: C\rightarrow C_f$ ($C_f \subseteq \mathbb{R}$), the function
$d_f: C_f\times int(C_f) \rightarrow \mathbb{R}_+$
given by $d_f(z;\xi) = d \left (f^{-1}(z); f^{-1}(\xi) \right )$ for each $(z,\xi)\in C_f\times int(C_f)$
is also a unit deviance. Moreover, if a random variable $Y$ follows a dispersion model with unit deviance $d$ and the function $f$ is monotone and differentiable, then the distribution of the transformed random variable $Z = f(Y)$ belongs to a dispersion model with density (in the continuous case)
\begin{equation}\nonumber
p_Z(z; \xi, \tau ) =
\frac{ a \left \{ f^{-1} (z), \tau \right\}}
{ \vert f^{\prime} \{ f^{-1} (z) \} \vert }
\exp \left\{ -\frac{1}{2\tau} d_f(z;\xi) \right\}
\mbox{, for all } z\in C_f \, .
\end{equation}
In the discrete case, the Jacobian $1/ \vert f^{\prime} \{ f^{-1} (z) \} \vert$ is omitted from the expression above. The new dispersion model generated in this way is called in the literature a \emph{re-parametrization by a transformation}, in an abuse of nomenclature, since the new dispersion model is actually not a re-parametrization of the original one. If the unit deviance $d$ is regular and the transformation $f$ is twice continuously differentiable with $\vert f^\prime (y)\vert > 0$ for all $y\in\Omega$, then the unit deviance $d_f$ is also regular and has the associated unit variance function $V_f(\xi ) = V\{f^{-1} (\xi ) \}[f^\prime \{f^{-1}(\xi)\}]^2$, where $V$ is the unit variance function associated with $d$. The transformation $f(y) = \int_{y_*}^y V^{-1/2} (v) dv$ (for a fixed $y_*$) leads to the constant variance function $V_f(\xi ) = 1$ for all $\xi\in int(C_f)$; for instance, for the Poisson model ($V(\mu)=\mu$) it is the classical square-root transformation $f(y)=2\sqrt{y}$. This transformation, called the \emph{variance stabilizing transformation}, plays an important role in the asymptotic theory of dispersion models.
The probability density of a regular dispersion model, $p$ given in (\ref{eq.2.1.01}), can be well approximated by
\begin{equation} \nonumber
q(y;\mu, \tau) = \left\{ 2\pi\tau V(y) \right\}^{-1/2}
\exp \left\{ -\frac{1}{2 \tau} d(y;\mu) \right\} \, ,
\end{equation}
in the sense that $p(y;\mu, \tau) / q(y;\mu, \tau) \rightarrow 1$ when $\tau \rightarrow 0$ for each $y\in C$ and $\mu \in\Omega$.
This approximation is called the \emph{saddlepoint approximation}. Clearly, this convergence is equivalent to
$a(y;\tau)/ \{ 2\pi\tau V(y) \}^{-1/2} \rightarrow 1$ when $\tau \rightarrow 0$. The saddlepoint approximation is often very accurate, and it is useful because the function $a(\cdot,\cdot)$ in (\ref{eq.2.1.01}) is often difficult to calculate or to evaluate numerically.
Note that the integral of $q(\cdot;\mu,\tau)$ is not necessarily $1$; we therefore define the \emph{renormalized saddlepoint approximation} by $q_0(\cdot;\mu,\tau )= q(\cdot;\mu,\tau)\,a_0(\mu,\tau)$, where $a_0(\mu,\tau) = 1/ \int_C q (y ;\mu,\tau ) \upsilon(dy)$. The approximation $p(y;\mu, \tau) \sim q_0(y;\mu, \tau)$ (as $\tau \rightarrow 0$) is often more accurate than the original saddlepoint approximation.
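The accuracy of the saddlepoint approximation can be assessed directly in cases where the exact density is available. The Python sketch below compares $q(y;\mu,\tau)$ with the exact gamma density (parametrized with shape $1/\tau$ and scale $\mu\tau$, so that the mean is $\mu$ and the variance is $\tau\mu^2$); for $\tau=0.1$ the relative error is below $1\,\%$:
\begin{verbatim}
import numpy as np
from scipy import stats

mu, tau = 1.0, 0.1
y = np.linspace(0.2, 3.0, 5)
d = 2 * (y / mu - np.log(y / mu) - 1)        # gamma unit deviance
q = ((2 * np.pi * tau * y ** 2) ** -0.5      # V(y) = y**2
     * np.exp(-d / (2 * tau)))
exact = stats.gamma.pdf(y, a=1 / tau, scale=mu * tau)
print(np.max(np.abs(q / exact - 1)))         # about 0.008
\end{verbatim}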
\subsection{Exponential Dispersion Models} \label{SS.EDM}
An EDM is defined as a class of real distributions having a density with respect to a suitable dominating measure taking the form
\begin{equation}
p(y;\theta,\tau)=\exp\left\{\frac{1}{\tau}\Big[y\theta
-b(\theta)\Big]+c(y;\tau)\right\}. \label{exp}
\end{equation}
Here $b:\Theta \rightarrow \mathbb{R}$, and $c: C \times \mathbb{R}_+ \rightarrow \mathbb{R}$ are known appropriate functions, $\Theta$ is an open set in $\mathbb{R}$, $\theta\in \Theta$ and $\tau>0$ are called the {\it canonical parameter} and the {\it dispersion parameter}, respectively. Typically, the dominating measure is the Lebesgue measure in $\mathbb{R}$, yielding continuous distributions, or the counting measure, generating discrete distributions. A {\it natural exponential family} is obtained when the dispersion parameter $\tau$ is kept fixed. The notion of EDMs was pioneered by \cite{Tweedie1}, who studied several special cases and pointed out important structural properties. The theory of EDMs was systematically exposed in \cite{Jorgensen1987A, Jorgensen1997} where several mathematical properties of EDMs were presented for the first time.
The terminology ``exponential dispersion model" reflects the exponential form of the density of those distributions and the important role played by the dispersion parameter $\tau$.
The cumulant generating function (cgf) of a distribution with density given by (\ref{exp}) is
\begin{equation}\label{cgf}
K(t;\theta,\tau)=\frac{1}{\tau}\Big[b(\theta+ \tau t)-b(\theta)\Big],
\end{equation}
which depends only on the function $b$, termed the {\it cumulant generator}.
Setting $b(\theta)=\theta^2/2$, $b(\theta)=-\log(-\theta)$ and $b(\theta) = -(-2\theta)^{1/2}$
yield the normal, gamma and inverse Gaussian distributions, respectively. The discussion above implies that we can construct one EDM for any specified non-degenerate function $b$ since this function uniquely determines the class of cgf of an EDM via (\ref{cgf}) and the cgf uniquely determines the distribution (an instance of the Fourier inversion theorem for the characteristic functions). Therefore, there exist many EDMs.
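Equation (\ref{cgf}) is easy to verify symbolically. For the gamma case ($b(\theta)=-\log(-\theta)$), the following Python sketch (using {\tt sympy}) recovers the mean $\mu=-1/\theta$ and the variance $\tau\mu^2$ by differentiating the cgf at $t=0$:
\begin{verbatim}
import sympy as sp

t, tau = sp.symbols("t tau", positive=True)
theta = sp.symbols("theta", negative=True)
b = -sp.log(-theta)                              # gamma cumulant generator
K = (b.subs(theta, theta + tau * t) - b) / tau   # cgf from equation (cgf)

mean = sp.simplify(sp.diff(K, t).subs(t, 0))     # -1/theta = mu
var  = sp.simplify(sp.diff(K, t, 2).subs(t, 0))  # tau/theta**2 = tau*mu**2
print(mean, var)
\end{verbatim}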
Differentiating (\ref{cgf}) yields the expectation of the distribution with density (\ref{exp}), given by $\mu \mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}} M(\theta) \mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}} \partial b(\theta)/\partial \theta$. The function $M$, associating each value of $\theta$ with the expectation of the corresponding distribution, is called the {\it mean value mapping}. By inverting the mean value mapping (we show below that $M$ is indeed invertible), we obtain $\theta= M^{-1} (\mu ) = b^{\prime-1}(\mu)\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}} q(\mu)$. The variance of the distribution given by (\ref{exp}) is then
$\tau V(\mu)$, where the {\it variance function} $V(\,\cdot\,)$ has the following alternative forms, for each value of $\mu$,
$$
V(\mu)=
\frac{\partial^2b(\theta)}{\partial\theta^2}=
\frac{\partial b^{\prime}\{q(\mu)\}}{\partial\theta}=
\frac{\partial M\{ q(\mu)\}}{\partial\theta}=
M^{\prime}\{q(\mu)\}>0.
$$
Consequently, the mean value mapping $M$ is a strictly increasing function and the parameter $\theta=q(\mu)=\int V(\mu)^{-1}d\mu$ is a known one-to-one function of $\mu$. For a given variance function $V(\,\cdot\,)$, we can easily obtain the inverse mean value mapping $q(\,\cdot\,)$ and then calculate $b(\theta)=\int q^{-1}(\theta) d\theta$ (for each $\theta$) and the cgf given by (\ref{cgf}). The elements of the class of EDMs are thus uniquely determined by their variance functions $V(\,\cdot\,)$. Moreover, the variance functions play a key role in studying several structural properties of EDMs.
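This construction is fully algorithmic, as the following Python sketch (using {\tt sympy}) illustrates for the Poisson variance function $V(\mu)=\mu$, recovering $\theta=q(\mu)=\log\mu$ and the cumulant generator $b(\theta)=\exp(\theta)$:
\begin{verbatim}
import sympy as sp

mu = sp.symbols("mu", positive=True)
theta = sp.symbols("theta")

V = mu                                  # Poisson variance function
q = sp.integrate(1 / V, mu)             # theta = q(mu) = log(mu)
mu_of_theta = sp.solve(sp.Eq(theta, q), mu)[0]   # q^{-1}(theta) = exp(theta)
b = sp.integrate(mu_of_theta, theta)    # b(theta) = exp(theta)
print(q, b)
\end{verbatim}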
The EDMs with quadratic variance function given by $V(\mu)=a \mu^2+b \mu+c$ form an important and well studied class of EDMs. \cite{Morris} studied those EDMs in details and proved that this subclass is composed of only six distributions: the normal ($a=b=0, c=1$), the gamma
($a=1, b=c=0$), the Poisson ($a=c=0, b=1$), the binomial ($a=-1, b=1, c=0$), the negative binomial ($a=b=1, c=0$) and the generalized secant hyperbolic ($a=c=1, b=0$) distributions. J\o{}rgensen (1997) discussed further the generalized secant hyperbolic by taking $b(\theta)=-\log\{\cos(\theta)\}$ and
$$c(y;\tau) = \log \left[\frac{2^{(1-2\tau)/\tau}} {\tau \Gamma(\tau^{-1})} \right]
- \sum_{j=1}^{\infty} \log \left[1 + \frac {y^2} {(1+ 2 j \tau)^2}\right],$$
where $\Gamma(\cdot)$ is the gamma function.
The $r$th cumulant of a distribution contained in an EDM with density given by (\ref{exp}) is $\kappa_r=\kappa_r(\theta,\tau)=\partial^{r}K(t;\theta,\tau)/\partial t^r \big{|}_{t = 0}$. Therefore, the function $b(\theta)$ generates all cumulants of the distribution,
$\kappa_r=\tau^{r-1} \partial^{r}b(\theta) / \partial\theta^r$ for $r \ge1$. A direct implication is that any distribution contained in an EDM has finite cumulants of all orders, which rules out the use of EDMs for modelling situations in which the distribution is required to have very heavy tails.
\cite{Tweedie1} proved the normal convergence of the random variable $Z=(Y-\mu)/\sqrt{\tau}$, where the distribution of the random variable $Y$ is contained in an EDM, using an expansion for its cgf. The cgf of $Z$ follows from (\ref{cgf}) as
$$K_Z(t;\theta,\tau)=-\mu \frac{t}{\sqrt{\tau}}+\frac{1}{\tau}\,\bigg[b(\theta+ \sqrt{\tau} t)-b(\theta)\bigg].$$
By expanding $b(\theta+ \sqrt{\tau} t)$ in a Taylor series in $\sqrt{\tau}\,t$ around $\theta$, with the cumulants of $Y$ as coefficients, and collecting like terms,
we obtain
$$K_Z(t;\theta,\tau)=V(\mu)\, t^2/2 +\sum_{k=3}^{\infty}\,\frac{\partial^{k}b(\theta)}{\partial\theta^{k}}\,\frac{\tau^{k/2-1}\,t^k}{k!}.$$
Based on this expansion, we conclude that
\begin{equation}\label{asy}
Z=(Y-\mu)/\sqrt{\tau}\,\,\stackrel{\rm D}{\rightarrow}\,\,\text{N}(0,V(\mu))\,\,\,\,\,\text{when}\,\,\,\,\,\tau\rightarrow 0,
\end{equation}
where $\stackrel{\rm D}{\rightarrow}$ denotes convergence in distribution. Clearly, equation (\ref{asy}) is exact for the normal distribution since the derivatives of $b(\theta)$ of order greater than two vanish.
Equation (\ref{asy}) generalizes a number of known results on convergence to normality such as those for the gamma and inverse Gaussian
distributions. In fact, under some regularity conditions, distributions contained in EDMs are approximately normally distributed for small values of $\tau$. In this way, standard asymptotic theory applies for small values of the dispersion parameter as well as for large sample sizes. The mathematical conditions for this result, called {\it small dispersion asymptotics}, were fully discussed in J\o{}rgensen (1987b).
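The convergence in (\ref{asy}) can be observed in a quick simulation. For the gamma EDM, where $Y$ has shape $1/\tau$ and scale $\mu\tau$, the skewness of $Z=(Y-\mu)/\sqrt{\tau}$ equals $2\sqrt{\tau}$ and thus vanishes as $\tau\rightarrow 0$; a Python sketch:
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu = 2.0
for tau in (1.0, 0.1, 0.01):
    y = rng.gamma(shape=1 / tau, scale=mu * tau, size=100_000)
    z = (y - mu) / np.sqrt(tau)
    print(tau, round(stats.skew(z), 2))  # about 2 * sqrt(tau)
\end{verbatim}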
Let $Y_1,\cdots,Y_n$ be independent and identically distributed (iid) random variables with distribution contained in an EDM
with mean $\mu$ and dispersion parameter $\tau$. The form of the cgf in (\ref{cgf}) implies that the distribution of the sample
mean $\bar{Y}=\sum_{i=1}^n Y_i/n$ belongs to the same EDM as the distribution of the elements of the sample, with mean $\mu$ and dispersion parameter $\tau/n$. This result includes well-known convolution properties of the normal, gamma and inverse Gaussian distributions, and implies that the distributions contained in EDMs are infinitely divisible, provided the dispersion parameter is allowed to take values arbitrarily close to zero.
\subsubsection{Tweedie Models}
An important subclass of the models defined by (\ref{exp}), called the {\it Tweedie models}, is obtained when $V(\mu)=\mu^{p}$, for $p\in (-\infty, 0]\cup [1,+\infty)$. A full discussion of these distributions with varying $p$ was first addressed by \cite{Tweedie1,Tweedie2} and further developed by \cite{Jorgensen1987A}. The support of the Tweedie models depends on the value of $p$. The normal, Poisson, gamma and inverse Gaussian distributions are obtained from (\ref{exp}) with $V(\mu)=\mu^{p}$ for $p=0,1,2$ and $3$, respectively. Distributions generated by extreme stable distributions are obtained when $p<0$, with support on $\mathbb{R}$. J\o{}rgensen (1987a) demonstrated that no EDM with power va\-riance function exists for $0 < p <1$. When $1 < p < 2$, we obtain the compound Poisson--gamma distributions, which are inte\-res\-ting because they are continuous for $y > 0$ but have positive probability mass at zero. For $p > 2$ ($p \ne 3$), we obtain continuous distributions generated by positive stable distributions. When $p$ increases to $+\infty$, the Tweedie models converge to extreme stable distributions.
We define the cumulant generator $b_{p}(\theta)$ for Tweedie models ($p\neq1,2$) by
$$b_{p}(\theta) = (2-p)^{-1} \left[(1-p)\,\theta \right]^{\frac {p-2} {p-1}}.$$
Further, $b_{1}(\theta)=\exp(\theta)$ and $b_{2}(\theta)=-\log(-\theta)$.
The unit deviance $d_p(\cdot ;\cdot)$ of a Tweedie model follows from the straightforward calculation
$$
d_{p}(y;\mu) = 2\,\int_{\mu}^y \frac{(y-t)}{V(t)} dt =
2 \left\{\frac{[\max(y,0)]^{2-p}}{(1-p)(2-p)}-\frac{y \mu^{1-p}}{1-p}+\frac{\mu^{2-p}}{2-p}\right\}.
$$
The remaining quantities in equation (\ref{exp}) can be evaluated numerically, for the cases $1<p<2$ and $p>2$, using power series expansions following
\cite{Jorgensen1987A}; see also \cite{BonatKokonendjib}.
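The power-variance unit deviance is simple to implement. As a sanity check, the Python sketch below verifies that $p=3$ recovers the inverse Gaussian unit deviance $(y-\mu)^2/(\mu^2 y)$:
\begin{verbatim}
import numpy as np

def tweedie_deviance(y, mu, p):
    """Unit deviance d_p(y; mu) for Tweedie models, p not in {1, 2}."""
    return 2 * (np.maximum(y, 0) ** (2 - p) / ((1 - p) * (2 - p))
                - y * mu ** (1 - p) / (1 - p)
                + mu ** (2 - p) / (2 - p))

y, mu = 1.3, 0.8
print(tweedie_deviance(y, mu, 3.0),      # 0.3005...
      (y - mu) ** 2 / (mu ** 2 * y))     # identical
\end{verbatim}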
\subsubsection{Saddlepoint Approximations}
We adopt the notation $K_Y^{(j)}(t;\theta,\tau)=\partial^j K_Y(t;\theta,\tau)/\partial t^j$ for $j \ge 1$.
The saddlepoint approximation for the density of $Y$ takes the form
\begin{equation}\label{saddle}
\pi_Y(y;\theta,\tau) \simeq \left[\frac{1}{2\pi\,K_Y^{(2)}(\hat{\lambda};\theta,\tau)}\right]^{1/2}\,\exp\left[
K_Y(\hat{\lambda};\theta,\tau)-\hat{\lambda} y\right],
\end{equation}
where the saddlepoint $\hat{\lambda}$ of $K_Y(\lambda;\theta,\tau)-\lambda y$ is found by solving the (usually nonlinear) equation $K_Y^{\prime}(\hat{\lambda};\theta,\tau)=y$.
This approximation was derived by Daniels (1954) for any density given its cgf, and it can be applied here to approximate
the density of an EDM when $\tau$ is small. In fact, the $r$th cumulant of $Y$ is of order $O(\tau^{r-1})$ (for $r \ge1$), and the
density of $Y$ with large precision parameter $1/\tau$ can therefore be treated like the density of a sample average in large samples.
By differentiating (\ref{cgf}), we find $b^{\prime}(\theta+\tau\hat{\lambda})=y$ and then $\hat{\lambda}
=[q(y)-\theta]/\tau$. By definition of the variance function, we have $K_Y^{(2)}(\hat{\lambda};\theta,\tau)=\tau\,V(y)$. Inserting these quantities into the last density approximation, the saddlepoint approximation for the EDM density can be expressed in the simple form
\begin{equation}
\pi_Y(y;\theta,\tau)\simeq \left[\frac{1}{2\pi\,\tau\,V(y)}\right]^{1/2}\exp\left[-\frac{1}{2 \tau}\,d(y;\mu)\right],\label{spoint}
\end{equation}
which holds when $\tau\rightarrow 0$. This approximation is exact, independently of $\tau$, for the normal distribution, since the cumulants of $Y$ of order three and higher vanish. Equation (\ref{spoint}) is equivalent to the asymptotic result for the function $a(\cdot\,;\cdot)$ in (\ref{eq.2.1.01}): $\sqrt{\tau}\,a(y;\tau) \rightarrow [2\pi\,V(y)]^{-1/2}$ when $\tau \rightarrow 0$.
The distribution function of $Y$ follows approximately from (\ref{spoint}) as
\begin{equation*}
\Pi_Y(y;\theta,\tau)=P(Y \le y)\simeq \int_{-\infty}^y \left[\frac{1}{2\pi\,\tau\,V(x)}\right]^{1/2}\,\exp\left[-\frac{1}{2 \tau}\,d(x;\mu)\right]\,dx.
\end{equation*}
By evaluating this integral asymptotically, Lugannani and Rice (1980) showed that
\begin{equation*}
\Pi_Y(y;\theta,\tau)\simeq \left[\Phi\left(\frac{r}{\sqrt\tau}\right)+\sqrt\tau\,\phi\left(\frac{r}{\sqrt\tau}\right)\,\left(\frac{1}{r}-\frac{1}{u}\right)\right],
\end{equation*}
where $\Phi(\cdot)$ and $\phi(\cdot)$ are the standard normal distribution function and density, respectively, $r=\rm{sgn}(y-\mu)\,\sqrt{d(y;\mu)}$ is called the {\it deviance residual}, and $u=\frac{V(y)^{1/2}}{2}\,\frac{\partial d(y;\mu)}{\partial y}$ is the {\it dual score residual}. This formula is very easy to apply for computing probabilities under any EDM, since it involves only the variance and deviance functions.
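As a numerical illustration of this formula, the Python sketch below evaluates it for the gamma model (for which $V(y)=y^2$ and $\partial d(y;\mu)/\partial y = 2(1/\mu-1/y)$) and compares it with the exact gamma cdf; for $\tau=0.1$ the two agree to about four decimal places:
\begin{verbatim}
import numpy as np
from scipy import stats

mu, tau, y = 1.0, 0.1, 1.5
d = 2 * (y / mu - np.log(y / mu) - 1)     # gamma unit deviance
r = np.sign(y - mu) * np.sqrt(d)          # deviance residual
u = 0.5 * y * 2 * (1 / mu - 1 / y)        # V(y)**0.5 / 2 * dd/dy
lr = (stats.norm.cdf(r / np.sqrt(tau))
      + np.sqrt(tau) * stats.norm.pdf(r / np.sqrt(tau)) * (1 / r - 1 / u))
print(lr, stats.gamma.cdf(y, a=1 / tau, scale=mu * tau))
\end{verbatim}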
We now move to the density of the sample average $\overline{Y}=\sum_{i=1}^{n}Y_i/n$ of iid random variables $Y_1,\cdots,Y_n$ having density (\ref{exp}) and cgf (\ref{cgf}). The density function of $\overline{Y}$ follows from the Fourier inversion integral as
$$\pi_{\overline{Y}}(y;\theta,\tau)=\frac{1}{2 \pi}\, \int_{-\infty}^{\infty}\exp\Big[- \rm{i} t\, y + n \, K_Y( \rm{i} t/n;\theta,\tau)\Big]\, dt,$$
where $\rm{i}=\sqrt{-1}$. This integral is suitable for Daniels' saddlepoint approxi\-mation. Setting $z =\rm{i} t/n$, the saddlepoint $\hat z$ of
$K_Y(z;\theta,\tau)- z y$ solves $K^{\prime}_Y(\hat z;\theta,\tau)=y$. Then, the density approximation of $\overline{Y}$ can be expressed as
\begin{eqnarray*}
\pi_{\overline{Y}}(y;\theta,\tau)&=&\Bigg[\frac{n}{2\pi\,K_Y^{(2)}(\hat z;\theta,\tau)}\Bigg]^{1/2}\,\exp\Big\{n \,\Big[K_Y(\hat z;\theta,\tau)
-\hat{z} \,y\Big]\Big\},
\end{eqnarray*}
which can provide good results in practice.
It is much more frequent in statistical applications to compute distribution functions than density functions. By integrating the last equation, the cumulative distribution function (cdf) of $\overline{Y}$ has the form
\begin{equation*}
\Pi_{\overline{Y}}(y;\theta,\tau)\simeq \large{\int_{-\infty}^y}\Bigg[\frac{n}{2\pi\,K_Y^{(2)}(t;\theta,\tau)}\Bigg]^{1/2}\,
\exp\Big\{n\,\Big[K_Y(t;\theta,\tau)- t \,x\Big]\Big\} dx,
\end{equation*}
where $t= t(x)$ is determined by $K_Y^{\prime}(t;\theta,\tau)=x$. By transformation of variables and integration with respect to the saddlepoint variable $t$ instead of $x$, we obtain $K_Y^{(2)}(t;\theta,\tau)\,dt=d x$ and then
\begin{equation*}
\Pi_{\overline{Y}}(y;\theta,\tau)\simeq \large{\int_{-\infty}^{t(y)}}\Bigg[\frac{n K_Y^{(2)}(t;\theta,\tau)}{2\pi}\Bigg]^{1/2}\exp\Big\{n \Big[K_Y(t;\theta,\tau)-t \,K_Y^{\prime}(t;\theta,\tau)\Big]\Big\} dt,
\end{equation*}
where $t(y)$ is found by solving $K_Y^{\prime}(t(y);\theta,\tau)= y$. This integral for $\Pi_{\overline{Y}}(y;\theta,\tau)$ is much easier
to compute than the previous one because the saddlepoint function appears explicitly in the integrand. The saddlepoint approximation for the cdf of ${\overline{Y}}$ follows from Lugannani and Rice (1980) as
\begin{equation}\label{sadlemean}
\Pi_{\overline{Y}}(y;\theta,\tau)\simeq \Phi\left[r(y)\right]+\phi\left[r(y)\right]\,\left[\frac{1}{r(y)}-\frac{1}{u(y)}\right],
\end{equation}
where
$$r(y)=\rm{sgn}[t(y)]\,\left\{2 n\, \left[y\,t(y)-K_Y(t(y);\theta,\tau)\right]\right\}^{1/2}$$
and
$$u(y)=t(y)\,\left[n\,K_Y^{(2)}(t(y);\theta,\tau)\right]^{1/2}.$$
Equation (\ref{sadlemean}) provides highly accurate results for the probabilities associated
with $\overline{Y}$.
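A minimal numerical sketch of (\ref{sadlemean}) follows (Python; the gamma case is our illustrative choice because the exact cdf of $\overline{Y}$ is then available for comparison); the saddlepoint equation $K_Y^{\prime}(t(y);\theta,\tau)=y$ is solved numerically.
\begin{verbatim}
# Saddlepoint cdf approximation for the mean of n iid gamma variables,
# solving K'(t) = y numerically; parameter values are illustrative.
import numpy as np
from scipy import stats, optimize

alpha, s, n = 2.0, 1.5, 8            # Y_i ~ Gamma(shape alpha, scale s)
K  = lambda t: -alpha * np.log(1.0 - s * t)        # cgf, for t < 1/s
K1 = lambda t: alpha * s / (1.0 - s * t)
K2 = lambda t: alpha * s**2 / (1.0 - s * t) ** 2

def cdf_mean(y):
    t = optimize.brentq(lambda v: K1(v) - y, -50.0, 1.0 / s - 1e-10)
    r = np.sign(t) * np.sqrt(2.0 * n * (y * t - K(t)))
    u = t * np.sqrt(n * K2(t))
    return stats.norm.cdf(r) + stats.norm.pdf(r) * (1.0 / r - 1.0 / u)

y = 3.5                              # a point away from the mean alpha*s
print(cdf_mean(y))
print(stats.gamma.cdf(y, a=n * alpha, scale=s / n))  # exact cdf of the mean
\end{verbatim}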
\subsection{Proper Dispersion Models} \label{SS.PDM}
We consider now the notions of general dispersion models and general PDMs that will allow us to understand some statistical inferential aspects of the notion of PDMs introduced in Section \ref{SS.DM}. A {\it general dispersion model} is a family of real distributions parameterized by two parameters, $\theta\in \Theta \subseteq \mathbb{R}$ and $\lambda \in \Lambda \subseteq \mathbb{R}_+ ,$ where $\Theta$, $\Lambda$ are intervals, $\Lambda$ is unbounded to the right and the density (or Radon-Nikodym derivative) with respect to a common dominating real measure $v$ is of the form
\begin{equation}\label{eq.2.3.1.01}
p(y; \theta, \lambda) = a(y;\lambda) \exp \left\{\lambda t(y;\theta) \right\},
\mbox{ for all } y\in S,\ \theta\in \Theta\ \mbox{and}\ \lambda\in \Lambda .
\end{equation}
Here $S$ is the support of the dominating measure $v$, and $a: S\times\Lambda \rightarrow \mathbb{R}_+$ and $t: S\times\Theta \rightarrow \mathbb{R}$ are suitable functions. When the function $a$ factorizes as $a(y;\lambda)=a_0(\lambda)b(y)$, for all $y\in S$ and $\lambda\in \Lambda$, the family of distributions above is said to be a {\it general proper dispersion model} (general PDM). Note that when $t=-d/2$ for a unit deviance $d$, $\lambda = 1/\tau$ and $\mu = \theta$, the general dispersion model and the general PDM coincide with the dispersion models and the PDMs defined in Section \ref{SS.DM}; that is, (\ref{eq.2.3.1.01}) reduces to (\ref{eq.2.1.01}). At first sight, the setup above is much more general than the situation considered in Section \ref{SS.DM}, since here we do not require the function $t$ to be (minus one half of) a unit deviance. However, we will argue that, in order to obtain families of distributions with desirable statistical inferential properties, we need to introduce restrictions in the general definition above that render it essentially equivalent to the setup discussed in Section \ref{SS.DM}. In this way, we use an embedding of classes of families of distributions to obtain a better understanding of the inferential properties of statistical models based on PDMs.
\subsubsection{Some Key Properties}
The notions of yoke and yokable function defined below will allow us to connect the notions of PDMs (as defined in Section \ref{SS.DM}) and general dispersion models (defined above). Moreover, these notions will allow us to characterize the existence of maximum likelihood estimates (MLEs) for PDMs. Given an interval $\Omega \subseteq \mathbb{R}$, a function $t:\Omega\times\Omega \rightarrow \mathbb{R}$ is said to be a {\it yoke} if $\sup_{\theta\in\Omega} t(y;\theta) = t(y;y)$ for all $y\in \Omega$. When, additionally, $t(y;y)=0$ for all $y\in\Omega$, the function $t$ is called a {\it normed yoke}.
If $d:\Omega\times\Omega \rightarrow \mathbb{R}_+$ is a unit deviance, then $-d$ is a normed yoke.
On the other hand, if the function $t:\Omega\times\Omega \rightarrow \mathbb{R}$ is a yoke and we denote, for each $y\in\Omega$, the supremum $\sup_{\theta\in\Omega} t(y;\theta)$ by
$\hat t(y)$, then the function given by $\tilde t (y;\theta) = t(y;\theta) - \hat t(y)$ is a normed yoke.
A function $t:C\times C \rightarrow \mathbb{R}$, where $C$ is a real interval, is said to be \emph{yokable} when the following three conditions are satisfied:
{\it i)} $\sup_{\theta\in C} t(y;\theta) < \infty$ for all $y\in C$;
{\it ii)} there exists an open interval $\Omega\subseteq C$ such that, for each $y\in\Omega$, the supremum $\sup_{\theta\in\Omega} t(y;\theta)$ is attained at a unique point $\hat\theta_y$; and
{\it iii)} the function $\hat\theta: \Omega \rightarrow \mathrm{int}(\Theta)$ given by $\hat\theta (y) = \hat\theta_y$ (for each $y\in\Omega$) is a bijection.
If the function $t$ used in (\ref{eq.2.3.1.01}) to define general PDMs is yokable, then we may define a PDM using the unit deviance $d:C\times\Omega \rightarrow \mathbb{R}$ given by
$d(y;\mu) = 2 \left [\hat t(y) - t \left\{ y; \hat\theta (\mu ) \right\} \right]$, for all $y\in C$ and $\mu\in\Omega$.
In this case, the density of a general PDM takes the form
\begin{equation}\label{eq.2.3.1.02}
p(y; \mu, \lambda) =
a_0(\lambda)b(y)
\exp \left\{\lambda\,\hat t(y) - \frac{\lambda}{2}
d(y; \mu)\right\},
\end{equation}
for all $y\in S$, $\mu\in \Omega$ and $\lambda\in \Lambda$,
so that the factor multiplying $\exp\{-\lambda\, d(y;\mu)/2\}$ is $a_0 (\lambda)\, b(y)\, \exp\{\lambda \hat{t}(y)\}$.
If we further assume that $\Omega = S = C$, that the function $d$ is a regular unit deviance, and that $b(y)=V^{-1/2}(y)$ for all $y\in \Omega$, then the family of distributions defined by (\ref{eq.2.3.1.02}) is a regular PDM as defined in Section \ref{SS.DM.01}.
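As a small illustration of this construction (our example, not taken from the sources reviewed here), the sketch below recovers numerically the Poisson unit deviance from the yokable function $t(y;\theta)=y\theta-e^{\theta}$, for which $\hat\theta(y)=\log y$, $\hat t(y)=y\log y-y$ and $d(y;\mu)=2[y\log(y/\mu)-y+\mu]$.
\begin{verbatim}
# Building a unit deviance from a yokable function, here
# t(y;theta) = y*theta - exp(theta); conditions i)-iii) hold for y > 0.
import numpy as np
from scipy import optimize

t = lambda y, th: y * th - np.exp(th)

def theta_hat(y):
    # unique maximiser of t(y; .), found numerically
    return optimize.minimize_scalar(lambda th: -t(y, th)).x

def d(y, mu):
    return 2.0 * (t(y, theta_hat(y)) - t(y, theta_hat(mu)))

y, mu = 3.0, 2.0
print(d(y, mu))
print(2.0 * (y * np.log(y / mu) - y + mu))  # Poisson unit deviance
\end{verbatim}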
We argue below that it is natural from the statistical point of view to assume the function $t$ in (\ref{eq.2.3.1.01}) to be yokable.
First, conditions {\it i)} and {\it ii)} of the definition of a yokable function ensure the existence of at least one local maximum of the likelihood function for $\mu$, obtained when the index parameter $\lambda$ is kept fixed. Note that this is a minimal necessary requirement for the existence and uniqueness of the maximum likelihood estimate. Condition {\it iii)} implies that $\theta$ is a re-parametrization of the position parameter of the related proper dispersion model.
In order to elucidate some basic statistical properties of general dispersion models, let us consider the so-called Barndorff-Nielsen $p^*$ formula \citep{Barndorff-Nielsen-1983, Barndorff-Nielsen-1988} for approximating the conditional distribution of the MLE (for a given statistical model) given an ancillary statistic ({\it i.e.}, a statistic carrying no information on the parameter of interest; see \cite{JorgensenLabouriau2012}, Chapter 2 for details). Here we consider the maximum likelihood estimation of the parameter $\mu$ in a statistical model defined by (\ref{eq.2.3.1.02}) when the parameter $\lambda$ is kept fixed and the estimation is based on a single observation, $y\in S$. Taking a degenerate ancillary statistic ({\it e.g.}, a constant statistic), the $p^*$ formula yields an approximation to the marginal distribution of the MLE. In these circumstances the MLE of $\mu$ is $\hat{\mu}(y)=y$, and the $p^*$ approximation is given by
\begin{equation}\label{eq.2.3.1.03}
p(y; \theta(\mu), \lambda) \sim p_0(y;\mu ,\lambda),
\end{equation}
where $p_0$ is the renormalized saddlepoint approximation corresponding to the unit deviance $d$, with
$p_0(y; \mu, \lambda) = a_0(\mu,\lambda) V^{-1/2 } (y)
\exp \left\{- \frac{\lambda}{2} d(y; \mu)\right\}$,
for all $ y\in S$, $\mu\in \Omega$ and $\lambda\in \Lambda$.
Barndorff-Nielsen's formula is said to be {\it exact} if the two sides of (\ref{eq.2.3.1.03}) coincide for all $y$ in $S$ and all $(\mu, \lambda )$ in $\Omega\times\Lambda$.
A consequence of the saddlepoint approximation above is that standard dispersion models are asymptotically normally distributed for large $\lambda$. In this way, Barndorff-Nielsen's formula may be viewed as a refinement of the normal approximation to the distribution of the MLE. Furthermore, there is a strong result for regular PDMs stating that (assuming $b$ continuous at $y=\mu$) the following three statements are equivalent: i) Barndorff-Nielsen's formula is exact for all $\lambda\in \Lambda$; ii) Barndorff-Nielsen's formula is asymptotically exact in the sense that the ratio $p_0/p$ tends to 1 as $\lambda \to \infty$ for all $y$ and $\mu$ in $\Omega$; iii) the function $\hat{t}(y)$ is constant on $\Omega$ and $b(y) \propto V^{-1/2}(y)$ (see \cite{Jorgensen1997}, Theorem 5.4 and Corollary 5.5). When these statements hold, the normalizing constant $a_0(\mu,\lambda)$ does not depend on $\mu$ and satisfies $a_0(\mu,\lambda)\propto a(\lambda) \exp\left\{\lambda \hat{t}(\mu)\right\}$, with $a(\lambda)\sim \sqrt{\frac{\lambda}{2\pi}}\exp\left\{-\lambda \hat{t}(\mu)\right\}$ as $\lambda \rightarrow \infty$.
In conclusion, the density in (\ref{eq.2.3.1.02}) defines a regular PDM when Barndorff-Nielsen's formula is exact.
We discuss below some other properties of PDMs which will show some of the peculiarities of those families of distributions.
For any fixed value of the position parameter $\mu$, say $\mu = \mu_0$, the family given by (\ref{eq.2.1.01}) is an exponential family
with canonical statistic $d(\cdot;\mu_0)$ and canonical parameter $-1/(2\tau)$. Hence, the general form of an exponential family density is recovered when the position parameter $\mu$ is held fixed.
Another property of PDMs is that, when the dispersion parameter is fixed, say $\tau = \tau_0$, the unit deviance is a pivotal statistic for
$\mu$. That is, if $Y$ is a random variable having density (\ref{eq.2.3.1.02}) with respect to $\upsilon$, then the distribution of the random variable $d(Y;\mu)$ does not depend on $\mu$. This property follows by observing that the integral
$\int_C b(y) \exp \left\{ - d(y;\mu) / (2 \tau_0) \right\} \upsilon (dy )= 1/a_0(\tau_0)$ does not depend on the value of $\mu$; it is then easy to prove that the moment generating functions of the random variables $T = T_\mu= d(Y;\mu)$, for $\mu\in\Omega$, are all equal and depend only on $\tau_0$. This key result is Lemma 5.2 in \cite{Jorgensen1997}.
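The pivotal property can be checked by simulation; the sketch below (Python, with an illustrative value of $\kappa=1/\tau_0$) does so for the von Mises PDM, for which $d(y;\mu)=2[1-\cos(y-\mu)]$: the quantiles of $d(Y;\mu)$ agree across different values of $\mu$ up to Monte Carlo error.
\begin{verbatim}
# Simulation check of the pivotal property for the von Mises PDM:
# the law of d(Y;mu) = 2[1 - cos(Y - mu)] does not depend on mu.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
kappa, n = 4.0, 200_000        # kappa = 1/tau_0 (illustrative)

for mu in (0.0, 1.0, 2.5):
    y = stats.vonmises.rvs(kappa, loc=mu, size=n, random_state=rng)
    T = 2.0 * (1.0 - np.cos(y - mu))
    print(mu, np.quantile(T, [0.5, 0.9, 0.99]))  # agree across mu
\end{verbatim}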
\subsubsection{Construction of Proper Dispersion Models} \label{SS.PDMconstruction}
As discussed in Section \ref{SS.DM.02}, given a unit deviance an associated dispersion model can in principle be constructed by solving the integral equation (\ref{eq.2.1.03}), but this is in general a hard problem. However, that integral equation takes the following simpler form for PDMs
\begin{equation}\label{eq.3.2.01}
a_0(\tau)\int_C b(y) \exp \left\{ -\frac{1}{2\tau} d(y;\mu) \right\} \upsilon (dy) =
1,
\mbox{ for all } (\mu, \tau) \in \Omega\times\mathbb{R}_+
\, ,
\end{equation}
which has the solution $a_0(\tau) = 1 / \int_C b(y) \exp \left\{ -d(y;\mu) /(2\tau) \right\} \upsilon (dy)$, provided that the integral involved is finite. This integral does not vanish (since the integrand is positive apart from a $\upsilon$-null set) and does not depend on the parameter $\mu$ because of the second key property of PDMs discussed above.
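As a check of this recipe, the following sketch (Python; the von Mises case with $b\equiv1$ on $C=[0,2\pi)$ is our illustrative choice) computes $a_0(\tau)$ by numerical integration and compares it with the closed form $a_0(\tau)=e^{1/\tau}/\{2\pi I_0(1/\tau)\}$, $I_0$ being the modified Bessel function of order zero.
\begin{verbatim}
# Solving the PDM integral equation numerically for the von Mises case,
# with d(y;mu) = 2[1 - cos(y - mu)] and b(y) = 1 on [0, 2*pi); tau is
# an illustrative value, and a_0(tau) does not depend on mu.
import numpy as np
from scipy import integrate, special

tau, mu = 0.5, 1.0
d = lambda y: 2.0 * (1.0 - np.cos(y - mu))

integral, _ = integrate.quad(lambda y: np.exp(-d(y) / (2.0 * tau)),
                             0.0, 2.0 * np.pi)
a0 = 1.0 / integral
print(a0, np.exp(1.0 / tau) / (2.0 * np.pi * special.i0(1.0 / tau)))
\end{verbatim}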
Another useful technique for constructing PDMs involves the use of a transformation group, say $G$, acting freely and transitively
on $\Omega = C = \mathbb{R}$; see Section 3.3 of \cite{JorgensenLabouriau2012} for the basic definitions and a short account of transformation groups in statistical inference. Here we denote the action of $G$ by $(g,y) \mapsto gy$ for $(g,y) \in G\times \Omega$. Let $t,b:\Omega\rightarrow\mathbb{R}_+$, where the function $b$ is invariant under the action of $G$ ({\it i.e.}, $b(gy) = b(y)$ for all $g\in G$ and $y\in\Omega$) and $t$ is an arbitrary function. Assume, moreover, that $\int_\Omega b(y) \exp \left\{ \lambda t( g^{-1} y) \right\} dy < \infty$ for $\lambda$ in an interval
$\Lambda \subseteq \mathbb{R}_+$ that is unbounded to the right. Since $G$ acts freely and transitively on $\Omega$, there is a one-to-one correspondence between $\Omega$ and $G$. Assume further that the supremum $\hat t = \sup_{g\in G} t(g^{-1} y)$ is finite. Then it is easy to see that $t(g^{-1} y)$ is yokable, and we may define the unit deviance $d(y;\mu) = 2 \left\{ \hat t - t(\hat g^{-1}_\mu y ) \right\}$, where $\hat g_\mu$ is the MLE of $g$ when the observation is $\mu$. The type of dispersion model constructed with this kind of unit deviance is called a \emph{transformation dispersion model}.
Examples of the construction technique based on transformation groups described above are the special PDMs called \emph{location-dispersion models}, for which the density function with respect to the dominating measure $\upsilon$ has the form
$ p(y; \mu, \tau) = c(\tau) \exp \left\{ - d(y-\mu)/(2\tau) \right\}$,
where $C = \Omega = \mathbb{R}$ and $d$ is a unit deviance; here the transformation group is the group of translations with action
$y \mapsto g + y$. Another example is the von Mises distribution, defined using the group of rotations of the unit circle with action $y \mapsto (g + y) \bmod 2\pi$.
\section{Applying Dispersion and Exponential Dispersion Regression}
We review Exponential Dispersion (ED) regression, which extends the well-known GLMs, discuss some improved hypothesis tests, and
describe some models for clustered and dependent data based on latent L\'evy processes.
\subsection{Exponential Family Non-Linear Models}
\label{SS.EFNLR}
We consider models where the random variables $Y_1,\cdots,Y_n$ are assumed independent and each $Y_i$ has a density or probability function of the form (\ref{eq.2.1.01}) with mean $\mu_i=E(Y_i)$ on a convenient support.
We define the ED regression by the random component (\ref{eq.2.1.01}) and the systematic component
\begin{equation}\label{systematic}
g(\mu_i)=\eta_i=f(\boldsymbol{x}_i;\boldsymbol{\beta}),
\end{equation}
where $g(\cdot)$ is a known one-to-one twice continuously differentiable link function, $\eta_i$ and
$\boldsymbol{x}_i = (x_{i1}, \cdots, x_{ip})^T$
denote the linear predictor and the $p\times 1$ vector of non-stochastic independent variables associa\-ted with the $i$th observation, respectively, $\boldsymbol{\beta}=(\beta_1,\cdots,\beta_p)^T$ is a $p$-vector of unknown parameters, and $f(\cdot;\cdot)$
is a (possibly nonlinear) twice continuously differentiable function with respect to $\boldsymbol{\beta}$. Here a standard GLM
is obtained when the function $f(\cdot;\cdot)$ is linear in $\boldsymbol{\beta}$, i.e., when $f(\boldsymbol{x}_i;\boldsymbol{\beta}) = \boldsymbol{x}_i^T \boldsymbol{\beta}$.
The systematic component relates the explanatory variables $\boldsymbol{x}_i$ to the mean parameter $\mu_i$ of
interest. The $n\times p$ matrix of derivatives of $\boldsymbol{\eta}$ with respect to $\boldsymbol{\beta}$, specified by
$\widetilde{\bf{X}}=\widetilde{\bf{X}}(\boldsymbol{\beta})=\partial\boldsymbol{\eta}/\partial\boldsymbol{\beta}$,
is assumed to have rank $p$ for all $\boldsymbol{\beta}$. We have $p+1$ parameters to be estimated: the vector $\boldsymbol \beta$ and $\tau$. The ED regression model has two important components: the ED class for the response variable and a possible nonlinear regression on a vector
$\boldsymbol{\beta}$ by means of the link function. We assume that the standard regularity conditions for likelihood theory hold. The ED regression model was called the {\it exponential family nonlinear} (EFNL) model by Cordeiro and Paula (1989), thus extending the well-known idea of GLMs by allowing a nonlinear regression structure for the explanatory variables. Wei (1998) wrote an excellent book on EFNL models.
Let $\boldsymbol{y}=(y_1,\cdots,y_n)^T$ be a vector of observations and $\ell=\ell(\boldsymbol{\beta},\tau)$ be the total
log-likelihood function for a given ED regression model expressed in terms of $\boldsymbol{\beta}$ and $\tau$. A simple calculation shows
that $E(\partial^2\ell/\partial\tau\partial\boldsymbol{\beta})=\boldsymbol{0}$, i.e., the parameters $\boldsymbol{\beta}$ and $\tau$ are globally orthogonal.
Let $\widehat{\boldsymbol{\beta}}$ and $\hat{\tau}$ be the MLEs of $\boldsymbol{\beta}$ and $\tau$, respectively, and let $\mu_i = g^{-1}(\eta_i)$ be the mean obtained by evaluating the inverse link function at the predictor. Given a data vector $\boldsymbol{y}$, the total deviance for the ED regression is defined as $$D(\boldsymbol{y};\boldsymbol{\mu})= \sum_{i=1}^n d(y_i;\mu_i).$$
The vector $\widehat{\boldsymbol{\beta}}$ can be calculated by minimizing the total deviance $D(\boldsymbol{y};\boldsymbol{\mu})$
with respect to this parameter vector. The MLE of $\boldsymbol{\beta}$ does not depend on the dispersion parameter $\tau$.
Let $\widehat{\boldsymbol{\beta}},\widehat{\boldsymbol{\eta}}$ and $\widehat{\boldsymbol{\mu}}=g^{-1}(\widehat{\boldsymbol{\eta}})
=\left(g^{-1}(\hat \eta_1),\cdots, g^{-1}(\hat \eta_n) \right)^T$ be the MLEs of the vector of regression coefficients $\boldsymbol{\beta}$, the vector of linear predictors $\boldsymbol{\eta} = (\eta_1, \cdots, \eta_n)^T$ and the vector of means $\boldsymbol{\mu} =(\mu_1, \cdots, \mu_n)^T$, respectively. The Fisher information matrix for $\boldsymbol{\beta}$ is $K(\boldsymbol{\beta})=\tau^{-1}\,\widetilde{\bf{X}}^{T} \textrm{W}\widetilde{\bf{X}}$, where $\textrm{W}={\rm diag}\{w_{1},\cdots, w_{n}\}$ is a diagonal matrix with weights $w_{i}=V(\mu_{i})^{-1}(\partial\mu_{i}/\partial\eta_{i})^{2}$.
The estimation of $\boldsymbol{\beta}$ can be carried out by iteratively re-weighted least squares (IRLS)
$$\widehat{\boldsymbol{\beta}}=(\widehat{\widetilde{\bf{X}}}^{T}\widehat{\textrm{W}}\widehat{\widetilde{\bf{X}}})^{-1}
\widehat{\widetilde{\bf{X}}}^{T}\widehat{\textrm{W}}\widehat{\boldsymbol{z}},$$
where $\widehat{\widetilde{\bf{X}}}$ and $\widehat{\textrm{W}}$ are the quantities $\widetilde{\bf{X}}$ and $\textrm{W}$
evaluated at $\widehat{\boldsymbol{\beta}}$, $\widehat{\boldsymbol{z}}=(\hat z_{1},\cdots, \hat z_{n})^{T}$ is the working vector
with components $z_{i}=\eta_{i}+(y_{i}-\mu_{i})\, \partial\eta_{i}/\partial\mu_{i}$ evaluated at $\widehat{\boldsymbol{\beta}}$. These nonlinear equations have the same form as the estimating equations for GLMs, with a local model matrix $\widetilde{\bf{X}}$ instead of a known design matrix, and can be solved by iterative methods. The IRLS algorithm is easily implemented in standard statistical software such as SAS or the GAMLSS
package in {\bf R} (R Development Core Team, 2007).
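A minimal sketch of the algorithm follows (Python; simulated gamma responses with a log link, i.e. the GLM special case with $\widetilde{\bf{X}}={\bf X}$, are our illustrative choice). For a genuinely nonlinear $f$, the design matrix would be replaced by the local model matrix $\widetilde{\bf{X}}(\boldsymbol{\beta})$ recomputed at each iteration. The moment estimate of $\tau$ discussed below is also computed.
\begin{verbatim}
# IRLS sketch for an ED regression in the GLM case: gamma variance
# function V(mu) = mu^2 with a log link; data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n, p, tau_true = 500, 2, 0.2
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
beta_true = np.array([0.5, 1.0])
mu_true = np.exp(X @ beta_true)
y = rng.gamma(shape=1.0 / tau_true, scale=mu_true * tau_true)

V = lambda mu: mu**2                 # gamma variance function
beta = np.zeros(p)
for _ in range(25):                  # IRLS iterations
    eta = X @ beta
    mu = np.exp(eta)                 # inverse log link
    dmu_deta = mu
    w = dmu_deta**2 / V(mu)          # working weights
    z = eta + (y - mu) / dmu_deta    # working vector
    beta_new = np.linalg.solve((X.T * w) @ X, X.T @ (w * z))
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new

mu_hat = np.exp(X @ beta)
tau_mom = np.sum((y - mu_hat)**2 / V(mu_hat)) / (n - p)  # moment estimate
print(beta, tau_mom)
\end{verbatim}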
Some asymptotic results for the ED regression were obtained by Cordeiro and Paula (1989), Cordeiro and McCullagh (1991) and Simas and Cordeiro (2009), among others, which produce wider results than those for GLMs.
Estimation of the parameter $\tau$ is a more difficult problem than the estimation of $\boldsymbol{\beta}$ and the complexity depends entirely on the functional form of $c(y;\tau)$. The MLE $\widehat\tau$ is a function of the deviance of the model, namely $\widehat\tau$ is the solution of the following equation
\begin{equation}
\widehat{\tau}^{2}\left. \sum_{i=1}^{n}\frac{\partial c(y_{i};\tau)}
{\partial\tau }\right|_{\tau=\widehat{\tau}}= \left[\sum_{i=1}^n l(y_i;y_i) -\frac{D(\boldsymbol{y};\boldsymbol{\mu})}{2}\right].\label{estimatephi}
\end{equation}
Equation (\ref{estimatephi}) requires, in general, a nonlinear algorithm to compute $\hat\tau$ numerically, except for the normal and inverse Gaussian models. However, for some ED regressions the form of $c(y;\tau)$ is complicated, and $\hat\tau$ could be difficult to compute from (\ref{estimatephi}). In these cases, we can use a moment estimate of $\tau$ obtained directly from $\widehat{\boldsymbol{\mu}}$, given by
$\widetilde\tau = \frac{1}{n-p} \sum_{i=1}^{n} (y_i-\hat\mu_i)^{2}V(\hat\mu_i)^{-1}$, on the grounds that the expected value of the Pearson statistic $\sum_{i=1}^{n}(y_i-\mu_i)^{2}V(\mu_i)^{-1}/\tau$ is approximately $n-p$ for a well-fitted model.
If ($\ref{exp}$) is a two-parameter full exponential family with canonical para\-meters $1/\tau$ and $\theta/\tau$, the following
decomposition holds
\begin{equation}
c(y;\tau)=\frac{1}{\tau} a(y)+d(\tau)+e(y)\label{decompos}
\end{equation}
and then explicit expressions for $\widehat\tau$ are possible. Clearly, equation (\ref{decompos}) is valid for normal, gamma and inverse Gaussian distributions but does not hold for all ED distributions in (\ref{exp}).
The above results apply to all GLMs by setting $f(\boldsymbol{x}_i;\boldsymbol{\beta})=\boldsymbol{x}_i^T\boldsymbol{\beta}$, in which case $\widetilde{\bf{X}}$ reduces to the known design matrix $\bf{X}$. Several diagnostic measures for the ED regression are simple extensions of those measures for GLMs.
Tweedie regression models are extensively used in several areas for non-negative right-skewed data and for continuous data that allow zero observations. Bonat and Kokonendji (2017) proposed maximum likelihood, quasi-likelihood and pseudo-likelihood methods for estimation and inference in Tweedie regression models with an unknown power parameter $p$ in the variance function. The last two methods are fast and computationally simple because they employ only the first two moments and thus do not require the function $c_p(y;\tau)$.
\subsection{Improved Tests }
\label{SS.IT}
Bartlett and Bartlett-type corrections improve the large-sample $\chi^2$ approximation to the null distribution of the likelihood ratio, score and gradient statistics, when the sample size is finite. For a detailed discussion, see \cite{Cordeiro-Cribari}. In several papers, like those from \cite{Ferrari-Cordeiro-Cribari}, \cite{CordeiroPaulaBotter} and \cite{Medeiros-Ferrari-Lemonte}, for example, improved likelihood ratio, score and gradient tests, respectively, were obtained in the class of dispersion models.
\subsection{Modelling Clustered and Dependent Data}
\subsubsection{Latent Stochastic Processes Based Models}
\label{SS.latentProcesses}
Dispersion and ED mo\-dels were used to study data containing clustered and dependent observations
\citep{Jorgensen-etal1996A,
Jorgensen-etal1996B,
Jorgensen-etal1996C,
JorgensenSong1997,
JorgensenTsao1999,
Jorgensen-etal1999,
Artes-Jorgensen,
Botter-etal-2002,
Ma-Jorgensen2007,
Ma-etal2009} in recent years.
The common idea explored there is that the dependence in the data is modeled using a latent stochastic process, the observations being conditionally independent given the latent process. For example, in \cite{Jorgensen-etal1996C}, when modeling the number of hospital emergency visits, and in \cite{Botter-etal-2002}, when modeling mortality by lung diseases, a latent (unobservable) stochastic process represented a time-varying morbidity. These stochastic-process-based constructions are possible if the distributions of the increments of the processes are infinitely divisible (see \cite{Jorgensen-etal1996A} for details), which is the case for the EDMs that have the index set $\Lambda$ equal to $\mathbb{R}_+$. The infinite divisibility condition is satisfied by the Tweedie exponential dispersion models.
\subsubsection{Estimating Equations Inference}
\cite{Zeger-Liang} considered the generalized estimating equations (GEEs) to analyze longitudinal data based on quasi-likelihood methods. \cite{Liang-Zeger} derived the GEEs from a different and slightly more limited context. The method derives from EDMs, but is essentially
based on second-moment assumptions for the response. In both articles, the GEEs are derived without fully specifying the joint distribution. The regression coefficients are consistently estimated even when the correlation structure is misspecified. However, efficiency depends on the proximity of the working correlation matrix to the true one \citep{Liang-Zeger}. Liang and Zeger's method has been widely used in several areas dealing with non-Gaussian correlated data \citep{Hardin-Hilbe}.
\cite{Artes-Jorgensen} extended the GEE method to the class of dis\-persion models to handle certain types of non-normal data such as angles and proportions that are not well accommodated by EDMs, and for which there are currently no good methods avai\-la\-ble for longitudinal data analysis.
\cite{Song-Tang} proposed methods to directly model the marginal means of the longitudinal proportional responses using the simplex distribution that takes into account the fact that such responses are percentages restricted between zero and one and may as well have large dispersion.
\section{Concluding Remarks and Future Perspectives}
As mentioned above, the Danish statistician Bent J\o{}rgensen (April 15, 1954 -- November 19, 2015) made several vital contributions to the area of statistical modeling. He supervised many students in Denmark, Brazil, and Canada and developed a vast international scientific collaboration network. Bent worked on a combination of theoretical and applied topics, including exponential families, univariate and multivariate dispersion models, exponential dispersion models, proper dispersion models, Tweedie distributions, generalized estimating equations and other types of statistical models. Although Bent's domain was mainly theoretical statistics, he also made significant contributions in a wide range of applied fields such as insurance, meteorology, and marine ecology, among others. In this article, as colleagues and friends, we outlined some details of his career and reviewed some of his main contributions, especially univariate dispersion models, exponential dispersion models and proper dispersion models.
The work of Bent J\o{}rgensen opened new research areas and inspired other researchers in the field. As evidence, we mention higher-order asymptotics (see Section \ref{SS.IT}), the field of exponential family non-linear regressions (see Section \ref{SS.EFNLR}) and the models based on latent stochastic processes (see Section \ref{SS.latentProcesses}). We envision that further developments might appear by expanding the general theory of dispersion models to different multivariate and dependent observation contexts (already partially done). Defining general dispersion models via integral transforms other than the Laplace transform might allow incorporating heavy-tailed distributions and different types of stochastic processes not considered yet.
\section*{Acknowledgements}
We thank Jeanett S. Pelck (Applied Statistics Laboratory, Department of Mathematics, Aarhus University) and Ole Barndorff-Nielsen (Department of Mathematics, Aarhus University) for helpful comments which improved the manuscript. We are also grateful to the National Council for Scientific and Technological Development (CNPq) and the National Council for the Improvement of Higher Education (CAPES) for the financial support of the first and the third authors.
\section{Introduction}
As is known, physical processes such as gravitational collapse and turbulent
compression play a key role in the creation and evolution of star formation
regions over a wide range of scales, from star complexes through
OB associations down to compact embedded clusters and to clumps of young
stars inside them. These stellar systems form a continuous hierarchy
of structures over all these scales \citep{efremov1995,efremov1998,
elmegreen2000,elmegreen2002,elmegreen2006b,elmegreen2011}. It is suggested
that the hierarchy extends up to 1~kpc
\citep*{efremov1987,elmegreen2006c,zhang2001}.
\citet{efremov1987} and \citet{ivanov1991} described at least three
categories of hierarchical star groups at the largest levels:
OB associations with a length scale $\approx80$~pc, stellar aggregates with
a length scale $\approx250$~pc and star complexes with diameters
$\approx600$~pc. H\,{\sc i}/H$_2$ superclouds are ancestors of star
complexes; OB associations are formed from giant molecular clouds
\citep{efremov1989,efremov1995,efremov1998,elmegreen1994,elmegreen2006c,
elmegreen2009,odekon2008,marcos2009}. Sizes and clustering of these
structures have been studied for many nearby spiral and irregular galaxies
\citep*{bastian2005,bianchi2012,battinelli1991,battinelli1996,borissova2004,
bresolin1996,bresolin1998,bruevich2011,elmegreen2001,feitzinger1984,
gouliermis2010,gusev2002,harris1999,magnier1993,pietrzynski2001,
pietrzynski2005,sanchez2010,wilson1991,wilson1992}. Power-law power spectra
of optical light in galaxies suggest the same maximum scale, possibly
including the ambient galactic Jeans length
\citep*{elmegreen2003a,elmegreen2003b}. If the ambient Jeans length is the
largest scale, then a combination of gravitational and turbulent
fragmentations can drive the whole process. Observed star formation rates
in galaxies can follow from such turbulent structures \citep{krumholz2005}.
Hierarchical clustering disappears with age as stars mix. The densest
regions have the shortest mixing times and lose their substructures first.
Nevertheless, very young clusters have a similar pattern of subclustering,
suggesting that this structure continues down to individual stars
\citep*{brandeker2003,dahm2005,heydari2001,nanda2004,oey2005,sanchez2013}.
\begin{figure*}
\resizebox{0.98\hsize}{!}{\includegraphics[angle=000]{MN-13-3360-MJ-Fig1.eps}}
\caption{$B$ image of NGC~628 and positions of the galaxy's star formation
regions (crosses). The numbers of the star formation regions from
Table~\ref{table:positions} are indicated. The image size is
$8.26\times6.00$~arcmin. North is upward and east is to the left.
}
\label{figure:fig_iden}
\end{figure*}
\begin{table}
\caption[]{\label{table:param}
Basic parameters of NGC~628.
}
\begin{center}
\begin{tabular}{ll} \hline \hline
Parameter & Value \\
\hline
Type & Sc \\
RA (J2000.0) & 01$^h$36$^m$41.81$^s$ \\
DEC (J2000.0) & +15$\degr$47$\arcmin$00.3$\arcsec$ \\
Total apparent $B$ magnitude ($B_t$) & 9.70 mag \\
Absolute $B$ magnitude ($M_B$)$^a$ & -20.72 mag \\
Inclination ($i$) & $7\degr$ \\
Position angle (PA) & $25\degr$ \\
Apparent corrected radius ($R_{25}$)$^b$ & 5.23 arcmin \\
Apparent corrected radius ($R_{25}$)$^b$ & 10.96 kpc \\
Distance ($D$) & 7.2 Mpc \\
\hline
\end{tabular}\\
\end{center}
\begin{flushleft}
$^a$ Absolute magnitude of a galaxy corrected for Galactic extinction and
inclination effect. \\
$^b$ Isophotal radius (25 mag\,arcsec$^{-2}$ in the $B$-band) corrected for
Galactic extinction and absorption due to the inclination of NGC~628.
\end{flushleft}
\end{table}
The interstellar matter also shows a hierarchical structure from the largest
giant molecular clouds down to individual clumps and cores. The complex
hierarchical structure of the interstellar matter is shaped by supersonic
turbulence \citep{ballesteros2007}. The scaling relations observed in
molecular clouds \citep{larson1981} can be explained by the effect of
turbulence, where energy is injected at largest scales and cascades down to
the smallest scales, creating eddies and leading to a hierarchical structure
on all scales \citep{elmegreen2006}. It is believed that turbulence plays a
major role in star formation; it creates density enhancements that become
gravitationally unstable and collapse to form stars \citep{elmegreen2006}.
The spatial distribution of young stars and stellar groups on wide length
scales probably reflects this process.
The purpose of this paper is to study size distribution and hierarchical
structures of star formation regions in nearby face-on spiral
galaxy NGC~628 (Fig.~\ref{figure:fig_iden}), based on our
own observations in the $U$, $B$, and $V$ passbands. This
galaxy is an excellent example of a galaxy with numerous star formation
regions observed at different length scales. We use the term
'star formation regions', which includes young star complexes,
OB associations, H\,{\sc ii} regions, i.e. all young stellar groups
regardless of their sizes.
\citet{hodge1976} identified 730 H\,{\sc ii} regions in the galaxy.
\citet{ivanov1992} estimated sizes and magnitudes of 147 young stellar
associations and aggregates in NGC~628 and discussed briefly hierarchical
structures at the scales from 50 to 800~pc. \citet{larsen1999} studied 38
young star clusters with effective diameters from 2 to 90~pc.
\citet{bruevich2007} obtained magnitudes, colours and sizes of 186 star
formation regions based on the list of H\,{\sc ii} regions from
\citet{belley1992}.
\citet{elmegreen2006} studied distributions of size and luminosity of
star formation regions over a range of scales from 2 to 110~pc using
progressively blurred versions of blue optical and H$\alpha$ images from
the {\it Hubble Space Telescope (HST)}. They counted and measured
features in each blurred image using the SExtractor program and found that the
cumulative size distribution satisfies a power law with a slope between
approximately --1.8 and --1.5 on all studied scales.
\begin{figure*}
\vspace{5.0mm}
\resizebox{0.92\hsize}{!}{\includegraphics[angle=000]{MN-13-3360-MJ-Fig2.eps}}
\caption{Left panel: contour map of the vicinity of star formation regions
Nos.~33-35.
Grey areas correspond to the regions Nos.~33, 34, 35a, and 35b within their
half-maximum brightness level. Red dashed contour levels correspond to the
levels of $\sigma$, $3\sigma$, $5\sigma$, $7\sigma$, and $9\sigma$, black
solid contour levels correspond to the levels of $2\sigma$, $4\sigma$,
$6\sigma$, $8\sigma$, and $10\sigma$ above the average brightness level of
background. Position of profile A--A' is shown. Central panel: photometric
profile A--A'. Surface brightness, $\mu$, is given in units of $\sigma$.
Right panel: diameters of star formation regions Nos.~33-35 and their
hierarchical structures
measured at the different levels of surface brightness in units of $\sigma$.
}
\label{figure:fig33_35}
\end{figure*}
The fundamental parameters of NGC~628 are presented in
Table~\ref{table:param}. We adopt the distance to NGC~628 obtained in
\citet*{sharina1996} and \citet*{vandyk2006}. We use the position angle and
the inclination of the galactic disc derived by \citet{sakhibov2004}.
Other parameters were taken from the LEDA
database\footnote{http://leda.univ-lyon1.fr/} \citep{paturel2003}. We adopt the
Hubble constant $H_0 = 75$ km\,s$^{-1}$Mpc$^{-1}$ in the paper. With the
assumed distance to NGC~628, we estimate a linear scale of
34.9~pc\,arcsec$^{-1}$.
Observations and reduction stages of $UBVRI$ images for NGC~628 have
already been published in \citet{bruevich2007}. The reduction of the
photometric data was carried out using standard techniques, with the European
Southern Observatory Munich Image Data Analysis
System\footnote{http://www.eso.org/sci/software/esomidas/} ({\sc eso-midas}).
\section{Identification and size estimations of star formation regions}
In \citet{bruevich2007}, we have identified star formation regions in
the galaxy with the list of H\,{\sc ii} regions of \citet{belley1992},
based on their H$\alpha$ spectrophotometric data. The list of
\citet{belley1992} is still the most complete survey of H\,{\sc ii} regions
and their parameters in NGC~628. Note that our coordinate grid coincides
with that of \citet{kennicutt1980} and is systematically shifted with respect
to that of \citet{belley1992}. Altogether, we identified 127 of 132 star
formation regions studied in \citet{belley1992}. Three regions
\citep[Nos. 1, 2, and 96 in][]{belley1992} were outside the field of view of
our images. Two star formation regions (Nos. 23 and 76) are missing in the
list of \citet{belley1992}. \citet{belley1992} did not distinguish between
isolated star formation regions, with typical sizes about 60-70~pc, and
compound multi-component regions, with typical sizes about 200~pc. We obtained
images of the galaxy with better seeing than \citet{belley1992}. As a result,
we were able to resolve the compound star formation regions into components.
Firstly, we identified such subcomponents by eye. We selected components
whose maximum (central) brightness was at least 3 times higher than that of
the surrounding background. Next, we fitted the profiles of the star
formation regions using Gaussians. The condition for separating components
was that the full width at half-maximum (FWHM) of the region be less than
the distance between the centres of the Gaussians. Numbers of these complexes in the
first column of Table~\ref{table:positions} contain additional letters:
'a', 'b', 'c', and 'd'. Compound regions which do not satisfy this
condition were classified as objects with observed, but unresolved, internal
structure. In total, we identified 186 objects (Fig.~\ref{figure:fig_iden}).
In this paper we use the numbering order adopted in \citet{bruevich2007}.
It coincides with the numbering order of \citet{belley1992} with the
exception of the missed star formation regions.
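The following sketch (Python, on a synthetic profile; the choice of taking the FWHM from the wider of the two fitted Gaussians is our assumption) illustrates the separation test: a two-Gaussian model is fitted to a one-dimensional photometric cut and the components are treated as resolved only if the distance between the Gaussian centres exceeds the FWHM.
\begin{verbatim}
# Two-Gaussian decomposition of a 1-D photometric cut and the
# component-separation test; the profile below is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, c1, s1, a2, c2, s2, bg):
    return (a1 * np.exp(-0.5 * ((x - c1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / s2) ** 2) + bg)

x = np.arange(60.0)
profile = two_gauss(x, 10, 22, 3, 7, 34, 3, 1) + \
    np.random.default_rng(2).normal(0, 0.3, x.size)

p0 = [8, 20, 3, 8, 35, 3, 1]                  # initial guesses
popt, _ = curve_fit(two_gauss, x, profile, p0=p0)
a1, c1, s1, a2, c2, s2, bg = popt

fwhm = 2.3548 * max(abs(s1), abs(s2))         # FWHM = 2 sqrt(2 ln 2) sigma
print('resolved' if abs(c2 - c1) > fwhm else 'unresolved')
\end{verbatim}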
We found that 146 regions from Table~\ref{table:positions} have a star-like
profile (see the last column of this table). The other 40 objects have a
non-star-like (extended (diffuse) or multi-component) profile, i.e. these
objects have an observed, but unresolved, internal structure.
We took the geometric mean of major and minor axes of a star formation
region for the star formation region's characteristic diameter $d$:
$d = \sqrt{d_{max} \times d_{min}}$. We measured $d_{max}$ and $d_{min}$
from the radial $V$ profiles as the FWHM for regions having a star-like
profile, or as the distance between points of maximum flux gradient for
regions having non-star-like profiles. We adopted the seeing as the uncertainty
of the size measurements, since it clearly exceeds all other errors. The obtained
parameters of star formation regions are presented in
Table~\ref{table:positions}.
\section{Hierarchical structures of star formation regions}
The simplest way to study hierarchical clustering is to identify
structures of different hierarchical levels based on lower level surface
brightness thresholds above the background level. The similar method
was used by \citet{gouliermis2010}, who used the stellar density levels
to study hierarchical stellar structures in the dwarf irregular galaxy
NGC~6822. They identified hierarchical structures using density thresholds
$1\sigma - 5\sigma$ above the average background density level with step of
$1\sigma$.
However, this direct approach is not applicable to the identification of
hierarchical structures in NGC~628. The background level varies significantly
across the galactic plane: the surface brightness of the background differs
by a factor of several between spiral arms and interarm regions.
Therefore we modified the technique of \citet{gouliermis2010}.
Identification and size estimation of 186 star formation regions at the
highest hierarchical level (Level~1) were done using their half-maximum
brightness levels, independent of background levels (see Section~2).
Additionally, we fitted the profiles of star formation regions along their
minor and major axes using Gaussians. To identify structures of Level~2 and
lower, we measured the background surface brightness in the $V$ passband in
the vicinity of every group of star formation regions of Level~1.
\begin{figure}
\vspace{3.1mm}
\resizebox{0.90\hsize}{!}{\includegraphics[angle=000]{MN-13-3360-MJ-Fig3.eps}}
\caption{Distribution histogram of star formation regions of Levels~2-5 by
the level of maximum brightness decrease. Brightness is given in units of
$\sigma$. Grey histogram is the distribution of star formation regions of
the lowest hierarchical level. Shaded histogram is the distribution of
star formation regions of the first hierarchical level from the lowest one.
Thick black histogram is the distribution of star formation regions of the
second hierarchical level from the lowest one. See the text for details.
}
\label{figure:fig_sigma}
\end{figure}
The selection of a threshold in units of $\sigma$ above the average
brightness level of background for star formation regions of Level~2 was
carried out based on two basic conditions: (i) it must be lower than the
level of brightness of the appropriate star formation region of Level~1 and
(ii) it must deviate more than 4 pixels (seeing of the $V$ image) from the
fitting Gaussian of the profile of the star formation region at Level~1. The
same conditions were applied to select the brightness level of every next
lower level of the hierarchy. The exception was made for several resolved
close binary star formation regions, such as 40a-40b, where the second
condition is not applied. To identify star formation regions of lower
hierarchical levels, we used lower levels of brightness.
To select surface brightness thresholds, we first analysed the
typical light distribution in selected star formation regions and their
vicinities. An example of such a region, star formation regions Nos.~33-35, is
given in Fig.~\ref{figure:fig33_35}.
Fig.~\ref{figure:fig33_35} (central panel) shows that the surface brightness
falls irregularly with distance from the knots of star formation:
'plateau-like' areas with constant surface brightness alternate with
areas of a sharp drop in brightness. At such sites, a fall in brightness
usually exceeds $1\sigma$ value. At higher hierarchical levels, where
the surface brightness is higher, absolute drop in brightness is larger than
at lower levels of the hierarchy. As a result, diameters of star formation
regions increase slowly with a decrease of brightness level within the same
hierarchical level. Significant growth of the diameters is observed only at
merger of two separate star formation regions into one common star formation
region at the lower hierarchical level (Fig.~\ref{figure:fig33_35}).
We consider brightness in units of $\sigma$, so the brightness level at which
the maximum brightness decrease is observed is also measured in units of
$\sigma$. The maximum brightness decrease corresponds to the minimum of the
first derivative of the brightness profile function
(Fig.~\ref{figure:fig33_35}, central panel). After determining the brightness
level of the maximum brightness decrease in units of $\sigma$, we measure the
size of the star formation region from the isophotes, as described in
Section~2.
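This step can be sketched as follows (Python, on a synthetic radial profile given directly in units of $\sigma$ above the background): the level of the maximum brightness decrease is read off at the minimum of the first derivative of the profile.
\begin{verbatim}
# Reading off the surface brightness level of the maximum brightness
# decrease as the minimum of the first derivative; synthetic profile.
import numpy as np

r = np.linspace(0.0, 30.0, 301)                     # radius, pixels
profile = 6.0 / (1.0 + np.exp(r - 8.0)) \
    + 3.0 / (1.0 + np.exp((r - 18.0) / 2.0)) + 1.0  # two 'plateau+drop' steps

slope = np.gradient(profile, r)                     # first derivative
level = profile[np.argmin(slope)]                   # level of steepest drop
print('maximum brightness decrease near the %.0f sigma level' % level)
\end{verbatim}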
We analysed all hierarchical structures in vicinities of star formation
regions of Level~1 and determined which level of brightness corresponds to
the level of maximum brightness decrease in them
(Fig.~\ref{figure:fig_sigma}).
Distribution of star formation regions of Levels~2-5 by the level of maximum
brightness decrease shows two maxima at $3\sigma$ and $5\sigma$
(Fig.~\ref{figure:fig_sigma}). Distribution of star formation regions of the
lowest hierarchical level has a maximum at $2\sigma-3\sigma$, distribution of
star formation regions
of the first hierarchical level from the lowest one shows maxima at
$3\sigma$ and $5\sigma-6\sigma$. Star formation regions of the second
hierarchical level
from the lowest one have characteristic levels of maximum brightness decrease
of $5\sigma$, $8\sigma$, and $11\sigma$ (Fig.~\ref{figure:fig_sigma}).
Analysis of the distribution of star formation regions by the level of
maximum brightness decrease, in units of $\sigma$, has shown that neither
arithmetic nor geometric sequences of brightness levels are suitable to
describe the hierarchical structures of star formation regions. When using a
geometric sequence, we may miss some of the hierarchical levels. When using an
arithmetic sequence, we lose some of the brightness levels because they do not
satisfy condition (ii) (Fig.~\ref{figure:fig33_35}); in this case, low
hierarchical levels would correspond to arbitrary levels of brightness.
Analysis of the distribution showed that the best sequence of brightness
levels is the Fibonacci sequence, $1\sigma$, $2\sigma$, $3\sigma$,
$5\sigma$, $8\sigma$, as an intermediate sequence between arithmetic and
geometric sequences.
Diameters of star formation regions of the lower hierarchical levels which
have the maximum
brightness decrease at the level of $4\sigma$ or $6\sigma-7\sigma$ are measured
at the next lower surface brightness level of $3\sigma$ or $5\sigma$,
respectively. Typically, the difference between diameters measured at the
levels of $3\sigma$ and $4\sigma$, or $5\sigma$ and $6\sigma-7\sigma$ does
not exceed 35-40~pc, the value of the seeing of the image
(Fig.~\ref{figure:fig33_35}).
Thus, we used surface brightness thresholds of $8\sigma$, $5\sigma$,
$3\sigma$ and $2\sigma$ above the average brightness level of background
in the vicinity of star formation region. The threshold of $1\sigma$ was not
used due to large fluctuations of background around many identified groups
of star formation regions.
For each individual region, not every next lower brightness level
satisfies the adopted conditions. Such brightness levels were skipped.
Furthermore, a full set of brightness levels from $8\sigma$ to $2\sigma$
above the background was used only for star formation regions Nos.~79a and
79b and hierarchical
structures of a lower order related with them (Table~\ref{table:tree}). The
lowest level of every hierarchical structure usually corresponds to the
brightness level of $2\sigma$ or $3\sigma$ above the background
(Fig.~\ref{figure:fig_sigma}). As a result, the same hierarchical level may
correspond to different levels of brightness.
Diameters of star formation regions at Levels~2 and lower were measured in
the same manner as for star formation regions of Level~1:
$d = \sqrt{d_{max} \times d_{min}}$, where $d_{max}$ and $d_{min}$ are the
diameters along the major and minor axes of the star formation region.
Star formation regions obtained on different hierarchical levels,
and their sizes are presented in Table~\ref{table:tree}. Some star
formation regions of low hierarchical levels consist of one or several
star-like cores (star formation regions of Level~1) and an extended halo.
Such star formation regions are indicated by letter 'h' in
Table~\ref{table:tree}. A map of location of these objects in
the galactic plane is shown in Fig.~\ref{figure:fig_levels}.
\begin{figure*}
\resizebox{0.98\hsize}{!}{\includegraphics[angle=000]{MN-13-3360-MJ-Fig4.eps}}
\caption{Map of star formation regions of different levels of the hierarchy.
Regions of higher levels of the hierarchy are shaded darker than lower
ones. The image size is $8.26\times6.00$~arcmin. North is upward and east is
to the left.
}
\label{figure:fig_levels}
\end{figure*}
To illustrate the hierarchical structures we used so-called dendrograms.
Dendrograms were introduced as 'structure trees' for the analysis of
molecular cloud structures by \citet{houlahan1992}, refined by
\citet{rosolowsky2008}, and used in \citet{gouliermis2010} to study
hierarchical stellar structures in the nearby dwarf galaxy NGC~6822. A
dendrogram is constructed by cutting an image at different brightness levels
and identifying connected areas, while keeping track of the connection to
brighter, smaller structures (on a higher level) and to fainter, larger
structures (on the next lower level, which combines structures of the
previous level).
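A schematic version of this construction is sketched below (Python, on a synthetic image with unit background noise, so that pixel values are directly in units of $\sigma$): connected structures are labelled at each threshold of the $8\sigma$, $5\sigma$, $3\sigma$, $2\sigma$ sequence used in this paper, and each structure is linked to the structure containing it at the next lower level.
\begin{verbatim}
# Dendrogram bookkeeping from successive brightness thresholds; the
# image (Gaussian 'star formation regions' on unit noise) is synthetic.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
img = rng.normal(0.0, 1.0, (80, 80))              # background, sigma = 1
yy, xx = np.mgrid[0:80, 0:80]
for (cy, cx, amp) in [(30, 30, 12), (30, 38, 9), (55, 60, 6)]:
    img += amp * np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 3.0**2))

levels = [8, 5, 3, 2]                             # thresholds in sigma
labels = {L: ndimage.label(img > L)[0] for L in levels}

for hi, lo in zip(levels[:-1], levels[1:]):       # parent/child links
    for s in range(1, labels[hi].max() + 1):
        parent = np.bincount(labels[lo][labels[hi] == s]).argmax()
        print('structure %d at %d sigma lies inside structure %d '
              'at %d sigma' % (s, hi, parent, lo))
\end{verbatim}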
The dendrogram for the star formation regions from
Tables~\ref{table:positions} and
\ref{table:tree} is presented in Fig.~\ref{figure:fig_tree}. Unlike
\citet{gouliermis2010}, we constructed the dendrogram using the ordinate
axis in units of diameter. It better illustrates length scales of
hierarchical structures. The combination of this dendrogram with the map of
Fig.~\ref{figure:fig_levels} illustrates graphically the hierarchical spatial
distribution of star formation regions in NGC~628.
The dendrogram demonstrates that most star formation regions are combined
into larger
structures over, at least, 1-2 levels. We found only 12 separate associations
without visible internal structure, which are out of hierarchical structures
(Fig.~\ref{figure:fig_tree}). Most of them are located in interarm regions
(Fig.~\ref{figure:fig_iden}). The largest ($d>1$~kpc) and the most populous
(8-17 star formation regions of Level~1) structures are located in the ends
of spiral arms.
The first of them (Nos.~75-80) is located near the corotation radius, which was
obtained in \citet{sakhibov2004} based on a Fourier analysis of the spatial
distribution of radial velocities of the gas in the disc of NGC~628. The largest
and brightest (in UV) star complex of the galaxy was found here in
\citet*{gusev2014}. The second structure (Nos.~120-127) is located in the
north-western part of NGC~628, in the disturbed part of the spiral arm
(Fig.~\ref{figure:fig_iden}).
As seen from the dendrogram, the numbering order does not correctly reflect
the hierarchical structures. The numbering is violated for star formation
regions Nos.~4-7 at Level~2, Nos.~85-89 at Level~4, and Nos.~97-109 at
Level~4 (Table~\ref{table:tree}).
\begin{figure}
\vspace{2.3mm}
\resizebox{1.00\hsize}{!}{\includegraphics[angle=000]{MN-13-3360-MJ-Fig5.eps}}
\caption{Dendrogram of star formation regions structures. The black dots
indicate star formation regions from Tables~\ref{table:positions},
\ref{table:tree}. Regions which are united into a hierarchical structure
are connected by solid line. The numbering order might not strictly follow
the order of hierarchical structures (see the text for details). The downward
arrows indicate star formation regions with an observed internal structure
(star formation regions with a non-star-like profile).
}
\label{figure:fig_tree}
\end{figure}
\section{Size distributions of star formation regions}
In Fig.~\ref{figure:fig_hist}, we present size distribution histograms for
three sets of star formation regions under study. The first set includes 297
regions of all hierarchical levels, the second set is a sample of 146
associations with a star-like profile, and the third set includes 111 regions
of Level~2 and lower from Table~\ref{table:tree}. The second set unites the
star formation regions without an observed internal structure; their
subcomponents (if any) have sizes $\le35-40$~pc. The third set includes
only star formation regions with an obvious internal structure; their
subcomponents were detected and measured.
As seen from the figure, associations with a star-like profile have a
narrow range of sizes, from 40 to 100~pc, with a few exceptions. The mean
diameter of these star formation regions is equal to $66\pm18$~pc. This is a
typical size of OB associations. Star formation regions with an extended profile
have, on average, slightly larger sizes, $\sim100$~pc. As a result, the size
distribution of star formation regions of Level~1 with both star-like and
extended profiles is slightly displaced toward larger sizes (see
Fig.~\ref{figure:fig_hist} and Table~\ref{table:mean}).
Star formation regions of lower levels clearly show a bimodal size
distribution. Two maxima
at $\approx250$ and $\approx600$~pc are observed (Fig.~\ref{figure:fig_hist}).
The first smoothed peak corresponds to a characteristic size of stellar
aggregates by classification of \citet{efremov1987}, and the second
peak is located on diameters, which are typical for star complexes.
\begin{figure}
\vspace{3.1mm}
\resizebox{1.00\hsize}{!}{\includegraphics[angle=000]{MN-13-3360-MJ-Fig6.eps}}
\caption{Number distribution histograms of all star formation regions
from Tables~\ref{table:positions} and \ref{table:tree}, star formation
regions of Level~1 with a star-like profile (grey histogram), and
star formation regions of Levels~2 and lower (shaded histogram).
}
\label{figure:fig_hist}
\end{figure}
\begin{table}
\caption[]{\label{table:mean}
Diameters of star formation regions.
}
\begin{center}
\begin{tabular}{ccc} \hline \hline
Star formation & $d^a$ & $d^b$ \\
regions & (pc) & (pc) \\
\hline
Associations$^c$ & $66\pm18$ & 64 \\
Associations$^d$ & $72\pm26$ & 66 \\
Aggregates & $240\pm90$ & 234 \\
Complexes & $583\pm84$ & 601 \\
\hline
\end{tabular}\\
\end{center}
\begin{flushleft}
$^a$ Mean diameter. $^b$ Diameter obtained from the best-fitting Gaussian. \\
$^c$ Associations with a star-like profile (146 objects). \\
$^d$ All associations from Table~\ref{table:positions} (186 objects).
\end{flushleft}
\end{table}
We also fitted size distributions of studied sets of star formation regions
using Gaussians. To fit the size distribution for the set of 111 complex
star formation regions, we used a combination of two Gaussians. It was found
that all sets of star formation regions have size distributions close to the
Gaussian distribution. Diameters obtained from the best-fit Gaussians are
almost the same as the mean diameters for all sets of star formation regions
(Table~\ref{table:mean}).
Following \citet{elmegreen2006}, we constructed the cumulative size
distribution function in the form $N (d>D) \propto D^{\gamma}$, where $N$ is
the cumulative number of objects that have a diameter $d$ greater than some
diameter $D$ (Fig.~\ref{figure:fig_fsize}).
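The slope $\gamma$ can be estimated from such a function by a log-log least-squares fit to the rank plot; a minimal sketch follows (Python; the diameters are drawn from a power law with true $\gamma=-1.5$ for illustration only).
\begin{verbatim}
# Cumulative size distribution N(d > D) and a log-log fit of its slope;
# the synthetic diameters follow a classical Pareto law with index 1.5.
import numpy as np

rng = np.random.default_rng(4)
d = 40.0 * (rng.pareto(1.5, size=300) + 1.0)   # sizes in pc, gamma = -1.5

D = np.sort(d)
N = np.arange(D.size, 0, -1)                   # N(d > D) at each D
mask = D > 80.0                                # fit the power-law tail only
gamma, _ = np.polyfit(np.log10(D[mask]), np.log10(N[mask]), 1)
print('fitted slope gamma = %.2f' % gamma)
\end{verbatim}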
\begin{figure}
\vspace{3.1mm}
\resizebox{1.00\hsize}{!}{\includegraphics[angle=000]{MN-13-3360-MJ-Fig7.eps}}
\caption{Left panel: cumulative size distribution functions for regions of
\citet{elmegreen2006} (E; grey thick solid curve), H\,{\sc ii} regions of
\citet{hodge1976} (H; grey thin solid curve), associations (I(a); black
dashed curve) and complexes (I(c); black dotted curve) of \citet{ivanov1992},
large star formation regions in the spiral arms of the galaxy (G; grey thin
dashed curve) from \citet{gusev2014}. Right panel: cumulative size
distribution functions for regions of \citet{elmegreen2006} (grey thick solid
curve), 146 star formation regions with a star-like profile from
Table~\ref{table:positions}
(black dashed curve), 186 star formation regions from
Table~\ref{table:positions} (black solid curve), and 297 star formation
regions from Tables~\ref{table:positions} and \ref{table:tree}
(black thick curve). Dark thin solid straight lines in both panels represent
slopes $\gamma$~=~--1.5, --3.5 and --5 of the size distribution function. See
the text for details.
}
\label{figure:fig_fsize}
\end{figure}
Detailed exploration of the size distribution of objects in NGC~628 was
made in \citet{elmegreen2006} in the range of scales from 2 to
110~pc\footnote{For an adopted distance of 7.2~Mpc.} based on {\it HST}
images. For regions in the central part of the galaxy brighter than the
$3\sigma$ noise limits in $B$ and $V$ images, \citet{elmegreen2006} found
that the cumulative size distribution obeys a power law, with a slope
$\gamma \approx -1.5$ in the range from 2 to 55~pc. A similar slope of the
cumulative size distribution function was found for OB associations from the
list of \citet{ivanov1992} in the range from 30 to 110~pc. The size
distribution of larger objects, H\,{\sc ii} regions studied by
\citet{hodge1976}, satisfies a power law with a slope $\gamma \approx -3.5$
in the range from 100 to 300~pc. The size distribution of large
star formation regions (in the range from 300 to 600~pc) in
spiral arms of NGC~628 obtained in \citet{gusev2014} shows a slope
$\gamma \approx -4.5$. The size distribution
of complexes from \citet{ivanov1992} gives $\gamma = -4.1$ in the range from
500 to 1000~pc (Fig.~\ref{figure:fig_fsize}).
Summarizing the results of the size distribution obtained previously, we can
conclude that the size distributions of star formation regions with a
diameter of $\le100$~pc satisfy a power law with $\gamma \approx -1.5$. The
distribution of larger star formation regions obeys a power law with
$\gamma$ between $\approx -5$ and $\approx -3.5$.
In Fig.~\ref{figure:fig_fsize} (right panel) we present size distribution
functions constructed for three sets of star formation regions. The first set
includes 297 regions of all hierarchical levels, the second set is a sample
of 186 star formation regions of Level~1, and the third set includes 146
regions of Level~1 with star-like profile.
The size distribution of the 146 star formation regions with a star-like
profile obeys, starting from $d \approx 50$~pc, a power law with a slope
$\gamma \approx -5$. The size distribution of all 186 star formation regions
of Level~1 satisfies a power law with a slope $-4\le\gamma\le-3.5$ in the
range from 50 to 170~pc. It reproduces the distribution of H\,{\sc ii}
regions of \citet{hodge1976} with a displacement $\log D \approx 0.2$
(Fig.~\ref{figure:fig_fsize}). In general, the size distribution of star
formation regions of Level~1 has slopes between --5 and --3.5, like the size
distributions of previously studied star formation regions of a single level
of hierarchy (Fig.~\ref{figure:fig_fsize}).
Note that the end of the size distribution curve for regions of
\citet{elmegreen2006} coincides with the beginning of the size distribution
curve for our 186 star formation regions of Level~1
(Fig.~\ref{figure:fig_fsize}). Given
that the area studied by \citet{elmegreen2006} occupies $\sim 70\%$ of the
area of NGC~628 studied in this paper, we can conclude that (i) the number
of H\,{\sc ii} regions identified in \citet{belley1992} is smaller than the
number of regions found by \citet{elmegreen2006} using SExtractor, and, more
importantly, (ii) our measurements of the sizes of star formation regions
using photometric profiles are in good agreement with the measurements of
\citet{elmegreen2006}.
A more interesting behaviour is exhibited by the size distribution curve of
star formation regions of all hierarchical levels. It continues the size
distribution curve
for regions of \citet{elmegreen2006} at $d=30-40$~pc and has the same slope
$\approx -1.5$ in the range from 45 to 85~pc -- diameters of OB associations.
A flatter slope, $\gamma > -1$, is observed in the range from $\approx 90$
to $\approx 180$~pc for regions of Level~1 with an extended profile and for
the smallest regions of Level~2. The size distribution of star formation
regions classified as stellar aggregates and complexes follows a power law
with $\gamma = -1.5$ very well (see the distribution curve in the range from
190 to 600~pc in Fig.~\ref{figure:fig_fsize}). The largest hierarchical
structures with $d=0.65-0.9$~kpc also follow a power-law size distribution
with $\gamma \sim -1.5$ (Fig.~\ref{figure:fig_fsize}).
Thus, the size distribution of star formation regions of all hierarchical
levels continues the size distribution function for regions of
\citet{elmegreen2006} towards the larger sizes with the same slope
$\approx -1.5$.
\section{Discussion}
The modern theory of star formation explains the existence of OB
associations and star complexes, which share a common origin with hydrogen
superclouds and giant molecular clouds, respectively
\citep{elmegreen2006c,efremov1998}. H$_2$ structures on intermediate scales
are unknown. However, such intermediate young stellar structures are
observed in galaxies. These are stellar aggregates with diameters
$\sim200-300$~pc.
Our Fig.~\ref{figure:fig_hist} shows a bimodal size distribution of star
formation regions of
Level~2 and lower. Bimodal size distributions with a secondary peak at
$d=150-300$~pc were found for 'associations' in the SMC \citep{battinelli1991},
M31 \citep{magnier1993}, NGC~2090, NGC~2541, NGC~3351, NGC~3621 and NGC~4548
\citep{bresolin1998}, NGC~1058 and UGC~12732 \citep{battinelli2000}, NGC~300
\citep{pietrzynski2001}, NGC~3507 and NGC~4394 \citep{vicari2002}, NGC~7793
\citep{pietrzynski2005}. \citet{pietrzynski2005} named such 'associations'
'superassociations'.
Thus, the existence of stellar structures, 'aggregates' or
'superassociations', with a characteristic size of 200--300~pc is confirmed
by numerous observations in different galaxies. However, the question of the
origin of stellar aggregates is still open.
As we noted above, the size distribution of star formation regions of all
hierarchical levels
continues the size distribution of regions of \citet{elmegreen2006} with the
same slope $\approx -1.5$ for sizes from 45~pc to $\sim0.9$~kpc. However, the
size distribution function deviates from a power law with the slope --1.5
at $d=90-180$~pc and $600-650$~pc (Fig.~\ref{figure:fig_fsize}).
We believe that the flatter slope in the range from 90 to 180~pc results
from a significant number of star formation regions with a diameter of
$\sim100-150$~pc with an
unresolved internal structure. Taking into account such undetected objects will shift
the distribution curve upward along the ordinate axis at sizes smaller than
or equal to the diameters of these star formation regions.
The opposite situation is observed at $d=600-650$~pc. The largest
structures with $d>600$~pc have a low boundary surface brightness. They are
difficult to identify in the spiral arms of the grand-design galaxy NGC~628
because of the significant variations of the background level (see
Section~4). Underestimating the number of star formation regions at the
lowest hierarchical levels leads to a drastic drop in the size distribution
curve.
Despite the small statistics, the largest star formation regions with
$d\approx0.65-0.9$~kpc also follow a power-law size distribution with
$\gamma \sim -1.5$. This can be an additional argument in favour of the
assumption of \citet{efremov1987}, \citet{elmegreen2006c} and
\citet{zhang2001} that the hierarchical structures extend to the scale of
1~kpc.
Taking into account the hierarchy of star formation regions is crucial for
construction of the cumulative size distribution function. Neglecting the
internal structure of star formation regions at higher hierarchical levels
and underestimating the number of star formation regions at lower
hierarchical levels lead to a decrease or an increase of the slope of the
size distribution function, respectively. To illustrate this, we compare
size distributions of regions of \citet{elmegreen2006}, our star formation
regions of all hierarchical levels, and our star formation regions of Level~1
with any profiles in the range of scale from 50 to 110~pc in
Fig.~\ref{figure:fig_fsize}.
On the scale of 200--600~pc, the characteristic sizes of stellar
aggregates and complexes, the size distribution function has a constant
slope. We believe that the sample of objects at different levels of
hierarchy within this range of scale is complete.
The slope $\gamma$ of the cumulative size distribution function for star
formation regions is of
fundamental importance. It is associated with the fractal dimension of objects
in the galaxy at different scales. \citet{elmegreen2006} introduced the
fractal dimension $\ifmmode\mathcal{D}\else$\mathcal{D}$\fi\ $, where $\ifmmode\mathcal{D}\else$\mathcal{D}$\fi\ = - \gamma$. Following
\citet{elmegreen2006}, we believe that the size distribution of stellar groups
suggests a fractal distribution of stellar positions projected on the disc
of the galaxy, with a constant fractal dimension of $\ifmmode\mathcal{D}\else$\mathcal{D}$\fi\ \approx 1.5$ in the
wide range of length scales from 2~pc to 1~kpc. It is comparable to the
fractal dimension of projected local interstellar clouds,
$\ifmmode\mathcal{D}\else$\mathcal{D}$\fi\ \approx 1.3$ \citep*{falgarone1991}, and to the fractal dimension of
H\,{\sc i} ($\ifmmode\mathcal{D}\else$\mathcal{D}$\fi\ = 1.2-1.5$) in the M81 group of galaxies
\citep{westpfahl1999}.
\section{Conclusions}
We studied hierarchical structures and the size distribution of
star formation regions in the spiral galaxy NGC~628 over a range of
scales from 50 to 1000~pc, based on size estimates of 297 star formation
regions. Most star
formation regions are combined into larger structures over
several levels. We found three characteristic sizes of young star groups:
OB associations with mean diameter $d=66\pm18$~pc, stellar aggregates
($d=240\pm90$~pc) and star complexes ($d=583\pm84$~pc).
The cumulative size distribution function of star formation regions satisfies
a power law with a slope of $-1.5$ at scales from 45 to 85~pc, from 190
to 600~pc, and from 650~pc to 900~pc, which correspond to the sizes of
associations, aggregates and complexes. Together with the result of
\citet{elmegreen2006}, who found the slope $-1.8\le\gamma\le-1.5$ for
regions at scales from 2 to 100~pc, our result shows that the size
distribution of young stellar structures in the galaxy obeys a power law with
a constant slope of $\approx-1.5$ at all studied scales from
$\approx2$~pc to $\approx1$~kpc.
Ignoring the hierarchical structure, i.e. using star formation regions of
only one hierarchical level to examine the size distribution, yields
slopes $-5\le\gamma\le-3$.
\section*{Acknowledgements}
The author is grateful to the referee for his/her constructive comments.
The author is grateful to Yu.~N.~Efremov (Sternberg Astronomical Institute)
for useful discussions. The author thanks E.~V.~Shimanovskaya (Sternberg
Astronomical Institute) for help with editing this paper. The author
acknowledges the use of the HyperLeda
database (http://leda.univ-lyon1.fr). This study was supported in part by
the Russian Foundation for Basic Research (project nos. 12--02--00827 and
14--02--01274).
\section{Introduction}
For a holomorphic map $F:N^n\to P^p$ between compact complex
manifolds one can consider the set of points $\eta(F)$ in the source
manifold $N$ where the map has a certain kind of singularity $\eta$. The
Thom polynomial $\Tp(\eta)$ of $\eta$ is a multivariate polynomial
depending only on $\eta$, with the property that the cohomology class
represented by the closure of $\eta(F)$ is equal to the specialization
of $\Tp(\eta)$ at the characteristic classes $c_i(N)$, $F^*(c_i(P))$.
For this statement to hold, the map $F$ must satisfy transversality
conditions. There is an analogous theory for real smooth maps, where one
studies a polynomial of the Stiefel-Whitney classes of $TN$ and $F^*TP$
expressing $[\eta(F)]\in H^*(N;\Z_2)$. These real Thom polynomials can
be calculated from the complex Thom polynomials \cite{borel-haefliger},
hence we restrict our study to the complex case.
We must specify what the singularity $\eta$ means. For the definition of
$\eta(F)$ to make sense, $\eta$ must be a subset of $\E_0(n,p)$, the vector space of holomorphic map germs $(\C^n,0)\rightarrow (\C^p,0)$, invariant under
the action of the holomorphic reparametrization groups of $(\C^n,0)$ and
$(\C^p,0)$. A natural choice for such a subset is obtained by
considering those germs whose local algebras (see definition below) are isomorphic.
Subsets obtained in this way are called contact singularities. In the
language of equivariant cohomology, the Thom polynomial of the first
paragraph is the $\GL(n)\times \GL(p)$-equivariant cohomology class represented by the closure of the contact class $\eta$ in $H_{\GL(n)\times \GL(p)}^*(\E_0(n,p))$.
Thom polynomials have applications in various parts of differential topology, algebraic geometry, and algebraic combinatorics, let us just allude to the simplest case, the celebrated Giambelli-Thom-Porteous
formula---where $\eta$ is the set of germs with corank $k$ differential. The present paper is devoted to the problem of calculating Thom polynomials in the $n \leq p$ case, as well as the study of their
interior structure.
As discussed above, Thom polynomials are parameterized by an algebra
$\mathcal Q$ and two integers $n$ and $p$; we will call such a Thom polynomial
$\Tp_{\mathcal Q}(n,p)$. It turns out that the corresponding $\eta\subset
\E_0(n,p)$ is finite codimensional if and only if $\mathcal Q$ is a finite
dimensional, commutative, local algebra. We recently showed
in \cite{dstab} that---under technical conditions---the Thom polynomials $\Tp_{\mathcal Q}(n,p)$ for the same $\mathcal Q$ but varying $n$ and $p$ can be organized into a formal power series
in infinitely many variables (cf. Section \ref{sec:d-stab}). The direct application of the method of
Restriction Equations \cite{rrtp} yields certain individual Thom polynomials,
but not the calculation of whole Thom series (unless $\mathcal Q$ has very small
dimension, see Section \ref{sec:exa}).
In a recent paper \cite{bsz06} B\'erczi and Szenes introduced a new method of studying
Thom polynomials of so-called Morin singularities, i.e. singularities corresponding to algebras $\mathcal Q=\C[[x]]/(x^i)$.
One of their key ideas is the usage of (improved versions of) equivariant localization formulas.
Their method naturally presents the whole Thom series. As a result, they reduced the calculation of the
Thom series of Morin singularities to a finite set of data, as well as determined this
data for $i\leq 7$. Another important novelty of \cite{bsz06} is the encoding of the Thom
series of Morin singularities by iterated residues of certain rational functions.
In the present paper we revisit a partial resolution construction of J.
Damon for all contact singularities. The B\'erczi-Szenes equivariant localization
formula applied to this construction leads to our main
result, a Localization Formula for $\Tp_{\mathcal Q}(n,p)$, see Theorem \ref{lf}.
The form of this formula implies different stabilization properties of
Thom polynomials, including the long-hidden $d$-stability property of
Thom series. The input of the Localization Formula for a fixed $\mathcal Q$ is a
{\em finite} set of various Euler classes inside a Grassmannian or
Hilbert scheme, showing how a finite set of data can encode a whole Thom series.
Below we develop different techniques to find these Euler classes, and
hence we will calculate several new Thom series. These examples include
Thom series corresponding to local algebras of dimension $<6$, as well
as a two-parameter list of algebras.
The present work relies on recent rapid developments in Thom polynomial theory, such as the method of
restriction equations of the authors and the various extensions and
applications made by M. Kazarian. It was, however, particularly triggered
by the new ideas and results of B\'erczi and Szenes \cite{bsz06}.
\subsection{The plan of the paper} In Section \ref{sec:tpforcontact} we recall contact
singularities and give a firm foundation of their Thom polynomial
theory. In Section \ref{sec:exa} we summarize known Thom polynomials. In Section
\ref{sec:resolution}, \ref{sec:localization} and \ref{sec:locforcontact}
we review Damon's partial resolution of contact singularities
and explain how it yields an equivariant Localization Formula for Thom
polynomials. In Section \ref{sec:stability} we explore stabilization properties that our
main formula implies. In Section \ref{sec:calc} we develop geometric and algebraic
techniques to calculate the inputs of the Localization Formula. In Section \ref{sec:return} we explain the connection with the equivariant geometry of the local punctual Hilbert scheme. Section \ref{sec:phi} presents the calculation of the Thom series of the singularities $\Phi_{n,r}$. In Section \ref{sec:generatingfn} we study generating functions of Thom series, iterated residue formulas, and their relations to geometry (as well as the iterated residue identities they depend on). The interesting phenomenon of formally applying Thom polynomial formulas to dimensions where they are
not defined is discussed in Section \ref{sec:smallp}.
Throughout the paper we will work in the complex analytic category. Cohomology is always taken with rational coefficients.
\subsection{Acknowledgements}
The authors are indebted to M. Kazarian for several helpful discussions on the topic. He also informed us about his work in progress \cite{kaza:gysin},\cite{kaza:noas} in which he calculates Thom series using the Gysin-map. We are also grateful to P. Frenkel, B.
K\H om\H uves, T. Ohmoto, G. Smith, A. Szenes, G. B\'erczi, E. Szab\'o and C. T. C. Wall for valuable comments. The first author thanks T. Ohmoto for the opportunity to visit Hokkaido University and RIMS which greatly helped this work.
\section{Thom polynomials of contact singularities}\label{sec:tpforcontact}
\subsection{Contact equivalence of finite germs} \label{sec:contact}
Consider $\E_0(n,p)$, the vector space of holomorphic map germs $(\C^n,0)\rightarrow (\C^p,0)$. Throughout the paper we assume that $n\leq p$. The vector space $\E_0(n):=\E_0(n,1)$ is an algebra without an identity. The space $\E_0(n,p)$ is a module over $\E_0(n)$. A map germ $g\in \E_0(n,p)$ induces a pullback $g^*:\E_0(p)\to \E_0(n)$ by composition.
\begin{definition} \label{quotientalgebra} The ideal $I_g$ of a germ $g \in \E_0(n,p)$ is the ideal in $\E_0(n)$ generated by $g^*\E_0(p)$. The {\em quotient algebra} $Q_g$ of a germ $g \in \E_0(n,p)$ is defined by $Q_g = \E_0(n)/ I_g$.
\end{definition}
Here, and in the whole paper, an ideal {\em generated} by some ring elements is the smallest ideal containing the specified ring elements, even if the ring has no identity. In singularity theory one usually considers the one dimension larger {\em local algebra}---defined as $\mathcal{Q}_g:=\E(n)/ I_g$, where $\E(n)$ is the ring of function germs $(\C^n,0)\to\C$---which has an identity. The two versions can easily be obtained from each other, but the quotient algebra comes up more naturally in our geometric setting. We will be concerned with germs $g$ for which the quotient algebra is finite dimensional. We call these germs {\em finite}. Finite germs only exist for $n\leq p$; this is the reason for our overall assumption $n\leq p$. Finiteness is also equivalent to the property that the ideal $(g^{*}\E_0(p))$ contains a power of $\E_0(n)$. For a finite germ, in local coordinates $g=(g_1(x_1, \ldots, x_n), \ldots, g_p(x_1, \ldots, x_n))$, we have
$$\mathcal{Q}_g=\C[[x_1,\ldots,x_n]]/ (g_1, \ldots, g_p),\qquad\text{and}\qquad Q_g=\M_n /(g_1, \ldots, g_p),$$
where $\M_n$ is the maximal ideal of $\C[[x_1,\ldots,x_n]]$, that is, the ideal generated by the variables $(x_1,\ldots,x_n)$. For finite germs the quotient algebra is nilpotent, so we will also call $Q_g$ the {\em nilpotent algebra} of the germ $g$ to distinguish it from the local algebra.
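For instance (a standard example, included for illustration), for the germ $g(x)=x^{i+1}\in\E_0(1,1)$ the ideal $I_g$ is $\M_1^{i+1}$, the set of germs of order at least $i+1$, hence
\[
\mathcal{Q}_g=\C[[x]]/(x^{i+1}),\qquad Q_g=\M_1/(x^{i+1}),
\]
so $Q_g$ is the $i$-dimensional nilpotent algebra with basis $[x],[x^2],\dotsc,[x^{i}]$; these germs give the Morin singularities mentioned in the Introduction.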
\smallskip
The invertible holomorphic germs $(\C^n,0)\to (\C^n,0)$ form the {\em right group} $\mathcal{R}(n)$. This group acts on $\E_0(n)$ by composition, and hence it acts on the set of ideals of $\E_0(n)$.
\begin{definition} \label{contactequivalent} Two germs $f,g \in \E_0(n,p)$ are {\em contact equivalent} if their ideals are in the same $\mathcal{R}(n)$ orbit. An equivalence class $\eta\subset \E_0(n,p)$ is called a (contact) singularity.
\end{definition}
In singularity theory one considers the so called contact group \cite[Ch3, 1.6]{avgl}
\[ \mathcal{K}=\mathcal{K}(n,p)=\{(h,M):h\in \mathcal{R}(n), M \text{ is a germ}\ (\C^n,0)\to (\GL(p),1)\},\]
acting on the vector space $\E_0(n,p)$ by $\big((h,M)g\big)(x)=M(x)g(h^{-1}(x))$, and defines germs to be contact equivalent if they are in the same orbit. It is a theorem of Mather \cite[Thm 2.9]{mather4} that for finite germs the two definitions are equivalent. Hence we will denote the contact equivalence class of a germ $g$ by $\mathcal{K}g$. Equivalently
\begin{theorem}[Mather] \label{isoq-germ} The finite germs $f,g \in \E_0(n,p)$ are contact equivalent if and only if their nilpotent algebras are isomorphic.
\end{theorem}
Indeed, suppose that we have an isomorphism $\phi:Q_f\to Q_g$, and let $[x_i]$ denote the image of $x_i\in\M_n$ in $Q_f$. Pick $p_i\in\M_n$ such that $[p_i]=\phi[x_i]$ in $Q_g$. It is easy to check that the $p_i$'s can be chosen in such a way that $h=(p_1,\dots,p_n)\in \E_0(n,n)$ is an element of $\mathcal{R}(n)$ and $hI_f=I_g$.
\subsection{Thom polynomials}\label{sec:tp}
Given a map $F:N^n\to P^p$ between complex manifolds, and a point $x\in N$, we can choose charts around $x$ and $F(x)$, and consider the germ of $F$ at $x$ in these charts. The contact singularity of this germ does not depend on the choice of the charts. Indeed, it is a consequence of Mather's theorem cited above that after reparametrizing the source $(\C^n,0)$, and target $(\C^p,0)$ spaces, the ideal of the germ will be in the same $\mathcal{R}(n)$ orbit. Equivalently, we can refer to the fact that the group $\K(n,p)$ contains the group $\mathcal{R}(n)\times \mathcal{R}(p)$ of holomorphic reparametrizations of the source $(\C^n,0)$ and target spaces $(\C^p,0)$. (In this context the group $\mathcal{R}(p)$ is usually
called the left group and denoted by $\mathcal{L}(p)$.)
Therefore it makes sense to talk about the contact singularity of $F$ at a point $x\in N$. Hence, for a map $F:N^n\to P^p$ and a contact singularity $\eta\subset \E(n,p)$, we define the singularity subsets
$$\eta(F)=\{x\in N| \text{ the germ of $F$ at $x$ is in } \eta\}.$$
\noindent After some preparations (Sections \ref{sec:pd}--\ref{sec:jetapprox}), in Proposition~\ref{tp} we explain the following statements.
\begin{quotation}
{\em If $F$ satisfies certain transversality conditions, then the subset $\eta(F)$ defines a cohomology class $[\eta(F)]\in H^*(N)$. Moreover, this class can be expressed as a universal polynomial (the Thom polynomial, $\Tp(\eta)$) of the Chern classes of $TN$ and $f^*TP$.}
\end{quotation}
First, in Section \ref{sec:pd}, we will discuss degeneracy loci, and how universal cohomology classes are associated with them. Then we will interpret $\eta(F)$ as a degeneracy locus in Section \ref{sec:jetapprox}. These two sections serve as a rigorous definition of the Thom polynomial for a contact singularity class.
\subsection{Poincar\'e dual, equivariant cohomology and degeneracy loci}\label{sec:pd}
In this section we discuss degeneracy loci and the cohomology classes represented by them. First recall that subvarieties $Y \subset X$ represent cohomology classes in the underlying space (see e.g. \cite[p.219]{fulton:young}).
\begin{proposition}[{\bf Definition}] \label{dual} If $X$ is a smooth algebraic variety and $Y$ is an irreducible subvariety of complex codimension $d$ then there is a unique element $[Y\subset X]\in H^{2d}(X)$ such that
\begin{enumerate}
\item $[Y\subset X]$ is supported on $Y$, i.e. $[Y\subset X]$ restricted to $X\setminus Y$ is zero,
\item $[Y\subset X]|_{X\setminus\sing Y}=[Y^o\subset (X\setminus\sing Y)]$.
\end{enumerate}
Here $\sing Y$ denotes the singular subvariety of $Y$ and $Y^o=Y\setminus \sing Y$. The cohomology class $[Y^o\subset
(X\setminus\sing Y)]$ is defined by extending the Thom class of a tubular neighborhood of the proper submanifold $Y^o\subset (X\setminus\sing Y)$ via excision.
\end{proposition}
If $Y$ has several components $Y_i$ (usually of the same codimension) then $[Y\subset X]$ is defined to be the sum of the
classes $[Y_i\subset X]$. When the underlying space $X$ is clear from the context, we denote $[Y\subset X]$ by $[Y]$.
\medskip
We need the equivariant version of Proposition \ref{dual} above. Let $G$ be a complex algebraic Lie group. If $X$ is a smooth algebraic variety with a $G$-action, and $Y$ is a $G$-invariant subvariety, then $Y$ represents a $G$-equivariant cohomology class in the equivariant cohomology of $X$, as follows (see e.g. \cite{kazass} or \cite{edidin-graham}; for a recent account, see \cite{fulton:eq}).
\begin{theorem}[{\bf Definition}] \label{equi_dual}
Let $X$ be a smooth algebraic variety with a $G$-action, and $Y\subset X$ be a $G$-invariant irreducible subvariety of complex codimension $d$. Then there is a unique element $[Y\subset X]_G\in H^{2d}_G(X)$ (called the {\em $G$-equivariant Poincar\'e dual of $Y$ in $X$}) such that for all algebraic principal $G$-bundles $\pi:P\to M$ over a smooth algebraic variety $M$ with classifying map $k:M\to BG$ we have
\begin{equation}[P\times_GY\subset P\times_GX]=\tilde{k}^*[Y\subset X]_G,\label{eq:gpd} \end{equation}
where $\tilde{k}:P\times_G X\to EG\times_G X$ is induced by $k$.
\end{theorem}
Intuitively $[Y\subset X]_G$ is the class represented by $EG\times_GY$ in $EG\times_GX$. From the next section on, we will be mainly interested in the case when $X$ is a vector space. Then $H^*_G(X)\iso H^*(BG)$ canonically. The class $[Y\subset X]_G$ has various definitions and names in the literature (see \cite{zsolt} for an account). We will also use the notation $[Y]_G$ or simply $[Y]$ for $[Y\subset X]_G$, when the underlying space $X$ and the group action is clear from the context.
\begin{remark} We can make this construction more explicit for the torus $T=\GL(1)^r$ (this is the case we need in the Localization Formula below): repeat the construction above for the principal $T$-bundle $P=\big(\C^{d+1}\setminus \{0\}\big)^r\to (\P^d)^r$. Here the classifying map $k$ is the standard inclusion $(\P^d)^r\to (\P^\infty)^r=BT$. It is not difficult to show that
\[\tilde{k}^*:H^j(ET\times_TX)\to H^j(P\times_TX)\]
is bijective for $j\leq 2d$. So for large enough $d$, equation (\ref{eq:gpd}) defines $[Y\subset X]_T$ uniquely:
\[[Y\subset X]_T=(\tilde{k}^*)^{-1}[P\times_TY\subset P\times_TX].\]
\end{remark}
\begin{definition} \label{def:deg-locus} Suppose now that $s:M\to E$ is a section of the fiber bundle $\varphi:E=P\times_GX\to M$. If $Y\subset X$ is a $G$-invariant subset, then we use the notation $Y(\varphi):=P\times_GY$ for the set of `$Y$-points' in $E$ and $Y(s):=s^{-1}\big(Y(\varphi)\big)$ for the set of `$Y$-points of $s$'. We call $Y(s)$ the {\em degeneracy locus} corresponding to $Y$ and $s$.
\end{definition}
To make a statement about the class represented by $Y(s)\subset M$ (Corollary \ref{tp_deg_locus}) we need to discuss transversality.
\begin{definition}Let $f:A\to B$ be an algebraic map between algebraic manifolds and $Y\subset B$ be a subvariety. The map $f$ is {\em transversal} to $Y$ if it is transversal to all singularity strata of $Y$, i.e. to the (not necessarily equidimensional) manifolds $Y^o=Y\setminus \sing Y$, $\sing Y\setminus \sing(\sing Y)$ and so on.
\end{definition}
The following is a well known fact.
\begin{proposition}\label{trans}If $f:A\to B$ is transversal to $Y\subset B$, then $f^*([Y])=[f^{-1}(Y)]$.\end{proposition}
This statement easily generalizes to the equivariant setting.
\begin{proposition}\label{prop:equitrans}
Let the $G$-equivariant map $f:A\to B$ be transversal to the $G$-invariant subvariety $Y\subset B$. Then
$f^*([Y]_G)=[f^{-1}(Y)]_G$.
\end{proposition}
As a consequence, the equivariant class $[Y]_G$ determines the class of the degeneracy locus $Y(s)$:
\begin{corollary}\label{tp_deg_locus} If the section $s:M\to E$ of the vector bundle $\varphi:E=P\times_GX\to M$ is transversal to $Y(\varphi)=P\times_GY$ then $[Y(s)]=k^*[Y]_G $ where $k:M\to BG$ is the classifying map of $P$.
\end{corollary}
\begin{remark} In the complex algebraic setting the existence of a transversal section is not guaranteed. Nevertheless $k^*[Y]_G $ is always an obstruction: if $k^*[Y]_G $ is non-zero then there is no section $s$ with $Y(s)=\emptyset$, since $k^*[Y]_G $ is supported on $Y(s)$. The theory can be extended to the real smooth category. In that case the existence of the Poincar\'e dual is not automatic, but a generic section is transversal.
\end{remark}
\subsection{Jet approximation: reduction to finite dimension} \label{sec:jetapprox} In this section we interpret $\eta(F)$ (from Section \ref{sec:tp}) as a degeneracy locus. For this we need a finite dimensional approximation of $\E_0(n,p)$, and related notions.
The vector space of $k$-jets is defined to be the vector space of degree $k$ polynomials $(\C^n,0)\to (\C^p,0)$. That is, we have
\[ J^k(n,p)=\bigoplus_{i=1}^k\Hom(\Sym^i\C^n,\C^p), \] where $\Sym^i\C^n$ is the $i$\textsuperscript{th} symmetric power of the vector space $\C^n$. Let $J^k(n)=J^k(n,1)$. The map $\E_0(n,p)\to J^k(n,p)$, defined by taking the degree $k$ Taylor
polynomial at 0, will be denoted by $j^k$. The space $J^k(n)$ is an algebra (without identity) with multiplication $h_1\cdot h_2:=j^k(h_1h_2)$, the $k$-jet of the usual product. The $j^k$-image (`$k$-jets') of elements in $\mathcal{R}(n)$ forms a group $\mathcal{R}^k(n)$. The group $\mathcal{R}^k(p)$ will also be denoted by $\mathcal{L}^k(p)$. The group $\mathcal{R}^k(n)$ acts on the algebra $J^k(n)$ by \[ \alpha \cdot h = j^k( h \circ \alpha^{-1}) \qquad\qquad \big(\alpha\in \mathcal{R}^k(n), h\in J^k(n)\big). \]
Hence the group $\mathcal{R}^k(n)$ also acts on the set of ideals of $J^k(n)$. Similarly we can define the group $\mathcal{K}^k=\mathcal{K}^k(n,p)$ acting on the vector space $J^k(n,p)$.
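For later reference we note the dimension of the jet space (an elementary count): $\dim \Sym^i\C^n=\binom{n+i-1}{i}$, hence
\[
\dim J^k(n,p)=p\sum_{i=1}^{k}\binom{n+i-1}{i}=p\left(\binom{n+k}{k}-1\right).
\]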
Let $h\in J^k(n,p)$. The ideal in $J^k(n)$, generated by the coordinate functions of $h$, will be denoted by $I_h$. We call $Q_h=J^k(n)/I_h$ the nilpotent algebra of the jet $h$. Two $k$-jets are defined to be contact equivalent, if their ideals are in the same $\mathcal{R}^k(n)$-orbit.
\begin{proposition}\label{qiso} \cite[Thm 2.9, 2.1]{mather4} Two $k$-jets are contact equivalent if and only if they are in the same $\mathcal{K}^k$-orbit if and only if their nilpotent algebras are isomorphic.
\end{proposition}
The proof is the same as that of Theorem \ref{isoq-germ}.
Next we define invariants of germs and jets. The dimension of the quotient algebra $\mu(f):=\dim(Q_f)$ of a finite germ (or $k$-jet) $f$ plays a crucial role in our study.
We say that $f$ has depth $d$ ($\depth(f)=d$) if $d$ is the smallest integer for which $\E_0(n)^{d+1}\subset I_f$ (or $(J^k(n))^{d+1}\subset I_f$ in the $k$-jet case). It is an application of the Nakayama lemma that $\depth(f)\leq \mu(f)$.
\begin{definition} A germ $f\in \E_0(n,p)$ is {\em $k$-determined} if every germ $g\in \E_0(n,p)$ whose $k$-jet equals that of $f$ is contact equivalent to $f$.
\end{definition}
Our main objects---the finite germs---are finitely determined due to the following
\begin{theorem}\cite{gaffney:phd},\cite[Thm 1.2]{wall:finitedet} Any finite germ (or $k$-jet) $f$ is $(\depth(f)+1)$-determined.
\end{theorem}
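As an example (continuing the germ $g(x)=x^{i+1}\in\E_0(1,1)$ from Section \ref{sec:contact}): here $I_g=\M_1^{i+1}$, so $\mu(g)=i$ and $\depth(g)=i$, since $\E_0(1)^{i+1}\subset I_g$ while $\E_0(1)^{i}\not\subset I_g$. The theorem above thus shows that $g$ is $(i+1)$-determined: any germ whose $(i+1)$-jet equals $x^{i+1}$ is contact equivalent to $g$.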
Using the observation that
\begin{equation}\label{eq:jet-of-k} j^k(H(g))=j^k(H)(j^k(g)) \end{equation}
for any $H\in \mathcal{K}(n,p)$ and $g\in \E_0(n,p)$, we immediately get the following
\begin{proposition}\label{reduction} Suppose that $f\in\E_0(n,p)$ is $k$-determined. Then $f$ is contact equivalent to $g$ if and only if $j^kf$ is contact equivalent to $j^kg$.
\end{proposition}
In fact, the previous statements also imply that if $k\geq\depth(f)+1$ then $Q_f\iso Q_{j^kf}$.
\smallskip
Now we can give the degeneracy locus description of the singularity set $\eta(F)$ promised in Section \ref{sec:tp}. Given a map $F:N^n \to P^p$ between manifolds, and a positive integer $k$, we construct a fiber bundle
\begin{equation}\label{fibration}\phi_F:\{(x,h): x\in N,\ h\
\text{is the $k$-jet of a germ $(N,x)\to(P,F(x))$}\}\ \to\ N
\end{equation}
$$(x,h)\mapsto x,$$
with fiber $J^k(n,p)$. For $k=1$ this is the vector bundle Hom$(TN,F^*TP)$. In general the fiber is a vector space, but the structure group (the left-right group $\mathcal{R}^k(n)\times\mathcal{L}^k(p)$ acting on $J^k(n,p)$ by composition on the two sides) does not act linearly. The bundle $\phi_F$ has a natural section
\begin{equation}\label{jetsection}j^kF: x \mapsto (x,j^k(\text{germ of $F$ at $x$})).\end{equation}
Now let $g\in \E_0(n,p)$ be a $k$-determined germ, let $\eta=\mathcal{K}g$ be its contact equivalence class, and let $\eta^k$ be the contact equivalence class of $j^kg\in J^k(n,p)$. Since $\eta^k$ is $\mathcal{R}^k(n)\times\mathcal{L}^k(p)$-invariant, it defines a degeneracy locus in the sense of Definition \ref{def:deg-locus}. Proposition \ref{reduction} implies that we have the following degeneracy locus description of the $\eta$ singularity subset of a map $F$
\begin{equation}\label{deg_locus} \eta(F) = \eta^k(j^k(F)).\end{equation}
\subsection{Definition of the Thom polynomial}
Now we are ready to define the Thom polynomial of a contact singularity. Let $g\in \E_0(n,p)$ be a $k$-determined finite germ. Let $\eta^k\subset J^k(n,p)$ be the closure of $\mathcal{K}^kj^kg$. Notice that $\mathcal{K}^k$ is a connected algebraic group acting algebraically, so the orbit-closure is the same in the Zariski and the metric topology. Connectedness implies that the closure of the orbit is irreducible.
\begin{definition} The Thom polynomial $\Tp(g)$ of the $k$-determined finite germ $g\in \E_0(n,p)$ (or a $k$-jet) is defined to be the class represented by $\eta^k$ in the $\mathcal{R}^k(n)\times \mathcal{L}^k(p)$-equivariant cohomology of $J^k(n,p)$.
\end{definition}
Since the contact class of $g$ depends only on the quotient algebra of $g$, we will also use the notation $\Tp_Q(n,p):=\Tp(g)$ for any $g\in J^k(n,p)$ with $Q_g\iso Q$.
$J^k(n,p)$ is a vector space (hence contractible), and $\mathcal{R}^k(n)\times \mathcal{L}^k(p)$ is homotopy equivalent to $\GL(n)\times \GL(p)$, therefore we have
\[ \Tp(g)=[\eta^k \subset J^k(n,p)]_{\mathcal{R}^k(n)\times \mathcal{L}^k(p)} =
[\eta^k \subset J^k(n,p)]_{\GL(n)\times \GL(p)} \in H^*\big(B(\GL(n)\times \GL(p))\big). \]
The degree of the Thom polynomial is the codimension of $\eta^k$ in $J^k(n,p)$. We will also refer to this degree as the {\em codimension of the germ $g$}.
The cohomology ring $H^*\big(B(\GL(n)\times \GL(p))\big)$ is a polynomial ring generated by the universal Chern classes $a_1,\ldots,a_n, b_1,\ldots,b_p$ of the groups $\GL(n)$, $\GL(p)$, hence the Thom polynomial is indeed a polynomial.
The meaning of the Thom polynomial is enlightened by putting together expression (\ref{deg_locus}) with Definition~\ref{equi_dual}. We obtain the following
\begin{proposition}\label{tp} Let $g\in \E_0(n,p)$ be a $k$-determined germ, $\eta^k$ the closure of $\mathcal{K}^k(j^kg)$ in $J^k(n,p)$, and let $F:N^n\to P^p$ be a map between compact complex manifolds. Suppose that the section $j^kF$ (see~(\ref{jetsection})) is transversal to $\eta^k(\phi_F)$---the $\eta^k$-points of the fibration~(\ref{fibration}). Then the cohomology class $[\overline{\eta(F)}\subset N]$ represented by the $\eta$-points---where $\eta=\mathcal Kg$---of the map $F$ is equal to the Thom polynomial of $g$ evaluated at the Chern classes of $TN$ and $F^*TP$.
\end{proposition}
We have not included the letter $k$ in the notation $\Tp(g)$, since, as we will show in Section \ref{sec:k}, the Thom polynomial does not depend on the choice of $k$, as long as $g$ is $k$-determined. Observe that Proposition \ref{tp} proves this statement, provided there are sufficiently many maps $F$ satisfying its conditions. We will use a different approach in Section \ref{sec:k}.
\section{Examples, known results}\label{sec:exa}
Suppose that a complex, commutative, finite dimensional, local algebra $\mathcal{Q}$ is given. Then there exists a contact singularity in $\E_0(n,p)$ with local algebra $\mathcal{Q}$ for each $n$ and $p$ with $n$ and $p-n$ large enough. A general Thom polynomial is a formula (containing $n$ and $p$ as parameters) expressing the Thom polynomials of all these singularities together.
For example, the Thom polynomial of a singularity in $\E_0(n,p)$ with local algebra $\mathcal{Q}=\C[[x]]/(x^3)$ is
\[c_{l+1}^2+\sum_{i=1}^\infty 2^{i-1}c_{l+1-i}c_{l+1+i},\]
where $l=p-n$, and the classes $c_i$ are defined by
\begin{equation}\label{quotient_vars}1+c_1t+c_2t^2+\ldots=\frac{1+b_1t+b_2t^2+\ldots+b_pt^p}{1+a_1t+a_2t^2+\ldots+a_nt^n},\end{equation}
and the conventions $c_0=1$, $c_{<0}=0$. Using Schur polynomials
\begin{equation}\label{schurdef}
\Delta_{\lambda_1\geq \lambda_2\geq \ldots\geq \lambda_r}:=\det \left( c_{\lambda_i+j-i} \right)_{i,j=1,\ldots,r}
\end{equation}
we can further write the Thom polynomial in the form
$$\Delta_{l+1,l+1}+2\Delta_{l+2,l}+4\Delta_{l+3,l-1}+\ldots.$$
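The equality of the two expressions is a direct telescoping check: by (\ref{schurdef}) we have $\Delta_{\lambda_1,\lambda_2}=c_{\lambda_1}c_{\lambda_2}-c_{\lambda_1+1}c_{\lambda_2-1}$, hence
\[
\sum_{i\geq 0}2^{i}\Delta_{l+1+i,\,l+1-i}
=\sum_{i\geq 0}2^{i}\bigl(c_{l+1+i}c_{l+1-i}-c_{l+2+i}c_{l-i}\bigr)
=c_{l+1}^2+\sum_{i\geq1}2^{i-1}c_{l+1-i}c_{l+1+i},
\]
since for $i\geq1$ the product $c_{l+1+i}c_{l+1-i}$ appears with coefficient $2^{i}-2^{i-1}=2^{i-1}$.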
This example displays four important properties:
\begin{itemize}
\item the Thom polynomial can be expressed in the ``quotient variables'' (\ref{quotient_vars});
\item when the Thom polynomial is expressed in quotient variables, then the dependence on $p$ and $n$ is only through $l=p-n$;
\item if the general Thom polynomial is expressed in quotient variables, and the indexes are shifted by $l+1$ (i.e.
substituting $d_i=c_{l+1+i}$), the expression does not depend on $l$ either;
\item the coefficients of Thom polynomials in the basis of Schur polynomials are non-negative.
\end{itemize}
The general Thom polynomial after shifting the indices by substituting $d_i=c_{l+1+i}$ is called the Thom series of the local algebra $\mathcal{Q}$, and is denoted by $\ts_\mathcal{Q}$. For example
\[ \ts_{\C[[x]]/(x^3)}=d_0^2+d_{-1}d_1+2d_{-2}d_2+4d_{-3}d_3+\ldots. \]
Alternatively we can work with nilpotent algebras. We will also use the notation $\ts_Q$ for $Q$ being a nilpotent algebra.
All four properties above hold in general. The first three we will prove in Section \ref{sec:stability}. The first two are classical facts, we will call them the {\em Thom-Damon-Ronga theorem}, the third (in a special case) is a theorem from \cite{dstab}. The fourth property was recently proved in \cite{pragacz:positivity}.
\smallskip
Several individual Thom polynomials are known for small values of $l$ (see e.g. \cite{rrtp}, \cite{kazamulti}), but hardly any general Thom polynomials, i.e. Thom series, are known. Here is a complete list of local algebras whose Thom series is known:
\begin{itemize}
\item $\Sigma^n=\C[[x_1,\ldots,x_n]]/\M_n^2$ (Giambelli-Thom-Porteous formula)
\item $A_2$ \cite{rongaij}
\item $A_3$ \cite{a3} (announced), \cite{pragacz} (sketched), \cite{bsz06}, \cite{lascoux-pragacz} (proved)
\item $A_4,A_5,A_6$ \cite{bsz06}
\item The Thom-Boardman classes $\Sigma^{n,1}$ \cite{kf}
\item $I_{2,2}$ \cite{dstab, pragacz:i22, kaza:gysin}.
\end{itemize}
Here and in what follows we use the standard notations of singularity theory: $A_i=\C[[x]]/(x^{i+1})$,
$I_{a,b}=\C[[x,y]]/(xy,x^a+y^b)$, $III_{a,b}=\C[[x,y]]/(xy,x^a,y^b)$. For Thom series of Thom-Boardman classes see also Section \ref{sec:smallp}.
Below we will develop a method to calculate general Thom polynomials. It leads to formulas where the stability properties are not apparent, as these formulas are given in Chern roots. In Section \ref{sec:quotientchern} we show how to find formulas in quotient variables. In Section \ref{sec:generatingfn} we explore even more compact descriptions in terms of generating functions.
\section{A partial resolution}\label{sec:resolution}
In this section we introduce the key geometric idea leading to our cohomological localization formula. We present a partial resolution (i.e. a birational map of varieties) of contact invariant subvarieties of $J^k(n,p)$, in particular of closures of contact singularities. The construction is originally due to J.~Damon~\cite{damonphd}. Similar ideas are present in works of J.~Mather on Thom-Boardman singularities~\cite{mathertb}. The idea is that a contact invariant subvariety of $J^k(n,p)$ is the union of large linear subspaces.
Let $m$ be a nonnegative integer, and let $\gr^m=\gr^m(J^k(n))$ be the Grassmannian of $m$-codimensional linear subspaces of $J^k(n)$.
\begin{definition} \label{def:correspondence} Let $Y\subset \gr^m$ be a subvariety and fix $p\geq 1$. The {\em correspondence variety} of $Y$ is
\[ C(Y)=\{(I,g)\in\gr^m\times J^k(n,p)\ |\ I\in Y, I_g\subset I\}. \]
\end{definition}
We have now the following diagram
\begin{equation} \label{damondiag}
\xymatrix{C(Y) \ar@{^{(}->}@<-3pt>[rr]^{i} \ar[d] & & \gr^m\times
J^k(n,p) \ar[d]^{\pi_1}\ar[r]^{\ \ \ \pi_2} & J^k(n,p) \\
Y\ar@{^{(}->}@<-3pt>[rr]& & \gr^m, & } \end{equation} where $\pi_1$,
$\pi_2$, $i$ are the obvious projections and imbedding.
The projection $C(Y)\to Y$ makes $C(Y)$ a vector bundle with fiber $C_I=I\otimes\C^p\subset J^k(n)\otimes\C^p=J^k(n,p)$.
\begin{proposition}\label{birat} Let $g \in J^k(n,p)$ with $\mu(g):=\codim I_g=m$. Let $Y$ be $\overline{\mathcal{R}I_g}\subset \gr^m$ for $\mathcal{R}=\mathcal{R}^k(n)$. Then
\[\phi=\pi_2\circ i:C(\overline{\mathcal{R}I_g})\to \overline{\mathcal{K}g}\]
is a birational map.
\end{proposition}
\begin{proof} For $\tilde{\mathcal{K}}g:=\{(I_h,h):h\in \mathcal{K}g\}$ we see that $\phi|_{\tilde{\mathcal{K}}g}:\tilde{\mathcal{K}}g\to \mathcal{K}g$ is a bijection, so it is enough to show that $\tilde{\mathcal{K}}g\subset C(\overline{\mathcal{R}I_g})$ is open (and therefore dense since $C(\overline{\mathcal{R}I_g})$ is irreducible). For this it is enough to show that $\tilde{\mathcal{K}}g$ intersected with a fiber is open in the fiber; hence we need the following lemma.
\begin{lemma}\label{ei}For any jet $g \in J^k(n,p)$ the set $A_g:=\{h\in I_g\otimes\C^p:I_h=I_g\}$ is Zariski open in $I_g\otimes\C^p$.\end{lemma}
\begin{proof} Let $h=(h_1,\ldots,h_p)\in I_g\otimes\C^p$, and let $a_i^j$ be the coefficients of $h_i$ in some linear
basis of $I_g$. The property that the $h_i$'s generate $I_g$ as an ideal is equivalent to the property that an appropriate matrix, whose entries are linear functions of the $a_i^j$'s, has full rank. Therefore, the property that the $h_i$'s generate $I_g$ cuts out a Zariski open subset.
\end{proof}
This finishes the proof of Proposition \ref{birat}.
\end{proof}
Using the Gysin (or pushforward) map $\phi_*$ we have $\phi_*(1)=[\overline{\mathcal{K}g}]=\Tp(g)$. Details on the properties of the equivariant Gysin map can be found in \cite{fulton:eq}. We calculate the Gysin map using localization in the next section.
\section{Singular-base equivariant localization}\label{sec:localization}
In this section we recall a version of the Berline-Vergne-Atiyah-Bott equivariant localization formula, due to B\'erczi and Szenes. For completeness we give a proof. This version presents the `localization' of an equivariant cohomology class on the total space of a vector bundle over a compact singular base space.
Let $V$ be a vector space. Suppose that $M$ is a compact algebraic manifold, and $Y\subset M$ a subvariety. Let $E \to Y$ be a sub-vector bundle of the trivial bundle $M\times V \to M$ restricted to $Y$. Let $\pi_2: M\times V \to V$ be the projection, $i:E \subset M\times V$ the embedding, and $\phi=\pi_2\circ i$, as in the diagram
$$\xymatrix{E \ar@{^{(}->}@<-3pt>[rr]^{i} \ar@/^2pc/[rrr]^{\phi} \ar[d] & & M\times
V \ar[d]\ar[r]^{\pi_2} & V \\ Y\ar@{^{(}->}@<-3pt>[rr]& & M. & } $$
Recall that for $A\subset B$, by $[A]$ or $[A\subset B]$ we mean the cohomology class represented by $A$ in the cohomology of $B$.
\begin{proposition} \cite[(3.8)]{bsz06} \label{31} Suppose that the torus $T$ acts on all spaces in the diagram above, and that all maps are $T$-equivariant. Assume that the fixed point set $F(M)$ of the $T$-action on $M$ is finite.
Then for the push-forward map $\phi_*:H_T^*(E)\to H_T^*(V)$ we have
\begin{equation} \label{singbase}
\phi_*(1) =\sum_{f\in F(M)}\frac{[Y\subset M]|_f\cdot [E_f\subset V]}{e(T_fM)}=
\sum_{f\in F(Y)}\frac{[Y\subset M]|_f\cdot [E_f\subset V]}{e(T_fM)}.
\end{equation}
Consequently, if $\phi$ is birational to its image, then the right hand side of (\ref{singbase}) is equal to $[\phi(E)]\in H_T^*(V)$.
\end{proposition}
\begin{proof}We have to calculate the integral ${\pi_2}_*(i_*(1))=\int_{M}i_*(1)=\int_M[ E\subset M\times V]$ (we identify the cohomology of $M\times V$ with the cohomology of $M$) for which we apply the Berline-Vergne-Atiyah-Bott localization formula, that we recall now.
\begin{proposition}\cite{atiyah-bott}\label{ab} Suppose that $M$ is a compact manifold and $T$ is a torus acting smoothly on $M$, and the fixed point set $F(M)$ of the $T$-action on $M$ is finite. Then for any cohomology class $\alpha\in H_T^*(M)$
\begin{equation}\label{abformula}\int_M\alpha=\sum_{f\in F(M)} \frac{\alpha|_f}{e(T_{f}M)}.\end{equation}
Here $e(T_fM)$ is the $T$-equivariant Euler class of the tangent space $T_fM$. The right hand side is considered in the fraction field of the polynomial ring $H^*_T(\mathrm{point})=H^*(BT)$ (see \cite{atiyah-bott} for more details): part of the statement is that the denominators cancel when the sum is simplified.
\end{proposition}
\noindent We complete the proof of Proposition~\ref{31} by noticing that
\begin{equation} [E\subset M\times V]|_f=[E_f\subset V]\cdot[Y\subset M]|_f,\label{eq:product} \end{equation}
where $f\in M\subset M\times V$.
The second equality in (\ref{singbase}) follows from the fact that the cohomology class $[Y\subset M]$ is supported on $Y$, so other fixed points give zero contribution.
\end{proof}
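As a minimal sanity check of (\ref{abformula}) (a standard example, included for illustration), let $T=\GL(1)$ act on $M=\P^1$ by $t\cdot[x:y]=[x:ty]$. The fixed points are $[1:0]$ and $[0:1]$, with tangent weights $a$ and $-a$, where $a$ is the Chern root of $T$. For the class $\alpha=1$ formula (\ref{abformula}) gives
\[
\int_{\P^1}1=\frac{1}{a}+\frac{1}{-a}=0,
\]
as it must be, since integration lowers the cohomological degree by $2$.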
Notice that the same argument gives a localization formula for the case of smooth base and singular fiber.
\begin{remark}\label{zero} If $\phi$ decreases the dimension, then $\phi_*(1)$ is zero---it is supported on a subset of too small dimension---so the right hand side of (\ref{singbase}) is zero.
\end{remark}
\begin{remark} \label{smooth} If $f$ is a smooth point of $Y$, then
\[ \frac{e(T_fM)}{[Y\subset M]|_f} =e(T_f Y),\]
hence if $Y$ is smooth then formula (\ref{singbase}) further simplifies to
\[ \phi_*(1) =\sum_{f\in F(M)}\frac{[E_f\subset V]}{e(T_fY)}.\]
This formula holds in the general case too, if we define the {\em virtual (tangential) Euler class} $e(T_f Y)$ to be $\ \displaystyle{\frac{e(T_fM)}{[Y\subset M]|_f}}\ $ even if $f$ is not a smooth point of $Y$.
\end{remark}
\begin{remark} The moral of Proposition~\ref{31} is that if we want to calculate the equivariant class of a variety with
localization, we should look for high-dimensional linear spaces in it. More precisely, we need another variety, birational to the original one, which is the total space of a vector bundle over a compact base space. The higher the rank of the bundle, the simpler the formula. Usually the variety we start with is a cone, hence there is a canonical line bundle whose total space is birational to it. Therefore formula (\ref{singbase}) can be applied to find the Thom polynomial. This case is used in \cite{root}. Certain quiver varieties are birational to total spaces of vector bundles of higher rank over a smooth compact space \cite{reineke}. Hence formula~(\ref{singbase}) can be effectively applied, yielding formulas for quiver polynomials similar to those in \cite{ks}.
\end{remark}
\section{Localization for contact classes}\label{sec:locforcontact}
Now we apply the equivariant localization formula above to the construction of Section \ref{sec:resolution}. This is different from the resolution used in \cite{bsz06} for Morin singularities; it is more general (it covers all contact singularities), but numerically less effective.
Let $G(n,p)=\GL(n)\times \GL(p)$. Recall that the spaces in diagram (\ref{damondiag}) have $G(n,p)$-actions, and the maps in the diagram are
$G(n,p)$-equivariant. Let $T(n,p)=T(n)\times T(p)\cong U(1)^n\times U(1)^p$ be the maximal torus of $G(n,p)$, and restrict the action on the spaces and maps of diagram~(\ref{damondiag}) to $T(n,p)$. Recall also that the map $H^*_{G(n,p)}(\pt)\to H^*_{T(n,p)}(\pt)$ is injective (splitting lemma), hence by this restriction we do not lose any cohomological information. Now we can apply Proposition~\ref{31} and Proposition~\ref{birat} to the diagram (\ref{damondiag}), and we obtain our main result. Let $F$ denote the set of monomial ideals in $\gr^m$; these are the fixed points of the $T(n,p)$-action on $\gr^m$ (the factor $T(p)$ acts trivially).
\begin{theorem}[Localization Formula] \label{lf} Let $g\in J^k(n,p)$ be a $k$-jet. Then
\[\Tp(g)=\sum_{I\in F}\frac{[C_I\subset J^k(n,p)]\cdot[ \overline{\mathcal{R}I_g}\subset\gr^m]|_I}{e(T_I\gr^m)}.\]
\end{theorem}
\hfill\qed
Using the virtual tangent Euler classes we get
\begin{equation}\label{eq:lfeu} \Tp(g)=\sum_{I\in F}\frac{[C_I\subset J^k(n,p)]}{e(T_I\overline{\mathcal{R}I_g})}.
\end{equation}
In the rest of this section we present two lemmas in which we study the factors $[C_I\subset J^k(n,p)]$ and $e(T_I\gr^m)$ in the Localization Formula. For this we choose the following notations. Let
\[H_{T(n,p)}^*(J^k(n,p))= H_{T(n,p)}^*(\pt)=\Z[\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_p],\]
where $\alpha_i$ (resp $\beta_i$) denotes the universal first Chern class in the $i$'th factor of $H^*_{T(n)}($pt$)=\otimes_{i=1}^n H^*(BU(1))$ (resp. $H^*_{T(p)}($pt$)=\otimes_{i=1}^p H^*(BU(1))$). We call the $\alpha_i$'s and the $\beta_i$'s the Chern roots of the group $T(n,p)$. As usual, we identify weights of a $T(n,p)$-representation with linear combinations of the Chern roots. For a $T(n,p)$-representation $A$, let $W_A$ denote the multiset of its weights. The Euler class of a representation $A$ is $e(A)=\prod_{w\in W_A} w$.
We define resultants by $\res(S|T)=\prod_{s\in S, t\in T} (s-t)$ for the finite multisets $S$ and $T$. For example, the representation of $T(n,p)$ on the vector space $\Hom(\C^n,\C^p)$ by $(A,B)\cdot F=B\circ F \circ A^{-1}$ has weights $W_{\Hom(\C^n,\C^p)}=\{\beta_i-\alpha_j:i=1,\dotsc,p; j=1,\dotsc,n\}$ and Euler class $\res(\{\beta_1,\ldots,\beta_p\}|\{\alpha_1,\ldots,\alpha_n\})$. (In the rest of the paper, we will drop the brackets $\{\ \}$ from the notation.) Similarly,
\[ W_{J^k(n,p)}=\{\beta_i-\sum_{j=1}^n a_{ij}\alpha_j:i=1,\dotsc,p; a_{ij}\geq0, 1\leq \sum_{j=1}^n a_{ij}\leq k\} \]
and for the $T(n)$-representation on $J^k(n)$ we have
\[ W_{J^k(n)}=\{-\sum_{j=1}^n a_{j}\alpha_j: a_{j}\geq0, 1\leq \sum_{j=1}^n a_{j}\leq k\}. \]
The equivariant cohomology class represented by an invariant linear subspace in a representation space is the Euler class of the factor representation. Hence we have the following lemma.
\begin{lemma} Let $I$ be a monomial ideal. Then
\[ [C_I\subset J^k(n,p)]=e(Q_I\otimes \C^p)=\prod_{i=1}^p\prod_{w\in W_{Q_I}}(\beta_i+w)=\res(\beta_1,\dotsc,\beta_p|-W_{Q_I}),\]
where $Q_I$ is the quotient space $J^k(n)/I$; since $I$ is monomial, $Q_I$ is equipped with the induced representation of $T(n)\leq \GL(n)$.
\end{lemma} \qed
Notice that all coefficients in $W_{Q_I}$ are negative, so in applications the form $\res(\beta_1,\dotsc,\beta_p|-W_{Q_I})$ seems more natural.
The tangent bundle of a Grassmannian is Hom$(A,B)$ where $A$ and $B$ are the tautological sub- and quotient bundles. Therefore the following lemma calculates the denominator of the Localization Formula explicitly.
\begin{lemma} We have \[e(T_I\gr^m)=\res(W_{Q_I}|W_I).\] \end{lemma} \qed
Again, if we want positive coefficients, we can write $e(T_I\gr^m)=\res(-W_{I}|-W_{Q_I})$.
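As a small worked example of the last two lemmas, take $n=1$, $k=3$, $m=1$ and the monomial ideal $I=(x^2)\subset J^3(1)$, spanned by $x^2$ and $x^3$. Then $W_I=\{-2\alpha_1,-3\alpha_1\}$ and $W_{Q_I}=\{-\alpha_1\}$, hence
\[
[C_I\subset J^3(1,p)]=\prod_{i=1}^{p}(\beta_i-\alpha_1),
\qquad
e(T_I\gr^1)=\res(-\alpha_1|-2\alpha_1,-3\alpha_1)=2\alpha_1^2.
\]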
The factor $[\overline{\mathcal{R}I_g}\subset \gr^m]|_I$ in the Localization Formula is a subtle invariant of the set
$\overline{\mathcal{R}I_g}$ at~$I$. Its calculation is difficult in general. In Section \ref{sec:calc} we calculate special cases.
\subsection{First application of the Localization Formula}\label{firstex} Let $g\in J^k(n,p)$ be the jet with the degree $d$ monomials as coordinate functions for $k\geq d$ and $p=\binom{n+d-1}{d}$. That is,
$I_g=(J)^d$, where $J=J^k(n)$. The ideal $I_g$ is a fixed point of the right group $\mathcal{R}$, so the localization formula immediately gives
\begin{equation} \label{eq:firstex} [(J)^d(n,p)]=\res(\beta_1,\dotsc,\beta_p|-W_{Q((J)^d)}), \end{equation}
where $-W_{Q((J)^d)}=\{\sum a_i\alpha_i:a_i\geq0, 0<\sum a_i<d\}$.
The singularity whose $k$-jet is $g$ is called the Thom-Boardman singularity $\Sigma^{n,\dotsc,n}(n,p)$ (the number of $n$'s in the superscript is $d-1$). Hence (\ref{eq:firstex}) is the Thom polynomial of this singularity. This is not a new result, though it might be its first appearance in the literature in this generality. The $d=2$ case recovers a special case of the Giambelli-Thom-Porteous formula (cf. Theorem \ref{porteous}) in Chern root format:
$\Tp_{\Sigma^{n}}(n,p)=\res(\beta_1,\ldots,\beta_p|\alpha_1,\ldots,\alpha_n)$.
\section{Stability properties of the Thom polynomial}\label{sec:stability}
\subsection{Dependence of Thom polynomials on $k$} \label{sec:k}
Our definition of the Thom polynomial of a finite germ used its $k$-jet. In this section we show that the Thom polynomial does not depend on $k$; that is, we prove the following
\begin{theorem} \label{k} Suppose that $g\in J^l(n,p)$ and $\depth(I_g)=k\leq l$, i.e. $(J)^{k+1}\subset I_g$, where $J=J^l(n)$. Then
\[\Tp(j^k g)= \Tp(g),\]
where $j^k:J^l(n,p)\to J^k(n,p)$ is the projection.
\end{theorem}
\proof We need to show that
\[(j^k)^{-1}\overline{\mathcal{K}^k(j^k g)}=\overline{\mathcal{K}^l(g)}.\]
Using the same notation $j^k$ for the projection $\mathcal{K}^l\to\mathcal{K}^k$ we have that
\begin{equation}\label{eq:jet-of-k-l} j^k(H(f))=j^k(H)(j^k(f)), \end{equation}
where $H\in \mathcal{K}^l$ and $f\in J^l(n,p)$, which implies that $j^k(\mathcal{K}^lf)=\mathcal{K}^kj^k(f)$. Therefore it is enough to show that $j^k(f)=j^k(g)$ implies $f\in \overline{\mathcal{K}^l(g)}$. The equality $j^k(f)=j^k(g)$ implies that $ I_f+(J)^{k+1}=I_g+(J)^{k+1}$, and by the assumption on $g$ we have $I_g+(J)^{k+1}=I_g$; hence $I_f\subset I_g$, which implies $f\in \overline{\mathcal{K}^l(g)}$ by Lemma \ref{ei}.\qed
\subsection{Dependence of Thom polynomials on $p-n$} \label{sec:p-n} In this section we prove the classical stability result---Theorem \ref{stab}---on Thom polynomials of contact singularities. Let $\sigma:J^k(n,p)\to J^k(n+1,p+1)$ denote the stabilization map
\[\sigma g(x_1,\dotsc,x_{n+1}):=\big(g_1(x_1,\dotsc,x_{n}),\dotsc,g_p(x_1,\dotsc,x_{n}),x_{n+1}\big). \]
\begin{theorem}[Stability] \label{stab}
\[\Tp(g)=\sigma^*\Tp(\sigma g),\]
where $\sigma^*:H^*_{G(n,p)}\to H^*_{G(n+1,p+1)}$ is the homomorphism induced by the map
\[ G(n,p)\to G(n+1,p+1),\ \ (M,N)\mapsto \left(\smx M001,\smx N001\right). \]
\end{theorem}
The reason we include the proof here is twofold. First, we would like to strengthen the stability theorem and show that these Thom polynomials are {\em supersymmetric}. Second, it gives us a chance to study the geometry related to the Localization Formula.
The Localization Formula gives the Thom polynomial in Chern roots, i.e. in the generators of $H^*_{T(n,p)}\iso \Z[\alpha_1,\dotsc,\alpha_n,\beta_1,\dotsc,\beta_p]$. Since $\Tp(g)$ is in the image of $H^*_{G(n,p)}\to H^*_{T(n,p)}$, it is symmetric in both the $\alpha$ and the $\beta$ variables. However, it has more symmetry.
\begin{definition}The polynomial $q\in \Z[\alpha_1,\dotsc,\alpha_n,\beta_1,\dotsc,\beta_p]$ is {\em supersymmetric} (see \cite{lascoux}) if
\begin{enumerate}
\item $q$ is symmetric in both the $\alpha$ and the $\beta$ variables,
\item $q(\alpha_1,\dotsc,\alpha_{n-1},t,\beta_1,\dotsc,\beta_{p-1},t)$ does not depend on $t$.
\end{enumerate}
\end{definition}
\begin{theorem} \label{supersym} The Thom polynomial of a finite germ is supersymmetric.\end{theorem}
An important property of supersymmetric polynomials is that they can be expressed in {\em quotient variables}:
We define a map
\[\rho_{n,p}:\Z[c_1,\dotsc,c_i,\dotsc]\to \Z[\alpha_1,\dotsc,\alpha_n,\beta_1,\dotsc,\beta_p]\]
by the formal power series
\begin{equation}\label{eq:c} 1+c_1t+c_2t^2+\cdots=\frac{\prod_{j=1}^p(1+t\beta_j)}{\prod_{i=1}^n(1+t\alpha_i)},
\end{equation}
i.e. $\rho_{n,p}(c_1)=\beta_1+\cdots+\beta_p-\alpha_1-\cdots-\alpha_n$ and so on. We say that $q\in \Z[\alpha_1,\dotsc,\alpha_n,\beta_1,\dotsc,\beta_p]$ can be expressed in quotient variables if it is in the image of $\rho_{n,p}$.
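For instance (the smallest case, for illustration), for $n=p=1$ definition (\ref{eq:c}) gives
\[
\frac{1+t\beta_1}{1+t\alpha_1}=1+(\beta_1-\alpha_1)t+\alpha_1(\alpha_1-\beta_1)t^2+\cdots,
\]
so $\rho_{1,1}(c_1)=\beta_1-\alpha_1$ and $\rho_{1,1}(c_2)=\alpha_1(\alpha_1-\beta_1)$; both vanish under the substitution $\alpha_1=\beta_1=t$, illustrating supersymmetry.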
\begin{theorem}[Lascoux \cite{lascoux}] \label{super=>quot}The polynomial $q\in \Z[\alpha_1,\dotsc,\alpha_n,\beta_1,\dotsc,\beta_p]$ is supersymmetric if and only if it can be expressed in quotient variables. \end{theorem}
The expression of supersymmetric polynomials in terms of the quotient variables is unique if the degree of $q$ is not too high compared to $n$ and $p$.
\begin{proposition}\label{injectivity} If $\rho_{n,p}(h)=0$ for a non-zero polynomial $h\in \Z[c_1,\dotsc,c_i,\dotsc]$ then $\deg(h)\geq (n+1)(p+1)$ with the convention $\deg c_i=i$. \end{proposition}
In fact, the kernel of $\rho_{n,p}$ is known explicitly (see \cite[\S 4.2]{pragacz:enumgeo}): $\ker (\rho_{n,p})=\langle
\Delta_\lambda:{(n+1)}^{(p+1)}\subset \lambda\rangle$, where $\langle\ \rangle$ means the generated $\Z$-module (for the definition of $\Delta$ see (\ref{schurdef})).
Now we translate supersymmetry to geometry. Notice that the stabilization map $\sigma:J^k(n,p)\to J^k(n+1,p+1)$ is $G'$-equivariant for $G'=G(n,p)\times \GL(1)$, where $\GL(1)$ acts trivially on $J^k(n,p)$ and {\em diagonally on the last variables} of elements of $J^k(n+1,p+1)$. Supersymmetry and stability is equivalent to the following strengthening of the stability Theorem \ref{stab}:
\begin{theorem}[Strong stability] \label{strongstab} For any $g\in J^k(n,p)$
\[ \Tp_{G'}(g)=\Tp_{G'}(\sigma g). \] \end{theorem}
This theorem immediately follows from the two lemmas below and Proposition~\ref{prop:equitrans} on the transversal pull-back of Thom polynomials.
\begin{lemma}\label{K-trans} The stabilization map $\sigma$ is transversal to every contact class in $J^k(n+1,p+1)$.
\end{lemma}
\begin{lemma}\label{preim} We have $\sigma^{-1}(\mathcal{K}\sigma g)=\mathcal{K} g$ for any $g\in J^k(n,p)$. \end{lemma}
\proof[Proof of Lemma \ref{K-trans}.] The results of Section \ref{sec:resolution} imply that for $g\in J^k(n,p)$ the tangent space of its contact class is
\begin{equation}\label{eq:K-tangent}T_g\mathcal{K}g=I_g\otimes \C^p+T_g\mathcal{R}g, \end{equation}
where
\begin{equation}T_g\mathcal{R}g=\left\{\sum_{i=1}^n p_i\partial_ig:p_i\in J^k(n)\right\}.\end{equation}
Applying this to the germ $\sigma g\in J^k(n+1,p+1)$ we see that transversality is equivalent to the property that the three subspaces $I_{\sigma g}\otimes \C^{p+1}, \ T_{\sigma g}\mathcal{R}\sigma g$ and $\sigma J^k(n,p)$ span $J^k(n+1,p+1)$.
Let $h=\sum_{i=1}^{p+1}h_i\otimes e_i$ be any element of $J^k(n+1,p+1)$, where $\{e_i:i=1,\dotsc,p+1\}$ is the standard basis of $\C^{p+1}$ and $h_i\in J^k(n+1)$. We can write $h_i$ in the form $h_i=a_i+b_ix_{n+1}$ where $a_i \in J^k(n)$ and $b_i \in J^k(n+1)$. Since $x_{n+1}\in I_{\sigma g}$, it is enough to show that $a_i\otimes e_i$ is in the span. For $i\leq p$ we have $a_i\otimes e_i\in \sigma J^k(n,p)$, and for the last coordinate notice that $a_{p+1}\partial_{n+1}\sigma g=a_{p+1}\otimes e_{p+1}$.\qed
\proof[Proof of Lemma \ref{preim}.] The statement follows from Proposition \ref{qiso} and that $Q_{\sigma g}\iso Q_g$ for any $g\in J^k(n,p)$.\qed
The proof of Theorem \ref{strongstab}---and hence of Theorems \ref{stab} and \ref{supersym}---is complete. These facts imply the {\em Thom-Damon-Ronga theorem} of Section \ref{sec:exa}. Analogously to Thom polynomials of contact classes it is possible to define Thom polynomials for {\em right-left} classes, but in general they cannot be expressed in quotient variables.
Recall that $\mu(g)=\codim (I_g \subset J^k(n))$.
\begin{proposition}\label{unique} Let $g\in J^k(n,p)$ with $n\geq \mu(g)-1$. Then there is a unique polynomial $\tp(g)\in \Z[c_1,c_2,\dots]$ such that $\rho_{n,p}(\tp(g))=\Tp(g)$.
\end{proposition}
\proof Theorem \ref{super=>quot} implies existence. Since $\deg(\tp(g))=\mu(g)p-\dim(\mathcal{R}g)$, we have $\deg(\tp(g))<(n+1)(p+1)$, and
Proposition \ref{injectivity} implies uniqueness. \qed
\begin{definition} If the condition $n\geq \mu(g)-1$ is not satisfied, then we can take an iterated stabilization of $g$ to get a unique polynomial, which we will also denote by $\tp(g)$. We will also use the notation $\tp_Q(l):=\tp(g)$, where $g\in J^k(n,p)$ is any jet whose nilpotent algebra $Q_g$ is isomorphic to $Q$ and $p-n=l$. Stability justifies this notation.
\end{definition}
As we already remarked, formula (\ref{eq:firstex}) implies that for $p\geq \binom{n+1}{2}$
\begin{equation}\label{por_root} \Tp_{\Sigma^n}(n,p)=\res(\beta_1,\dotsc,\beta_p|\alpha_1,\dotsc,\alpha_n).\end{equation}
Since $\mu(\Sigma^n(n,p))=n$, Proposition \ref{unique} can be applied. We obtain that the polynomial
in quotient variables equal to the right hand side of (\ref{por_root}) expresses the Thom polynomial of any $\Sigma^n(*,*+l)$
(at least for $l\geq \binom{n}{2}$). This argument reproves the following classical theorem.
\begin{theorem}[Giambelli-Thom-Porteous]\label{porteous} The Thom polynomial of $\Sigma^n$ in quotient variables is
\begin{equation}\label{eq:gia}
\tp_{\Sigma^n}(l)=\Delta_{{\underbrace{\scriptstyle{n+l,\dotsc,n+l}}_{\mbox{$\scriptstyle{n}$}}}}=\det(c_{n+l+j-i})_{1\leq i,j\leq n}. \end{equation}
\end{theorem}
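As an illustration, the determinant (\ref{eq:gia}) is easy to generate symbolically; the following sketch (Python with SymPy, our own code) produces $\tp_{\Sigma^n}(l)$ for small $n$ and $l$, e.g. $\tp_{\Sigma^2}(0)=c_2^2-c_1c_3$:
\begin{verbatim}
import sympy as sp

def tp_sigma(n, l):
    # det(c_{n+l+j-i}) with the conventions c_0 = 1, c_i = 0 for i < 0
    c = sp.symbols('c1:%d' % (2*n + l + 1))
    def cc(i):
        if i < 0:  return sp.Integer(0)
        if i == 0: return sp.Integer(1)
        return c[i - 1]
    M = sp.Matrix(n, n, lambda i, j: cc(n + l + j - i))
    return sp.expand(M.det())

print(tp_sigma(2, 0))            # c2**2 - c1*c3
\end{verbatim}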
\subsection{Dependence of Thom polynomials on $p$}\label{sec:d-stab}
In this section we study the relation between the Thom polynomial of the jets $g$ and $\delta g$ where \[\delta:J^k(n,p)\to J^k(n,p+1),\ \ \delta g:(x_1,\ldots,x_n)\mapsto(g_1,\dotsc,g_p,0).\]
In other words we are interested in the dependence of the Thom polynomial $\tp_Q(l)$ on $l$. In \cite{dstab} we showed that under a technical condition one can calculate $\tp(g)$ from $\tp(\delta g)$ by `lowering the indices'. Notice that $Q_{\delta g}\iso Q_g$. Consequently the Thom polynomials of all germs with a given quotient algebra $Q$ (or local algebra $\mathcal{Q}$) can be arranged into a series which we called the {\em Thom series} of $Q$. The variables of this series are normalized Chern classes which we denoted by $d_i$, and hence this stabilization property will be called {\em d-stability}.
\begin{definition}\label{flat} Fix $m \in\N$ and assume that the polynomial $q\in \Z[c_0,c_1,\dotsc]$ has {\em width} $m$, i.e.
\[q=\sum_{|K|=m}a_Kc^K,\ \ \text{where} \ K\in\N^m\ \text{and} \ c^K=\prod_{i=1}^m c_{K_i}, \]
using the $c_0=1$ convention. We define the {\em lowering operator} $\flat=\flat(m)$ by
\[q^\flat:=\sum_{|K|=m}a_Kc^{K^\flat},\ \ \text{where} \ K^\flat_i=K_i-1,\]
using the $c_{-1}=0$ convention.
\end{definition}
E.g. for $m=2$ and $q=c_2^2+c_1c_3+2c_4$ we have $q^\flat=c_1^2+c_2$, where we did not write out the $c_0=1$ factors (the term $2c_4=2c_0c_4$ is mapped to $2c_{-1}c_3=0$).
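A sketch implementation of $\flat$ (in Python; the dictionary encoding of a width-$m$ polynomial by its exponent tuples is our own choice) may clarify the bookkeeping:
\begin{verbatim}
# a width-m polynomial is stored as {K: a_K}, K a length-m tuple,
# with c_0 = 1 used for padding short monomials to width m
def flat(q):
    out = {}
    for K, a in q.items():
        Kb = tuple(k - 1 for k in K)
        if min(Kb) >= 0:         # the c_{-1} = 0 convention kills the term
            out[Kb] = out.get(Kb, 0) + a
    return out

q = {(2, 2): 1, (1, 3): 1, (0, 4): 2}   # c_2^2 + c_1 c_3 + 2 c_4
print(flat(q))                          # {(1, 1): 1, (0, 2): 1} = c_1^2 + c_2
\end{verbatim}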
\begin{theorem}\label{th:d-stab} Let $g$ be a jet with $\mu(g)=m$. Then $\tp(g)$ has width $m$ and
\[\tp(\delta g)^\flat=\tp(g).\]
\end{theorem}
A simple calculation shows (see \cite[2.3]{dstab}) that to prove Theorem \ref{th:d-stab} it is enough to prove the following.
\begin{proposition}\label{root-d-stab} Let $g\in J^k(n,p)$ with $\mu(g)=m$ and write
\[ \Tp(\delta g)(\alpha_1,\dotsc,\alpha_n,\beta_1,\dotsc,\beta_p,\beta_{p+1})=\sum p_i\beta_{p+1}^{m-i}\ \ \ \text{for}
\ \ \ p_i\in\Z[\alpha_1,\dotsc,\alpha_n,\beta_1,\dotsc,\beta_p].\]
Then $p_0=\Tp(g)$.
\end{proposition}
\proof Notice that changing $g$ to $\delta g$ in the Localization Formula only changes the factors $[C_I]$ by multiplying them with $\res(\beta_{p+1}|-W_{Q_I})$, i.e. if
\[ \Tp(g)=\sum_{I\in F} a_I\ \ \text{for $a_I$ being the local contribution at the fixed point $I$, then}\]
\[\Tp(\delta g)=\sum_{I\in F} a_I\res(\beta_{p+1}|-W_{Q_I}), \]
which implies the Proposition, and therefore the Theorem. \qed
We can rephrase Theorem \ref{th:d-stab} in terms of Thom series. Let $Q$ be an $m$-dimensional nilpotent algebra over $\C$ and let $l\geq0$ be such that there exists a jet $g(n,p)\in J^k(n,p)$ with $Q_g\iso Q$ and $l=p-n$. If $k>m$, $n\geq m$ and $p\geq b(Q)-a(Q)+n$---where $a(Q)$ is the minimal number of generators for $Q$ and $b(Q)$ is the minimal number of relations for $Q$---then such a jet exists.
\begin{theorem}\label{thomseries} Let $Q$ be an $m$-dimensional nilpotent algebra over $\C$. Then there is a unique homogeneous ($\deg d_i=i$) formal power series
\[\ts_Q=\sum_{|K|=m}a_Kd^K\in \Z[[\dotsc,d_{-i},\dotsc,d_0,\dotsc,d_i,\dotsc]], \]
such that the Thom polynomials $\tp_Q(l)$ can be obtained by substituting $d_i=c_{i+l+1}$ with the usual $c_0=1$, $c_i=0$ for $i<0$ conventions. \qed
\end{theorem}
This is an improvement of \cite[Th.4.1]{dstab}, where a non-zero normal Euler class was assumed. The degree of $\ts_Q$ can be calculated by finding the degree of $\Tp_Q(n,p)$ for some $n$ and $p$, which requires the calculation of the dimension of an $\mathcal{R}$-orbit (or, equivalently, of the corresponding unfolding space).
From this proof we can see that the d-stability property is not as deep as stability. It is a curious fact of the history of Thom polynomials that it remained hidden for so long.
\begin{remark} \label{m-1-eleg} The Localization Formula and Proposition \ref{unique} show that if we know the tangent Euler classes of $\overline{\mathcal{R}I_g}$ for $g\in J^k(m-1,p)$ with $\mu(g)=m$ at the monomial ideals (the number of these depends only on $m$ and not on $p$; for a precise formulation, see Proposition \ref{pmax}), then we have a simple algorithm to calculate the Thom series of $Q_g$. But to give a closed formula for $\ts_Q$ in the $d$-variables is a different problem in algebraic combinatorics. The difficulty is to move from formulas in Chern roots to formulas in Chern classes. There are many unsolved problems in this context, like expressing Chern classes of various vector bundle constructions in terms of the Chern classes of the input bundles (see e.g. \cite[\S.2]{pragacz_dd}). It is sometimes a question of taste which form one prefers. In some cases we succeeded in finding formulas in the $d$-variables, see Section \ref{sec:quotientchern}.
\end{remark}
\section{Further calculations}\label{sec:calc}
\subsection{Extrapolation}\label{sec:interpol}\mbox{}
The tangent Euler classes $e(T_I\overline{\mathcal{R}I_g})$ are difficult to calculate directly. At this point we do not have a general method to do it. One of our strategies is to use the Localization Formula backwards: knowing the Thom polynomial $\Tp_Q(n,p)$ for some $n$ and $p$ we can calculate the tangent Euler classes and then we can calculate the whole Thom series. This method is based on relating the tangent Euler classes to incidences (in the sense of \cite{rrtp}).
We will use the following shorthand notations for the tangent Euler classes:
\[ e(g,f)= e(T_{I_f}\overline{\mathcal{R}I_g}),\ e(g,I)= e(T_{I}\overline{\mathcal{R}I_g}),\ e(Q,I)=e(T_I\eta_Q),\]
where $\eta_Q\subset\gr^m$ is the closure of the set of ideals $I$ with quotient algebra $Q_I=J^k(n)/I$ isomorphic to $Q$ and $\dim(Q)=m$. We also use the $Q_f=Q_{I_f}$ notation.
\begin{theorem}[Interpolation Formula]\label{interpol} Let $g\in J^k(n,p)$ and let $f\in J^k(n,p)$ be a monomial germ with $\mu(f)=\mu(g)$. Then
\[ e(g,f)=\frac{\res(W_f|-W_{Q_f})}{\Tp(g)|_f}, \]
where $W_f=\{w_1,\dotsc,w_p\}$, $w_i=\sum w_{i,j}\alpha_j$ with $f_i=\prod x_j^{w_{i,j}}$ and $|_f$ denotes the restriction to the $n$-dimensional subtorus $T(f)$ of $\mathcal{K}$ fixing $f$, identifying the generators of $H^*_{T(f)}$ with $\alpha_1,\dotsc,\alpha_n$. In other words $\alpha_i|_f=\alpha_i$ and $\beta_i|_f=\sum w_{i,j}\alpha_j$.
\end{theorem}
\proof Restricting the Localization Formula we obtain
\[ \Tp(g)|_f=\sum_{I\in F}\frac{\res(W_f|-W_{Q_I})}{e(g,I)}. \]
If $I$ is a monomial ideal different from $I_f$ with $\mu(I)=\mu(f)$, then there is a $w_i\in -W_f\cap W_{Q_I}$, and therefore $\res(W_f|-W_{Q_I})=0$. \qed
The next lemma will further simplify our calculations by allowing us to use as small $n$ as possible. Recall that the stabilization map $\sigma:J^k(n,p)\to J^k(n+1,p+1)$ is defined by
\[\sigma g(x_1,\dotsc,x_{n+1})=\big(g_1(x_1,\dotsc,x_{n}),\dotsc,g_p(x_1,\dotsc,x_{n}),x_{n+1}\big).\]
\begin{lemma}[Tangent Lemma]\label{tangentlemma} Let $f,g\in J^k(n,p)$ and let $f$ be a monomial germ. Then
\[e(\sigma g,\sigma f)=e(g,f)\res(\alpha_{n+1}|-W_{Q_f}). \]
\end{lemma}
\proof Theorem \ref{strongstab} on strong stability implies that $\Tp(\sigma g)|_{\sigma f}=\Tp(g)|_{f}$. Using the Interpolation Theorem \ref{interpol} we get
\[e(\sigma g,\sigma f)= \frac{\res(W_{\sigma f}|-W_{Q_{\sigma f}})}{\Tp(\sigma g)|_{\sigma f}}=
\frac{\res(W_f|-W_{Q_f})\res(\alpha_{n+1}|-W_{Q_f})}{\Tp(g)|_f}, \]
by noticing that $W_{\sigma f}=W_f\cup \{\alpha_{n+1}\}$ and $Q_{\sigma f}=Q_f$. \qed
Now we sketch a geometric proof, based on a suggestion of M. Kazarian:
Let $V<J^k(n)$ be a complementary invariant subspace to $I_f$ (take the span of monomials not in $I_f$). Then for any $v\in V$ the jet
\[g_v(x_1,\dotsc,x_{n+1})=\big(g_1(x_1,\dotsc,x_{n}),\dotsc,g_p(x_1,\dotsc,x_{n}),x_{n+1}+v\big)\]
is contact equivalent to $\sigma g$. By checking the dimension we can see that in the affine neighbourhood $U\iso \Hom(I_{\sigma f},V)$ of $I_{\sigma f}\in \gr^\mu$ defined by the decomposition $J^k(n+1)=I_{\sigma f}\oplus V$ we have
\[\mathcal{R}(n+1)\sigma g\iso \mathcal{R}(n)g\times \Hom(\C x_{n+1},V).\]
This local product structure immediately implies the Tangent Lemma. \qed
Using the Localization Formula it is easy to see that the Tangent Lemma is equivalent to Theorem \ref{strongstab}, so the second proof of the Tangent Lemma gives a direct proof of the strong stability.
\begin{example}{\bf The Thom polynomial of $A_3$}: The first case not covered in Section \ref{firstex} is the Thom series of the Morin singularity $A_3$, the contact class corresponding to the algebra $\C[[x]]/(x^4)$. Since $\mu(g)=3$ it is enough to write down the Localization Formula for $n=2$ (see Remark \ref{m-1-eleg}). The monomial ideals for $n=2$ can be identified with partitions of $\mu+1=4$: $(4), (31), (211), (22)$. These monomial ideals will be denoted by
$I_{4}$, $I_{31}$, $I_{211}$ and $I_{22}$. Germs with these monomial ideals will be denoted by $f_{4}$, $f_{31}$, $f_{211}$ and $f_{22}$.
Since $f_{4}$ is the suspension (cf.\ Section \ref{sec:p-n}) of the jet $x_1\mapsto x_1^4\in J^4(1,1)$, we can apply the Tangent Lemma~\ref{tangentlemma}.
The ideal $(x_1^4)$ of $J^k(1)$ is a fixed point of the $\mathcal R$-action, so $e(x_1^4,x_1^4)=1$, therefore
\[e(f_4,f_4)=\res(\alpha_1,2\alpha_1,3\alpha_1|\alpha_2).\]
For $I_{22}=(x_1^2,x_2^2)$ we use the Interpolation Formula. The ideal $I_{22}$ has two generators, so we can use
$\tp_{A_3}(0)=c_1^3+3c_1c_2+2c_3$. We write this polynomial in the Chern roots $\alpha_1,\alpha_2,\beta_1,\beta_2$
and restrict to $f_{22}$ ($\beta_1\mapsto 2\alpha_1,\ \beta_2\mapsto 2\alpha_2$) i.e. make the substitutions
\[c_1=2\alpha_1+2\alpha_2-\alpha_1-\alpha_2=\alpha_1+\alpha_2, \ \
c_2=\alpha_1\alpha_2-\alpha_1^2-\alpha_2^2, \ \
c_3=(\alpha_1+\alpha_2)(\alpha_2-\alpha_1)^2,\]
and we get that
\[\Tp_{A_3}(2,2)|_{f_{22}}=(\alpha_1+\alpha_2)\alpha_1\alpha_2.\]
The Interpolation Formula (Theorem \ref{interpol}) implies that
\[e(f_4,f_{22})= \frac{\res(\alpha_1,\alpha_2,\alpha_1+\alpha_2|2\alpha_1,2\alpha_2)} {(\alpha_1+\alpha_2)
\alpha_1\alpha_2}= \frac{(\alpha_2-\alpha_1)^2(2\alpha_1-\alpha_2)(\alpha_1-2\alpha_2)}{\alpha_1+\alpha_2}. \]
We also need to calculate the Euler class at $f_{31}=(x_1^3,x_1x_2,x_2^2)$ (the Euler class at $f_{211}$ can be obtained by permuting $\alpha_1$ and $\alpha_2$). The ideal $I_{31}$ has three generators, so we need $\tp_{A_3}(1)$ to apply the Interpolation Formula. This calculation is better done by computer; the result can be found in Section \ref{sec:smallmu} at $\mu=3$.
\end{example}
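The restriction computation in this example is easy to check symbolically; the following sketch (Python with SymPy, our own verification code) recomputes $c_1,c_2,c_3$ from the series (\ref{eq:c}) under $\beta_1\mapsto2\alpha_1$, $\beta_2\mapsto2\alpha_2$ and confirms the value of $\Tp_{A_3}(2,2)|_{f_{22}}$:
\begin{verbatim}
import sympy as sp

a1, a2, t = sp.symbols('alpha1 alpha2 t')
f = (1 + 2*a1*t)*(1 + 2*a2*t)/((1 + a1*t)*(1 + a2*t))
ser = sp.series(f, t, 0, 4).removeO()
c = [sp.expand(ser.coeff(t, k)) for k in range(4)]   # c[0] = 1
tp = sp.expand(c[1]**3 + 3*c[1]*c[2] + 2*c[3])       # tp_{A_3}(0) at f_22
assert tp == sp.expand((a1 + a2)*a1*a2)
\end{verbatim}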
This method of calculating all the ingredients of the Localization Formula from a concrete Thom polynomial $\tp_Q(l)$ (for an appropriate $l$) will be called the Extrapolation method. Now we estimate the value of $l$ for which this method works.
\begin{proposition}\label{pmax} Let $I$ be a monomial ideal in $J^k(n)$ and assume that $a(Q)$---the minimal number of generators of $Q=J^k(n)/I$---is at most $\mu(Q)-1$. Then $b(Q)$---the minimal number of relations of $Q$---is at most $\binom{\mu(Q)}2$. \end{proposition}
\begin{proof} If $a(Q)=\mu(Q)-1$, then $I$ contains all but one of the quadratic monomials. They are all generators and, if the missing monomial is of the form $x_i^2$, there can be one extra generator, altogether at most $\binom{\mu(Q)}2$ generators. If $a(Q)<\mu(Q)-1$, then ``cut off'' a maximal degree monomial from $Q$: let us call the resulting algebra $Q'$. Then $\mu(Q')=\mu(Q)-1$ and $a(Q')\leq a(Q)<\mu(Q)-1$, so by an induction argument we can assume that $b(Q')\leq \binom{\mu(Q)-1}2$. Hence $b(Q)\leq b(Q')+(\mu(Q)-2)< \binom{\mu(Q)}2$.
\end{proof}
\begin{corollary}To calculate the Thom series of the nilpotent algebra $Q$ with the Extrapolation method, it is enough to know $\tp_Q(\binom{\mu(Q)-1}2)$. \qed
\end{corollary}
We have already seen that the Thom series depends only on the finite data of the tangent Euler classes at monomial ideals for $n=\mu(Q)-1$. Now we see that the same information is stored in the polynomial $\tp_Q(\binom{\mu(Q)-1}2)$ in a compact way.
This argument shows that it is theoretically possible to calculate a closed formula for {\em any} Thom series, as we have an algorithm based on Groebner degeneration to calculate any Thom polynomial. However, this algorithm is extremely inefficient for explicit calculations (it works only in trivial cases).
There is a remarkable relation among the tangent Euler classes $e(Q,I)$ for different monomial ideals $I$. Consider the Berline-Vergne-Atiyah-Bott equivariant localization formula \ref{ab} for $[\eta_Q]\in H^*_T(\gr^\mu)$:
\begin{equation}\label{eq:reciprocity}\pi_*[\eta_Q]=\sum_I\frac1{e(Q,I)},\end{equation}
where $\pi:\gr^\mu\to*$ is the collapse map. The Gysin map $\pi_*$ decreases the degree by the dimension of $\gr^\mu$, so the left hand side of (\ref{eq:reciprocity}) is 0 unless $\eta_Q$ is zero-dimensional, i.e. a fixed point of $\mathcal{R}(n)$, when (\ref{eq:reciprocity}) reduces to a tautology. These cases were treated in Section \ref{firstex}.
As a consequence, the value of $e(Q,\M_{\mu(Q)}^2)$ can be calculated algebraically from the other Euler classes.
\subsection{Thom polynomials corresponding to algebras of small dimension}\label{sec:smallmu}
In what follows the maximal ideal $\M_i$ of $\C[[x_1,\dotsc,x_i]]$ will be considered to be a subset of $\M_{i+1}$. If $I\subset \M_i$ is an ideal we define its descendant in $\M_{i+1}$ as $I+(x_{i+1})$. Descendants of descendants are also called descendants. Observe that the factor ring of $\M_i$ by $I$ is isomorphic to the factor ring of $\M_{i+1}$ by the descendant of $I$, in particular $\codim(I\subset \M_i)=\codim(I'\subset \M_{i+1})$ for the descendant $I'$ of $I$.
Recall also that we consider the right group $\mathcal{R}(i)$ acting on $\M_i$, in particular the symmetric group $S_i\subset \GL(i)\subset \mathcal{R}(i)$ also acts on $\M_i$.
\smallskip
Let us fix $\mu\geq 1$. Consider a list of monomial ideals $I_i\subset \M_{n(i)}$ ($n(i)\leq \mu$) such that
\begin{itemize}
\item{} $\codim ( I_i\subset \M_{n(i)})=\mu$;
\item{} no ideal in the $\GL(n(i))$-orbit of $I_i$ is the descendant of an ideal in $\M_{n(i)-1}$;
\item{} the $S_\mu$-orbits of the descendants of the $I_i$'s in $\M_\mu$ form a complete, repetition-free list of the codimension $\mu$ monomial ideals of $\M_\mu$.
\end{itemize}
\begin{example}\label{listofIi}
Here are examples for small $\mu$, with the notation $x,y,z,\dots=x_1,x_2,x_3,\dots$.
\noindent{$\mu=1$:} $I_1=(x^2)\subset \M_1$.
\noindent{$\mu=2$:} $I_1=(x^3)\subset \M_1$, $I_2=(x^2,xy,y^2)=\M_2^2\subset \M_2$.
\noindent{$\mu=3$:} $I_1=(x^4)\subset \M_1$, $I_2=(x^2,y^2)\subset \M_2$, $I_3=(x^2,xy,y^3)\subset \M_2$,
\noindent{\ \ \ \ \ \ \ \ \ } $I_4=\M_3^2=(x^2,y^2,z^2,xy,yz,zx)\subset \M_3$.
\noindent{$\mu=4$:} $I_1=(x^5)\subset \M_1$, $I_2=(x^2,xy,y^4)\subset \M_2$, $I_3=(x^3,xy,y^3)\subset \M_2$,
$I_4=(x^2,xy^2,y^3)\subset \M_2$,
\noindent{\ \ \ \ \ \ \ \ \ } $I_5=(x^2,y^2,z^3,xy,yz,zx)\subset \M_3$, $I_6=(x^2,y^2,z^2,xy,xz)\subset \M_3$,
\noindent{\ \ \ \ \ \ \ \ \ } $I_7=\M_4^2\subset \M_4$.
\end{example}
\begin{remark} Monomial ideals $I$ of $\M_n$ can be visualized by the set $\{(i_1,i_2,\ldots,i_n)\in \N^n: \prod_{j=1}^n x_j^{i_j}\not\in I\}$. This set can be viewed as the $n$-dimensional generalization of (two dimensional) Young diagrams of partitions. In this language, the list $I_i$ for a given $\mu$ is the list of all ``shapes'' of cardinality $\mu+1$ Young diagrams of dimension at most $\mu$.
\end{remark}
The Localization Formula (\ref{eq:lfeu}) can now be rephrased as follows.
\begin{theorem} \label{small_mu_th} Let $\mu$ be a positive integer, and $I_i$ be a list of monomial ideals described above. Then for a nilpotent algebra of dimension $\mu$ we have
\begin{equation} \label{3rdform}\Tp_Q(n,p)=\sum_i \Sym_{I_i} \frac{ \res(\beta_1,\ldots,\beta_p|
-W_{Q_{I_i}})}{e(Q,I_i)\cdot \res(\alpha_{n(i)+1},\ldots,\alpha_n|-W_{Q_{I_i}})}, \end{equation}
where $e(Q,I_i)$ is the virtual tangent Euler class of the closure of the set
$$\{I\in \gr^\mu(\M_{n(i)}): \M_{n(i)}/I\cong Q\}$$
at the point $I_i$. The symmetrizer operator acts on a polynomial $p$ by
$$\Sym_{I_i}\big(p(\alpha_1,\ldots,\alpha_n)\big)=\frac{1}{|\{\sigma\in S_n: \sigma(I_i)=I_i\}|}
\sum_{\sigma\in S_n} p(\alpha_{\sigma(1)},\ldots,\alpha_{\sigma(n)}).$$
If $n(i)>n$, or $e(Q,I_i)=\infty$ for some $i$, then the $i$'th term in the sum in (\ref{3rdform}) is defined to be 0.
\end{theorem} \qed
\begin{corollary}
The {\em finitely many} rational functions $e(Q,I_i)$ determine the Thom polynomials $\Tp_Q(n,p)$ of the nilpotent algebra $Q$ {\em for all} $n$ and $p$.
\end{corollary}
Using the convention $x,y,z,\ldots=x_1,x_2,x_3,\ldots$, and the following names of nilpotent algebras:
$$A_i=\M_1/(x^{i+1}),\qquad I_{a,b}=\M_2/(xy,x^a+y^b),$$
$$III_{a,b}=\M_2/(x^a,xy,y^b), \qquad \Sigma^{2,1}=\M_2/(x^2,xy^2,y^3),$$
here is a list of some Euler classes:
\bigskip
\noindent{\bf $\mu=1$:}\begin{center} $e(A_1,(x^2))=1$ \end{center}
\noindent{\bf $\mu=2$:}
\begin{center}
\[\begin{array}{|c||c|c|}
\hline I= & (x^3) & (x^2,xy,y^2) \\
\hline \hline e(A_2,I)= & 1&
\frac{1}{3}(\alpha_1-2\alpha_2)(\alpha_2-2\alpha_1) \\
\hline
\end{array}\]
\end{center}
\noindent{\bf $\mu=3$:}
\begin{center}
\[\begin{array}{|c||c|c|c|}
\hline I= & (x^4) & (x^2,y^2) & (x^2,xy,y^3) \\
\hline \hline e(A_3,I)= & 1
&\displaystyle{\frac{(\alpha_1-\alpha_2)^2(2\alpha_1-\alpha_2)(\alpha_1-2\alpha_2)}{(\alpha_1+\alpha_2)}}
& \frac12 (3\alpha_2-\alpha_1)(\alpha_1-\alpha_2)^2\\
\hline e(I_{2,2},I)= & \infty& -(\alpha_1-\alpha_2)^2&2(\alpha_1-\alpha_2)^2 \\
\hline e(III_{2,3},I)=& \infty & \infty& \alpha_1-\alpha_2 \\
\hline
\end{array}\]
\end{center}
\noindent{\bf $\mu=4$:}
\begin{tabular}{ll}
$e(A_4,(x^5))= $& $1$,\\
$e(A_4, (x^2,xy,y^4))=$ &
$\frac{1}{5}(\alpha_1-\alpha_2)(\alpha_1-2\alpha_2)(\alpha_1-4\alpha_2)(3\alpha_2-2\alpha_1),$
\\
$e(A_4,(x^3,xy,y^3))=$ & $
\frac{1}{5}(\alpha_1-\alpha_2)^2(2\alpha_1-3\alpha_2)(3\alpha_1-2\alpha_2),$ \\
$e(A_4,(x^2,xy^2,y^3))=$ &
$\frac{2(\alpha_1-2\alpha_2)(2\alpha_1-\alpha_2)(\alpha_1-3\alpha_2)(\alpha_1-\alpha_2)^2}{5(\alpha_1+\alpha_2)},$ \\
$e(A_4,(x^2,y^2,z^2,xy,xz))=$ & $\clubsuit \cdot
\frac{(\alpha_2+\alpha_3-2\alpha_1)(\alpha_2-2\alpha_3)(\alpha_3-2\alpha_2)}{5(\alpha_2+\alpha_3)},$ \\
$e(A_4,(x^2,y^2,z^3,xy,yz,zx))=$ &
$\spadesuit\cdot\frac{-2(\alpha_1+\alpha_2-2\alpha_3)(3\alpha_3-\alpha_2)(3\alpha_3-\alpha_1)(\alpha_1-2\alpha_2)(\alpha_2-2\alpha_1)}
{5(4\alpha_1^3-9\alpha_1^2\alpha_3-5\alpha_1^2\alpha_2-4\alpha_1\alpha_3^2+15\alpha_1\alpha_2\alpha_3-5\alpha_1\alpha_2^2-4\alpha_2\alpha_3^2-
9\alpha_2^2\alpha_3+4\alpha_2^3+9\alpha_3^3)}.$
\end{tabular}
\bigskip
\begin{tabular}{ll}
$e(I_{2,3},(x^5))=$ & $\infty$, \\
$e(I_{2,3},(x^2,xy,y^4))=$& $(\alpha_1-\alpha_2)(\alpha_1-2\alpha_2)(2\alpha_1-3\alpha_2),$\\
$e(I_{2,3},(x^3,xy,y^3))=$&
$\frac{(\alpha_1-\alpha_2)^2(2\alpha_1-3\alpha_2)(3\alpha_1-2\alpha_2)}{\alpha_1+\alpha_2},$\\
$e(I_{2,3},(x^2,xy^2,y^3))=$&$(\alpha_1-\alpha_2)^2(2\alpha_2-\alpha_1),$\\
$e(I_{2,3},(x^2,y^2,z^2,xy,xz))=$&$\clubsuit \cdot
\frac{(2\alpha_3-\alpha_2)(\alpha_3-2\alpha_2)(2\alpha_1-\alpha_2-\alpha_3)}{\alpha_1\alpha_2+\alpha_1\alpha_3+4\alpha_2^2+4\alpha_3^2-16\alpha_2\alpha_3},$\\
$e(I_{2,3},(x^2,y^2,z^3,xy,yz,zx))=$\\
\end{tabular}
\ \hfill
$\spadesuit\cdot\frac{2(\alpha_1+\alpha_2-2\alpha_3)(3\alpha_3-\alpha_1)(3\alpha_3-\alpha_2)(\alpha_1-2\alpha_2)(\alpha_2-2\alpha_1)}
{4(\alpha_1^4+\alpha_2^4)-6\alpha_1^2\alpha_2^2-5\alpha_1\alpha_2(\alpha_1^2+\alpha_2^2)+\alpha_3(-25(\alpha_1^3+\alpha_2^3)+39\alpha_1\alpha_2(\alpha_1+\alpha_2))+\alpha_3^2(29(\alpha_1^2+\alpha_2^2)-59\alpha_1\alpha_2)+\alpha_3^3(\alpha_1+\alpha_2)}$.
\bigskip
\begin{tabular}{ll}
$e(III_{2,4},(x^5))= $&$ \infty$,\\
$e(III_{2,4},(x^3,xy,y^3))=$&$ \infty,$\\
$e(III_{2,4},(x^2,xy,y^4))=$&$(\alpha_1-\alpha_2)(\alpha_1-2\alpha_2)$,\\
$e(III_{2,4},(x^2,xy^2,y^3))=$&$(\alpha_1-\alpha_2)(2\alpha_2-\alpha_1),$\\
$e(III_{2,4},(x^2,y^2,z^2,xy,xz))=$&$\clubsuit \cdot
\frac{(\alpha_2-2\alpha_3)(\alpha_3-2\alpha_2)}{\alpha_1\alpha_2+\alpha_1\alpha_3-4\alpha_2^2-4\alpha_3^2+4\alpha_2\alpha_3},$\\
$e(III_{2,4},(x^2,y^2,z^3,xy,yz,zx))=$&
$\spadesuit\cdot\frac{-(\alpha_1-3\alpha_3)(\alpha_2-3\alpha_3)}
{2(\alpha_1^2+\alpha_2^2+\alpha_3^2-2\alpha_1\alpha_3-2\alpha_2\alpha_3-\alpha_1\alpha_2)}.$
\end{tabular}
\bigskip
\begin{tabular}{ll}
$e(III_{3,3},(x^5))=$&$\infty$,\\
$e(III_{3,3},(x^2,xy,y^4))=$&$\infty,$\\
$e(III_{3,3},(x^3,xy,y^3))=$&$-(\alpha_1-\alpha_2)^2$,\\
$e(III_{3,3},(x^2,xy^2,y^3))=$&$2(\alpha_1-\alpha_2)^2,$\\
$e(III_{3,3},(x^2,y^2,z^2,xy,xz))=$&$\clubsuit \cdot(-1),$\\
$e(III_{3,3},(x^2,y^2,z^3,xy,yz,zx))=$&$
\spadesuit\cdot\frac{2(\alpha_1-2\alpha_2)(\alpha_2-2\alpha_1)}
{2\alpha_1^2+2\alpha_2^2-3\alpha_3^2-2\alpha_1\alpha_2+\alpha_1\alpha_3+\alpha_2\alpha_3}.$
\end{tabular}
\bigskip
\begin{tabular}{ll}
$e(\Sigma^{2,1},(x^5))=$&$ \infty$,\\
$e(\Sigma^{2,1},(x^2,xy,y^4))=$&$ \infty$,\\
$e(\Sigma^{2,1},(x^3,xy,y^3))=$&$\infty$,\\
$e(\Sigma^{2,1},(x^2,xy^2,y^3))=$&$\alpha_1-\alpha_2$,\\
$e(\Sigma^{2,1},(x^2,y^2,z^2,xy,xz))=$&$\clubsuit \cdot
\frac{1}{2(\alpha_1-\alpha_2-\alpha_3)},$\\
$e(\Sigma^{2,1},(x^2,y^2,z^3,xy,yz,zx))=$&$ \spadesuit\cdot
\frac{1}{\alpha_1+\alpha_2-\alpha_3},$
\end{tabular}
where
$$\clubsuit=(\alpha_1-\alpha_2)(\alpha_1-\alpha_3)(\alpha_1-2\alpha_2)(\alpha_1-2\alpha_3)(\alpha_2-\alpha_3)^2,$$
$$\spadesuit=(\alpha_1-\alpha_3)^2(\alpha_2-\alpha_3)^2(\alpha_1-\alpha_2-\alpha_3)(\alpha_2-\alpha_1-\alpha_3).$$
Theorem \ref{small_mu_th} and the list of $e(Q,I)$-classes above give the Thom polynomial of all singularities whose associated algebra has dimension at most 4, with the following exceptions:
\begin{itemize}
\item{} We did not include $e(Q,I)$ classes for $Q=\M_i/\M_i^2=\Sigma^i$ for $i=2,3,4$, since those Thom polynomials (Giambelli-Thom-Porteous formulae) are known, see (\ref{eq:gia}).
\item{} For $\mu=3$ and $\mu=4$ we did not include the classes $e(Q,\M_{\mu}^2)$, since they can be calculated using (\ref{eq:reciprocity}).
\item{} There are three other algebras with $\mu=4$, namely
$$\M_3/(x^2,y^2,z^3,xy,yz,zx), \M_3/(x^2,y^2,z^2,xz,yz),\M_3/(xy,xz,yz,x^2-y^2,x^2-z^2).$$
Their Thom polynomials will be studied in Section \ref{sec:phi} (under the names $\Phi_{3,2}$, $\Phi_{3,1}$, $\Phi_{3,0}$).
\end{itemize}
\section{Returning to geometry} \label{sec:return}
In this section we use some simple geometric observations to calculate the Localization Formula for some singularities. We believe that this is just the beginning. In a similar fashion one can transform the deep geometric knowledge of singularity theorists into further formulas.
\subsection{The punctual Hilbert scheme} The Localization Formula reduces the Thom polynomial calculations to the study of the space of ideals $\mathcal{H}^m(n)=\mathcal{H}^m(n,k)$ in $\gr^m(J^k(n))$. We suppress $k$ from the notation since these spaces are isomorphic for all $k\geq m$, which we assume from now on. The variety $\mathcal{H}^m(n)$ was studied under the name {\em local punctual Hilbert scheme} (with the reduced scheme structure) by A. Iarrobino, J. Damon, A. Galligo, T. Gaffney and others (see \cite{iarrobino, damon-galligo, gaffney}). The connection of this Hilbert scheme with singularity theory is well known; however, the $\mathcal{R}(n)$-equivariant theory needed for Thom polynomial calculations is not developed yet. We do not pursue this approach here but we believe that it would lead to strong results.
The structure of $\mathcal{H}^m(n)$ is complicated. In general even its dimension is not known. It has many components, one of which is the closure of the orbit $\mathcal{R}(n)(x_1^{m+1},x_2,\dotsc,x_n)$. For $n=2$ this is the only component, for $n\geq 3$, there can be others. Nevertheless we can see that the calculation of the Thom polynomials of the Morin singularities $A_m$ is related to the study of the singularities of this component at monomial ideals.
The Hilbert scheme $\mathcal{H}^m(n)$ has many {\em smooth} $\mathcal{R}(n)$-invariant subvarieties. For the corresponding contact classes we can easily calculate the Localization Formula.
\subsection{Subgrassmannians}\label{subgrass} Let $V$ be a $d$-dimensional subspace of $P:=\Hom(\sym^{k+1}\C^n,\C)$ for $d<\dim(P)=\binom{n+k}{k+1}$. Let $N\geq 1$ and consider the ideal $I_V<J^{k+N}(n)$ generated by $V$ and $\Hom(\sym^{k+2}\C^n,\C)$. We have the $\GL(n)$-equivariant embedding $j:V\mapsto I_V$ mapping the Grassmannian $\gr_d(P)$ into the Hilbert scheme $\mathcal{H}^m(n)$ where $m=\sum_{i=1}^{k+1}\binom{n+i-1}i-d$. These subgrassmannians were used by Iarrobino in \cite{iarrobino:dim} to give a lower bound on the dimension of $\mathcal{H}^m(n)$. The corresponding contact classes
\[\Sigma^{n^k(d)}(n,p)=\Sigma^{\overbrace{n,\dotsc,n}^k(d)}(n,p):=\{g\in J^{k+N}(n,p):I_g\in j(\gr_d(P))\}\]
were studied by J. Damon in \cite{damonphd}. He also calculated the Thom polynomial of some of these classes. The Localization Formula gives the answer for all of these cases:
Let $W$ be the set of integer $(k+1)$-tuples $w=(w_1,\dotsc,w_{k+1})$ with $1\leq w_1\leq w_2\leq\cdots\leq w_{k+1}\leq n$ and let $\alpha_{w}$ denote the weight $\sum_{i=1}^{k+1}\alpha_{w_i}$. Then $\{\alpha_w:w\in W\}$ is the set of weights of $P=\Hom(\sym^{k+1}\C^n,\C)$. The $T(n)$-fixed points of the Grassmannian $\gr_d(P)$ are identified with the $d$-element subsets of $W$. For a fixed point $S$ let
\[ [E_S]=\prod_{i=1}^p\prod_{\sigma\in S}(\beta_i-\alpha_\sigma) \ \ \text{and} \ \
e_S=e(T_S \gr_d(P))=\prod_{\sigma\in S}\prod_{\sigma'\in W\setminus S}(\alpha_\sigma-\alpha_{\sigma'}).\]
Then by the Localization Formula we get
\begin{equation}[\Sigma^{n^k(d)}(n,p)]=[\Sigma^{n^k}(n,p)]\cdot\sum_{S}\frac{[E_S]}{e_S}.\label{eq:gr} \end{equation}
Notice that the result is independent of $N$ in accordance with Theorem \ref{k}. For the choice $N=1$ we get $I_V=V$.
Recall that we calculated $[\Sigma^{n^k}(n,p)]$ in Section \ref{firstex}. If $k=1$ and $d=\binom{n+1}2-1$ then by Proposition \ref{unique} we can calculate the stable Thom polynomial from (\ref{eq:gr}). In fact these Thom polynomials can be calculated by a direct geometric argument (see the proof of Theorem \ref{Phi_quotient}). (Knowing the result from the Localization Formula certainly helped to find the geometric argument.)
\section{Thom series of $\Phi_{m,r}$ singularities} \label{sec:phi}
The subgrassmannian $j\gr^1(\Hom(\sym^2\C^m,\C))\iso\P( \sym^2\C^m)$ splits into orbits $X(m,r)$ according to the corank of the symmetric matrices in $\sym^2\C^m$. The orbits correspond to the following algebras.
\begin{definition} Let $m>r$ be nonnegative integers. The quotient of $J^2(m)$ by the ideal
\[J_{m,r}=\left\{ \sum_{1\leq i\leq j\leq m} a_{ij} x_ix_j\ :\ \sum_{i=r+1}^m a_{ii}=0 \right\}\]
will be denoted by $\Phi_{m,r}$.
\end{definition}
\noindent A finite generating set of $J_{m,r}$ (as an ideal but also as a vector space) is given by:
\[J_{m,r}=\left\langle x_ix_j, x_k^2, x_{r+1}^2-x_l^2: 1\leq i<j\leq m, 1\leq k\leq r, r+2\leq l\leq m \right\rangle\] for $r<m-1$, and
\[J_{m,m-1}=\left\langle x_ix_j, x_k^2\ :\ 1\leq i<j\leq m, 1\leq k \leq m-1\right\rangle.\]
Observe that for small values of the parameters $m,r$ we recover familiar algebras:
\[ \Phi_{1,0}=A_2, \qquad \Phi_{2,0}=I_{2,2}, \qquad \Phi_{2,1}=III_{2,3}.\]
Following our previous convention, the singularities corresponding to the algebras $\Phi_{m,r}$ in $\E_0(n,n+l)$ will be denoted by $\Phi_{m,r}(n,n+l)$. Calculation shows that
\[\codim \big( \Phi_{m,r}(n,n+l) \subset \E_0(n,n+l) \big)= (m+1)l+ \left( \binom{m+1}{2}+\binom{r+1}{2}+1\right).\]
\subsection{Thom polynomials of $\Phi_{n,r}$ in terms of Chern roots.}
To calculate the Thom polynomials of these classes---using the Localization Formula---we need the $\GL(n)$-equivariant cohomology classes $[X(n,r)\subset \P(\sym^2\C^n)]$ restricted to the $T(n)$-fixed points of $\P(\sym^2\C^n)$. The cohomology class of the cone of $X(n,r)$ was
calculated in \cite{harris-tu} and \cite{jlt}:
\[ [\text{Cone}X(n,r)\subset \sym^2\C^n] =2^r\Delta_{r,r-1,\dotsc,2,1},\]
where $\Delta_{r,r-1,\dotsc,2,1}=\det(c_{r+1+j-2i})_{i,j=1,\dots,r}$ denotes the Schur polynomial in the Chern classes $c_1,\dotsc,c_n$, corresponding to the partition $(r,r-1,\dotsc,2,1)$. Using \cite[\S6]{forms} we can calculate the $T(n)$-equivariant {\em projective Thom polynomial}
\[[X(n,r)\subset\P(\sym^2\C^n)]=2^r\Delta_{r,r-1,\dots,2,1}(\alpha_1-\frac12\xi,\dotsc,\alpha_n-\frac12\xi)\ \ \text{in}\] \[ H^*_{T(n)}(\P( \sym^2\C^n))\iso \Z[\alpha_1,\dotsc,\alpha_n,\xi]/\prod_{1\leq i\leq j\leq n}(\alpha_i+\alpha_j-\xi).\]
We need the restriction of this class to the fixed points $\{f_{ij}:1\leq i\leq j\leq n\}$ of $\P( \sym^2\C^n)$:
\begin{equation}[X(n,r)]|_{f_{ij}}=2^r\Delta_{r,r-1,\dotsc,2,1}(\alpha_1-\frac12(\alpha_i+\alpha_j),\dotsc,\alpha_n-\frac12(\alpha_i+\alpha_j)).\label{eq:xnr}
\end{equation}
The other components of the Localization Formula are
\[[E_{ij}]=\res(\beta_1,\dotsc,\beta_p|\alpha_i+\alpha_j), \ \
e_{ij}^{(n)}=\res(K_{ij}^{(n)}|\alpha_i+\alpha_j), \ \text{for} \ K_{ij}^{(n)}=\{\alpha_k+\alpha_l:k\leq l,\ (k,l)\neq (i,j)\}.\]
Hence the Localization Formula yields the following
\begin{theorem} \label{tpgnr} The Thom polynomial of $\Phi_{n,r}$ is
\begin{equation} \label{phi_local_exp}\Tp_{\Phi_{n,r}}(n,p)=\res(\beta_1,\dotsc,\beta_p|\alpha_1,\dotsc,\alpha_n)
\sum_{1\leq i\leq j\leq n}\frac{[E_{ij}]}{e_{ij}^{(n)}}[X(n,r)]|_{f_{ij}}. \end{equation}\qed
\end{theorem}
\subsection{Thom polynomials of $\Phi_{m,r}$ in terms of quotient Chern classes}\label{sec:quotientchern}\mbox{}
Since $\mu(\Phi_{n,r})=n+1$, the polynomial $\Tp_{\Phi_{n,r}}(n,p)$ determines the Thom series of $\Phi_{n,r}$ by Proposition \ref{unique}. We devote this section to the calculation of these Thom series.
\subsubsection{Notations from algebraic combinatorics}Let
\[A_n=\{\alpha_1,\ldots,\alpha_n\},\qquad B_p=\{\beta_1,\ldots,\beta_p\}.\]
For a partition $\lambda=(\lambda_1,\ldots,\lambda_s)$ and variables $x_1,\ldots, x_t$ we define
\[ \Delta_\lambda(x_1,\ldots,x_t)=\det(\sigma_{\lambda_i+j-i})_{1\leq i,j\leq s},\]
where $\sigma_i=\sigma_i(x_1,\ldots,x_t)$ is the $i$th elementary symmetric polynomial of $x_1,\ldots, x_t$ (i.e.
$\sigma_1=\sum_i x_i$, $\sigma_2=\sum_{i<j} x_ix_j$, etc.). The symbol $\Delta_\lambda$ without arguments (as before, in (\ref{schurdef})) will denote the determinant
\[\Delta_\lambda=\det(c_{\lambda_i+j-i})_{1\leq i,j\leq s}\]
with entries the quotient variables (\ref{quotient_vars}), (\ref{eq:c}). We will use partitions and their notation as in \cite{fulton:young}. For example, $\overline{\lambda}$ will denote the {\sl conjugate} partition of $\lambda$. Addition of partitions is defined coordinatewise, $a^b$ means $b$ copies of $a$, and concatenation is indicated by a comma. For example $(3^4+(2,1),(1,1))=(5,4,3,3,1,1)$. We will need the staircase partition
\[\rho_s=(s,s-1,\ldots,2,1).\]
\begin{definition} \label{def:segre} The Schur coefficients of the equivariant Segre classes of $\Sym^2\C^n$ will be denoted by double brackets; namely:
\[ \frac{1}{\prod_{1\leq i\leq j \leq n} (1-\alpha_i-\alpha_j)}=
\sum_{I} ((I)) \Delta_{\overline{I-\rho_{n-1}}}(\alpha_1,\ldots,\alpha_n).\]
Here $I$ runs through all length $n$ sequences $I=(i_1,i_2,\ldots, i_n)$ with $i_1>i_2>\ldots >i_n\geq 0$.
\end{definition}
The numbers $((I))$ are positive; their combinatorics, as well as recursion and Pfaffian formulas, are studied in \cite{pragacz:enumgeo,lalat,pragacz_dd}. For practical purposes, the following recursion is most useful:
\[ r ((i_1,\ldots, i_r)) -2\sum_{k=1}^r ((i_1,\ldots, i_{k-1}, i_k-1, i_{k+1},\ldots, i_r))= \begin{cases} 0 & i_r>0 \\
((i_1,\ldots,i_{r-1})) & i_r=0,\end{cases}\]
together with the conventions $((0))=1$, and that $((I))=0$ if $I$ does not satisfy $i_1>i_2>\ldots >i_r\geq 0$. For example $((i))=2^i$, $((i,0))=2^i-1$, $((2,1))=3$, $((3,1))=10$.
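The recursion is straightforward to implement; here is a minimal memoized sketch (Python, our own code, with the natural convention that the empty sequence contributes 1), which reproduces the sample values above:
\begin{verbatim}
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def bb(I):                    # the coefficient ((I)), I a tuple
    if len(I) == 0:
        return Fraction(1)    # empty sequence, used when i_r = 0, r = 1
    if I[-1] < 0 or any(I[k] <= I[k+1] for k in range(len(I)-1)):
        return Fraction(0)    # ((I)) = 0 unless i_1 > ... > i_r >= 0
    r = len(I)
    s = sum(bb(I[:k] + (I[k]-1,) + I[k+1:]) for k in range(r))
    rhs = bb(I[:-1]) if I[-1] == 0 else Fraction(0)
    return (2*s + rhs) / r

assert bb((5,)) == 2**5 and bb((4, 0)) == 2**4 - 1
assert bb((2, 1)) == 3 and bb((3, 1)) == 10
\end{verbatim}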
Now we are ready to present the Thom series corresponding to the algebras $\Phi_{m,m-s}$.
\begin{theorem}\label{Phi_quotient}We have
\begin{equation}\label{Phi_quotient_formula} \tp_{\Phi_{m,m-s}}(l)= \sum_I ((I))\Delta_{I'}, \end{equation}
where
\begin{eqnarray*}
I' & = & ((l+s)^s+I-\rho_{s-1}, (l+m)^{m-s},l+m+1-s-|I|) \\
& = & (l+1+i_1, l+2+i_2, \ldots, l+s+i_s, {\underbrace{{l+m,\ldots,l+m}}_{\mbox{$m-s$}}}, l+m+1-s-|I|),
\end{eqnarray*}
and the summation is for sequences $I=(i_1,\ldots,i_s)$ with $i_1>i_2>\ldots>i_s\geq m-s$ and $|I|=i_1+\ldots+i_s\leq l+m-s+1$.
\end{theorem}
The special case $m=s=2$ (the singularity $I_{2,2}$) was proved in \cite{pragacz:i22}.
\begin{remark} We can formally change the summation for all sequences $I$ with $i_1>i_2>\ldots>i_s\geq 0$ without changing the sum. Indeed, if $|I|$ is larger than $l+m-s+1$ then the $\Delta$ polynomial is zero, since the last part in the partition is negative. If $i_s<m-s$ then in the determinant expansion of $\Delta$ the $s$th row coincides with one of the next $m-s$ rows, hence, again $\Delta=0$.
\end{remark}
\begin{remark} The sum of the parts of all the partitions $I'$ above is $l(m+1)+\binom{m+1}{2}+\binom{m-s+1}{2}+1$, consistent with the fact that this is the codimension of the singularity $\Phi_{m,m-s}(*,*+l)$ in $\E_0(*,*+l)$.
\end{remark}
\begin{remark} The formula gets particularly simple if $r=m-1$:
\begin{equation} \label{eq:phinn-1} \tp_{\Phi_{n,n-1}}(l)=2^{n-1}\sum_{i=0}^{l+1}2^i
\Delta_{n+l+i,{\underbrace{\scriptstyle{n+l,\dotsc,n+l}}_{\mbox{$\scriptstyle{n-1}$}}},l+1-i}, \end{equation}
which recovers the Ronga-formula for $A_2=\Phi_{1,0}$ and gives the Thom series of $III_{2,3}=\Phi_{2,1}$ for $n=2$. The Thom series of $III_{2,3}$ was also calculated recently in \cite{ozturk:3}.
\end{remark}
\begin{proof} First we prove Theorem \ref{Phi_quotient} for the case $s=m$. This is a direct geometric argument, not using localization.
We want to calculate the equivariant Poincar\'e dual of the $\Phi_{m,0}$-jets $X_{m,0}\subset\Hom(\C^m,\C^p)\oplus\Hom(\sym^2\C^m,\C^p)$. By definition:
\[ [X_{m,0}]=e\big(\Hom(\C^m,\C^p)\big)\cdot[\Sigma^1(\sym^2\C^m,\C^p)], \]
where $\Sigma^1(V,W)$ denotes the corank 1 linear maps from $V$ to $W$. We have that
\[ e\big(\Hom(\C^m,\C^p)\big)=\res(B_p|A_m), \ \text{and}\ [\Sigma^1(\sym^2\C^m,\C^p)]=c_q(\C^p\ominus\sym^2\C^m), \]
where $q=p-\binom{m+1}2+1$ and $c_q(\C^p\ominus\sym^2\C^m)$ denotes the $q$\textsuperscript{th} (equivariant) Chern class of the formal difference $\C^p\ominus\sym^2\C^m$. The second statement is the Giambelli-Thom-Porteous formula. Now
\[ c_q(\C^p\ominus\sym^2\C^m)=\sum_{i=0}^qc_{q-i}(\C^p)s_i(\sym^2\C^m), \]
where $s_i$ denote the Segre classes, which are defined by the identity
\[ (1+c_1t+c_2t^2+c_3t^3+\cdots)\cdot(1-s_1t+s_2t^2-s_3t^3+-\cdots)=1. \]
As we mentioned in Definition \ref{def:segre} the Segre classes of $\sym^2\C^m$ can be expressed from the Chern roots of $\C^m$:
\[ s_i(\sym^2\C^m)=\sum_{I} ((I)) \Delta_{\overline{I-\rho_{m-1}}}(\alpha_1,\ldots,\alpha_m),\]
where $I$ runs through all length $m$ sequences $I=(i_1,i_2,\ldots, i_m)$ with $i_1>i_2>\ldots >i_m\geq0$ such that $|I|-\binom{m}2=i$. We finish the proof of the formula (\ref{Phi_quotient_formula}) for $s=m$ by recalling the Factorization Property of Schur polynomials.
\begin{lemma}\label{factorization_property} \cite[I.3]{macdonald} (Factorization Formula) Let $n,p$ be nonnegative integers, and let the quotient Chern
classes be defined as in (\ref{quotient_vars}). Suppose that $(p^n+\lambda,\mu)$ is a partition. Then
\[ \Delta_{p^n+\lambda,\mu}=\res(A_n|B_p) \Delta_{\mu}(B_p) \Delta_{\overline{\lambda}}(A_n).\] \end{lemma}
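For the smallest case $n=p=1$, $\lambda=\mu=(1)$, the Factorization Formula can be checked directly; in the sketch below (Python with SymPy, our own code) we assume the convention $\res(A|B)=\prod_{a\in A,b\in B}(a-b)$, which matches this example:
\begin{verbatim}
import sympy as sp

a, b, t = sp.symbols('alpha beta t')
ser = sp.series((1 + b*t)/(1 + a*t), t, 0, 4).removeO()
c = [sp.expand(ser.coeff(t, k)) for k in range(4)]  # quotient variables
lhs = sp.expand(c[2]*c[1] - c[3]*c[0])   # Delta_{2,1} = det[[c2,c3],[c0,c1]]
rhs = sp.expand((a - b) * b * a)         # res(A_1|B_1) Delta_1(B) Delta_1(A)
assert lhs == rhs
\end{verbatim}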
For $m=2$ we have $\Phi_{2,0}=I_{2,2}$. The Thom series of $I_{2,2}$ was calculated by several authors \cite{dstab}, \cite{pragacz:i22}, \cite{kaza:gysin}.
\medskip
Now we go on with the proof of Theorem \ref{Phi_quotient}. An interpretation of what we proved so far
is that the expression in (\ref{phi_local_exp}) is equal to the expression in
(\ref{Phi_quotient_formula}) for $s=m$. In the remainder of the proof we will use this statement to prove that expression
(\ref{phi_local_exp}) agrees with expression (\ref{Phi_quotient_formula}) for any $m>s$. As before, $b_j$ will denote the $j$th elementary symmetric polynomial of the $\beta_i$'s.
The equality of formula (\ref{phi_local_exp}) with (\ref{Phi_quotient_formula}) for $s=m$ can be written---using the
Factorization Formula, Lemma~\ref{factorization_property}---as
\begin{equation}\label{tudjuk1}\sum_{1\leq i \leq j \leq s}
\frac{\res(B_{s+l}|\alpha_i+\alpha_j)}{\res(K^{(s)}_{i,j}|\alpha_i+\alpha_j)}=\sum_I
((I)) b_{l+1-|I|} \Delta_{\overline{I-\rho_{s-1}}}(\alpha_1,\ldots,\alpha_s).\end{equation}
What we want to prove is the equality of these two formulas for any $m$ and $s$, that is (using the Factorization Formula again)
\begin{equation}\label{akarjuk1} \sum_{1\leq i \leq j \leq m}
\frac{\res(B_{m+l}|\alpha_i+\alpha_j)}{\res(K^{(m)}_{i,j}|\alpha_i+\alpha_j)}2^{m-s}\Delta_{\rho_{m-s}}
\big(\alpha_1-\frac{\alpha_i+\alpha_j}{2},\ldots,\alpha_m-\frac{\alpha_i+\alpha_j}{2}\big)=\end{equation}
\[\sum_I ((I)) b_{l+1+m-s-|I|} \Delta_{\overline{I-(m-s)^s-\rho_{s-1}}}(\alpha_1,\ldots,\alpha_m).\]
Checking the coefficients of $b_{s+l-k}$ and $b_{m+l-k}$, respectively, in these equations, we can reduce the theorem to the following problem: Knowing
\begin{equation} \label{tudjuk2} \sum_{1\leq i \leq j \leq s}
\frac{(\alpha_i+\alpha_j)^k}{\res(K^{(s)}_{i,j}|\alpha_i+\alpha_j)}=\sum_{|I|=k+1-s} ((I))
\Delta_{\overline{I-\rho_{s-1}}}(\alpha_1,\ldots,\alpha_s),\end{equation}
we want to prove
\begin{equation}\label{akarjuk2}\sum_{1\leq i \leq j \leq m}
\frac{(\alpha_i+\alpha_j)^k}{\res(K^{(m)}_{i,j}|\alpha_i+\alpha_j)}2^{m-s} \Delta_{\rho_{m-s}}
\big(\alpha_1-\frac{\alpha_i+\alpha_j}{2},\ldots,\alpha_m-\frac{\alpha_i+\alpha_j}{2}\big)=\end{equation}
\[\sum_{|I|=k+1-s} ((I)) \Delta_{\overline{I-(m-s)^s-\rho_{s-1}}}(\alpha_1,\ldots,\alpha_m).\]
We recall the Gustafson-Milne identity:
\begin{lemma}\label{gs_lemma} \cite{gm}, \cite{cl} Let $m\geq s$ be nonnegative integers. If $H\subset \{1,\ldots,m\}$ then the set $\{\alpha_h\}_{h\in H}$ will be denoted by $\alpha_H$, and the set
$\{\alpha_1,\ldots,\alpha_m\}\setminus \alpha_H$ will be denoted by $\alpha_{\overline{H}}$. Let the partition
$\mu=(\mu_1,\mu_2,\ldots)$ satisfy $\mu_1\leq s$. Then we have
\[ \Delta_{\mu} (\alpha_1,\ldots,\alpha_m)=\sum_{H\subset\{1,\ldots,m\}, |H|=s}
\frac{\Delta_{s^{m-s},\mu}(\alpha_H)}{\res(\alpha_H|\alpha_{\overline{H}})}.\]
\end{lemma}
The Gustafson-Milne identity implies that the right hand side of (\ref{akarjuk2}) is obtained from the right hand side of
(\ref{tudjuk2}) by the following operation:
\[ p(\alpha_1,\ldots,\alpha_s) \mapsto \sum_{H \subset \{1,\ldots,m\}, |H|=s}
\frac{p(\alpha_H)}{\res(\alpha_H|\alpha_{\overline{H}})}.\]
Hence it is enough to show that the same operation maps the left hand sides into each other, too. That is, we need to prove
\begin{equation} \sum_{H \subset \{1,\ldots,m\}, |H|=s} \frac{\sum_{i \leq j \in H}
\frac{(\alpha_i+\alpha_j)^k}{\res(K^{H}_{i,j}|\alpha_i+\alpha_j)}} {\res(\alpha_H|\alpha_{\overline{H}})}=\end{equation}
\[ \sum_{1\leq i \leq j \leq m}\frac{(\alpha_i+\alpha_j)^k}{\res(K^{(m)}_{i,j}|\alpha_i+\alpha_j)}2^{m-s}
\Delta_{\rho_{m-s}}\big(\alpha_1-\frac{\alpha_i+\alpha_j}{2},\ldots,\alpha_m-\frac{\alpha_i+\alpha_j}{2}\big).\]
For this, the following Lemma will be useful.
\begin{lemma}\label{thirdlemma} Let $m>s$ be non-negative integers, and consider the variables $\gamma_1,\ldots,\gamma_m,y$. For a subset $H\subset \{1,\ldots,m\}$ we set $\overline{H}=\{1,\ldots,m\}-H$, $\gamma_H=\{\gamma_i\}_{i\in H}$, $\gamma_{\overline{H}}=\{\gamma_i\}_{i\in \overline H}$. We have
\[ \sum_{H\subset \{1,\ldots,m\}, |H|=s}\frac{\Delta_{\rho_s}(\gamma_H)}{\res(\gamma_H|\gamma_{\overline{H}})}
\prod_{i\in H}\prod_{j\in \overline{H}} (\gamma_i+\gamma_j)=\Delta_{\rho_s}(\gamma_1,\ldots,\gamma_{m},y,-y).\]
\end{lemma}
\begin{proof}In this proof we use the Thom polynomials of the representation of $\GL_m$ on $\Lambda^2\C^m$, see e.g. \cite[\S 3]{forms}. Consider the canonical exact sequence of vector bundles $0\to S \to E \to Q \to 0$ over the Grassmannian $Gr_s\C^m$, and the diagram of maps
\[\xymatrix{\Lambda^2 Q \ar@{^{(}->}@<-3pt>[rr]^{i} \ar@/^2pc/[rrr]^{\phi} \ar[d] & & \Lambda^2 E
\ar[d]\ar[r]^{\pi_2} & \Lambda^2 \C^m \\ Gr_s\C^m \ar@{->}[rr]^{id} & & Gr_s\C^m, & } \]
with the action of the $m$-torus. Using the fact that $\phi(\Lambda^2 Q)$ is the set of two-forms of corank at least $s$,
and the identification $\Lambda^2 E =\Lambda^2 S \oplus \Lambda^2 Q\oplus (S \otimes Q)$, Proposition \ref{31} gives the statement of the lemma for $y=0$. We leave it to the reader to check that the right hand side is independent of $y$. (One way of proving this is the identification of the right hand side with an {\sl incidence class} of two orbits of the representation of $\GL_{m+2}$ on $\Lambda^2 \C^{m+2}$, see \cite{zsolt}.)
\end{proof}
Lemma \ref{thirdlemma} (with the substitution $\gamma_u=\alpha_u-(\alpha_i+\alpha_j)/2$) can be used to show that
the coefficients of $(\alpha_i+\alpha_j)^k$ are the same on the two sides, for all $i\leq j$. This completes the proof of Theorem \ref{Phi_quotient}.
\end{proof}
\section{Iterated residue formulae and generating functions}\label{sec:generatingfn}
In \cite[\S 6.2]{bsz06} (see also \cite{szenes}) B\'erczi and Szenes used one rational function---we will call it the {\em generating function}---and the iterated residue operation to encode the Thom polynomials of all singularities corresponding to the same nilpotent algebra of type $A_d$. We show that generating functions can be assigned to other singularities. We give some examples and indicate how the generating function can be a useful tool in future studies of Thom polynomials.
Certain rational functions in the variables $z_1,\ldots,z_d$ generate polynomials in the quotient variables through the {\em iterated residue operation}, which we describe now, following \cite{bsz06}. Consider $\C^d$ with coordinates $z_1,\ldots,z_d$. Let $\omega_1,\ldots,\omega_N$ be linear forms on $\C^d$, and let $h(z_1,\ldots,z_d)$ be a polynomial. We define the iterated residue operator by
\begin{equation} \RES_d \frac{h(z_1,\ldots,z_d)}{\prod_{i=1}^N \omega_i}=\int_{|z_1|=R_1}
\cdots\int_{|z_d|=R_d}\frac{h(z_1,\ldots,z_d)dz_1\ldots dz_d}{\prod_{i=1}^N \omega_i}, \end{equation}
where $0\ll R_1\ll R_2\ll\cdots\ll R_d$. This definition makes sense up to a choice of sign, but this will be enough for our purposes: in the formulas below we always mean the iterated residue with the appropriate choice of $\RES_d (dz_1\ldots dz_d)=\pm 1$. That is, we will describe certain expressions up to sign.
We will use the notation
$D_j=\sum_{i=0}^\infty \frac{c_i}{z_j^i}$ and $\dis_\mu=\prod_{i=1}^\mu \prod_{j=i+1}^\mu (z_i-z_j)$.
The following conjecture is an extension of Theorem (7.2) in \cite{bsz06}, where it is proved for Morin singularities.
We arrived at this conjecture while discussing the problem with M. Kazarian. He informed us that in his work in progress \cite{kaza:noas} he would prove it.
\begin{conjecture}\label{residue_conj}
Let $Q$ be a $\mu$-dimensional, commutative, nilpotent algebra with $\deg(\tp_Q(l))= \mu\cdot l +\gamma$.
\newline
\noindent{\bf (a)} There exists a rational function $k_Q$---called the generating function of $Q$---in the variables $z_1,\ldots,z_\mu$, of degree $\gamma-\binom{\mu+1}{2}$ such that
\begin{equation} \tp_Q(l)=\RES ( k_Q \cdot \dis_\mu \cdot \prod_{i=1}^\mu z_i^l D_i).\end{equation}
\newline
\noindent{\bf (b)} The generating function $k_Q$ has the form \begin{equation}k_Q(z_1,\ldots,z_\mu)=\frac{h(z_1,\ldots,z_\mu)}{\prod_{a\in A} (z_{i_a}+z_{j_a}-z_{s_a})},\label{gfn_form}\end{equation} where $h$ is a polynomial, and $\{i_a,j_a,s_a\}_{a\in A}$ is a repetition-free list of indexes with $i_a \leq j_a<s_a$ for all $a\in A$.
\end{conjecture}
The function $k_Q$ is not unique in general. The Giambelli-Thom-Porteous formula (\ref{eq:gia}) can be encoded by setting
\[ k_{\Sigma^r}=\prod_{i=1}^{r-1}z_i^i \qquad\qquad (\text{here\ } \mu=r, \gamma=r^2). \]
Formula (7.2) of \cite{bsz06} can be interpreted as the existence of $k_Q$ for $Q=A_i$ $(i=1,2,\ldots)$, as well
as a concrete form of its denominator (all indices with $i_a+j_a\leq s_a$). For the $A_i$ singularity $\mu=\gamma=i$, hence the degree of $k_{A_i}$ is $-\binom{i}{2}$. B\'erczi and Szenes also calculated $k_{A_i}$ for $i=1,\dotsc,6$. Here are the first three of their results:
\[ k_{A_1}=1, \qquad k_{A_2}=\frac{1}{2z_1-z_2}, \qquad k_{A_3}=\frac{1}{(2z_1-z_2)(2z_1-z_3)(z_1+z_2-z_3)}.\]
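As a sanity check, the iterated residue can be computed by brute-force series expansion in the region $|z_1|\ll|z_2|$; the following sketch (Python with SymPy, our own code, with one fixed choice of the overall sign) recovers $\tp_{A_2}(0)=c_1^2+c_2$ (the $n=1$, $l=0$ case of (\ref{eq:phinn-1})) from $k_{A_2}$:
\begin{verbatim}
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
c = sp.symbols('c1:7')                       # c_1,...,c_6; c_0 = 1
N = 6                                        # truncation order, ample here
D = lambda z: 1 + sum(c[i-1]/z**i for i in range(1, N))
geom = -sum((2*z1)**k / z2**(k+1) for k in range(N))  # 1/(2z1-z2), |z1|<<|z2|
integrand = sp.expand(geom * (z1 - z2) * D(z1) * D(z2))
res = integrand.coeff(z1, -1).coeff(z2, -1)  # coefficient of 1/(z1 z2)
print(sp.expand(res))                        # c1**2 + c2 = tp_{A_2}(0)
\end{verbatim}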
In Section \ref{sec:phi} the singularities $\Phi_{m+r,r}$ were considered ($r=0,1,\ldots$, $m=1,2,\ldots$). For these singularities we have $\mu=m+r+1$ and $\gamma=\binom{m+r+1}{2}+\binom{r+1}{2}+1$, and hence $\deg k_{\Phi_{m+r,r}}=\binom{r}{2}-m$. The results of Section \ref{sec:quotientchern} can be summarized by the following generating functions
\[ k_{\Phi_{m+r,r}}(z_1,\ldots,z_{m+r+1})=
\frac{ \prod_{i=1}^{r-1} z_{m+1+i}^i}{2^{m-1}(2z_1-z_{m+1})\prod_{i=1}^{m-1}(z_i+z_{i+1}-z_{m+1})}.\]
For example we have $k_{\Phi_{2,0}}=k_{I_{2,2}}=\frac1{2(2z_1-z_3)(z_1+z_2-z_3)}$ and $k_{\Phi_{2,1}}=k_{III_{2,3}}=\frac1{(2z_1-z_3)}$.
With some computer experimentation one can find generating functions for the remaining nilpotent algebras with $\mu\leq4$:
\[ k_{III_{2,4}}=\frac1{(2z_1-z_2)(z_1+z_2-z_3)(2z_1-z_4)(z_1+z_2-z_4)}, \]
\[ k_{III_{3,3}}=\frac1{4(2z_1-z_3)(z_1+z_2-z_3)(2z_1-z_4)(z_1+z_2-z_4)}, \]
\[ k_{I_{2,3}}=\frac1{(2z_1-z_4)(2z_2-z_3)(2z_2-z_4)(z_1+z_2-z_4)(z_2+z_3-z_4)}, \]
\[ k_{\Sigma^{2,1}}=\frac1{(2z_1-z_3)(z_1+z_2-z_3)(2z_1-z_4)}. \]
Now we want to explore the connection between the conjectured residue form and the localization form of Thom polynomials.
Recall that for a nilpotent algebra $Q$ and an ideal $I$, the virtual Euler class $e(Q,I)$ was defined in Remark \ref{smooth}. For a function $f$ in the variables $z_1,\ldots,z_\mu$ define the asymmetrization operator
\[\Asym_\mu( f ) = \sum_{\sigma\in S_\mu} \varepsilon(\sigma)f(z_{\sigma(1)},\ldots,z_{\sigma(\mu)}),\]
where $\varepsilon(\sigma)$ is the sign of the permutation $\sigma$. For a function $f$ with variables $\alpha_i$ let $f|_{\alpha_i:=z_i}$ be the same function with the variables changed to $z_i$.
\begin{conjecture} \label{res_vs_local} Suppose Conjecture \ref{residue_conj}(a) holds. Then
\[ \Asym_\mu ( k_Q ) = \frac{\dis_\mu}{e(Q,\M_\mu^2)|_{\alpha_i:=z_i}}.\]
\end{conjecture}
If we assume part (b) of Conjecture \ref{residue_conj} as well, then Conjecture \ref{res_vs_local} reduces to a remarkable (conjectured) identity for iterated residues (or the Orlik-Solomon algebra) associated with the hyperplane arrangement $\cup_A \{z_{i_a}+z_{j_a}-z_{s_a}=0\}\cup \cup_{i,j}\{z_i=\alpha_j\}$.
\begin{remark} Conjecture \ref{res_vs_local} can be used to guess the function $k_Q$, as soon as the class $e(Q,\M_\mu^2)$ is known. In practice $e(Q,\M_\mu^2)$ can be calculated using Theorem \ref{interpol}, or equation (\ref{eq:reciprocity}). Its denominator is a symmetric function, which is a product of factors $\alpha_i+\alpha_j-\alpha_s$. Then one has finitely many choices to guess for the denominator of $k_Q$. (E.g. for $2\alpha_1-\alpha_2$ we can choose $2z_1-z_2$ or $2z_3-z_5$, etc.) Knowing the degree of $k_Q$ we also find the degree of the numerator. Putting all this together one arrives at an often effective procedure for finding the $k_Q$ function.
\end{remark}
\section{The Localization Formula for the ``small $p$'' case} \label{sec:smallp}\mbox{}
In Theorem \ref{lf} we gave a localization formula for the Thom polynomial $\Tp_Q(n,p)$ where $Q$ is a nilpotent algebra. We can evaluate this formula for any $p$, even if $Q$ cannot be defined by $p$ relations. Sometimes it is not zero and we would like to interpret these ``small $p$'' cases.
Fix $n$ and $p$ and assume that $\dim(Q)=m$. Then the correspondence variety---introduced in Definition \ref{def:correspondence}---of $Q$ is
\[ C(Y_Q)=\{(I,g)\in\gr^m\times J^k(n,p)\ |\ I\in Y_Q, I_g\subset I\}, \]
where $Y_Q=\overline{\{I\in\gr^m: Q_I\iso Q\}}$. We have the restriction of the second projection
\begin{equation} \label{phi} \phi:C(Y_Q)\to \eta_Q(n,p):=\phi(C(Y_Q)). \end{equation}
In other words
\[ \eta_Q(n,p)=\{g\in J^k(n,p):\exists\ I\in Y_Q \ \text{such that}\ I_g\subset I\}. \]
We showed in Proposition \ref{birat} that if $n\geq a(Q)$ (minimal number of generators of $Q$) and $p\geq b(Q)$ (minimal number of relations of $Q$), then $\phi$ is birational. In some cases $\phi$ is birational for smaller $p$ as well. In these cases the Localization Formula still calculates $[\eta_Q(n,p)]$.
\subsection{The singularities $III_{a,b}$ and $I_{a,b}$} Consider the nilpotent algebra $III_{a,b}= \M_2/(xy,x^a,y^b)$. Germs $g:(\C^n,0) \to (\C^p,0)$ with this algebra only exist if $p\geq n+1$. Yet, consider $p=n$ and the map
$$\phi: C(Y_{III_{a,b}}) \to \eta_{III_{a,b}}(n,n)$$
of (\ref{phi}). Clearly the $I_{a,b}$ germ $g=(xy,x^a+y^b,x_3,\dots,x_n)$ is in $\eta_{III_{a,b}}(n,n)$. Therefore $\overline{\mathcal Kg}\subset \eta_{III_{a,b}}(n,n)$. Checking their dimensions we get that in fact $\overline{\mathcal Kg}= \eta_{III_{a,b}}(n,n)$. One can verify that the only ideal in the $\mathcal{R}$-orbit of $(xy,x^a,y^b,x_3,\dots,x_n)$ containing the ideal $(xy,x^a+y^b,x_3,\dots,x_n)$ is $(xy,x^a,y^b,x_3,\dots,x_n)$ itself; therefore $\phi$ is generically one-to-one. This implies the following (see Section \ref{sec:d-stab}).
\begin{theorem} \label{thm:iabiiiab}
For $a,b\geq 2$
\[ \tp_{I_{a,b}}(0) =\tp_{III_{a,b}}(1)^{\flat(a+b-2)}. \]
\end{theorem}
The Thom polynomials occurring in Theorem \ref{thm:iabiiiab} are only known for small values of $a$ and $b$.
For $a=b=2$ this theorem was known, because it follows from the following simple fact. If $p=n$ then the contact singularities with algebra $I_{2,2}$ form an open subset of the $\Sigma^2$ germs, while if $p>n$ then the singularities with algebra $III_{2,2}$ form an open subset of the $\Sigma^2$ germs. Hence the theorem reduces to the obvious $\tp_{\Sigma^2}(0)=\tp_{\Sigma^2}(1)^{\flat(2)}$.
The Thom polynomials $\tp_{I_{2,3}}(0)$, $\tp_{III_{2,3}}(1)$ are calculated in \cite{rrtp}, but their relation was not noticed there.
\subsection{Lowering the Thom polynomial of $\Phi_{n,n-1}$} \label{sec:veronese} Recall formula (\ref{eq:phinn-1}), the Thom polynomial of $\Phi_{n,n-1}$:
\[ \tp_{\Phi_{n,n-1}}(l)=2^{n-1}\sum_{i=0}^{l+1}2^i
\Delta_{n+l+i,{\underbrace{\scriptstyle{n+l,\dotsc,n+l}}_{\mbox{$\scriptstyle{n-1}$}}},l+1-i}. \]
Germs $(\C^n,0) \to (\C^{n+l},0)$ with this algebra only exist with $l\geq \binom{n}{2}$. Yet, choose $m=n+1$, $k=2$ and $l=-1$, and consider the map
$$\phi: C(Y_{\Phi_{n,n-1}}) \to \eta_{\Phi_{n,n-1}}(n,n-1) \subset J^2(n,n-1)$$
of (\ref{phi}). One may check that the image of this map is $\Hom(\sym^2\C^n,\C^{n-1})\subset J^2(n,n-1)$, whose cohomology class is $e(\Hom(\C^n,\C^{n-1})) = \Delta_{{\underbrace{\scriptstyle{n-1,\dotsc,n-1}}_{\mbox{$\scriptstyle{n}$}}}}$. On the other hand, applying $l+1$ times the lowering operator $\flat(n+1)$ to the polynomial $\tp_{\Phi_{n,n-1}}(l)$ we get $2^{n-1}\Delta_{{\underbrace{\scriptstyle{n-1,\dotsc,n-1}}_{\mbox{$\scriptstyle{n}$}}}}$ (using the elementary fact that $\Delta_{(i_1,\dotsc,i_m)}^{\flat(m)}=\Delta_{(i_1-1,\dotsc,i_m-1)}$). Comparing these two cohomology classes implies that the map $\phi$ is a covering with $2^{n-1}$ sheets.
Now we show the classical geometric reason for $\deg \phi = 2^{n-1}$. The definition of $\Phi_{n,n-1}$ implies that $Y_{\Phi_{n,n-1}}\subset \P(\sym^2\C^n)\subset \gr^{n+1}(J^2(n))$ is identified with the projectivization of the set of rank 1 symmetric matrices. This closed variety is the image of the {\em Veronese} map $\P(\C^n) \to \P(\sym^2\C^n)$. Denote the two obvious projections of $C(Y_{{\Phi_{n,n-1}}})$ by $\pi_1$ and $\pi_2$. For a generic $g$ in the image of $\phi$, the set $\pi_1(\pi_2^{-1}(g))$ is the intersection of $Y_{\Phi_{n,n-1}}$ with a linear subspace of $\P(\sym^2\C^n)$ of codimension $n-1$, hence the number of $\phi$-preimages of $g$ is the degree of the Veronese variety. This degree is known to be $2^{n-1}$, see e.g. \cite[p.231]{harris:ag}, agreeing with our result above.
\subsection{Thom series for Thom-Boardmann classes}\label{sec:tb} Let $\Sigma^K$ denote the Thom-Boardmann class corresponding to $K=(i_1,\dotsc,i_s)$ for $i_1\geq\cdots\geq i_s\geq 1$ (see e.g. \cite{mathertb}). For $n\geq i_1$ and $p\geq p_0$ ($p_0$ depending on $K$) there is a jet $g_K(n,p)\in J^k(n,p)$, such that $\mathcal{K}^kg_K$ is open in $\Sigma^K$, and hence $\Tp(g_K(n,p))=\Tp_{\Sigma^K}(n,p)$. The nilpotent algebras of these $g_K(n,p)$ jets are all isomorphic; their common isomorphism class will be denoted by $Q_K$. For $n-i_1<p<p_0$ the Thom-Boardmann class $\Sigma^K(n,p)$ is still not empty, but it may split into families of lower dimensional contact classes. The question was raised in \cite{dstab} whether the Thom series of $g_K$ calculates $[\Sigma^K(n,p)]$ for $n-i_1<p<p_0$, too. Consider such a small $p$, a sufficiently large $k$, and the map
$\phi: C(Y_{Q_K}) \to \eta_{Q_K}(n,p)\subset J^k(n,p)$ from (\ref{phi}). The image $\eta_{Q_K}(n,p)$ can be identified with the $\Sigma^K$ germs in $J^k(n,p)$.
The dimensions of the source and target spaces of $\phi$ are the same. Moreover, $\phi$ has a well-known section $g \mapsto \beta(g)$, the {\em Boardmannization} of $g$ (see \cite[\S 2]{mathertb}). Since the correspondence variety $C(Y_{Q_K})$ is connected, this implies that $\phi$ has degree 1, and we have the following
\begin{theorem}\label{th:tb}The Thom series $\ts_{g_{\raisebox{-.4ex}{$\scriptscriptstyle K$}}}$ calculates the Thom polynomial of $\Sigma^K(n,p)$ for all $n,p$.
\end{theorem}
We can call $\ts_{g_{\raisebox{-.4ex}{$\scriptscriptstyle K$}}}$ the Thom series $\ts(\Sigma^K)$ of $\Sigma^K$. Notice that in this way we obtain Thom polynomials for $p<n$ as well, a case not covered by the Localization Formula. We should mention that at this point this is only a theoretical possibility, as $\ts(\Sigma^K)$ is known only in the few cases listed in Section \ref{sec:exa}.
\subsection{Nets of conics} \label{sec:netsofconics} The 1-parameter family of jets
\[g_\lambda=(x^2-\lambda yz,y^2-\lambda xz,z^2-\lambda xy), \ \ \text{where} \ \lambda(\lambda^3-1)(8\lambda^3+1)\not=0,\]
was studied by Mather in \cite{mather5} and Wall in \cite{wall:nets}. This is the smallest codimensional example of a family of jets for $n=p$. The contact classes of the jets $g_\lambda$ have codimension 10, and their union is open in the Thom-Boardmann class $\Sigma^3(3,3)$. Thom polynomials of contact classes contained in $\Sigma^3(n,n)$ (for any $n$) are linear combinations of $\Delta_{\mu}$ where the Young diagram of the partition $\mu$ contains a $3\times 3$ square (see \cite[\S 4.2]{pragacz:enumgeo}). Therefore $\tp(g_\lambda)=A\Delta_{3331}+B\Delta_{433}$ for some $A,B\in \N$. The restriction equation $\tp(g_\lambda)|_{g_\mu}=0$ for $\mu\not=\lambda$ implies that $2A=B$.
\begin{theorem}The Thom polynomial of $g_\lambda$ for generic $\lambda$ is
\begin{equation}\label{eq:nets} \tp(g_\lambda)=4\Delta_{3331}+8\Delta_{433}. \end{equation}
\end{theorem}
\noindent{\em Sketch of the proof.} The ideal $I_\lambda$ of $g_\lambda$ in $J^k(3)$, where $k\geq 3$, has depth 3 and we have $\mu(g_\lambda)=7$. Consider the ideal $I'_\lambda=I_\lambda+(J^k(3))^3$, whose depth is 2. This ideal can only be generated by at least 4 polynomials, hence the 6-dimensional nilpotent algebra $Q_\lambda:=J^3(3)/I'_\lambda$ does not correspond to any germ with $p=n$. Yet, consider $n=p=3$ and the map
$$\phi: C(Y_{Q_\lambda}) \to \eta_{Q_\lambda}(3,3)$$
from (\ref{phi}). The only ideal of codimension 7 in $I'_\lambda$ is $I_\lambda$. This implies that $\phi$ is a birational map to the closure of the contact orbit of $g_\lambda$. We have $Y_{Q_\lambda}\subset X:=j(\gr_3(\Hom(\sym^2\C^3,\C)))$, where $j$ is the obvious embedding $\gr_3(\Hom(\sym^2\C^3,\C))\to \gr^6$ discussed in Section \ref{subgrass} on subgrassmannians.
The action of the right group $\mathcal{R}(3)$ on the 9-dimensional $X$ is studied in \cite{mather5}. It is shown there that the action is equivalent to the action of the 8-dimensional Lie group PGL$(3)$, and the orbit closure $O_\lambda$ of $I^{'}_\lambda$ is 8-dimensional, i.e. a hypersurface.
Ideas of \cite{wall:nets} can be used to show that the degree of $O_\lambda$ for generic $\lambda$ is 4 (and there is one orbit closure with degree 2). Either the Localization Formula or the idea of {\em projective} Thom polynomials (\cite[\S 6]{forms}) can be used to show that this degree is equal to the coefficient of $\Delta_{3331}$ in the Thom polynomial of $g_\lambda$. \qed
\section{Final remarks}\label{sec:final}
The results and examples of this paper may give the wrong impression that the Thom polynomials of all singularities have now been calculated. Although we indeed reached beyond the previously known Thom polynomials (and series), let us demonstrate the boundaries of our knowledge by some open problems.
We do not know how to calculate the Thom series of $A_n$ for $n>6$. For $n>9$ we do not even know the first Thom polynomial $\tp_{A_n}(0)$. We do not know the Thom series of the Thom-Boardmann class $\Sigma^{211}$. Are there closed formulas for classes of singularities, for example $\{A_n: n\geq 0\}$ or $\{I_{a,b}: a,b\geq 2\}$? We repeat a conjecture of the second author \cite{rrtp} in a slightly strengthened form:
\noindent {\bf Conjecture:} Every coefficient of the Thom polynomials $\tp_{A_n}(l)$---written as a linear combination of Chern monomials---is non-negative, and all coefficients of width at most $n$ are strictly positive. (By d-stability, the other coefficients are 0.)
In \cite[Sect. 8.7]{bsz06} this conjecture is verified for $n=3$ and 4.
\section{Introduction}
This paper considers online gradient descent and the online proximal-gradient methods for dynamic optimization and learning~\cite{popkov2005gradient, towfic2013distributed,bedi2018tracking,selvaratnam2018Numerical,yi2016tracking, mokhtari2016online,chang2021online,besbes2015non}. Because of their computational tractability, these are attractive first-order methods for solving a number of learning and optimization tasks where data points and functions are processed on-the-fly and without storage. Online gradient and proximal-gradient descent are powerful methods also in the context of online stochastic optimization~\cite{shames2020online,cao2020online}, stochastic learning~\cite{vlaski2020tracking,hallak2020regret}, and feedback-based optimization~\cite{hauswirth2021optimization,ospina2022feedback}.
We examine the performance of online gradient and proximal-gradient descent in the presence of \emph{inexact} gradient information, and when the cost to be minimized satisfies the \emph{Polyak-\L ojasiewicz (PL) condition}~\cite{karimi2016linear}. Formally, we consider an optimization problem of the form
\begin{align}
\label{eq:main-problem}
\min_{{\bf x} \in \mathbb{R}^n} F_t({\bf x}) := f_t({\bf x}) + g_t({\bf x})
\end{align}
where $t \in \mathbb{N}$ is the time index, $f_t: \mathcal{D} \rightarrow \mathbb{R}$ is a continuously differentiable function with a Lipschitz-continuous gradient at each time $t$, $\mathcal{D} \subseteq \mathbb{R}^n$ an open and non-empty convex set, and $g_t: \mathcal{D} \to \mathbb{R} \cup \{ +\infty \}$ is a closed, convex and proper function for all $t$, possibly not differentiable. Accordingly, we consider two main cases:
\noindent \emph{c1)} $g_t({\bf x}) \equiv 0$, ${\bf x} \mapsto f_t({\bf x})$ satisfies the PL inequality for all $t$, and an inexact gradient is available; and,
\noindent \emph{c2)} ${\bf x} \mapsto F_t({\bf x})$ satisfies the proximal-PL inequality~\cite{karimi2016linear}, and an inexact gradient is available.
We note that strong convexity implies the PL inequality. However, functions that satisfy the PL inequality are not necessarily convex; instead, they satisfy the notion of invexity~\cite{karimi2016linear}. Examples of cost functions that satisfy the PL inequality include least squares (LS) and logistic regression, with applications that span learning and feedback-based optimization. On the other hand, prime examples of costs that satisfy the proximal-PL condition are the LS cost with a sparsity-promoting regularizer and the LS cost with an indicator function for a polyhedral set (see, e.g.,~\cite{karimi2016linear} for additional examples).
The analysis is performed in terms of the instantaneous regret $r_t := F_t({\bf x}_t) - F_t^*$, where $F_t({\bf x}_t)$ is the cost achieved at time $t$ by the point ${\bf x}_t$ produced by the algorithm and $F_t^*$ is the optimal value function (that one would have achieved if the problem~\eqref{eq:main-problem} was solved to convergence at time $t$).
Motivating examples for considering stochastic gradient information are drawn from a variety of applications in learning and data-driven control; for example: i) settings where bandit and zeroth-order methods are utilized to estimate the gradient from (one or a few) functional evaluations~\cite{hajinezhad2017zeroth,tang2020distributed}; ii)~feedback-based optimization of networked systems, where errors in the gradient are due to measurement errors and asynchronous measurements~\cite{Bolognani_feedback_15,hauswirth2021optimization,ospina2022feedback}; and, iii)~online stochastic optimization settings~\cite{shames2020online,cao2020online}.
\emph{Prior works}. Online (projected) gradient descent
methods with exact gradient information have been investigated, and we refer to the
representative works~\cite{selvaratnam2018Numerical,mokhtari2016online,madden2021bounds} as well as
to references therein. A regret analysis was performed in, e.g.,~\cite{yi2016tracking,mokhtari2016online,chang2021online} (see also pertinent references therein), and the excess-risk was analyzed in~\cite{towfic2013distributed}. Inexact gradient information was considered in, e.g.,~\cite{besbes2015non,yi2016tracking}, where bounds in expectation on the regret incurred by the inexact online gradient descent were derived, and in~\cite{bedi2018tracking} where the distance from the unique trajectory of optimizers was bounded in expectation. Convergence results in expectation were provided in the context of online stochastic optimization in, e.g.,~\cite{shames2020online,cao2020online}. Convergence guarantees for online
stochastic gradient methods where drift and noise terms satisfy sub-Gaussian assumptions were provided
in~\cite{cutler2021stochastic}. Online projected gradient methods with sub-Weibull gradient error and a strongly convex cost are analyzed in~\cite{ospina2022feedback}.
We also acknowledge representative prior works on inexact and stochastic gradient and proximal-gradient methods for batch optimization in, e.g.,~\cite{schmidt2011convergence,devolder2014first,gannot2021frequency,rosasco2014convergence,atchade2017perturbed,moulines2011non,bertsekas1997gradient,li2020high} (see also references therein). In particular, almost sure convergence to a first-order stationary point is proved assuming only strong smoothness and a weak assumption on the noise in \cite{bertsekas1997gradient}; mean convergence under the PL inequality is shown in, e.g.,~\cite{khaled2020better}. High-probability convergence results assuming strong smoothness and norm sub-Gaussian noise were provided in, e.g.,~\cite{li2020high}, and in~\cite{Harvey2} for strongly convex functions in the non-smooth setting. Finally, we also acknowledge prior works that investigate geometric conditions implying linear convergence of proximal gradient algorithms~\cite{bolte2007lojasiewicz,attouch2013convergence,bolte2010characterizations}. These works are for static optimization.
\emph{Contributions}. We consider the cases \emph{c1)} and \emph{c2)} described above, and offer the following main contributions.
\emph{(i)} We provide new bounds for the instantaneous regret $r_t$ in \emph{expectation} and in \emph{high probability} for the inexact online gradient descent, when the cost satisfies the PL inequality. The high-probability convergence results are derived by adopting a sub-Weibull model~\cite{vladimirova2020sub} for the gradient error. We also provide an \emph{almost sure} result for the asymptotic behavior of the regret $r_t$.
\emph{(ii)} Similarly, we provide new bounds for the instantaneous regret $r_t$ in \emph{expectation} and in \emph{high probability} for the inexact online proximal-gradient descent method.
\emph{(iii)} For the case of static costs, our bounds provide contributions over~\cite{karimi2016linear,schmidt2011convergence,devolder2014first,gannot2021frequency,rosasco2014convergence,atchade2017perturbed,moulines2011non,bertsekas1997gradient,li2020high,khaled2020better} by considering a sub-Weibull model for the gradient error. In terms of bounds in expectation, this paper extends existing results for static optimization to an online setting where the cost changes over time.
To better highlight the merits of the bounds, it is important to mention that the sub-Weibull distribution allows one to consider a variety of gradient error models in a unified manner; in fact, the sub-Weibull class includes sub-Gaussian distributions and sub-exponential distributions as sub-cases, as well as random variables whose probability density function has a finite support~\cite{vershynin_high-dimensional_2018}. The bounds we derive can be customized to sub-Gaussian and sub-exponential distributions and to random variables with finite support by simply tuning the parameters of the sub-Weibull model. Furthermore,~\cite{bastianello2021stochastic} showed that intermittent updates can also be modeled using sub-Weibull random variables.
The rest of the paper is organized as follows. Section~\ref{sec:preliminaries} introduces relevant definitions and assumptions, and Section~\ref{sec:gradient} presents the main results for online gradient descent. Section~\ref{sec:prox-gradient} focuses on the online proximal-gradient method, and Section~\ref{sec:results} provides numerical results. Section~\ref{sec:conclusions} concludes the paper.
\section{Preliminaries}
\label{sec:preliminaries}
We start by introducing relevant definitions and assumptions that will be utilized throughout the paper\footnote{\emph{Notation}. Upper-case (lower-case) boldface letters will be used for matrices (column vectors); $(\cdot)^\top$ denotes transposition. For given column vectors ${\bf x}, {\bf y} \in \mathbb{R}^n$, $\langle {\bf x},{\bf y} \rangle$ denotes the inner product and $\|{\bf x}\| := \sqrt{\langle {\bf x},{\bf x} \rangle}$. Given a differentiable function $f: {\cal D} \rightarrow \mathbb{R}$, defined over a domain ${\cal D} \subseteq \mathbb{R}^n$ that is nonempty, $\nabla f ({\bf x})$ denotes the gradient of $f$ at ${\bf x}$ (taken to be a column vector).
$\mathcal{O}(\cdot)$ refers to the big-O notation, whereas $o(\cdot)$ refers to the little-o notation. For a given random variable $\xi \in \mathbb{R}$, $\mathbb{E}[\xi]$ denotes the expected value of $\xi$, and $\pr{\xi \leq \epsilon}$ denotes the probability of $\xi$ taking values smaller than or equal to $\epsilon$; furthermore, $\norm{\xi}_p := \mathbb{E}[|\xi|^p]^{1 / p}$, for any $p \geq 1$. Finally, $e$ will denote Euler's number.}.
\subsection{Modeling and Definitions}
We consider functions $\{f_t\}_{t \in \mathbb{N}}$ and $\{g_t\}_{t \in \mathbb{N}}$, defined over an open ball $\mathcal{D} := \{ {\bf x} \in \mathbb{R}^n: \|{\bf x}\| < r \}$ for some $r >0$, that satisfy the following assumptions.
\vspace{.1cm}
\begin{assumption}
\label{as:f}
The function ${\bf x} \mapsto f_t({\bf x})$ is continuously differentiable and has a Lipschitz-continuous gradient over $\mathcal{D}$ for all $t$; i.e., $\exists~ L > 0$ such that $\| \nabla f_t({\bf x}) - \nabla f_t({\bf y}) \| \leq L \|{\bf x} - {\bf y}\|$ for any ${\bf x}, {\bf y} \in {\cal D}$, for all $t$. \QEDB
\end{assumption}
\vspace{.1cm}
\begin{assumption}
\label{as:g}
For every $t \in \mathbb{N}$, the function ${\bf x} \mapsto g_t({\bf x})$ is convex, proper, and lower semi-continuous, possibly non-differentiable over $\mathcal{D}$. \QEDB
\end{assumption}
\vspace{.1cm}
Recall that the following inequality follows from the Lipschitz-continuity of the gradient of $f_t$:
\begin{align}
\label{eq:smooth_def}
f_t({\bf y}) \leq f_t({\bf x}) + \langle \nabla f_t({\bf x}), {\bf y} - {\bf x} \rangle + \frac{L}{2} \|{\bf y} - {\bf x}\|^2 \, ,
\end{align}
$\forall \, {\bf x}, {\bf y} \in {\cal D}$; this inequality will be utilized throughout the paper. Let ${\cal X}^*_t := \arg \min_{{\bf x} \in \mathbb{R}^n} F_t({\bf x})$ be the set of global minimizers of the problem~\eqref{eq:main-problem} at time $t$, and let $F_t^* := F_t({\bf x}_t^*)$, with ${\bf x}_t^* \in {\cal X}^*_t$.
The following is assumed.
\vspace{.1cm}
\begin{assumption}
\label{as:bounded_optimal}
The set ${\cal X}^*_t$ is non-empty for all $t$ and ${\cal X}^*_t \subset \mathcal{D}$; furthermore, $- \infty < F_t^*$ for all $t$. \QEDB
\end{assumption}
\vspace{.1cm}
The temporal variability of the problem~\eqref{eq:main-problem} could be measured based on how fast its optimal solutions or optimal value functions change; see, for example,~\cite{besbes2015non,yi2016tracking,chang2021online} and references therein. More precisely, one can consider the change in the optimal value function as:
\begin{align}
\label{eq:variability_optimal}
\sigma_t := |F_{t}^* - F_{t-1}^*| \, .
\end{align}
It will also be convenient to utilize the additional metrics $\tilde \phi_t({\bf x}) := |F_{t}({\bf x}) - F_{t-1}({\bf x})|$ and
\begin{align}
\label{eq:variability}
\phi_t & := \sup_{{\bf x} \in \mathcal{D}} \tilde \phi_t({\bf x}) \, .
\end{align}
For future developments, it is also convenient to define $\psi_t := \phi_t + \sigma_t$, $\tilde \psi_t := \tilde \phi_t + \sigma_t$, and $\bar{\psi} := \sup_{t \in \mathbb{N}} \psi_t$. These metrics will be utilized to characterize the convergence of the online gradient and proximal-gradient methods. Whenever $g_t \equiv 0$ (this will be the main setting of Section~\ref{sec:gradient}), we will use the notation $f_t^* = \min_{{\bf x} \in \mathcal{D}} f_t({\bf x})$ whenever convenient (in this case, it is clear that $F_t^* = f_t^*$); the definitions of $\sigma_t$, $\tilde \phi_t$, and $\phi_t$ remain unchanged. We also note that the case where $\sigma_t = 0$ and $\phi_t = 0$ for all $t$ corresponds to a static optimization problem (where the cost function does not change over time).
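As a concrete illustration of these metrics (a worked example we add for the reader; it is not taken from the cited references), let $g_t \equiv 0$ and consider the scalar cost $f_t(x) = \frac{1}{2}(x-b_t)^2$ on $\mathcal{D}=\{|x|<r\}$ with $|b_t|<r$. Then $f_t^* = 0$, so $\sigma_t = 0$, while
\begin{align*}
\tilde \phi_t(x) = |f_t(x)-f_{t-1}(x)| = |b_{t-1}-b_t| \, \Big| x - \frac{b_t+b_{t-1}}{2} \Big| \, ,
\end{align*}
so that $\phi_t \leq |b_t - b_{t-1}| \left( r + \frac{|b_t+b_{t-1}|}{2} \right)$: the variability metrics are driven by the drift of the data $b_t$.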
We next recall the definition of the PL inequality and its generalization to composite cost functions~\cite{karimi2016linear}.
\vspace{.1cm}
\begin{definition}[Polyak-\L ojasiewicz (PL) Inequality]
\label{as:def_PL}
A continuously differentiable function $f:\mathcal{D} \rightarrow \mathbb{R}$ satisfies the PL inequality over $\mathcal{D}$ if the following holds for some $\mu > 0$:
\begin{align}
\label{eq:pl}
2 \mu (f({\bf x}) - f^*) \leq \|\nabla f({\bf x})\|^2, \, \forall~{\bf x} \in \mathcal{D}
\end{align}
where $f^* := \min_{{\bf x} \in \mathcal{D}} f({\bf x})$ is the optimal value.
\end{definition}
\vspace{.1cm}
It is important to note that the PL inequality implies the quadratic growth condition $f({\bf x}) - f^* \geq \frac{\mu}{2} \|{\bf x} - {\bf x}^*\|^2$, where ${\bf x}^*$ is the projection of ${\bf x}$ onto the set of global minimizers~\cite{karimi2016linear}. As shown in~\cite{karimi2016linear}, strong convexity implies the PL inequality. However, functions that satisfy the PL inequality are not necessarily convex; instead, they satisfy the weaker notion of invexity.
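A standard example from~\cite{karimi2016linear} illustrates this point: the nonconvex function $f(x)=x^2+3\sin^2(x)$ satisfies the PL inequality~\eqref{eq:pl} (with $\mu = 1/32$, as reported there); its gradient $f'(x) = 2x + 3\sin(2x)$ vanishes only at the global minimizer $x^*=0$, even though $f$ itself is not convex.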
\vspace{.1cm}
\begin{definition}[Proximal-PL Condition]
\label{as:def_proxPL}
Let $f:\mathcal{D} \rightarrow \mathbb{R}$ be a continuously differentiable function and $g:\mathcal{D} \rightarrow \mathbb{R}$ be a convex function. The function $F({\bf x}) := f({\bf x}) + g({\bf x})$ satisfies the proximal-PL condition if the following holds:
\begin{align}
\label{eq:prox_pl}
2\mu (F({\bf x})-F^*) \leq \mathcal{A}_{g} ({\bf x},\xi),
\end{align}
for all ${\bf x} \in {\cal D}$ and for some $\mu > 0$, where $\xi > 0$ and
\begin{align}
\label{eq:defA}
\mathcal{A}_{g} ({\bf x},\xi) := -\frac{2}{\xi} \min\limits_{\bf y} \Big\{ \langle \nabla f({\bf x}), {\bf y}-{\bf x} \rangle + \frac{1}{2 \xi} \|{\bf y}-{\bf x}\|^2 \notag \\
+g({\bf y})-g({\bf x}) \Big\}.
\end{align}
\end{definition}
\subsection{Sub-Weibull random variables}
In this section, we introduce the definition of sub-Weibull random variable (rv), which will be utilized to model the errors incurred by the inexact online gradient methods.
\vspace{.1cm}
\begin{definition}[Sub-Weibull rv~\cite{vladimirova2020sub}]
\label{def:sub-weibull}
A random variable $X \in \mathbb{R}$ is sub-Weibull if $\exists \, \theta > 0$ such that (s.t.) one of the following conditions is satisfied:
\begin{enumerate}
\item[(i)] $\exists \,\, K_1 > 0$ s.t. ${\mathbb{P}}[|X| \geq \epsilon] \leq 2 e^{- \left( \epsilon / K_1 \right)^{1 / \theta} }$, $\forall \, \epsilon~>~0$.
\item[(ii)] $\exists \,\, K_2 > 0$ s.t. $\|X\|_k \leq K_2 k^\theta$, $\forall \, k \geq 1$. \hfill $\Box$
\end{enumerate}
\end{definition}
The parameters $K_1, K_2$ differ by a constant that depends on $\theta$. In particular, if (ii) holds with parameter $K_2$, then (i) holds with $K_1 = \left( 2 e / \theta \right)^\theta K_2$. In this paper, we use the short-hand notation $X \sim \mathrm{subW}(\theta, K)$ to indicate that $X$ is a sub-Weibull rv according to Definition~\ref{def:sub-weibull}(ii).
The coefficient $\theta$ is related to the rate of decay of the tails; in particular, the tails become heavier as the parameter $\theta$ grows larger. We note that the sub-Weibull class includes sub-Gaussian and sub-exponential rvs as sub-cases; in particular, if $\theta = 1/2$ and $\theta = 1$ we have sub-Gaussian and sub-exponential rvs, respectively. Furthermore, if a rv has a distribution with finite support, it belongs to the sub-Gaussian class (by Hoeffding's inequality \cite[Theorem 2.2.6]{vershynin_high-dimensional_2018}) and, thus, to the sub-Weibull class.
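As a quick numerical illustration (our own sketch; the parameter choices are hypothetical), one can check the moment characterization of Definition~\ref{def:sub-weibull}(ii) for a standard Gaussian, which is sub-Weibull with $\theta = 1/2$ and $K=1$:
\begin{verbatim}
import numpy as np

# Empirical check of ||X||_k <= K * k^theta for X ~ N(0,1),
# i.e., the sub-Gaussian case theta = 1/2 of Definition 3(ii).
rng = np.random.default_rng(0)
X = rng.standard_normal(10**6)
theta, K = 0.5, 1.0   # hypothetical parameters for this check
for k in [1, 2, 4, 8, 16]:
    norm_k = np.mean(np.abs(X)**k)**(1.0 / k)  # empirical ||X||_k
    print(k, norm_k, K * k**theta, norm_k <= K * k**theta)
\end{verbatim}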
\vspace{.1cm}
The following lemmas will be utilized throughout the paper to derive the main results.
\vspace{.1cm}
\begin{lemma} (\textit{Closure of sub-Weibull class}~\cite{bastianello2021stochastic}) Let $X_i \sim \mathrm{subW}(\theta_i, K_i)$, $i = 1,2$, based on Definition~\ref{def:sub-weibull}(ii).
\begin{enumerate}
\item[(a)] \emph{Product by scalar:} Let $a \in {\mathbb{R}}$, then $a X_i \sim \mathrm{subW}(\theta_i, |a| K_i)$.
\item[(b)] \emph{Sum by scalar:} Let $a \in {\mathbb{R}}$, then $a + X_i \sim \mathrm{subW}(\theta_i, |a| + K_i)$.
\item[(c)] \emph{Sum:} Let $\{X_i, i = 1,2\}$ be possibly dependent; then, $X_1 + X_2 \sim \mathrm{subW}(\max\{ \theta_1, \theta_2 \}, K_1 + K_2)$.
\end{enumerate} \label{sec:closure}
\end{lemma}
\vspace{.1cm}
\begin{lemma} (\textit{Inclusion}~\cite{vladimirova2020sub})
Let $X \sim \mathrm{subW}(\theta, K)$ for some $\theta, K > 0$. Let $\theta', K'$ be s.t. $\theta' \geq \theta$, $K' \geq K$. Then, $X \sim \mathrm{subW}(\theta', K')$. \hfill $\Box$ \label{sec:inclusion}
\end{lemma}
\vspace{.1cm}
\begin{lemma} (\textit{Powers of sub-Weibull rvs}~\cite{bastianello2021stochastic})
Let $X \sim \mathrm{subW}(\theta, K)$ for some $\theta, K > 0$, and let $a > 0$. Then, $X^a \sim \mathrm{subW}(a \theta, K^a \max\{ 1, a^{a \theta} \})$. \hfill $\Box$ \label{sec:power}
\end{lemma}
\vspace{.1cm}
We note that the definition of sub-Weibull rvs and their
properties do not require their mean to be zero. We conclude this section with the following high probability bound for a sub-Weibull rv.
\vspace{.1cm}
\begin{lemma}[High probability bound]\label{lem:high-probability-bound}
Let $X \sim \mathrm{subW}(\theta, K)$ according to Definition \ref{def:sub-weibull}(ii), for some $\theta, K > 0$. Then, for any $\delta \in (0, 1)$, the bound:
\begin{equation}
| X | \leq \ K \log^\theta \left(2 \delta^{-1} \right) \left( \frac{2e}{\theta} \right)^\theta
\end{equation}
holds with probability $1 - \delta$.
\hfill $\Box$
\end{lemma}
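For completeness, we sketch the short argument: the lemma follows directly from Definition~\ref{def:sub-weibull}. By the tail bound (i) with $K_1=\left(2e/\theta\right)^\theta K$, setting the right-hand side $2e^{-(\epsilon/K_1)^{1/\theta}}$ equal to $\delta$ and solving for $\epsilon$ yields
\begin{align*}
\epsilon = K_1 \log^\theta\left(2\delta^{-1}\right) = K \log^\theta\left(2\delta^{-1}\right)\left(\frac{2e}{\theta}\right)^\theta ,
\end{align*}
so that $\pr{|X|\geq \epsilon} \leq \delta$, which is the claim.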
\section{Online Stochastic Gradient Descent}
\label{sec:gradient}
We start by considering the case where $g_t \equiv 0$ for all $t$; accordingly, the problem~\eqref{eq:main-problem} reduces here to:
\begin{align}
\label{eq:problem-f}
\min_{{\bf x}} f_t({\bf x})
\end{align}
where we recall that $t \in \mathbb{N}$ is the time index.
We consider the following \emph{inexact online gradient descent} (OGD):
\begin{equation}
\label{eq:iogd}
{\bf x}_{t+1} = {\bf x}_{t} - \eta \, {\bf v}_t
\end{equation}
where $\eta > 0$ is a given step-size, ${\bf v}_t := \nabla f_t({\bf x}_{t}) + {\bf e}_t$ is the approximate gradient, and ${\bf e}_t$ is a stochastic error. We are interested in studying the performance of~\eqref{eq:iogd} when the function $f_t$ satisfies the PL inequality~\eqref{eq:pl}, and the error $\|{\bf e}_t\|$ follows a sub-Weibull distribution. A discussion on the sub-Weibull model as well as the PL inequality in the context of problems in learning and feedback-based optimization is provided in Section~\ref{sec:errormodel}. The main convergence results are presented next.
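Before turning to the analysis, the following minimal numerical sketch of the update~\eqref{eq:iogd} may be useful; it uses an illustrative time-varying LS cost, and all parameter values (dimensions, drift, noise level) are our hypothetical choices rather than prescriptions of the paper:
\begin{verbatim}
import numpy as np

# Sketch of inexact OGD (6): x_{t+1} = x_t - eta*(grad f_t(x_t) + e_t),
# on the drifting cost f_t(x) = 0.5*||A x - b_t||^2 (illustrative choices).
rng = np.random.default_rng(1)
n, d, T = 10, 20, 200
A = rng.standard_normal((d, n))
L = np.linalg.norm(A, 2)**2            # Lipschitz constant of grad f_t
eta = 1.0 / L                          # step size used in Theorem 1
x, b = np.zeros(n), A @ np.ones(n)
for t in range(T):
    b = b + 0.01 * rng.standard_normal(d)       # drifting data b_t
    grad = A.T @ (A @ x - b)                    # exact gradient at x_t
    e = 1e-3 * rng.standard_normal(n)           # stochastic error e_t
    x = x - eta * (grad + e)                    # update (6)
xs = np.linalg.lstsq(A, b, rcond=None)[0]       # minimizer of the last f_t
r_T = 0.5*np.linalg.norm(A @ x - b)**2 - 0.5*np.linalg.norm(A @ xs - b)**2
print("final instantaneous regret:", r_T)
\end{verbatim}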
\subsection{Convergence in expectation and in high probability}
\label{sec:gradient_conv}
Since $g_t \equiv 0$, the instantaneous regret at time $t$ boils down here to $r_t = f_t({\bf x}_t) - f_t^*$. Throughout this section, we assume that the gradient error has a sub-Weibull distribution, as formalized next.
\vspace{.1cm}
\begin{assumption}[Sub-Weibull norm gradient error]
\label{as:sub-weibull}
The error is distributed as $\|\mathbf{e}_t\| \sim \mathrm{subW}(\theta, K_t)$, for some $\theta > 0$ and $K_t > 0$.
\end{assumption}
\vspace{.1cm}
We note that if each individual entry of the random vector $\mathbf{e}_t$ follows a sub-Weibull distribution, then $\|{\bf e}_t\|$ is a sub-Weibull rv. This can be proved by using~\cite[Lemma 3.4]{bastianello2021stochastic} and part (c) of Lemma~\ref{sec:closure}. In the following, we state the main results concerning the convergence of~\eqref{eq:iogd}.
\vspace{.1cm}
\begin{theorem}[Convergence of the stochastic OGD]
\label{thm:regret}
Let Assumptions~\ref{as:f},~\ref{as:bounded_optimal}, and~\ref{as:sub-weibull} hold, and assume that the map ${\bf x} \mapsto f_t({\bf x})$ satisfies the PL inequality, for some $\mu > 0$, for all $t$. Let $\{{\bf x}_i\}_{i = 0}^t$ be a sequence generated by~\eqref{eq:iogd} with $\eta = 1/L$. The following bounds hold for \eqref{eq:iogd}:
\begin{enumerate}
\item For all $t\in \mathbb{N}$:
\begin{align}
\label{eq:expected_regret}
\hspace{-.4cm} \mathbb{E}[r_t] \leq \zeta^t r_0 + \sum_{\tau = 1}^t \zeta^{t - \tau} \left(\frac{1}{2L} \mathbb{E}[\|{\bf e}_{\tau-1}\|^2] + \psi_\tau \right)
\end{align}
where $\zeta := (1-\frac{\mu}{L})$.
\item For any $\delta\in(0,1)$, the following bound holds with probability $1-\delta$:
\begin{align}
\label{eq:high_regret}
\hspace{-.4cm} r_t \leq h(\theta, \delta) \left(\zeta^t r_0 + \sum_{\tau = 1}^t \zeta^{t - \tau} \left( \frac{4^\theta}{2L} K_{\tau-1}^2 + \psi_\tau \right) \right)
\end{align}
where
$ h(\theta, \delta) := \log^{2 \theta}(2 \delta^{-1}) \left(\frac{e}{\theta} \right)^{2 \theta}$.
\end{enumerate}
\end{theorem}
\vspace{.1cm}
\begin{corollary}[Asymptotic convergence]
\label{cor:asymp_regret}
Under the same assumptions of Theorem~\ref{thm:regret}, it holds that
\begin{align}
\limsup_{t \rightarrow \infty} r_t \leq \frac{1}{2 \mu} \bar{e} + \frac{L}{\mu} \bar \psi \,\,\,\, \textrm{a.s.}
\end{align}
where $\bar{e} = \sup_t \{\mathbb{E}[\|{\bf e}_{t}\|^2]\}$.
\end{corollary}
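The constants in the corollary can be traced directly from~\eqref{eq:expected_regret}: bounding $\mathbb{E}[\|{\bf e}_\tau\|^2] \leq \bar e$ and $\psi_\tau \leq \bar \psi$ and summing the geometric series gives
\begin{align*}
\sum_{\tau=1}^{t} \zeta^{t-\tau}\left(\frac{\bar e}{2L} + \bar\psi\right) \leq \frac{1}{1-\zeta}\left(\frac{\bar e}{2L} + \bar\psi\right) = \frac{L}{\mu}\left(\frac{\bar e}{2L} + \bar\psi\right) = \frac{\bar e}{2\mu} + \frac{L}{\mu}\bar\psi \, ,
\end{align*}
which is the asymptotic error ball above.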
\vspace{.1cm}
Before providing examples of applications and the proof of the results, some remarks are in order.
\vspace{.1cm}
\begin{remark}[Static optimization~\cite{karimi2016linear}]
\label{rem:static}
When the optimization problem~\eqref{eq:problem-f} is time-invariant (i.e., $f_t(x) = f(x)$ for all $t \in \mathbb{N}$), then,~\eqref{eq:expected_regret} is similar to~\cite[Thm.~4]{karimi2016linear} (where a different step-size was used). However, relative to~\cite{karimi2016linear}, we provide the following bound in high probability
\begin{align}
\label{eq:high_regret2}
\hspace{-.4cm} r_t \leq h(\theta, \delta) \left(\zeta^t r_0 + \sum_{\tau = 1}^t \zeta^{t - \tau} \frac{4^\theta}{2L} K_{\tau-1}^2 \right)
\end{align}
which holds with probability $1 - \delta$ for any $\delta \in (0,1)$; this bound can be derived from~\eqref{eq:high_regret} by setting $\psi_\tau = 0$ for all $\tau = 1, \ldots, t$.
\hfill \QEDB
\end{remark}
\vspace{.1cm}
\begin{remark}[Alternative bound in expectation]
\label{rem:tighterBound}
An alternative bound in expectation can be expressed as
\begin{align}
\label{eq:expected_regret2}
\mathbb{E}[r_t] \leq \zeta^t r_0 + \sum_{\tau = 1}^t \zeta^{t - \tau} \left(\frac{1}{2L} \mathbb{E}[\|{\bf e}_{\tau-1}\|^2] + \mathbb{E}[\tilde{\psi}_\tau] \right)
\end{align}
where $\tilde{\psi}_\tau = \sigma_\tau + \tilde{\phi}_\tau$, and $\tilde{\phi}_\tau := |F_{\tau}({\bf x}_\tau) - F_{\tau-1}({\bf x}_\tau)|$ (where the expectation $\mathbb{E}[\tilde{\psi}_\tau] $ is taken with respect to the error ${\bf e}_{\tau-1}$, conditioned on a filtration). This leads to a tighter bound relative to~\eqref{eq:expected_regret}.
\hfill \QEDB
\end{remark}
\vspace{.1cm}
\begin{remark}[Markov's inequality]
An alternative high probability bound
can be obtained by using~\eqref{eq:expected_regret} and Markov's inequality. However, the resulting bound would scale with $\delta^{-1}$, whereas our bound depends on $\delta$ only through $\log(\delta^{-1})$.
\hfill \QEDB
\end{remark}
\subsection{Remarks on applications and error model}
\label{sec:errormodel}
In this section, we provide some examples of applications that are relevant to our setting.
\vspace{.1cm}
\begin{example}[Online least-squares]
A function $f_t({\bf x}) = h_t({\bf A}_t {\bf x})$, with $h_t: \mathbb{R}^d \rightarrow \mathbb{R}$ a $\nu$-strongly convex function and ${\bf A}_t \in \mathbb{R}^{d \times n}$ a given matrix, satisfies the PL inequality~\cite{karimi2016linear}. This class includes the least-squares (LS) problem by setting $f_t({\bf x}) = \frac{1}{2} \|{\bf A}_t {\bf x} - {\bf b}_t\|^2$. Note that, when the matrix ${\bf A}_t$ is not full-column rank, one can utilize the results of this paper to establish linear convergence of OGD for the under-determined LS problem.
\end{example}
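The following short numerical check (our illustration; the rank-deficient design and tolerances are hypothetical choices) verifies the PL inequality~\eqref{eq:pl} for an under-determined LS cost, with $\mu$ equal to the smallest nonzero eigenvalue of ${\bf A}^\top {\bf A}$:
\begin{verbatim}
import numpy as np

# Check 2*mu*(f(x)-f*) <= ||grad f(x)||^2 for f(x)=0.5||Ax-b||^2
# with a rank-deficient A (so f is convex but not strongly convex).
rng = np.random.default_rng(2)
B, C = rng.standard_normal((20, 6)), rng.standard_normal((6, 10))
A = B @ C                                   # rank-6 design matrix
b = rng.standard_normal(20)
evals = np.linalg.eigvalsh(A.T @ A)
mu = min(ev for ev in evals if ev > 1e-6)   # smallest nonzero eigenvalue
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
f_star = 0.5 * np.linalg.norm(A @ x_star - b)**2
for _ in range(5):
    x = rng.standard_normal(10)
    f = 0.5 * np.linalg.norm(A @ x - b)**2
    g = A.T @ (A @ x - b)
    assert 2 * mu * (f - f_star) <= g @ g + 1e-8
\end{verbatim}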
\vspace{.1cm}
\begin{example}[Online Logistic regression]
The logistic regression cost $f_t({\bf x}) = \sum_{i = 1}^d \log(1 + \exp(b_{i,t} {\bf a}_{i,t}^\top {\bf x}))$, with $b_{i,t} \in \mathbb{R}$ and ${\bf a}_{i,t} \in \mathbb{R}^n$, satisfies the PL inequality~\cite{karimi2016linear}.
\end{example}
\vspace{.1cm}
\begin{example}[Optimization of LTI systems]
\label{ex:feedbackoptimization}
Consider the algebraic representation of a stable linear time-invariant system ${\bf y}_t = {\bf G} {\bf x} + {\bf H} {\bf w}_t$, where ${\bf x}$ is the vector of controllable inputs and ${\bf w}_t$ are unknown exogenous disturbances. Suppose that $f_t({\bf x}) = \frac{1}{2}\|{\bf G} {\bf x} + {\bf H} {\bf w}_t - \bar{{\bf y}}_t\|^2$ with $\bar{{\bf y}}_t$ a time-varying reference. Since ${\bf w}_t$ is unknown, one way to approximate $\nabla f_t({\bf x}_t)$ is ${\bf v}_t = {\bf G}^\top (\hat{{\bf y}}_t - \bar{{\bf y}}_t)$, where $\hat{{\bf y}}_t$ is a (noisy) measurement of the output ${\bf y}_t$~\cite{Bolognani_feedback_15,ospina2022feedback}.
\end{example}
\vspace{.1cm}
\begin{example}[Training of neural networks]
We refer the reader to recent discussions on the PL inequality in the context of training of neural networks in, e.g.,~\cite{li2017convergence}. The proposed framework may capture the case where stochastic gradient methods are utilized to train a neural network in an online fashion.
\end{example}
\vspace{.1cm}
In terms of gradient information, the error ${\bf e}_t$ may arise in the following (application-specific) scenarios:
\noindent \emph{(i)} A subset of the data points available at time $t$ are utilized to compute the gradient; for instance, in the Examples 1-2, one may utilize the data points $\{{\bf a}_{i,t}, b_{i,t}\}_{i \in {\cal S}_t}$, with $|{\cal S}_t| < d$.
\noindent \emph{(ii)} Bandit and zeroth-order methods are utilized to estimate the gradient~\cite{hajinezhad2017zeroth,tang2020distributed} (a minimal sketch is given after this list).
\noindent \emph{(iii)} In an online stochastic optimization setting~\cite{shames2020online}, i.e. when $f_t({\bf x}) = \ev{\ell_t({\bf x},{\bf z})}$ for a given loss $\ell_t: \mathbb{R}^n \times \mathbb{R}^d \rightarrow \mathbb{R}$ and a random variable ${\bf z}$, the approximate gradient ${\bf v}_t$ may be computed using a single sample or a mini-batch.
\noindent \emph{(iv)} In measurement-based algorithms as in Example~\ref{ex:feedbackoptimization}, measurement errors and asynchronous measurements render the computation of the gradient inexact.
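As a minimal sketch of scenario \emph{(ii)} (our illustration; the estimator below is the standard two-point random-direction scheme, and all constants are hypothetical), the gradient can be estimated from function evaluations only, and the resulting ${\bf e}_t = {\bf v}_t - \nabla f_t({\bf x}_t)$ plays the role of the gradient error in~\eqref{eq:iogd}:
\begin{verbatim}
import numpy as np

# Two-point zeroth-order gradient estimate along a random direction u:
# v = n/(2*delta) * (f(x + delta*u) - f(x - delta*u)) * u.
def zo_gradient(f, x, delta, rng):
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)                   # random unit direction
    return x.size * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u

rng = np.random.default_rng(3)
f = lambda x: 0.5 * np.dot(x, x)             # toy smooth cost, grad f(x) = x
x = np.ones(5)
v = zo_gradient(f, x, 1e-2, rng)             # inexact gradient v_t
print("gradient error norm:", np.linalg.norm(v - x))
\end{verbatim}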
\subsection{Proofs}
\label{sec:proof}
In this section, we present the proof of Theorem~\ref{thm:regret}.
We start by using~\eqref{eq:smooth_def} with ${\bf y} = {\bf x}_{t+1}$ and ${\bf x} = {\bf x}_{t}$, where ${\bf x}_{t}$ and ${\bf x}_{t+1}$ are generated by~\eqref{eq:iogd}; this allows us to obtain:
\begin{subequations}
\begin{align}
f_{t}({\bf x}_{t+1}) & \leq f_t({\bf x}_t) -\frac{1}{L} \langle \nabla f_t({\bf x}_t),{\bf v}_t \rangle + \frac{1}{2L} \| {\bf v}_t \|^2 \\
&=f_t({\bf x}_t) -\frac{1}{L} \langle \nabla f_t({\bf x}_t), \nabla f_t({\bf x}_t) + \mathbf{e}_t \rangle \notag \\
&\quad+ \frac{1}{2L} \|\nabla f_t({\bf x}_t)+\mathbf{e}_t \|^2 \\
&=f_t({\bf x}_t)-\frac{1}{L} \|\nabla f_t({\bf x}_t)\|^2 -\frac{1}{L} \langle \nabla f_t({\bf x}_t), \mathbf{e}_t \rangle \notag \\
&\quad+ \frac{1}{2L} \|\nabla f_t({\bf x}_t)+\mathbf{e}_t\|^2 \\
&=f_t({\bf x}_t)-\frac{1}{L} \|\nabla f_t({\bf x}_t)\|^2 -\frac{1}{L} \langle \nabla f_t({\bf x}_t), \mathbf{e}_t \rangle \notag \\
&\quad+ \frac{1}{2L} \langle \nabla f_t({\bf x}_t)+\mathbf{e}_t, \nabla f_t({\bf x}_t)+\mathbf{e}_t \rangle \\
&=f_t({\bf x}_t)-\frac{1}{L} \|\nabla f_t({\bf x}_t)\|^2 -\frac{1}{L} \langle \nabla f_t({\bf x}_t), \mathbf{e}_t \rangle \notag \\
&\quad+ \frac{1}{2L}(\| \nabla f_t({\bf x}_t)\|^2+ 2 \langle \nabla f_t({\bf x}_t),\mathbf{e}_t \rangle +\|\mathbf{e}_t\|^2)\\
&=f_t({\bf x}_t) - \frac{1}{2L} \|\nabla f_t({\bf x}_t)\|^2+ \frac{1}{2L} \|\mathbf{e}_t\|^2 \, .
\end{align}
\end{subequations}
Next, adding $- f_t^*$ on both sides and using the PL inequality~\eqref{eq:pl}, one gets:
\begin{align}
&f_t({\bf x}_{t+1})-f_t^* \leq - \frac{1}{2L} \|\nabla f_t({\bf x}_t)\|^2 + \frac{1}{2L} \|\mathbf{e}_t\|^2+f_t({\bf x}_t)-f_t^* \notag \\
& \leq -\frac{\mu}{L}(f_t({\bf x}_t)-f_t^*)+f_t({\bf x}_t)-f_t^* + \frac{1}{2L}\|\mathbf{e}_t\|^2 \, .
\end{align}
Next, adding $-f_{t+1}^*$ and $f_{t+1}({\bf x}_{t+1})$ on both sides, using the definition of regret $r_t$, and applying the definitions of $\zeta$ and $\psi_t$ we obtain the stochastic inequality
$$
r_{t} \leq \zeta \, r_{t-1} + \frac{1}{2L} \|{\bf e}_{t-1}\|^2 + \psi_t , $$
which holds almost surely. Unraveling, we get
\begin{align}
\label{eq:randombound}
r_t \leq \kappa_t +\frac{1}{2L}\sum_{i=1}^{t}\zeta^{t-i}\|\mathbf{e}_{i-1}\|^2
\end{align}
where $\kappa_t := \zeta^t r_0 + \sum_{i=1}^{t} \zeta^{t-i} \psi_i$ for brevity. Taking the expectation on both sides, we get~\eqref{eq:expected_regret}.
For the high-probability bound~\eqref{eq:high_regret},
recall that $\|\mathbf{e}_i\| \sim \mathrm{subW}(\theta, K_i)$; by Lemma~\ref{sec:power}, setting $a = 2$ we get that $\|\mathbf{e}_i\|^2$ is a sub-Weibull rv and, in particular, $\|\mathbf{e}_i\|^2 \sim \mathrm{subW}(2 \theta, 4^\theta K_i^2)$. Next, using the closure properties (a), (b), and (c) in Lemma~\ref{sec:closure} and the fact that $\zeta > 0$, we have that the right-hand-side of~\eqref{eq:randombound} is a sub-Weibull rv; in particular,
\begin{align}
\label{eq:errortotal}
\kappa_t + \frac{1}{2L} \sum_{i=1}^{t}\zeta^{t-i}\|\mathbf{e}_{i-1}\|^2 \sim \mathrm{subW} \left(2 \theta, K^\prime \right) \, .
\end{align}
where
$$
K^\prime = \kappa_t + \frac{4^\theta}{2L} \sum_{i=1}^{t}\zeta^{t-i} K_{i-1}^2
$$
Using Lemma~\ref{lem:high-probability-bound}, the high-probability bound~\eqref{eq:high_regret} follows.
The proof of Corollary~\ref{cor:asymp_regret} follows similar steps as in~\cite[Corollary~4.8]{bastianello2021stochastic}, and is omitted.
\section{Stochastic Proximal-Gradient Method}
\label{sec:prox-gradient}
We now turn the attention to the time-varying problem~\eqref{eq:main-problem}, with the cost satisfying the Assumptions~\ref{as:f}-\ref{as:bounded_optimal}. Throughout this section, we further assume that the cost function $F_t({\bf x})$ satisfies the proximal-PL inequality~\eqref{eq:prox_pl}, for a given $\mu > 0$. As discussed in~\cite{karimi2016linear}, an important example of cost satisfying the proximal-PL inequality is the $\ell_1$-regularized least squares problem; additional examples of costs include (see the discussion in~\cite[Appendix F]{karimi2016linear}):
\begin{enumerate}
\item $F_t({\bf x}) = f_t({\bf A} {\bf x}) + g_t({\bf x})$, with $f_t$ strongly convex, $g_t$ the indicator function for a polyhedral set, and ${\bf A}$ a given matrix.
\item The case where $f_t$ is convex, and $F_t$ satisfies the quadratic growth condition.
\item The case where $F_t$ satisfies the Kurdyka-\L ojasiewicz inequality or the proximal exponential bound.
\end{enumerate}
Consider then the stochastic online proximal-gradient method (OPGM), which involves the following step:
\begin{align}
\label{eq:opgm}
{\bf x}_{t+1} = \mathrm{prox}_{\frac{1}{L} g_t} \left\{{\bf x}_t - \frac{1}{L} {\bf v}_t \right\} \, , \,\,\,\, t \in \mathbb{N}
\end{align}
where ${\bf v}_t$ is again an estimate of $\nabla f_t({\bf x}_t)$, $\mathrm{prox}_{\frac{1}{L} g_t}: \mathbb{R}^n \rightarrow \mathbb{R}^n$ denotes the proximal operator, and the step-size is taken to be $1/L$.
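Since the proximal operator of $\frac{1}{L}\lambda\|\cdot\|_1$ is the soft-thresholding map, the step~\eqref{eq:opgm} admits the following minimal sketch for an $\ell_1$-regularized LS cost (our illustration; all parameter values are hypothetical):
\begin{verbatim}
import numpy as np

# Sketch of the inexact OPGM step (19) for
# F_t(x) = 0.5*||A x - b_t||^2 + lam*||x||_1.
def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(4)
n, d, lam = 10, 20, 0.1
A = rng.standard_normal((d, n))
L = np.linalg.norm(A, 2)**2
x = np.zeros(n)
for t in range(100):
    b = rng.standard_normal(d)                          # data at time t
    v = A.T @ (A @ x - b) + 1e-3*rng.standard_normal(n) # inexact gradient v_t
    x = soft_threshold(x - v / L, lam / L)              # prox_{(1/L) g_t}
\end{verbatim}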
We are now interested in analyzing the behavior of~\eqref{eq:opgm} in terms of the regret $r_t = F_t({\bf x}_t) - F_t^*$, where we recall that $F_t^*$ is the optimal value, when the function $F_t$ satisfies the proximal-PL inequality and the error $\|{\bf e}_t\|$ follows a sub-Weibull distribution. The main convergence result for~\eqref{eq:opgm} is stated next.
\vspace{.1cm}
\begin{theorem}[Convergence of the stochastic OPGM]
\label{thm:regretOPGM}
Let Assumptions~\ref{as:f}--\ref{as:sub-weibull} hold. Assume further that the function ${\bf x} \mapsto F_t({\bf x})$ satisfies the proximal-PL inequality for some $\mu > 0$, for all $t$. Let $\{{\bf x}_i\}_{i = 0}^t$ be a sequence generated by~\eqref{eq:opgm}. Then:
\begin{enumerate}
\item For all $t\in \mathbb{N}$:
\begin{align}
\label{eq:expected_regretOPGM}
\hspace{-.5cm} \mathbb{E}[r_t] \leq \zeta^t r_0 + \sum_{\tau = 1}^t \zeta^{t - \tau} \left( 2 D \mathbb{E}[\|{\bf e}_{\tau-1}\|] + \psi_\tau \right)
\end{align}
where $\zeta = (1-\frac{\mu}{L})$ and $D$ is the diameter of $\mathcal{D}$.
\item If $\delta\in(0,1)$, then with probability $1-\delta$:
\begin{align}
\label{eq:high_regretOPGM}
\hspace{-.5cm} r_t \leq h_p(\theta, \delta) \left(\zeta^t r_0 + \sum_{\tau = 1}^t \zeta^{t - \tau} \left( 2D K_{\tau-1} + \psi_\tau \right) \right)
\end{align}
where
$ h_p(\theta, \delta) := \log^{\theta}(2 \delta^{-1}) \left(\frac{2e}{\theta} \right)^{\theta}$.
\end{enumerate}
\end{theorem}
\vspace{.1cm}
A result for the asymptotic convergence of the OPGM similar to Corollary~\ref{cor:asymp_regret} can be derived too, but it is omitted to avoid repetitive arguments. Similar considerations as in Remark~\ref{rem:static} can also be drawn.
We note that when $g_t$ is the indicator function for a bounded polyhedron, the constant $D$ can be replaced by the diameter of the polyhedron. We also note that tighter bounds could be derived by introducing a filtered probability space (this will be clear in the proof, where the inner product between the iterates and the gradient error appears); however, this is left as a future extension.
To outline the proof of the theorem, we first note that step~\eqref{eq:opgm} is equivalent to
\begin{align}
\label{eq:opgm2}
{\bf x}_{t+1} =\arg \min_{{\bf x} } \langle {\bf v}_t, {\bf x} -{\bf x}_t \rangle & + \frac{L}{2} \|{\bf x} -{\bf x}_t\|^2 +g_t({\bf x})-g_t({\bf x}_t) .
\end{align}
We also recall the definition of $\mathcal{A}_{g_t}({\bf x}_t,1/L)$ in~\eqref{eq:defA}, and define $\tilde{\mathcal{A}}_{g_t}({\bf x}_t,1/L)$ as:
\begin{align}
\label{eq:tildeA}
\tilde{\mathcal{A}}_{g_t}({\bf x}_t,1/L)&:=-2L\min\limits_{{\bf y}} \{\langle {\bf v}_t,{\bf y}-{\bf x}_t \rangle+\frac{L}{2} \|{\bf y}-{\bf x}_t\|^2 \nonumber \\
& \hspace{2.8cm} +g_t({\bf y})-g_t({\bf x}_t) \} \, .
\end{align}
Lastly, we recall that $\mathbf{e}_t \in \mathbb{R}^n$ is the gradient error, i.e., ${\bf v}_t=\nabla f_t({\bf x}_t)+\mathbf{e}_t$, and that $\|{\bf e}_t\| \sim \mathrm{subW}(\theta, K_t)$.
\vspace{.1cm}
\emph{Proof of Theorem~\ref{thm:regretOPGM}.} We start by recalling that $F_{t+1}({\bf x}_{t+1})=f_{t+1}({\bf x}_{t+1})+g_{t+1}({\bf x}_{t+1})$; adding and subtracting $F_t({\bf x}_{t+1})$ on the right-hand-side, and using the definition~\eqref{eq:variability}, we get
\begin{align}
& F_{t+1}({\bf x}_{t+1}) \leq \phi_{t+1} + f_t({\bf x}_{t+1}) +g_t({\bf x}_{t+1})+g_t({\bf x}_t)-g_t({\bf x}_t) \nonumber \\
&\leq \phi_{t+1}+ f_t({\bf x}_t)+\langle \nabla f_t({\bf x}_t),{\bf x}_{t+1}-{\bf x}_t \rangle \nonumber \\
&\quad+\frac{L}{2} \|{\bf x}_{t+1}-{\bf x}_t\|^2 +g_t({\bf x}_{t+1})+g_t({\bf x}_t)-g_t({\bf x}_t)
\end{align}
where we used~\eqref{eq:smooth_def} in the last step. Next, we add and subtract ${\bf e}_t$ in the inner product and use the definition ${\bf v}_t=\nabla f_t({\bf x}_t)+\mathbf{e}_t$ to obtain:
\begin{align}
F_{t+1}({\bf x}_{t+1}) & \nonumber \\
& \hspace{-1.1cm} \leq F_t({\bf x}_t)+ \langle {\bf v}_t, {\bf x}_{t+1}-{\bf x}_t \rangle+\frac{L}{2} \|{\bf x}_{t+1}-{\bf x}_t\|^2 \nonumber \\
& \hspace{-.7cm} +g_t({\bf x}_{t+1})-g_t({\bf x}_t) -\langle \mathbf{e}_t,{\bf x}_{t+1}-{\bf x}_t \rangle + \phi_{t+1} \\
& \hspace{-1.1cm} \leq F_t({\bf x}_t)-\frac{1}{2L} \tilde{\mathcal{A}}_{g_t}({\bf x}_t,1/L)+\phi_{t+1} -\langle \mathbf{e}_t,{\bf x}_{t+1}-{\bf x}_t \rangle
\end{align}
where we used the definition~\eqref{eq:tildeA}. Adding and subtracting $\mathcal{A}_{g_t}({\bf x}_t,1/L)$ on the right-hand-side, we get
\begin{align}
F_{t+1}({\bf x}_{t+1}) &\leq F_t({\bf x}_t)+\frac{1}{2L} |\mathcal{A}_{g_t}({\bf x}_t,1/L) - \tilde{\mathcal{A}}_{g_t}({\bf x}_t,1/L)| \nonumber \\
& \hspace{-.8cm} -\frac{1}{2L} \mathcal{A}_{g_t}({\bf x}_t,1/L) +\phi_{t+1}-\langle \mathbf{e}_t,{\bf x}_{t+1}-{\bf x}_t \rangle \, .
\end{align}
Let $\varepsilon_t := |\mathcal{A}_{g_t}({\bf x}_t,1/L) - \tilde{\mathcal{A}}_{g_t}({\bf x}_t,1/L)|$ for brevity; using the definition of $\mathcal{A}_{g_t}({\bf x}_t,1/L)$ in~\eqref{eq:defA} and subtracting $F_{t+1}^*$ on both sides, we get:
\begin{subequations}
\begin{align}
&F_{t+1}({\bf x}_{t+1})-F_{t+1}^* \leq
F_t({\bf x}_t) - F_{t+1}^* -\frac{\mu}{L}(F_t({\bf x}_t)-F_t^*) \nonumber \\
&\quad +\frac{1}{2L} \varepsilon_t -\langle \mathbf{e}_t,{\bf x}_{t+1}-{\bf x}_t \rangle +\phi_{t+1} \\
& \leq
F_t({\bf x}_t) - F_{t}^* -\frac{\mu}{L}(F_t({\bf x}_t)-F_t^*) +\frac{1}{2L} \varepsilon_t -\langle \mathbf{e}_t,{\bf x}_{t+1}-{\bf x}_t \rangle \nonumber \\
&\quad +\phi_{t+1} + \sigma_{t+1} \\
& = (1-\frac{\mu}{L})(F_t({\bf x}_t)-F_t^*)+\psi_{t+1}+\frac{1}{2L} \varepsilon_t -\langle \mathbf{e}_t,{\bf x}_{t+1}-{\bf x}_t \rangle \\
& \leq (1-\frac{\mu}{L})(F_t({\bf x}_t)-F_t^*)+\psi_{t+1}+\frac{1}{2L} \varepsilon_t + \|\mathbf{e}_t\| D \label{eq:boundF}
\end{align}
\end{subequations}
where we used the definition of $\sigma_{t+1}$ and, in the last step, the Cauchy-Schwarz inequality; $D$ denotes the diameter of $\mathcal{D}$.
We now bound $\frac{1}{2L} \varepsilon_t$; from the definitions~\eqref{eq:defA} and~\eqref{eq:tildeA}, we have that
\begin{align}
\frac{1}{2L} \varepsilon_t = & \Big| \min_{{\bf y}} \left\{\langle \nabla f_t({\bf x}_t) , {\bf y} - {\bf x}_t \rangle +\frac{L}{2} \|{\bf y}-{\bf x}_t\|^2 +g_t({\bf y}) \right\} \nonumber \\
& \hspace{.2cm} - \min_{{\bf z}} \left\{\langle {\bf v}_t , {\bf z} - {\bf x}_t \rangle +\frac{L}{2} \|{\bf z}-{\bf x}_t\|^2 +g_t({\bf z}) \right\} \Big| .
\end{align}
From~\eqref{eq:opgm2}, one can notice that the minimizer of $\langle {\bf v}_t , {\bf z} - {\bf x}_t \rangle +\frac{L}{2} \|{\bf z}-{\bf x}_t\|^2 +g_t({\bf z})$ is ${\bf x}_{t+1}$ (the constant term $g_t({\bf x}_{t})$ does not modify the minimizer); thus, substituting ${\bf z}$ with ${\bf x}_{t+1}$ we get
\begin{align}
\frac{1}{2L} \varepsilon_t = & \Big| \min_{{\bf y}} \left\{\langle \nabla f_t({\bf x}_t) , {\bf y} - {\bf x}_t \rangle +\frac{L}{2} \|{\bf y}-{\bf x}_t\|^2 +g_t({\bf y}) \right\} \nonumber \\
& \hspace{-.7cm} - \langle {\bf v}_t , {\bf x}_{t+1} - {\bf x}_t \rangle - \frac{L}{2} \|{\bf x}_{t+1}-{\bf x}_t\|^2 - g_t({\bf x}_{t+1}) \Big|. \hspace{-.1cm}
\end{align}
Next, one has that $\min_{{\bf y}} \{\langle \nabla f_t({\bf x}_t) , {\bf y} - {\bf x}_t \rangle +\frac{L}{2} \|{\bf y}-{\bf x}_t\|^2 +g_t({\bf y}) \} \leq \langle \nabla f_t({\bf x}_t) , {\bf y} - {\bf x}_t \rangle +\frac{L}{2} \|{\bf y}-{\bf x}_t\|^2 +g_t({\bf y})$ for any ${\bf y} \in \mathcal{D}$, and, thus
\begin{align}
\frac{1}{2L} \varepsilon_t \leq & \Big| \langle \nabla f_t({\bf x}_t) , {\bf y} - {\bf x}_t \rangle +\frac{L}{2} \|{\bf y}-{\bf x}_t\|^2 +g_t({\bf y}) \nonumber \\
& \hspace{-.7cm} - \langle {\bf v}_t , {\bf x}_{t+1} - {\bf x}_t \rangle - \frac{L}{2} \|{\bf x}_{t+1}-{\bf x}_t\|^2 - g_t({\bf x}_{t+1}) \Big|
\end{align}
for any ${\bf y} \in \mathcal{D}$. Pick ${\bf y} = {\bf x}_{t+1}$; then, we have that:
\begin{subequations}
\begin{align}
\frac{1}{2L} \varepsilon_t \leq & \Big| \langle \nabla f_t({\bf x}_t) , {\bf x}_{t+1} - {\bf x}_t \rangle - \langle {\bf v}_t , {\bf x}_{t+1} - {\bf x}_t \rangle \Big| \\
= & \Big| \langle {\bf e}_t , {\bf x}_t - {\bf x}_{t+1} \rangle \Big| \leq \|\mathbf{e}_t\| D .
\end{align}
\end{subequations}
Therefore, letting $r_t = F_t({\bf x}_{t})-F_{t}^*$ for brevity, we get the stochastic recursion
\begin{align}
r_{t+1} \leq & (1-\frac{\mu}{L}) r_t+\psi_{t+1} + 2 D \|\mathbf{e}_t\| \label{eq:boundF2}
\end{align}
which holds almost surely. By applying recursively~\eqref{eq:boundF2} from $\tau = 0$ to $\tau = t$, we get
\begin{align}
\label{eq:boundF3}
\hspace{-.2cm} r_t \leq \zeta^t r_0 + \sum_{i = 1}^t \zeta^{t - i} \left( 2 D \|\mathbf{e}_{i-1}\| + \psi_i \right) \, .
\end{align}
Taking the expectation on both sides of~\eqref{eq:boundF3}, the bound~\eqref{eq:expected_regretOPGM} follows.
To show~\eqref{eq:high_regretOPGM}, recall first that $\|{\bf e}_t\| \sim \mathrm{subW}(\theta, K_t)$, and let $\kappa_t := \zeta^t r_0 + \sum_{i=1}^{t} \zeta^{t-i} \psi_i$ for brevity so that~\eqref{eq:boundF3} can be rewritten as $r_t \leq \kappa_t + 2D \sum_{i = 1}^t \zeta^{t - i} \|\mathbf{e}_{i-1}\|$. Using the closure properties (a), (b), and (c) in Lemma~\ref{sec:closure} and the fact that $\zeta > 0$ and $\kappa_t \geq 0$, we have that the right-hand-side of this inequality is a sub-Weibull rv; in fact,
\begin{align}
\label{eq:errortotalOPGM}
\kappa_t + 2D \sum_{i = 1}^t \zeta^{t - i} \|\mathbf{e}_{i-1}\| \sim \mathrm{subW} \left(\theta, K^{''} \right) \, .
\end{align}
where $K^{''} = \kappa_t + 2D \sum_{i=1}^{t}\zeta^{t-i} K_{i-1}
$.
Using Lemma~\ref{lem:high-probability-bound}, the high-probability bound~\eqref{eq:high_regretOPGM} follows.
\section{Illustrative Numerical Results}
\label{sec:results}
We provide two illustrative numerical experiments. The first one is based on a time-varying LS regression problem; then, we consider a problem related to real-time demand response in power grids.
\textbf{Least-squares problem}. We consider a time-varying LS regression problem, with the following cost at time $t$:
\begin{align}
f_t({\bf x}) = \frac{1}{2}\|{\bf A} {\bf x} - {\bf b}_t\|^2
\end{align}
where ${\bf A} \in \mathbb{R}^{d \times n}$ and ${\bf b}_t \in \mathbb{R}^d$; this cost satisfies the PL inequality, as shown in~\cite{karimi2016linear}.
We consider the case $n=10$ and $d=20$. The matrix ${\bf A}$ is generated by defining its singular value decomposition; for its left and right-singular vectors, we sampled two orthogonal matrices, ${\bf U}\in\mathbb{R}^{d\times d}$ and ${\bf V}\in\mathbb{R}^{n\times n}$, and we let its singular values be equally spaced from $\mu = 0.1$ to $L = 1$. We generated ${\bf b}_t$ as ${\bf b}_t = {\bf A} {\bf x}_t^* + {\bf r}_t$, where the optimal parameter ${\bf x}_t^*$ evolves via a random walk, i.e., ${\bf x}_t^* = {\bf x}_{t-1}^* + {\bf s}$ with ${\bf s} \sim \mathcal{N}(\mathbf{0}, 0.1 {\bf I})$, and ${\bf r}_t$ is a Gaussian vector $\mathcal{N}(\mathbf{0}, 10^{-3} {\bf I})$ (we set ${\bf x}_{0}^*$ to the vector of all ones).
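A sketch of this data-generation procedure is reported below (seeds and the number of steps are our choices; the exact realizations of the paper are not reproduced):
\begin{verbatim}
import numpy as np

# Build A from a prescribed SVD and let x_t^* follow a random walk,
# as in the experiment described above.
rng = np.random.default_rng(5)
n, d, T = 10, 20, 300
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
S = np.zeros((d, n))
S[:n, :n] = np.diag(np.linspace(0.1, 1.0, n))   # spectrum from 0.1 to 1
A = U @ S @ V.T
x_star = np.ones(n)
for t in range(T):
    x_star = x_star + np.sqrt(0.1) * rng.standard_normal(n)  # s ~ N(0, 0.1 I)
    b = A @ x_star + np.sqrt(1e-3) * rng.standard_normal(d)  # b_t = A x_t^* + r_t
\end{verbatim}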
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{F_paper_regret.eps}
\caption{Inexact OGD: Evolution of average regret obtained experimentally, the empirical $3$-standard deviation confidence interval, and the theoretical bound.}
\label{fig:regret}
\end{figure}
We corrupt the gradient with a random vector ${\bf e}_t$, which is modelled as a Gaussian vector $\mathcal{N}(\mathbf{0}, 10^{-3} {\bf I})$; we note that, if ${\bf e}_t$ is a Gaussian vector, then $\|{\bf e}_t\|^2$ is a sub-Weibull random variable. The regret is computed using a Monte Carlo approach, with 100 tests. Accordingly, Figure~\ref{fig:regret} illustrates the evolution of the expected regret obtained by averaging the trajectories of the instantaneous regret $r_t$ over the various runs, the empirical $3-\sigma$ confidence interval, and the theoretical bound~\eqref{eq:expected_regret2}. The figure validates the convergence results for the inexact OGD and, since ${\bf b}_t$ continuously changes, the average $r_t$ exhibits a plateau.
\textbf{Real-time demand response problem}.
We consider an example in the context of a power distribution grid serving residential houses or commercial facilities. We consider $n$ controllable distributed energy resources (DERs) providing services to the main grid; precisely, consider the setting where the vector ${\bf x}$ collects the active power outputs of the DERs, and assume the algebraic relationship $p_{0,t} = {\bf a}_x^\top {\bf x} + {\bf a}_w^\top {\bf w}_t$ for the net active power at the point of common coupling, where ${\bf a}_x \in \mathbb{R}^n$ and ${\bf a}_w \in \mathbb{R}^w$ are sensitivity coefficients, and ${\bf w}_t \in \mathbb{R}^w$ is a vector collecting active powers of uncontrollable devices; in particular, ${\bf a}_x$ and ${\bf a}_w$ can be set to the vector of all ones when line losses are negligible, or they are derived based on a linearized model for the power flow equations in case of resistive lines~\cite{Bolognani_feedback_15}. Consider the following time-varying optimization problem for real-time management of DERs:
\begin{equation}
\label{eq:tip_sg}
\min_{{\bf x}} \, \frac{1}{2} \left({\bf a}_x^\top {\bf x} + {\bf a}_w^\top {\bf w}_t - p_{0,t}^{\mathrm{ref}} \right)^2 + \mathbb{I}_{\{{\bf B} {\bf x} \leq {\bf c}\}}
\end{equation}
where $p_{0,t}^{\mathrm{ref}}$ is a time-varying reference point for the net active power at the point of common coupling $p_{0,t}$, and $\mathbb{I}_{\{{\bf B} {\bf x} \leq {\bf c}\}}$ is the set indicator function for the set $\{{\bf x} \in \mathbb{R}^n: {\bf B} {\bf x} \leq {\bf c}\}$ modeling box constraints for the active powers. For example, $p_{0,t}^{\mathrm{ref}}$ may be an automatic generation control (AGC) signal, a flexible ramping signal, or a demand response setpoint. We note that the cost~\eqref{eq:tip_sg} satisfies the proximal-PL inequality~\cite{karimi2016linear}. The main challenge behind applying a proximal-gradient descent to~\eqref{eq:tip_sg} is that the vector ${\bf w}_t $ is unknown; we therefore consider the approach of, e.g.,~\cite{Bolognani_feedback_15}, where measurements of $p_{0,t}$ are utilized to estimate the gradient in lieu of the model ${\bf a}_x^\top {\bf x} + {\bf a}_w^\top {\bf w}_t $. Precisely, we compute the approximate gradient as
\begin{equation}
{\bf v}_t = {\bf a}_x (\hat{p}_{0,t} - p_{0,t}^{\mathrm{ref}})
\end{equation}
where $\hat{p}_{0,t}$ is a measurement of $p_{0,t}$ collected at time $t$. Since measurements of $p_{0,t}$ may be affected by errors or by outliers, ${\bf v}_t$ does not in general coincide with the true gradient ${\bf a}_x ({\bf a}_x^\top {\bf x}_t + {\bf a}_w^\top {\bf w}_t - p_{0,t}^{\mathrm{ref}})$.
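A minimal sketch of the resulting measurement-based update is given below (our illustration: the uncontrollable power profile, the reference, and all constants are hypothetical placeholders, and the prox of the box indicator reduces to clipping):
\begin{verbatim}
import numpy as np

# Feedback-based proximal-gradient step for problem (48):
# v_t = a_x*(p0_meas - p_ref), then project onto the box [lo, hi]^n.
rng = np.random.default_rng(6)
n = 500
a_x = np.ones(n)                      # sensitivities (lossless case)
lo, hi = -50.0, 50.0                  # per-device power limits [kW]
L = a_x @ a_x                         # Lipschitz constant of the gradient
x = np.zeros(n)
for t in range(50):
    w = 100.0 * np.sin(0.1 * t)       # surrogate uncontrollable power
    p_ref = 50.0                      # surrogate reference p_{0,t}^ref
    p0 = a_x @ x + w                  # net power at the common coupling
    p0_meas = p0 + np.sqrt(10.0) * rng.standard_normal()  # noisy measurement
    v = a_x * (p0_meas - p_ref)       # approximate gradient v_t
    x = np.clip(x - v / L, lo, hi)    # prox of the box indicator = clipping
\end{verbatim}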
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{F_paper_data.eps}
\caption{Demand response application: non-controllable power ${\bf a}_w^\top {\bf w}_t$ and reference point $p_{0,t}^{\mathrm{ref}}$ for the active power $p_{0,t}$. }
\label{fig:regret_data}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{F_paper_DM.eps}
\caption{Demand response application: Evolution of average regret obtained experimentally; the zoomed area also provides the empirical $3-\sigma$ confidence interval.}
\label{fig:regret_demand}
\end{figure}
As an example, we consider the case where $N = 500$ DERs are controlled; the limits for the active power of each device are $[-50, 50]$ kW for energy storage resources and $[0, 50]$ kW for solar inverters. We consider the case where $p_{0,t}^{\mathrm{ref}}$ follows the trajectory shown in Figure~\ref{fig:regret_data}; real data with a granularity of one second is taken from~\cite{Dallanese2018feedback} to generate the non-controllable powers ${\bf w}_t$, with the net power ${\bf a}_w^\top {\bf w}_t$ plotted in Figure~\ref{fig:regret_data} as well. The sensitivity vector ${\bf a}_w$ is computed as in~\cite{Dallanese2018feedback}. A Gaussian random variable with zero mean and variance $10$ kW is utilized to generate the measurement error affecting $\hat{p}_{0,t}$.
Figure~\ref{fig:regret_demand} illustrates the evolution of the regret $r_t$, averaged over $50$ experiments, in logarithmic scale. One can notice a linear decrease of the average regret during the first iterations of the algorithm; the regret then exhibits variations that are due to the considerable time-variability of the cost function (due to the large swings in the non-controllable powers ${\bf w}_t$). The plot also provides a zoomed version (in linear scale), where the $3$-standard deviation confidence interval is also reported.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we showed that the cost achieved by the online (proximal-)gradient method converges linearly to the optimal value function, up to an error, for functions satisfying the (proximal-)PL inequality and when inexact gradient information is available. We derived bounds in expectation and in high probability, where for the latter we utilized a sub-Weibull model for the gradient errors. The convergence results are applicable to a number of learning and feedback-optimization tasks, where the cost functions may not be strongly convex but satisfy the PL inequality. Our results also provide new insights on the convergence of the (proximal-)gradient method for time-varying functions with exact gradient information, and for the case of static optimization with inexact gradient information. The gradient error model is general, and it allows one to consider various sources of inaccuracy and gradient estimation techniques.
\bibliographystyle{IEEEtran}
\section{\label{sec:intro}Introduction}
When a charged particle is accelerated it radiates electromagnetic energy according to the Li\'enard formula \cite{Jackson1999_classical_electrodynamics}. The energy lost comes from the particle's kinetic energy, and consequently a reaction force must act on the radiating particle to account for this loss of kinetic energy. The equation of motion of a charged particle including the reaction force has been a subject of interest during the last 100 years, and it has become one of the most popular unsolved problems of modern physics. The first formula for the reaction force was the Abraham-Lorentz formula \cite{abraham1905theorie, lorentz1916theory}, which was extended to a Lorentz-covariant form by Dirac in 1938 \cite{Dirac1938_LAD_equation}. This expression is called the Lorentz-Abraham-Dirac (LAD) equation and suffers from several problems. Firstly, the LAD equation involves the temporal derivative of the acceleration and hence the initial position and
velocity of the particle do not determine the solution uniquely. For this reason, one extra initial condition is needed compared to Newtonian mechanics. Secondly, if the external force is zero the LAD equation admits solutions that accelerate exponentially in addition to the trivial solution of uniform motion. These solutions are called runaway solutions and violate the principle of conservation of energy. The runaway solutions can be avoided by artificially introducing a preacceleration, but then causality is violated. Furthermore, the LAD equation does not agree with the theory of Compton scattering except in the weak-field case \cite{Hartemann2005_ComptonScattering_LAD}.
As a consequence of the inherent difficulties of the LAD equation, several alternative formulations have been proposed. Mo and Papas proposed in 1971 a new radiation reaction force formula that avoids the runaway solutions as well as the preacceleration phenomenon, assuming a reaction force proportional to the acceleration of the particle rather than the velocity \cite{MoPapas1971_EquationMotion_PhysRevD.4.3566}. Ford and O'Connell proposed in 1991 an alternative equation based on the use of the generalized quantum Langevin equation for an electron with a finite size \cite{Ford_OConnell1991}. Hartemann and Luhman \cite{hartemann1995classical}, Yaghjian \cite{yaghjian2010relativistic}, and Hammond \cite{hammond2010relativistic} have proposed other equations, but the most famous reaction force is the Landau-Lifshitz (LL) equation \cite{Landau1975}, which can be obtained assuming that the radiation reaction force is much smaller than the external electromagnetic force. The LL equation can also be obtained as a limit of the self-force when modeling the electron as a sphere \cite{medina2006radiation, griffiths2010abraham}. Thus, the LL theory has been proposed as the correct equation of motion of a classical point charge \cite{rohrlich2001correct}, and it has even been verified experimentally recently by measuring the emission spectra of electrons and positrons penetrating into aligned single crystals \cite{nielsen2021experimentalLL}. However, the LL equation is not totally correct since it predicts no reaction force when the particle undergoes linear acceleration, although in this case the radiation losses are practically negligible for typical electric fields \cite{Jackson1999_classical_electrodynamics}. Moreover, the LL equation fails for electromagnetic fields with abrupt changes, i.e., for high frequencies and/or high intensities \cite{hammond2008radiation_ABRUPT}.
The purpose of this work is the derivation of a new equation that overcomes these difficulties. The article is organized as follows. Firstly, the basic principles and the approximations to obtain the new proposed reaction force are discussed. Secondly, the proposed reaction force is solved for some simple physical examples comparing the results with the LL equation. We conclude the paper with final remarks.
\section{The proposed reaction force} \label{Sec_2}
Radiation emission becomes important for ultra-relativistic charged particles, and it is concentrated in the direction of motion of the particle with a symmetric angular distribution around this direction \cite{Jackson1999_classical_electrodynamics}. Consequently, we will assume that the reaction force is antiparallel to the velocity $\bf{v}$, i.e.
\begin{equation}
\mathbf{F}_{\mathrm{R}}=-F_{\mathrm{R}} \hat{\mathbf{v}},
\end{equation}
\noindent with $\hat{\mathbf{v}}=\mathbf{v} /\|\mathbf{v}\|$, in order to satisfy on average the principle of conservation of energy and momentum when the emission is produced. Note that $\|\cdot\|$ indicates the Euclidean norm of a vector in three-dimensional space. On the other hand, over the average distance between consecutive photon emissions, called the mean free path \cite{Burkhardt2007_Radiation},
\begin{equation}
\lambda=\frac{2 \sqrt{3} \rho}{5 \alpha \gamma},
\end{equation}
\noindent a photon is emitted with mean energy
\begin{equation}
\langle E\rangle=\frac{8}{15 \sqrt{3}} E_{c},
\end{equation}
\noindent where $E_{\mathrm{c}}=\frac{3}{2} \hbar c \gamma^{3} / \rho$ is the critical energy, $\rho$ is the radius of curvature of the trajectory, $\gamma$ is the relativistic factor, $\hbar$ is the reduced Planck constant, $c$ is the speed of light in vacuum and $\alpha \approx \frac{1}{137}$ is the fine-structure constant. Thus, the principle of conservation of energy during a small displacement $\Delta x$ gives the relation $F_{\mathrm{R}} \Delta x=\frac{\Delta x}{\lambda}\langle E\rangle$, i.e.
\begin{equation}\label{ecu.FR}
F_{\mathrm{R}}=\frac{\langle E\rangle}{\lambda},
\end{equation}
\noindent which depends only on $\gamma$ and $\rho$.
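Combining the two expressions above, Eq. (\ref{ecu.FR}) reduces to $F_{\mathrm{R}}=\frac{2}{3}\alpha\hbar c\,\gamma^{4}/\rho^{2}$. The following Python sketch (ours, with purely illustrative values of $\gamma$ and $\rho$; it is not part of the derivation) checks this numerically:
\begin{verbatim}
# Numerical check of F_R = <E>/lambda. The values of gamma and rho
# below are assumed, illustrative inputs.
from scipy.constants import hbar, c, fine_structure as alpha

gamma, rho = 1.0e5, 1.0   # relativistic factor; curvature radius [m]

E_c = 1.5 * hbar * c * gamma**3 / rho             # critical energy [J]
mean_E = 8.0 / (15.0 * 3**0.5) * E_c              # mean photon energy [J]
mfp = 2.0 * 3**0.5 * rho / (5.0 * alpha * gamma)  # mean free path [m]

F_R = mean_E / mfp                                # reaction force magnitude [N]
assert abs(F_R - (2/3) * alpha * hbar * c * gamma**4 / rho**2) < 1e-12 * F_R
\end{verbatim}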
The radiation emitted by an extremely relativistic charge is approximately the same as that of a particle moving instantaneously in a circular arc of radius of curvature \cite{Jackson1999_classical_electrodynamics}:
\begin{equation}
\rho=\frac{\|\mathbf{v}\|^{2}}{\|\dot{\mathbf{v}}_{\perp}\|} \approx \frac{c^{2}}{\|\dot{\mathbf{v}}_{\perp}\|},
\end{equation}
\noindent where $\|\dot{\mathbf{v}}_{\perp}\|$ is the magnitude of the perpendicular component with respect to $\mathbf{v}$ of the acceleration. Note that the dot indicates a time derivative. The equation of motion of the particle will be
\begin{equation}\label{ecu.F_particle}
\dot{\mathbf{p}}=m \dot{\gamma} \mathbf{v}+m \gamma \dot{\mathbf{v}}=\mathbf{F}_{\mathrm{ext}}+\mathbf{F}_{\mathrm{R}},
\end{equation}
\noindent where $\mathbf{p}$ is the relativistic linear momentum, $m$ is the rest mass of the particle and $\mathbf{F}_{\mathrm{ext}}$ is the external force applied to the charged particle, e.g. the Lorentz force. Therefore, projecting the equation of motion in the perpendicular direction, we obtain
\begin{equation}
\dot{\mathbf{v}}_{\perp}=\frac{\mathbf{F}_{\mathrm{ext}, \perp}}{m \gamma} .
\end{equation}
\noindent Hence, substituting in Eq. (\ref{ecu.FR}), the following reaction force is obtained
\begin{equation} \label{ecu.FR_perp}
\mathbf{F}_{\mathrm{R}}=-\frac{\tau_{m}}{m c} \beta^{-4} \gamma^{2}\|\mathbf{F}_{\text {ext }, \perp}\|^{2} \hat{\mathbf{v}} \approx-\frac{\tau_{m}}{m c} \gamma^{2}\|\mathbf{F}_{\text {ext }, \perp}\|^{2} \hat{\mathbf{v}},
\end{equation}
\noindent where the characteristic time $\tau_{m}=\frac{2}{3} \frac{1}{4 \pi \varepsilon_{0}} \frac{e^{2}}{m c^{3}}$ has been defined; $e$ is the elementary charge, $\varepsilon_{0}$ is the vacuum electric permittivity, and $\beta=v/c$. This proposed reaction force is an excellent approximation for ultra-relativistic particles. Note that if the acceleration is parallel to the velocity, Eq. (\ref{ecu.FR_perp}) erroneously predicts a null reaction force, as does e.g. the Landau-Lifshitz force. Nevertheless, for a given magnitude of the applied force, the radiation emitted due to a perpendicular acceleration is a factor $\gamma^{2}$ larger than that due to a parallel acceleration \cite{Jackson1999_classical_electrodynamics}. Consequently, (\ref{ecu.FR_perp}) suggests adding an identical term for the parallel component of the force, without the factor $\gamma^{2}$. Thus, the following final expression is obtained
\begin{equation}\label{ecu.FR_total}
\mathbf{F}_{\mathrm{R}}=-\frac{\tau_{m}}{m c}\left(\gamma^{2}\|\mathbf{F}_{\mathrm{ext}, \perp}\|^{2}+\|\mathbf{F}_{\mathrm{ext}, \|}\|^{2}\right) \hat{\mathbf{v}}.
\end{equation}
\noindent As a first check, we see that in the absence of an external force the reaction force vanishes. An alternative derivation using four-vectors is included in the Appendix.
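For the numerical examples below, Eq. (\ref{ecu.FR_total}) can be implemented directly by decomposing the external force into components parallel and perpendicular to the velocity. A minimal Python sketch (ours, written for illustration; SI units and electron parameters are assumed) is:
\begin{verbatim}
import numpy as np

# Physical constants in SI units; electron values are assumed here
# purely for illustration.
EPS0, E_CH = 8.8541878128e-12, 1.602176634e-19
M_E, C = 9.1093837015e-31, 299792458.0
TAU_M = (2.0 / 3.0) * E_CH**2 / (4.0 * np.pi * EPS0 * M_E * C**3)

def reaction_force(F_ext, v, m=M_E, tau_m=TAU_M):
    """Proposed reaction force for 3-vectors F_ext, v (SI units)."""
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / C**2)
    v_hat = v / np.linalg.norm(v)
    F_par = np.dot(F_ext, v_hat) * v_hat   # component parallel to v
    F_perp = F_ext - F_par                 # component perpendicular to v
    mag = (tau_m / (m * C)) * (gamma**2 * np.dot(F_perp, F_perp)
                               + np.dot(F_par, F_par))
    return -mag * v_hat
\end{verbatim}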
\section{Special cases and comparison with the Landau-Lifshitz equation}\label{Sec_comparison}
Now we will solve the equation of motion (\ref{ecu.F_particle}) with the proposed reaction force (\ref{ecu.FR_total}) for an electron (with charge $q=-e$) in different basic physical situations, comparing the results with those of the Landau-Lifshitz reaction force \cite{Landau1975}
\begin{equation}
\begin{aligned}
&F_{\mathrm{LL}}^{\mu}=\tau_{m}\left(\frac{q}{c} \frac{\partial F^{ \mu \nu}}{\partial x ^{\gamma}} u_{\nu} u^{\gamma}+\frac{q^{2}}{m c} F^{\mu \gamma} F_{\nu \gamma} u^{\nu}+\right.\\
&\left.\frac{q^{2}}{m c^{3}}\left(F_{\nu \gamma} u^{\gamma}\right)\left(F^{\nu \alpha} u_{\alpha}\right) u^{\mu}\right),
\end{aligned}
\end{equation}
\noindent where $F^{ \mu \nu}$ is the field tensor and $u^{\mu}$ is the four-velocity.
In the three-dimensional form it can be written for an electron as \cite{Bulanov2011_3D_LL_PhysRevE.84.056605}
\begin{equation} \label{ecu.LL3D}
\begin{aligned}
\mathbf{F}_{\mathbf{L L}} &=\tau_{m}\left\{e \gamma\left(\left[\frac{\partial}{\partial t}+(\mathbf{v} \cdot \nabla)\right] \mathbf{E}+\left\{\mathbf{v} \times\left[\frac{\partial}{\partial t}+(\mathbf{v} \cdot \nabla)\right]\right\} \mathbf{B}\right)\right.\\
&+\frac{e^{2}}{m c}\left(\mathbf{E} \times c \mathbf{B}+c[\mathbf{B} \times(\mathbf{B} \times \mathbf{v})]+\frac{1}{c} \mathbf{E}(\mathbf{v} \cdot \mathbf{E})\right) \\
&\left.-\frac{e^{2} \gamma^{2}}{m c^{2}} \mathbf{v}\left[(\mathbf{E}+\mathbf{v} \times \mathbf{B})^{2}-\frac{1}{c^{2}}(\mathbf{v} \cdot \mathbf{E})^{2}\right]\right\}
\end{aligned}
\end{equation}
\noindent where $\mathbf{E}$ and $\mathbf{B}$ are the external electric and magnetic fields, respectively.
\subsection{Motion perpendicular to uniform magnetic field}
In this first example, we are going to assume a uniform magnetostatic field $\mathbf{B}=B_0\hat{\mathbf{z}}$ and motion in the $xy$-plane. Then, the proposed reaction force (\ref{ecu.FR_total}) and the Landau-Lifshitz force (\ref{ecu.LL3D}) are simply
\begin{equation}
\mathbf{F}_{\mathbf{R}}=\mathbf{F}_{\mathbf{L L}}=-\frac{\tau_{m} e^{2}}{m} \gamma^{2} B_{0}^{2} \mathbf{v},
\end{equation}
\noindent where $v\approx c$ has been assumed.
The solution of the motion for the general initial conditions $(x_{0},y_{0}), (v_{x0},v_{y0})$ is
\begin{equation} \label{ecu.motion_B_field}
\begin{gathered}
v_{x}(t)=e^{-\sigma \tau}\left[v_{x 0} \cos (\omega_{c} \tau)+v_{y 0} \sin (\omega_{c} \tau)\right], \\
v_{y}(t)=e^{-\sigma \tau}\left[-v_{x 0} \sin (\omega_{c} \tau)+v_{y 0} \cos (\omega_{c}\tau)\right], \\
x(t)=x_{0}+v_{x0}I_{1}(\tau) +v_{y0}I_{2}(\tau), \\
y(t)=y_{0}-v_{x0}I_{2}(\tau)+v_{y0}I_{1}(\tau),
\end{gathered}
\end{equation}
\noindent where we have defined the auxiliary integrals
\begin{equation}
\begin{gathered}
I_{1}(\tau)=\int_{0}^{\tau} \frac{e^{-\sigma s}}{\sqrt{1-\beta_{0}^2 e^{-2 \sigma s}}} \cos (\omega_{c} s)\, d s, \\
I_{2}(\tau)=\int_{0}^{\tau} \frac{e^{-\sigma s}}{\sqrt{1-\beta_{0}^2 e^{-2 \sigma s}}} \sin (\omega_{c} s)\, d s,
\end{gathered}
\end{equation}
\noindent the quantities $\omega_{c}=eB_{0}/m$ and $\sigma=\tau_{m}\omega_{c}^{2}$, and the proper time $\tau$, which can be expressed in terms of the laboratory time $t$ as
\begin{equation}
\tau=\frac{1}{\sigma} \ln \left(\frac{\delta-e^{2 \sigma t}}{\delta-1}\right)-t
\end{equation}
\noindent with $\delta=\frac{1-\gamma_{0}}{1+\gamma_{0}}$. Note that $\beta_0=v_0/c$ and $\gamma_{0}=(1-\beta_{0}^2)^{-\frac{1}{2}}$.
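The solution (\ref{ecu.motion_B_field}) is straightforward to evaluate numerically. The sketch below (ours; the 100 GeV electron, $B_{0}=1$ T, and the initial conditions mirror Fig. \ref{fig:B_field}, and ordinary quadrature suffices because the elapsed proper time remains small) computes the position at a given laboratory time:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Assumed parameters mirroring the figure: 100 GeV electron, B0 = 1 T.
E_CH, M_E, C = 1.602176634e-19, 9.1093837015e-31, 299792458.0
TAU_M = 6.266e-24              # characteristic time of the electron [s]
B0 = 1.0
omega_c = E_CH * B0 / M_E
sigma = TAU_M * omega_c**2
gamma0 = 100e9 * E_CH / (M_E * C**2)
beta0 = np.sqrt(1.0 - 1.0 / gamma0**2)
v0 = beta0 * C
delta = (1.0 - gamma0) / (1.0 + gamma0)

def proper_time(t):
    # tau(t) as given in the text; ~ t/gamma0 for ultra-relativistic start.
    return np.log((delta - np.exp(2 * sigma * t)) / (delta - 1)) / sigma - t

def aux_integral(tau, trig):   # I1 with trig=np.cos, I2 with trig=np.sin
    f = lambda s: (np.exp(-sigma * s) * trig(omega_c * s)
                   / np.sqrt(1 - beta0**2 * np.exp(-2 * sigma * s)))
    return quad(f, 0.0, tau, limit=500)[0]

# Initial conditions of the figure: x0 = m*gamma0*v0/(e*B0), y0 = 0,
# vx0 = 0, vy0 = v0. Position after 50 microseconds of laboratory time:
tau = proper_time(50e-6)
x = M_E * gamma0 * v0 / (E_CH * B0) + v0 * aux_integral(tau, np.sin)
y = v0 * aux_integral(tau, np.cos)
print(x, y)
\end{verbatim}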
Furthermore, we can calculate the energy lost per unit time $-\frac{dE}{dt}=-mc^2\frac{d\gamma}{dt}$, obtaining
\begin{equation}
-\frac{dE}{dt}=\frac{\tau_{m}\gamma^{2}}{mc}e^{2} B_{0}^{2} v^{2}
\end{equation}
\noindent which agrees with the Li\'enard formula \cite{Jackson1999_classical_electrodynamics}
\begin{equation}\label{ecu.Lienard}
P=\frac{\tau_{m} \gamma^{2}}{m c}\left(\dot{\mathbf{p}}^{2}-\beta^{2} \dot{p}^{2}\right),
\end{equation}
\noindent when particularized to the considered uniform magnetic field.
Figure \ref{fig:B_field} shows the trajectory (\ref{ecu.motion_B_field}) in the $xy$-plane of an ultra-relativistic electron. As expected, the motion is a spiral due to the energy radiated by the accelerated electron.
\begin{figure}[h!]
\includegraphics[width=0.95\columnwidth]{FIG_circular_uniform.png}
\caption{Motion of an electron with an initial total energy of 100 GeV in a perpendicular magnetic field of $B=1$ T for the first 50 $\mu$s. The dashed line shows the circular trajectory with radius $x_{0}=m \gamma_{0} v_{0} / (eB)$ obtained if the radiation is neglected.}
\label{fig:B_field}
\end{figure}
\subsection{Motion parallel to uniform electric field}
In this case we assume a homogeneous static electric field $\mathbf{E}=E_0\hat{\mathbf{z}}$ and motion along the $z$-axis. Hence, the proposed reaction force (\ref{ecu.FR_total}) is
\begin{equation}
\mathbf{F}_{\mathbf{R}}=-\frac{\tau_{m} e^{2}}{m c} E_{0}^{2} \hat{\mathbf{v}},
\end{equation}
\noindent and the Landau-Lifshitz force (\ref{ecu.LL3D}) is $\mathbf{F}_{\mathbf{LL}}=\mathbf{0}$. The general solution of this motion is
\begin{equation}
u_{z}(t)=\gamma(t) v_{z}(t)=u_{z 0}+\frac{q E_{0}}{m}(1-\mathrm{sgn}(v_{z})\epsilon) t,
\end{equation}
\begin{equation}
z(t)=z_{0}+\frac{m c^{2}}{q E_{0}(1-\mathrm{sgn}(v_{z})\epsilon)}(\gamma(t)-\gamma_{0}),
\end{equation}
\noindent where the dimensionless parameter $ \epsilon= \frac{\tau_{m} e E_{0}}{m c} \approx 3.68 \times 10^{-21} E_{0}[\mathrm{V/m}] $ has been defined and $\mathrm{sgn}(x)$ is the sign function.
The solution without radiation losses, which here coincides with the Landau-Lifshitz solution since $\mathbf{F}_{\mathbf{LL}}=\mathbf{0}$, is obtained by substituting $\epsilon=0$. Thus, for typical values of the electric fields in conventional particle accelerators, i.e., $E_{0}\sim 100$ MV/m, this parameter introduces only a small perturbation to the motion obtained by neglecting the effects of radiation. The cumulative energy lost as a function of time is given by
\begin{equation} \label{ecu.energy_E_field}
\Delta E(t)=m c^{2}(\gamma(t, \epsilon=0)-\gamma(t, \epsilon))=q E_{0} v_{z} t \epsilon+O\left(\epsilon^{2}\right).
\end{equation}
\noindent If we make the approximation $v_{z} \approx c$, this equation is analogous to the Li\'enard formula (\ref{ecu.Lienard}), as expected. Hence, our proposed reaction force will be valid as long as $\epsilon \ll 1$, i.e. up to electric fields of the order of $10^{19}$ V/m. It is important to remark that the effects of radiation are in general totally negligible. For example, an electron initially at rest accelerated by an electric field of 1 GV/m over 100 m reaches a final energy of $\sim$100 GeV, but the predicted radiative losses over that distance are only $\sim$0.36 eV. However, Eq. (\ref{ecu.energy_E_field}) shows that the energy lost as a function of time is proportional to the square of the electric field. Consequently, if the electric field is increased by four or five orders of magnitude (i.e. for electric fields $\sim$10--100 TV/m), the radiated energy becomes very important. Such ultrahigh electric fields have been predicted using X-ray wakefield acceleration in metallic crystals \cite{tajima1987crystal, ZhangTajima2016_AcceleratorCrystals_PhysRevAccelBeams.19.101004}, and hence the LL equation cannot be used in their simulations, whereas the new reaction force presented in this manuscript remains a good approximation.
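The arithmetic of this example is simple enough to check directly (a sketch, with the values quoted above):
\begin{verbatim}
# Back-of-the-envelope check of the example above (values assumed
# from the text: E0 = 1 GV/m sustained over L = 100 m).
E0, L = 1.0e9, 100.0                 # V/m, m
eps = 3.68e-21 * E0                  # dimensionless parameter epsilon

energy_gain_eV = E0 * L              # ~1e11 eV, i.e. ~100 GeV
radiated_eV = energy_gain_eV * eps   # ~0.37 eV: negligible, as claimed
print(eps, energy_gain_eV, radiated_eV)
\end{verbatim}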
\subsection{Motion in an electromagnetic plane wave}
Finally, we study the motion of an electron in an electromagnetic plane wave with fields $\mathbf{E}=E_{0} \cos (\omega t-k z) \hat{\mathbf{x}}$, $\mathbf{B}=B_{0} \cos (\omega t-k z) \hat{\mathbf{y}}$ with $\omega=ck$ and $E_{0}=c B_{0}$. The fourth-order Runge-Kutta method has been used to solve the motion numerically. The radiated power can be calculated as
\begin{equation} \label{ecu.power_radiated}
P_{\mathrm{rad}}=\mathbf{F}_{\mathbf{ext}}\cdot{\mathbf{v}}-\frac{dE}{dt},
\end{equation}
i.e., the difference between the power delivered by the external force (the Lorentz force) and the variation of the total energy per unit time.
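A minimal sketch of such a fourth-order Runge-Kutta integration (ours, not the actual simulation code of this work; the field amplitude, frequency, step size, and initial energy are illustrative) is:
\begin{verbatim}
import numpy as np

E_CH, M_E, C = 1.602176634e-19, 9.1093837015e-31, 299792458.0
TAU_M = 6.266e-24
E0, OMEGA = 1.0e12, 1.0e18        # amplitude [V/m], frequency [rad/s]
Q = -E_CH                         # electron charge

def fields(t, r):
    phase = OMEGA * t - (OMEGA / C) * r[2]
    return (np.array([E0 * np.cos(phase), 0.0, 0.0]),
            np.array([0.0, (E0 / C) * np.cos(phase), 0.0]))

def rhs(t, state):                # state = (position r, u = gamma*v)
    r, u = state[:3], state[3:]
    gamma = np.sqrt(1.0 + np.dot(u, u) / C**2)
    v = u / gamma
    E, B = fields(t, r)
    F_ext = Q * (E + np.cross(v, B))     # Lorentz force
    v_hat = v / np.linalg.norm(v)        # assumes |v| > 0 throughout
    F_par = np.dot(F_ext, v_hat) * v_hat
    F_perp = F_ext - F_par
    F_R = -(TAU_M / (M_E * C)) * (gamma**2 * np.dot(F_perp, F_perp)
                                  + np.dot(F_par, F_par)) * v_hat
    return np.concatenate([v, (F_ext + F_R) / M_E])   # d(r, u)/dt

def rk4_step(t, y, h):
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, y + h / 2 * k1)
    k3 = rhs(t + h / 2, y + h / 2 * k2)
    k4 = rhs(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

gamma0 = 10e9 * E_CH / (M_E * C**2)      # 10 GeV initial energy
v0 = C * np.sqrt(1.0 - 1.0 / gamma0**2)
state = np.array([0.0, 0.0, 0.0, gamma0 * v0, 0.0, 0.0])
h = 2 * np.pi / OMEGA / 200              # 200 steps per wave period
for n in range(2000):                    # integrate over 10 periods
    state = rk4_step(n * h, state, h)
\end{verbatim}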
\begin{figure}[!h]
\includegraphics[width=0.95\columnwidth]{FIG_plane_wave_trajectory_freqs_3.png}
\caption{Comparison between the trajectory obtained using the proposed reaction force (R) and the LL equation for $E_{0}=1$ TV/m and (a) $\omega=10^{16}$ rad/s, (b) $\omega=10^{17}$ rad/s, (c) $\omega=10^{18}$ rad/s. The initial conditions are $\mathbf{x_0}=\mathbf{0}$, $\mathbf{v_0}=v_0\hat{\mathbf{x}}$ with an initial electron total energy of 10 GeV. }
\label{fig:plane_wave_trajectory}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=0.95\columnwidth]{FIG_plane_wave_power_vertical.png}
\caption{Comparison between the Li\'enard formula and the radiated power associated with the motion obtained using the proposed reaction force (a) and the LL equation (b) for $E_{0}=1$ TV/m, $\omega=10^{18}$ rad/s, and the initial conditions $\mathbf{x_0}=\mathbf{0}$, $\mathbf{v_0}=v_0\hat{\mathbf{x}}$ and an initial electron total energy of 10 GeV. The time is normalized to the period of the plane wave $T=2\pi/\omega$.}
\label{fig:plane_wave_radiated_power_comp}
\end{figure}
Figure \ref{fig:plane_wave_trajectory} shows the comparison between the trajectories obtained using the proposed reaction force (\ref{ecu.FR_total}) and the LL equation (\ref{ecu.LL3D}) for different frequencies $\omega$, showing that the disagreement between the trajectories becomes very important at higher frequencies. Furthermore, Fig. \ref{fig:plane_wave_radiated_power_comp} shows the comparison between the Li\'enard formula (\ref{ecu.Lienard}) and the radiated power (\ref{ecu.power_radiated}) when the motion is numerically solved using the proposed reaction force and the LL equation. It can be seen that the new reaction force agrees with the Li\'enard formula, while the LL equation shows a poorer agreement. It is interesting to remark that the radiated powers associated with the proposed reaction force and with the LL equation are similar, but the power predicted by the Li\'enard formula is different in each case since the motion is different (see Figure \ref{fig:plane_wave_trajectory}).
\begin{figure*}[ht]
\includegraphics[width=2\columnwidth]{FIG_plane_wave_freqs_v2.png}
\caption{Discrepancies between the Li\'enard formula and the radiated power associated with the motion obtained using the proposed reaction force (R) and the LL equation for different frequencies $\omega$: (a) $\omega=10^{15}$ rad/s, (b) $\omega=10^{16}$ rad/s, (c) $\omega=10^{17}$ rad/s and (d) $\omega=10^{18}$ rad/s. Same $E_{0}$ and initial conditions as in Fig. \ref{fig:plane_wave_radiated_power_comp}. The discrepancies are normalized to the maximum value of the corresponding Li\'enard formula and expressed in \%, and the time is normalized to the period of the plane wave $T=2\pi/\omega$. }
\label{fig:plane_wave_freqs}
\end{figure*}
Figure \ref{fig:plane_wave_freqs} shows the difference between the Li\'enard formula and the radiated power for different frequencies $\omega$, in order to study the behavior of the two forces when the electromagnetic fields have rapid changes. Firstly, we can see that the discrepancy between the radiated power obtained using the LL equation and the Li\'enard formula increases linearly with the frequency. Nevertheless, the discrepancy is negligible for the proposed reaction force. Therefore, it has been verified that the emitted radiation calculated with the proposed reaction force agrees much better with the Li\'enard formula at high frequencies than that of the LL equation, which clearly fails.
\section{Conclusions}\label{sec_conclusions}
A new reaction force has been proposed to take into account the radiation emitted by an accelerated charged particle. The proposed reaction force performs as well as or better than the Landau-Lifshitz force in the analyzed cases. Firstly, this new reaction force accounts for the radiation when the motion is parallel to an electric field, whereas the Landau-Lifshitz equation predicts no radiation. Secondly, the proposed reaction force agrees much better than the Landau-Lifshitz force with the Li\'enard formula for electromagnetic fields with rapid changes. Moreover, this new reaction force is much easier to solve numerically and to implement, since it does not depend on the derivatives of the electromagnetic fields.
\begin{acknowledgments}
This work has been supported by Ministerio de Universidades (Gobierno de Espa\~{n}a) under grant number FPU20/04958.
\end{acknowledgments}
\section{Appendix: Alternative demonstration of the proposed reaction force} \label{Appendix}
The Li\'enard formula for the radiated power in terms of the four-velocity is given by \cite{Jackson1999_classical_electrodynamics}
\begin{equation}
P_{\mathrm{rad}}=-\tau_{m}m\left(\frac{d u^{\mu}}{d \tau}\right)\left(\frac{d u_{\mu}}{d \tau}\right),
\end{equation}
\noindent where $\tau$ is the proper time. If we assume that the external force is the Lorentz force $F_{\mathrm{L}}^{\mu}=q F^{\mu \nu} u_{\nu}$, the equation of motion taking into account the radiation is
\begin{equation}
m \frac{d u^{\mu}}{d \tau}=F_{\mathrm{L}}^{\mu}+F_{\mathrm{rad}}^{\mu}.
\end{equation}
\noindent Thus, the temporal component gives the variation of the total energy per unit time
\begin{equation}
\frac{d E}{d t}=q \mathbf{E}\cdot{\mathbf{v}}+\frac{c F_{\mathrm{rad}}^{0}}{\gamma},
\end{equation}
\noindent where the second term is the contribution due to the radiated power $P_{\mathrm{rad}}$. Consequently, the temporal component of the reaction force is
\begin{equation}
F_{\mathrm{rad}}^{0}=-\frac{\gamma P_{\mathrm{rad}}}{c}=-\frac{P_{\mathrm{rad}}}{c^{2}} u^{0},
\end{equation}
\noindent where the minus sign indicates that the particle is losing energy through radiation. This equation suggests that the reaction force is proportional to the four-velocity, yielding the following four-force
\begin{equation}\label{ecu.Frad4c}
F_{\mathrm{rad}}^{\mu}=-\frac{P_{\mathrm{rad}}}{c^{2}} u^{\mu}=\frac{\tau_{m}m}{c^{2}}\left(\frac{d u^{\nu}}{d \tau}\right)\left(\frac{d u_{\nu}}{d \tau}\right) u^{\mu}.
\end{equation}
\noindent If we assume that the reaction force is much smaller than the Lorentz force, i.e. $m \frac{d u^{\mu}}{d \tau}\approx F_{\mathrm{L}}^{\mu}$, the spatial component of (\ref{ecu.Frad4c}) is
\begin{equation}\label{ecu.FR_total_appendix}
\mathbf{F}_{\mathrm{rad}}=-\frac{\tau_{m}v}{m c^{2}}\left(\gamma^{2}\|\mathbf{F}_{\mathrm{L}, \perp}\|^{2}+\|\mathbf{F}_{\mathrm{L}, \|}\|^{2}\right) \hat{\mathbf{v}},
\end{equation}
\noindent which coincides with the proposed reaction force (\ref{ecu.FR_total}) if $ v \approx c$ is assumed.
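For completeness, the intermediate steps from (\ref{ecu.Frad4c}) to (\ref{ecu.FR_total_appendix}) can be sketched as follows (assuming the metric signature $(+,-,-,-)$). Writing $u^{\mu}=\gamma(c,\mathbf{v})$ and using $\dot{\gamma}=\gamma^{3}(\mathbf{v}\cdot\dot{\mathbf{v}})/c^{2}$, a direct computation gives
\begin{equation*}
\left(\frac{d u^{\nu}}{d \tau}\right)\left(\frac{d u_{\nu}}{d \tau}\right)=-\gamma^{4}\left(\|\dot{\mathbf{v}}_{\perp}\|^{2}+\gamma^{2}\|\dot{\mathbf{v}}_{\|}\|^{2}\right),
\end{equation*}
while projecting $m\, d u^{\mu}/d \tau\approx F_{\mathrm{L}}^{\mu}$ onto the directions perpendicular and parallel to $\mathbf{v}$ yields $\dot{\mathbf{v}}_{\perp}=\mathbf{F}_{\mathrm{L},\perp}/(m\gamma)$ and $\dot{\mathbf{v}}_{\|}=\mathbf{F}_{\mathrm{L},\|}/(m\gamma^{3})$. Substituting these into (\ref{ecu.Frad4c}), and recalling that the spatial part of a four-force is $\gamma$ times the corresponding three-force, one recovers (\ref{ecu.FR_total_appendix}).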
\input{output.bbl}
\end{document}
\section{Introduction}\label{sec:intro}
Minimizing convex functions is fundamental in optimization, both in theory and in algorithm design.
For most applications, assertions that can be made about a whole class of convex functions are of greater value than those concerning a particular problem; such theoretical analysis is valuable for the insight it provides. Our main result in this paper states
that the set of all proper lower semicontinuous (lsc) convex functions which have strong minimizers is of second category.
Studying strong minima is important because
numerical methods usually produce asymptotically minimizing sequences; when the function has a strong minimizer, we can
assert convergence of such sequences.
Strongly convex functions are also of great use in optimization problems, as strong convexity can significantly
increase the rate of convergence of first-order methods such as projected subgradient descent \cite{simpler} or,
more generally, the forward-backward algorithm \cite[Example 27.12]{convmono}.
Although every strongly convex function has a strong minimizer, we show that
the set of strongly convex functions is only of the first category.
Since a proper lsc convex function may take the value $+\infty$, we propose to relate the
function to its Moreau envelope.
The importance of the Moreau envelope in optimization is clear; it is a regularizing (smoothing) function \cite{moreau1963,proximite}, and in the convex setting it has the same local minima and minimizers as its objective function \cite{funcanal,rockwets}.
The key tool we use is Baire category: a property is said to be generic if it holds on a second category set.
We will work in a metric space defined via Moreau envelopes. In this setting,
the set of Moreau envelopes of proper lsc convex functions enjoys many nice properties: it is closed and convex, and the Moreau envelope mapping, viewed as a mapping from the set of proper lsc convex functions to the set of
Moreau envelopes of convex functions, is bijective. We provide a detailed analysis of functions with strong minima, strongly convex functions, and their Moreau envelopes.
\par The organization of the present work is the following. Section \ref{sec:prelim} contains notation and definitions, as well as some preliminary facts and lemmas about Baire category, epi-convergence of convex functions, strongly convex functions and strong minimizers
that we need to prove the main results. We show that the Moreau envelope of a convex function inherits many nice properties of
the convex function, such as coercivity and strong convexity.
In Section \ref{sec:metric}, using Moreau envelopes of convex functions, we propose to use Attouch-Wets' metric
on the set of proper lsc convex functions. It turns out that
this metric space is complete, and it is isometric to the metric space of Moreau envelopes endowed with uniform convergence
on bounded sets. The main results of this paper are presented in Section \ref{sec:main}. We give some characterizations of
strong minimizers of convex functions, that are essential for our Baire category approach. We establish
Baire category classification of the sets of strongly convex functions, convex functions with strong minima, and convex coercive functions. Our main result says that most convex functions have strong
minima, which in turn implies that the set of convex functions not having strong minimizers is small.
Surprisingly, the set of strongly convex functions is only of the first category.
In addition, we show that a convex function is strongly convex if and only if its
proximal mapping is a down-scaled proximal mapping.
Concluding remarks and areas of future research are mentioned in Section \ref{sec:conc}.
A comparison to the literature is in order. In \cite{mostmax}, Baire category theory was used to show that most (i.e. a generic set of) maximally monotone operators have a unique zero. In \cite{pwang2016}, a similar track was taken from the perspective of proximal mappings in particular, ultimately proving that most convex functions have a unique minimizer. The technique of
this paper differs in that it is based on functions. We use Moreau envelopes of convex functions, strong minimizers and strongly convex functions instead
of subdifferentials.
While Beer and Lucchetti obtained a similar result on generic well-posedness of convex optimization, their approach
relies on epi-graphs of convex functions \cite{beerl1991, beer1992}.
Our Moreau envelope approach is more accessible and natural
to practitioners,
because taking the Moreau envelope is a popular regularization method in the optimization community. We also
give a systematic study of strongly convex functions, which is new to the best of our knowledge.
See also \cite{spingarn1979} for generic nature of constrained optimization problems,
and \cite{lucchetti2006} for well-posedness in optimization. For comprehensive generic results on fixed points
of firmly nonexpansive mappings and nonexpansive mappings, we refer the reader to \cite{reichzas2014}.
\section{Preliminaries}\label{sec:prelim}
\subsection{Notation}
All functions in this paper are defined on $\operatorname{\mathbb{R}}^n,$ Euclidean space equipped with inner product $\langle x,y\rangle=\sum\limits_{i=1}^nx_iy_i,$ and induced norm $\|x\|=\sqrt{\langle x,x\rangle}.$ The extended real line $\operatorname{\mathbb{R}}\cup\{\infty\}$ is denoted $\overline{\operatorname{\mathbb{R}}}.$ We use $\operatorname{dom} f$ for the domain of $f,$ $\operatorname{int}\operatorname{dom} f$ for the interior of the domain of $f,$ $\operatorname{bdry}\operatorname{dom} f$ for the boundary of the domain of $f,$ and $\operatorname{epi} f$ for the epigraph of $f.$
We use $\Gamma_0(X)$ to represent the set of proper lsc convex
functions on the space $X$ with the terms proper, lsc, and convex as defined in \cite{convmono, rockwets}. More precisely,
$f$ is proper if $-\infty\not\in f(X)$ and $\operatorname{dom} f\neq\varnothing$; $f$ is lsc at $x$ if
$x_{k}\rightarrow x$ implies $\liminf_{k\rightarrow\infty}f(x_{k})
\geq f(x)$; when this is true at every $x\in X$, we call $f$ lsc on $X$; $f$ is convex if
$$(\forall x, y\in\operatorname{dom} f) (\forall 0\leq \alpha\leq 1)
\quad
f(\alpha x+(1-\alpha)y)\leq\alpha f(x)+(1-\alpha)f(y).$$
The symbol $G_\delta$ denotes a countable intersection of open sets; in a complete metric space, every dense $G_\delta$ set is generic. The identity mapping or matrix is $\operatorname{Id}:\operatorname{\mathbb{R}}^n\rightarrow\operatorname{\mathbb{R}}^n: x\mapsto x.$
We use $\operatorname{\mathbb{B}}_r(x)$ for the open ball centred at $x$ of radius $r,$ and $\operatorname{\mathbb{B}}_r[x]$ for the closed ball. For a set $C\subseteq\operatorname{\mathbb{R}}^n$,
its closure is $\overline{C}$. The closed line segment between $x, y\in\operatorname{\mathbb{R}}^n$ is $[x,y]:=
\{\lambda x+(1-\lambda)y: \ 0\leq \lambda \leq 1\}$.
We use $\overset{p}\rightarrow$ to indicate pointwise convergence, $\overset{e}\rightarrow$ for epi-convergence, and $\overset{u}\rightarrow$ for uniform convergence.
\subsection{Baire category}
Let $(X, d)$ be a metric space, where $X$ is a set and $d$ is a metric on $X$.
\begin{df}
A set $S\subseteq X$ is \emph{dense} in $X$ if every element of $X$ is either in $S,$ or a limit point of $S.$ A set is \emph{nowhere dense} in $X$ if the interior of its closure in $X$ is empty.
\end{df}
\begin{df}
A set $S\subseteq X$ is \emph{of first category (meagre)} if $S$ is a union of countably many nowhere dense sets.
A set $S\subseteq X$ is \emph{of second category (generic)} if $X\setminus S$ is of first category.
\end{df}
The following Baire category theorem is essential for this paper.
\begin{fact}
[Baire]\emph{(\cite[Theorem 1.47]{convanalgen} or \cite[Corollary 1.44]{convmono})} Let $(X,d)$ be a complete metric space. Then any countable intersection of dense open subsets of $X$ is dense.
\end{fact}
\begin{fact}\label{separable}
Finite-dimensional space $\operatorname{\mathbb{R}}^n$ is separable. That is, $\operatorname{\mathbb{R}}^n$ has a countable subset that is dense in $\operatorname{\mathbb{R}}^n.$
\end{fact}
\begin{proof}
This result is an extension of \cite[Example 1.3-7]{kreyszigfuncanal}, using the fact that the set of all $n$-tuples with rational components is a countable, dense subset of $\operatorname{\mathbb{R}}^n.$
\end{proof}
\subsection{Convex analysis}
In this section we state several key facts about convex functions
that we need in order to prove the main results in subsequent sections.
\subsubsection{Subdifferentials of convex functions}
Let $f\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$.
The set-valued mapping
$$\partial f\colon \operatorname{\mathbb{R}}^n\ensuremath{\rightrightarrows} \operatorname{\mathbb{R}}^n\colon
x\mapsto \menge{x^*\in \operatorname{\mathbb{R}}^n}{(\forall y\in
\operatorname{\mathbb{R}}^n)\; \langle y-x, x^*\rangle + f(x)\leq f(y)}$$ is the
{subdifferential
operator} of $f$.
\begin{fact}\emph{\cite[Theorem 20.40]{convmono}}\label{subdiffmaxmono}
If $f\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ then $\partial f$ is maximally monotone.
\end{fact}
\begin{fact}\emph{(\cite[Theorem 12.41]{rockwets}, \cite[Theorem 2.51]{attouchwet86})}\label{maxmonoalmostconvex}
For any maximally monotone mapping $T:\operatorname{\mathbb{R}}^n\rightrightarrows\operatorname{\mathbb{R}}^n,$ the set $\operatorname{dom} T$ is almost convex. That is, there exists a convex set $C\subseteq \operatorname{\mathbb{R}}^n$ such that $C\subseteq\operatorname{dom} T\subseteq\overline{C}.$ The same applies to the set $\operatorname{ran} T.$
\end{fact}
\begin{fact}\emph{\cite[Corollary 23.5.1]{convanalrock}}\label{inverse}
If $f\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ then $\partial f^*$ is the inverse of $\partial f$ in the sense of multivalued mappings, i.e. $x\in\partial f^*(x^*)$ if and only if $x^*\in\partial f(x).$
\end{fact}
\subsubsection{Convex functions and their Moreau envelopes}
\begin{df}
The \emph{Moreau envelope} of a proper, lsc function $f:\operatorname{\mathbb{R}}^n\rightarrow\overline{\operatorname{\mathbb{R}}}$ is defined as
$$e_\lambda f(x):=\inf\limits_y\left\{f(y)+\frac{1}{2\lambda}\|y-x\|^2\right\}.$$
The associated \emph{proximal mapping} is the (possibly empty) set of points at which this infimum is achieved, and is denoted $\operatorname{Prox}_f^\lambda:$
$$\operatorname{Prox}_f^\lambda(x):=\operatornamewithlimits{argmin}\limits_y\left\{f(y)+\frac{1}{2\lambda}\|y-x\|^2\right\}.$$
\end{df}
In this paper, without loss of generality we use $\lambda=1.$ The theory developed here is equally applicable with any other choice of $\lambda>0.$
\begin{fact}\emph{(\cite[Proposition 12.29]{convmono} or \cite[Theorem 2.26]{rockwets})} Let $f\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$. Then
$e_{1}f:\operatorname{\mathbb{R}}^n\rightarrow\operatorname{\mathbb{R}}$ is continuously differentiable on $\operatorname{\mathbb{R}}^n$, and its gradient
$$\nabla e_{1}f=\operatorname{Id}-\operatorname{Prox}_{f}^{1}$$
is $1$-Lipschitz continuous, i.e., nonexpansive.
\end{fact}
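To make these notions concrete, consider $f=|\cdot|$ on $\operatorname{\mathbb{R}}$: then $e_{1}f$ is the Huber function and $\operatorname{Prox}_{f}^{1}$ is the soft-thresholding operator. The following Python sketch (ours, purely illustrative) verifies $\nabla e_{1}f=\operatorname{Id}-\operatorname{Prox}_{f}^{1}$ numerically:
\begin{verbatim}
import numpy as np

def env_abs(x):    # e_1 f for f = |.|: the Huber function
    return np.where(np.abs(x) <= 1.0, 0.5 * x**2, np.abs(x) - 0.5)

def prox_abs(x):   # Prox_f^1 for f = |.|: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)

x = np.linspace(-3.0, 3.0, 601)
grad = np.gradient(env_abs(x), x)         # numerical gradient of e_1 f
assert np.allclose(grad, x - prox_abs(x), atol=1e-2)   # Id - Prox_f^1
\end{verbatim}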
One important concept for studying the convergence of extended-valued functions is epi-convergence, see, e.g., \cite{rockwets}.
\begin{df}
The \emph{lower epi-limit} of a sequence of functions $\{f^\nu\}_{\nu\in\operatorname{\mathbb{N}}}$, $f^\nu:\operatorname{\mathbb{R}}^n\rightarrow\overline{\operatorname{\mathbb{R}}}$, is the function having as its epigraph the outer limit of the sequence of sets $\operatorname{epi} f^\nu:$
$$\operatorname{epi}(\operatornamewithlimits{eliminf}_\nu f^\nu):=\limsup_\nu(\operatorname{epi} f^\nu).$$
Similarly, the \emph{upper epi-limit} of $\{f^\nu\}_{\nu\in\operatorname{\mathbb{N}}}$ is the function having as its epigraph the inner limit of the sets $\operatorname{epi} f^\nu:$
$$\operatorname{epi}(\operatornamewithlimits{elimsup}_\nu f^\nu):=\liminf_\nu(\operatorname{epi} f^\nu).$$
When these two functions coincide, the \emph{epi-limit} is said to exist and the functions are said to \emph{epi-converge} to $f:$
$$f^\nu\overset{e}\rightarrow f~~\mbox{if and only if }~~\operatorname{epi} f^\nu\rightarrow\operatorname{epi} f.$$
\end{df}
We refer the reader to \cite{rockwets, beer1992, vanderwerff} for further details on epi-convergence, e.g., continuity, stability and applications
in optimization.
The analysis of the limit properties of sequences of convex functions via their
Moreau envelopes is highlighted by the following fact.
\begin{fact}\emph{(\cite[Theorem 7.37]{rockwets}, \cite{attouch1984})}\label{fact:epi}
Let $\{f^\nu\}_{\nu\in\operatorname{\mathbb{N}}}\subseteq\Gamma_0(\operatorname{\mathbb{R}}^n),$ $f\in\Gamma_{0}(\operatorname{\mathbb{R}}^n).$ Then
$$f^\nu\overset{e}\rightarrow f~~\mbox{ if and only if }~~e_1f^\nu\overset{p}\rightarrow e_1f.$$
Moreover, the pointwise convergence of $e_1f^\nu$ to $e_1f$ is uniform on all bounded subsets of $\operatorname{\mathbb{R}}^n,$ hence yields epi-convergence to $e_1f$ as well.
\end{fact}
Two more nice properties about Moreau envelopes are:
\begin{fact}\emph{\cite[Example 1.46]{rockwets}}\label{fact1}
For any proper, lsc function $f:\operatorname{\mathbb{R}}^n\rightarrow\overline{\operatorname{\mathbb{R}}},$ $\inf f=\inf e_1f.$
\end{fact}
\begin{lem}\emph{\cite[Theorem 31.5]{convanalrock}}\label{lemmorconj}
Let $f\in\Gamma_0(\operatorname{\mathbb{R}}^n).$ Then
$$e_1f(x)+e_1f^*(x)=\frac{1}{2}\|x\|^2.$$
\end{lem}
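As a numerical illustration of Lemma \ref{lemmorconj} (ours; take $f=|\cdot|$ on $\operatorname{\mathbb{R}}$, so that $f^{*}=\iota_{[-1,1]}$ and $e_{1}f^{*}(x)=\frac{1}{2}\operatorname{dist}^{2}(x,[-1,1])$):
\begin{verbatim}
import numpy as np

x = np.linspace(-3.0, 3.0, 601)
env_f = np.where(np.abs(x) <= 1.0, 0.5 * x**2, np.abs(x) - 0.5)  # e_1|.|
# e_1 f*(x) = (1/2) dist(x, [-1, 1])^2, since f* is the indicator of [-1, 1]:
env_fstar = 0.5 * np.maximum(np.abs(x) - 1.0, 0.0)**2
assert np.allclose(env_f + env_fstar, 0.5 * x**2)
\end{verbatim}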
For more properties of Moreau envelopes of functions, we refer the reader to \cite{attouch1984, convmono, convanalrock, rockwets}.
\subsection{Strong minimizers, coercive convex functions and strongly convex functions}
We now present some basic properties of strong minimizers, strongly convex functions, and coercive functions.
\begin{df}
A function $f:\operatorname{\mathbb{R}}^n\rightarrow \overline{\operatorname{\mathbb{R}}}$ is said to attain a \emph{strong minimum} at $\bar{x}\in\operatorname{\mathbb{R}}^n$
if
\begin{enumerate}
\item $f(\bar{x})\leq f(x)$ for all $x\in\operatorname{dom} f,$ and
\item $f(x_n)\rightarrow f(\bar{x})$ implies $x_n\rightarrow\bar{x}.$
\end{enumerate}
\end{df}
For further information on strong minimizers, we refer readers to \cite{lucchetti2006, borweinzhu, smoothvarprincip}.
\begin{df}
A function $f\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ is called \emph{coercive} if
$$\liminf\limits_{\|x\|\rightarrow\infty}\frac{f(x)}{\|x\|}=\infty.$$
\end{df}
\begin{df}
A function $f\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ is \emph{strongly convex} if there exists a modulus $\sigma>0$ such that $f-\frac{\sigma}{2}\|\cdot\|^2$ is convex. Equivalently, $f$ is strongly convex if there exists $\sigma>0$ such that for all $\lambda\in[0,1]$ and for all $x,y\in\operatorname{\mathbb{R}}^n,$
$$f(\lambda x+(1-\lambda)y)\leq\lambda f(x)+(1-\lambda)f(y)-\frac{\sigma}{2}\lambda(1-\lambda)\|x-y\|^2.$$
\end{df}
\begin{df}
The \emph{Fenchel conjugate} of $f:\operatorname{\mathbb{R}}^n\rightarrow\overline{\operatorname{\mathbb{R}}}$ is defined as
$$f^*(v):=\sup\limits_x\{\langle v,x\rangle-f(x)\}.$$
\end{df}
\begin{fact}\emph{(\cite[Exercise 21 p. 83]{convanal}, \cite[Theorem 11.8]{rockwets})}\label{domfcoercive}
Let $f\in\Gamma_0(\operatorname{\mathbb{R}}^n).$ Then $f$ is coercive if and only if $\operatorname{dom} f^*=\operatorname{\mathbb{R}}^n.$
\end{fact}
\begin{lem}\label{lem3}
The function $f\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ is strongly convex if and only if $e_1f$ is strongly convex.
\end{lem}
\begin{proof} By \cite[Proposition 12.6]{rockwets}, $f$ is strongly convex if and only if $\nabla f^*$ is $\frac{1}{\sigma}$-Lipschitz for some $\sigma>0.$ Now
\begin{align*}
(e_1f)^*&=f^*+\frac{1}{2}\|\cdot\|^2,\mbox{ and}\\
\nabla(e_1f)^*&=\nabla f^*+\operatorname{Id}.
\end{align*}
Suppose that $f$ is strongly convex. Since $\nabla f^*$ is $\frac{1}{\sigma}$-Lipschitz, we have that $\nabla f^*+\operatorname{Id}$ is $\left(1+\frac{1}{\sigma}\right)$-Lipschitz. Hence, $\nabla(e_1f)^*$ is $\left(1+\frac{1}{\sigma}\right)$-Lipschitz. Then $e_1f$ is strongly convex, and we have proved one direction of the lemma. Working backwards with the same argument, the other direction is proved as well.
\end{proof}
\begin{lem}
Let $f\in\Gamma_0(\operatorname{\mathbb{R}}^n).$ Then $f$ is coercive if and only if $e_1f$ is coercive.
\end{lem}
\begin{proof}
Suppose that $f$ is coercive. By Fact~\ref{domfcoercive}, a function is coercive if and only if its Fenchel conjugate is full-domain. Since $(e_1f)^*=f^*+\frac{1}{2}\|\cdot\|^2,$ we have that $(e_1f)^*$ is full-domain. Hence, $e_1f$ is coercive. To prove the other direction, suppose that $e_1f$ is coercive, and an identical argument shows that $f$ is coercive as well.
\end{proof}
\begin{lem}\label{lem4}
Let $f\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ be strongly convex. Then $f$ is coercive.
\end{lem}
\begin{proof} Since $f$ is strongly convex, $f$ can be written as $g+\frac{\sigma}{2}\|\cdot\|^2$ for some $g\in
\Gamma_{0}(\operatorname{\mathbb{R}}^n)$ and $\sigma>0.$ Since $g$ is convex, $g$ is bounded below by a hyperplane. That is, there exist $\tilde{x}\in\operatorname{\mathbb{R}}^n$ and $r\in\operatorname{\mathbb{R}}$ such that
$$g(x)\geq\langle\tilde{x},x\rangle+r\mbox{ for all }x\in\operatorname{\mathbb{R}}^n.$$
Hence,
$$f(x)\geq\langle\tilde{x},x\rangle+r+\frac{\sigma}{2}\|x\|^2\mbox{ for all }x\in\operatorname{\mathbb{R}}^n.$$
This gives us that
$$\liminf\limits_{\|x\|\rightarrow\infty}\frac{f(x)}{\|x\|}=\infty.$$
\end{proof}
Note that a convex function can be coercive, but fail to be strongly convex. Consider the following example.
\begin{ex}
For $x\in\operatorname{\mathbb{R}},$ define
$$f(x):=\begin{cases}
(x+1)^2& \text{ if $x<-1$},\\
0 &\text{ if $-1\leq x\leq1$},\\
(x-1)^2 & \text{ if $x>1.$}
\end{cases}$$
Then $f(x)$ is coercive, but not strongly convex.
\end{ex}
\begin{proof}
It is elementary to show that $f$ is convex and coercive.
\begin{figure}[H]
\begin{center}\includegraphics[scale=0.3]{coercive.png}\end{center}
\end{figure}
Suppose that $f$ is strongly convex, and let $x=-1,$ $y=1,$ $\lambda=\frac{1}{2}.$ Then, for some $\sigma>0,$ we have
\begin{align*}
f(\lambda x+(1-\lambda)y)&\leq\lambda f(x)+(1-\lambda)f(y)-\frac{\sigma}{2}\lambda(1-\lambda)|x-y|^2,\\
f\left(\frac{1}{2}(-1)+\frac{1}{2}(1)\right)&\leq\frac{1}{2}f(-1)+\frac{1}{2}f(1)-\frac{\sigma}{2}\frac{1}{4}|-1-1|^2,\\
0&\leq-\frac{\sigma}{2},
\end{align*}
a contradiction. Therefore, $f$ is not strongly convex.
\end{proof}
\begin{lem}\label{lem5}
Let $f\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ be strongly convex. Then the (unique) minimizer of $f$ is a strong minimizer.
\end{lem}
\begin{proof} Let $f(x_k)\rightarrow\inf\limits_xf(x).$ Since $f$ is coercive by Lemma~\ref{lem4},
$\{x_k\}_{k=1}^\infty$ is bounded. By the Bolzano-Weierstrass Theorem, $\{x_k\}_{k=1}^\infty$ has a convergent subsequence $x_{k_j}\rightarrow\bar{x}.$ Since $f$ is lsc, we have that $\liminf\limits_{k\rightarrow\infty}f(x_k)\geq f(\bar{x}).$ Hence,
$$\inf\limits_xf(x)\leq f(\bar{x})\leq\inf\limits_xf(x).$$
Therefore, $f(\bar{x})=\inf\limits_xf(x).$ Since strong convexity implies strict convexity, $\operatornamewithlimits{argmin} f(x)=\{\bar{x}\}$ is unique. As every subsequence of $\{x_k\}_{k=1}^\infty$ converges to the same limit $\bar{x},$ we conclude that $x_k\rightarrow\bar{x}.$
\end{proof}
To conclude this section, we provide an example that demonstrates the existence of functions that have strong minimizers, and yet are not strongly convex.
\begin{ex}
Let $f:\operatorname{\mathbb{R}}\rightarrow\operatorname{\mathbb{R}},$ $f(x)=x^4.$ The function $f$ attains a strong minimum at $\bar{x}=0,$ but is not strongly convex.
\end{ex}
\begin{proof} By definition, $f$ is strongly convex if and only if there exists $\sigma>0$ such that $g(x):=x^4-\frac{\sigma}{2}x^2$ is convex. Since $g$ is a differentiable, univariate function, we know it is convex if and only if its second derivative is nonnegative for all $x\in\operatorname{\mathbb{R}}.$ Since $g''(x)=12x^2-\sigma$ is clearly not nonnegative for any fixed $\sigma>0$ and all $x\in\operatorname{\mathbb{R}},$ we have that $g$ is not convex. Therefore, $f$ is not strongly convex.
Clearly zero is the minimum and minimizer of $f.$ Let $\{x_n\}_{n=1}^\infty\subseteq\operatorname{\mathbb{R}}$ be such that $f(x_n)\rightarrow f(0)=0.$ Then
$
\lim\limits_{n\rightarrow\infty}x_n^4=0$ implies
$\lim\limits_{n\rightarrow\infty}x_n=0.$
Therefore, $f$ attains a strong minimum.
\end{proof}
\section{A complete metric space using Moreau envelopes}\label{sec:metric}
The principal tool we use is the Baire category theorem. To this end, we need a Baire space.
In this section, we establish a complete metric space whose distance function makes use of the Moreau envelope.
This metric has been used by Attouch-Wets in \cite[page 38]{attouchwet86}.
The distances used in the next section refer to the metric established here.
We begin with some properties on the Moreau envelope set
$$e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)):=\{e_{1}f:\ f\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)\}.$$
\begin{thm}\label{thm1}
The set $e_1(\Gamma_0(\operatorname{\mathbb{R}}^n))$ is a convex set in $\Gamma_0(\operatorname{\mathbb{R}}^n).$
\end{thm}
\begin{proof} Let $f_1,f_2\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ $\lambda\in[0,1].$ Then $e_1f_1,e_1f_2\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)).$ We need to show that $\lambda e_1f_1+(1-\lambda)e_1f_2\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)).$ By \cite[Theorem 6.2]{proxbas} with $\mu=1$ and $n=2,$ $\lambda e_1f_1+(1-\lambda)e_1f_2$ is the Moreau envelope of the proximal average $P_1(\mathbf{f},\lambda)$ of $\mathbf{f}=(f_1,f_2).$ By \cite[Corollary 5.2]{proxbas}, we have that $P_1(\mathbf{f},\lambda)\in\Gamma_0(\operatorname{\mathbb{R}}^n).$ Hence, $e_1P_1(\mathbf{f},\lambda)\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),$ and we conclude that $e_1(\Gamma_0(\operatorname{\mathbb{R}}^n))$ is a convex set.
\end{proof}
On $e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)),$ define a metric by
\begin{equation}\label{e:m:envelope}
\tilde{d}(\tilde{f},\tilde{g}):= \sum\limits_{i=1}^\infty\frac{1}{2^i}\frac{\|\tilde{f}-\tilde{g}\|_i}{1+
\|\tilde{f}-\tilde{g}\|_i},
\end{equation}
where $\|\tilde{f}-\tilde{g}\|_i:=\sup\limits_{\|x\|\leq i}|\tilde{f}(x)-\tilde{g}(x)|$ and
$\tilde{f}, \tilde{g}\in e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n))$.
Note that a sequence of functions in $(e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)),\tilde{d})$ converges if and only if the sequence
converges uniformly on bounded sets, if and only if the sequence converges pointwise on $\operatorname{\mathbb{R}}^n$.
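A truncated numerical evaluation of the metric \eqref{e:m:envelope} on $\operatorname{\mathbb{R}}$ can be sketched as follows (the truncation level, the grid, and the test envelopes are our illustrative choices):
\begin{verbatim}
import numpy as np

def d_tilde(env_f, env_g, n_terms=30, pts=2001):
    """Truncated version of the metric on Moreau envelopes (n = 1)."""
    total = 0.0
    for i in range(1, n_terms + 1):
        x = np.linspace(-i, i, pts)
        sup_i = np.max(np.abs(env_f(x) - env_g(x)))
        total += 2.0**(-i) * sup_i / (1.0 + sup_i)
    return total

huber = lambda x: np.where(np.abs(x) <= 1, 0.5 * x**2, np.abs(x) - 0.5)
half_quad = lambda x: 0.25 * x**2     # e_1 f for f = (1/2)|.|^2
print(d_tilde(huber, half_quad))
\end{verbatim}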
\begin{thm}\label{thm2}
The metric space $(e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),\tilde{d})$ is complete.
\end{thm}
\begin{proof} Let $\{e_1f_k\}_{k=1}^\infty$ be a Cauchy sequence in $(e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),\tilde{d}),$ where $\{f_k\}_{k=1}^\infty\subseteq\Gamma_0(\operatorname{\mathbb{R}}^n).$ Then $\{e_1f_k\}_{k=1}^\infty$ is uniformly Cauchy on every bounded set, so $e_1f_k\overset{p}\rightarrow g$ for some function $g.$ Our objective is to prove that $g$ is in fact the Moreau envelope of a proper, lsc, convex function. Since $f_k\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ for each $k,$ by Theorem \ref{thm1} $e_1f_k\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ for each $k.$ Then by \cite[Theorem 7.17]{rockwets}, we have that $e_1f_k\overset{e}\rightarrow g,$ and $e_1f_k\overset{u}\rightarrow g$ on bounded sets. Since $e_1f_k$ is convex and full-domain for each $k,$ $g$ is also convex and full-domain. By \cite[Theorem 11.34]{rockwets}, we have that $(e_1f_k)^*\overset{e}\rightarrow g^*,$ that is, $f_k^*+\frac{1}{2}\|\cdot\|^2\overset{e}\rightarrow g^*.$ Defining $h^*:=g^*-\frac{1}{2}\|\cdot\|^2,$ we have $f_k^*\overset{e}\rightarrow h^*\in\Gamma_0(\operatorname{\mathbb{R}}^n).$ Then applying \cite[Theorem 11.34]{rockwets} again, we obtain $f_k\overset{e}\rightarrow h.$ Finally, using \cite[Theorem 7.37]{rockwets} we see that $e_1f_k\overset{e}\rightarrow e_1h,$ and we conclude that $g=e_1h\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)).$ By Fact \ref{fact:epi}, we have pointwise and uniform convergence to $e_1h$ as well. Therefore, every Cauchy sequence in $(e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),\tilde{d})$ converges to an element of $e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),$ i.e. the space is complete.
\end{proof}
On $\Gamma_{0}(\operatorname{\mathbb{R}}^n),$ we will use:
\begin{df}[Attouch-Wets metric]\label{defd}
For $f,g\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ define the distance function $d:$
$$d(f,g):=\sum\limits_{i=1}^\infty\frac{1}{2^i}\frac{\|e_1f-e_1g\|_i}{1+\|e_1f-e_1g\|_i}.$$
\end{df}
In order to prove completeness of the space, we state the following lemma, whose simple proof is omitted.
\begin{lem}\label{lem6}
Define $a:[0,\infty)\rightarrow\operatorname{\mathbb{R}},$ $a(t):=\frac{t}{1+t}.$ Then
\begin{itemize}
\item[a)] $a$ is an increasing function, and
\item[b)] $t_1,t_2\geq0$ implies that $a(t_1+t_2)\leq a(t_1)+a(t_2).$
\end{itemize}
\end{lem}
\begin{prop}\label{prop1}
The space $(\Gamma_0(\operatorname{\mathbb{R}}^n),d)$ where $d$ is the metric defined in Definition \ref{defd}, is a complete metric space.
\end{prop}
\begin{proof} Items M1-M4 show that $(\Gamma_0(\operatorname{\mathbb{R}}^n),d)$ is a metric space, and item C shows that it is complete.\\
M1: Since
$$\sum\limits_{i=1}^\infty\frac{1}{2^i}=1,\mbox{ and }0\leq\frac{\|e_1f-e_1g\|_i}{1+\|e_1f-e_1g\|_i}<1\mbox{ for all }i,$$
we have that
$$\frac{1}{2^i}\geq\frac{1}{2^i}\frac{\|e_1f-e_1g\|_i}{1+\|e_1f-e_1g\|_i}\mbox{ for all }i.$$
Then
$$0\leq d(f,g)\leq1\mbox{ for all }f,g\in\Gamma_0(\operatorname{\mathbb{R}}^n).$$
Hence, $d$ is real-valued, finite, and non-negative.\\
M2: We have
\begin{align*}
d(f,g)=0&\Leftrightarrow\sum\limits_{i=1}^\infty\frac{1}{2^i}\frac{\|e_1f-e_1g\|_i}{1+\|e_1f-e_1g\|_i}=0,\\
&\Leftrightarrow\|e_1f-e_1g\|_i=0\mbox{ for all }i,\\
&\Leftrightarrow e_1f(x)-e_1g(x)=0\mbox{ for all }x,\\
&\Leftrightarrow e_1f=e_1g,\\
&\Leftrightarrow f=g\mbox{ \cite[Corollary 3.36]{rockwets}.}
\end{align*}
Hence, $d(f,g)=0$ if and only if $f=g.$\\
M3: The fact that $d(f,g)=d(g,f)$ is trivial.\\
M4: By the triangle inequality,
$$\|e_1f-e_1g\|_i\leq\|e_1f-e_1h\|_i+\|e_1h-e_1g\|_i\mbox{ for all }f,g,h\in\Gamma_0(\operatorname{\mathbb{R}}^n).$$
By applying Lemma \ref{lem6} (a), we have
$$\frac{\|e_1f-e_1g\|_i}{1+\|e_1f-e_1g\|_i}\leq\frac{\|e_1f-e_1h\|_i+\|e_1h-e_1g\|_i}{1+\|e_1f-e_1h\|_i+\|e_1h-e_1g\|_i}.$$
Then we apply Lemma \ref{lem6} (b) with $t_1=\|e_1f-e_1h\|_i$ and $t_2=\|e_1h-e_1g\|_i,$ and we have
$$\frac{\|e_1f-e_1g\|_i}{1+\|e_1f-e_1g\|_i}\leq\frac{\|e_1f-e_1h\|_i}{1+\|e_1f-e_1h\|_i}+\frac{\|e_1h-e_1g\|_i}{1+\|e_1h-e_1g\|_i}.$$
Multiplying both sides by $\frac{1}{2^i}$ and taking the summation over $i,$ we obtain the distance functions, which yields $d(f,g)\leq d(f,h)+d(h,g)$ for all $f,g,h\in\Gamma_0(\operatorname{\mathbb{R}}^n).$
C: Let $\{f_k\}_{k=1}^\infty$ be a Cauchy sequence in $(\Gamma_0(\operatorname{\mathbb{R}}^n),d).$ Then for each $\varepsilon>0$ there exists $N_\varepsilon\in\operatorname{\mathbb{N}}$ such that $d(f_j,f_k)<\varepsilon$ for all $j,k\geq N_\varepsilon.$ Fix $i\in\operatorname{\mathbb{N}}$ and $\varepsilon>0$ with $2^i\varepsilon<1.$ Then there exists $N\in\operatorname{\mathbb{N}}$ such that
$$\sum\limits_{l=1}^\infty\frac{1}{2^l}\frac{\|e_1f_j-e_1f_k\|_l}{1+\|e_1f_j-e_1f_k\|_l}<\varepsilon\mbox{ for all }j,k\geq N.$$
In particular, $\frac{\|e_1f_j-e_1f_k\|_i}{1+\|e_1f_j-e_1f_k\|_i}<2^{i}\varepsilon,$ so that $\|e_1f_j-e_1f_k\|_i<\frac{2^i\varepsilon}{1-2^i\varepsilon}=:\hat{\varepsilon}$ for all $j,k\geq N.$ Notice that $\hat{\varepsilon}\searrow0$ as $\varepsilon\searrow0.$ This gives us that $\{e_1f_k\}_{k=1}^\infty$ is uniformly Cauchy on $\operatorname{\mathbb{B}}_i[0]$ for each $i\in\operatorname{\mathbb{N}},$ so that $e_1f_k\overset{p}\rightarrow g$ for some function $g.$
By the same arguments as in the proof of Theorem \ref{thm2}, we know that $g=e_1h$ for some $h\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ and hence $f_k\rightarrow h$ in $(\Gamma_0(\operatorname{\mathbb{R}}^n),d).$ Therefore, $(\Gamma_0(\operatorname{\mathbb{R}}^n),d)$ is a complete metric space.
\end{proof}
On the set of Fenchel conjugates
$$(\Gamma_0(\operatorname{\mathbb{R}}^n))^*:=\{f^*:\ f\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)\}$$ define a metric by
$\hat{d}(f,g):=d(f,g)$. Observe that $\Gamma_{0}(\operatorname{\mathbb{R}}^n)=(\Gamma_{0}(\operatorname{\mathbb{R}}^n))^*$.
\begin{cor}\label{thmmorconj}
Consider two metric spaces
$(\Gamma_0(\operatorname{\mathbb{R}}^n),d)$ and $((\Gamma_0(\operatorname{\mathbb{R}}^n))^*,\hat{d})$.
Define $$T:(\Gamma_0(\operatorname{\mathbb{R}}^n),d)\rightarrow ((\Gamma_0(\operatorname{\mathbb{R}}^n))^*,\hat{d}):
f\mapsto f^*.$$
Then $T$ is a bijective isometry. Consequently,
$(\Gamma_0(\operatorname{\mathbb{R}}^n),d)$ and $((\Gamma_0(\operatorname{\mathbb{R}}^n))^*,\hat{d})$ are isometric.
\end{cor}
\begin{proof} Clearly $T$ is onto. Also, $T$ is injective because of the Fenchel-Moreau Theorem \cite[Theorem 13.32]{convmono}
or \cite[Corollary 12.2.1]{convanalrock}. To see this,
let $Tf=Tg$. Then $f^*=g^*$, so $f=(f^*)^*=(g^*)^*=g$.
It remains to show that $T$ is an isometry: $(\forall f, g \in \Gamma_0(\operatorname{\mathbb{R}}^n))$ $d(f,g)=d(f^*,g^*)=\hat{d}(Tf, Tg).$
Lemma \ref{lemmorconj} states that $e_1f+e_1f^*=\frac{1}{2}\|\cdot\|^2.$ Using this, we have
\begin{align*}
d(f^*,g^*)&=\sum\limits_{i=1}^\infty\frac{1}{2^i}\frac{\sup\limits_{\|x\|\leq i}|e_1f^*(x)-e_1g^*(x)|}{1+\sup\limits_{\|x\|\leq i}|e_1f^*(x)-e_1g^*(x)|}\\
&=\sum\limits_{i=1}^\infty\frac{1}{2^i}\frac{\sup\limits_{\|x\|\leq i}\left|\frac{1}{2}\|x\|^2-e_1f(x)-\frac{1}{2}\|x\|^2+e_1g(x)\right|}{1+\sup\limits_{\|x\|\leq i}\left|\frac{1}{2}\|x\|^2-e_1f(x)-\frac{1}{2}\|x\|^2+e_1g(x)\right|}\\
&=\sum\limits_{i=1}^\infty\frac{1}{2^i}\frac{\sup\limits_{\|x\|\leq i}|e_1g(x)-e_1f(x)|}{1+\sup\limits_{\|x\|\leq i}|e_1g(x)-e_1f(x)|}\\
&=d(f,g).
\end{align*}
\end{proof}
By Theorem~\ref{thm2}, $(e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)),\tilde{d})$ is a complete metric space.
\begin{cor}\label{c:convex:moreau} Consider two metric spaces $(\Gamma_{0}(\operatorname{\mathbb{R}}^n), d)$ and $(e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)),\tilde{d})$.
Define $$T:\Gamma_{0}(\operatorname{\mathbb{R}}^n)\rightarrow e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)):
f\mapsto e_{1}f.$$
Then $T$ is a bijective isometry, so $(\Gamma_{0}(\operatorname{\mathbb{R}}^n), d)$ and $(e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)),\tilde{d})$
are isometric.
\end{cor}
\section{Baire category results}\label{sec:main}
This section is devoted to the main work of this paper. Ultimately, we show that the set of strongly convex functions is a meagre (Baire category one) set, while the set of convex functions that attain a strong minimum is a generic (Baire category two) set.
\subsection{Characterizations of the strong minimizer}
The first proposition describes the relationship between a function and its Moreau envelope, pertaining to the strong minimum. Several more results regarding strong minima follow.
\begin{prop}\label{thm5}
Let $f:\operatorname{\mathbb{R}}^n\rightarrow\overline{\operatorname{\mathbb{R}}}.$ Then $f$ attains a strong minimum at $\bar{x}$ if and only if $e_1f$ attains a strong minimum at $\bar{x}.$
\end{prop}
\begin{proof} $(\Rightarrow)$ Assume that $f$ attains a strong minimum at $\bar{x}.$ Then $$\min\limits_xf(x)=\min\limits_xe_1f(x)=f(\bar{x})=e_1f(\bar{x}).$$ Let $\{x_k\}$ be such that $e_1f(x_k)\rightarrow e_1f(\bar{x}).$ We need to show that $x_k\rightarrow\bar{x}.$ Since $$e_1f(x_k)=f(v_k)+\frac{1}{2}\|v_k-x_k\|^2$$ for some $v_k,$ and $f(v_k)\geq f(\bar{x}),$ we have
\begin{equation}\label{vk}
0\leq\frac{1}{2}\|x_k-v_k\|^2+f(v_k)-f(\bar{x})=e_1f(x_k)-e_1f(\bar{x})\rightarrow0.
\end{equation}
Since both $\frac{1}{2}\|x_k-v_k\|^2\geq0$ and $f(v_k)-f(\bar{x})\geq0,$ equation \eqref{vk} tells us that $x_k-v_{k}\rightarrow 0$ and $f(v_k)\rightarrow f(\bar{x}).$ Since $\bar{x}$ is the strong minimizer of $f,$ we have $v_k\rightarrow\bar{x}.$ Therefore, $x_k\rightarrow\bar{x},$ and $e_1f$ attains a strong minimum at $\bar{x}.$\\
$(\Leftarrow)$ Assume that $e_1f$ attains a strong minimum at $\bar{x},$ $e_1f(\bar{x})=\min e_1f.$ Then $e_1f(x_k)\rightarrow e_1f(\bar{x})$ implies that $x_k\rightarrow\bar{x}.$ Let $f(x_k)\rightarrow f(\bar{x}).$ We have
$$f(\bar{x})\leq e_1f(\bar{x})\leq e_1f(x_k)\leq f(x_k).$$
Since $f(x_k)\rightarrow f(\bar{x}),$ we obtain
$$e_1f(x_k)\rightarrow f(\bar{x})=e_1f(\bar{x}).$$
Therefore, $x_k\rightarrow\bar{x},$ and $f$ attains a strong minimum at $\bar{x}.$
\end{proof}
\begin{thm}\label{thm7}
Let $f:\operatorname{\mathbb{R}}^n\rightarrow\overline{\operatorname{\mathbb{R}}}$ have a strong minimizer $\bar{x}.$ Then for all $m\in\operatorname{\mathbb{N}},$
$$\inf\limits_{\|x-\bar{x}\|\geq\frac{1}{m}}f(x)>f(\bar{x}).$$
\end{thm}
\begin{proof} Suppose that there exists $m\in\operatorname{\mathbb{N}}$ such that $\inf\limits_{\|x-\bar{x}\|\geq\frac{1}{m}}f(x)=f(\bar{x}).$ Then there exists a sequence $\{x_k\}_{k=1}^\infty$ with $\|x_k-\bar{x}\|\geq\frac{1}{m}$ and $\lim\limits_{k\rightarrow\infty}f(x_k)=f(\bar{x}).$ Since $\bar{x}$ is the strong minimizer of $f,$
we have $x_k\rightarrow\bar{x},$ a contradiction.
\end{proof}
\begin{cor}\label{cor2}
Let $f:\operatorname{\mathbb{R}}^n\rightarrow\overline{\operatorname{\mathbb{R}}}$ have a strong minimizer $\bar{x}.$ Then for all $m\in\operatorname{\mathbb{N}},$
$$\inf\limits_{\|x-\bar{x}\|\geq\frac{1}{m}}e_1f(x)>e_1f(\bar{x}).$$
\end{cor}
\begin{proof} Applying Proposition \ref{thm5}, the proof is the same as that of Theorem \ref{thm7} replacing $f$ with $e_1f.$
\end{proof}
The next result describes a distinguished property of convex functions.
\begin{thm}\label{thm7.5}
Let $f\in\Gamma_0(\operatorname{\mathbb{R}}^n).$ Then $f$ has a strong minimizer if and only if $f$ has a unique minimizer.
\end{thm}
\begin{proof} $(\Rightarrow)$ By definition, if $f$ has a strong minimizer, then that minimizer is unique.\\
$(\Leftarrow)$ Suppose $f$ has a unique minimizer $\bar{x}.$ Because
$f\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$, by \cite[Theorem 8.7]{convanalrock},
all level-sets $\{x:f(x)\leq\alpha\},$ for any $\alpha\geq f(\bar{x}),$ have the same recession cone.
Since the recession cone of $\{x:f(x)\leq f(\bar{x})\}=\{\bar{x}\}$ is $\{0\}$, \cite[Proposition 1.1.5]{convanal} gives us that
$$\liminf\limits_{\|x\|\rightarrow\infty}\frac{f(x)}{\|x\|}>0.$$
In particular, the level sets of $f$ are bounded, so every minimizing sequence of $f$ is bounded. Since $f$ is lsc, every cluster point of such a sequence is a minimizer, and the minimizer $\bar{x}$ is unique; hence $\bar{x}$ is in fact a strong minimizer.
\end{proof}
\begin{ex}
The above property can fail when the function is nonconvex. Consider the continuous but nonconvex function $f:\operatorname{\mathbb{R}}\rightarrow\operatorname{\mathbb{R}},$ $f(x)=\frac{x^2}{x^4+1}.$
\begin{figure}[H]
\begin{center}\includegraphics[scale=0.5]{ex111.PNG}\end{center}
\end{figure}
\noindent The function has a unique minimizer $\bar{x}=0,$ but the minimizer is not strong, as any sequence $\{x_k\}$ that tends to $\pm\infty$ gives a sequence of function values that tends to $f(\bar{x}).$
\end{ex}
Using Theorem~\ref{thm7} and Corollary~\ref{cor2}, we can now single out two sets in $\Gamma_{0}(\operatorname{\mathbb{R}}^n)$
which are very important for our later proofs.
\begin{df}
For any $m\in\operatorname{\mathbb{N}},$ define the sets $U_m$ and $E_m$ as follows:
\begin{align*}
U_m&:=\left\{f\in\Gamma_0(\operatorname{\mathbb{R}}^n):\mbox{ there exists }z\in\operatorname{\mathbb{R}}^n\mbox{ such that }\inf\limits_{\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0\right\},\\
E_m&:=\left\{f\in\Gamma_0(\operatorname{\mathbb{R}}^n):\mbox{ there exists }z\in\operatorname{\mathbb{R}}^n\mbox{ such that }\inf\limits_{\|x-z\|\geq\frac{1}{m}}e_1f(x)-e_1f(z)>0\right\}.
\end{align*}
\end{df}
\begin{prop}\label{thm6}
Let $f\in\bigcap\limits_{m\in\operatorname{\mathbb{N}}}U_m.$ Then $f$ attains a strong minimum on $\operatorname{\mathbb{R}}^n.$
\end{prop}
\begin{proof} The proof follows the method of \cite[Theorem II.1]{smoothvarprincip}. Since $f\in\bigcap\limits_{m\in\operatorname{\mathbb{N}}}U_m,$ we have that for each $m\in\operatorname{\mathbb{N}}$ there exists $z_m\in\operatorname{\mathbb{R}}^n$ such that
$$f(z_m)<\inf\limits_{\|x-z_m\|\geq\frac{1}{m}}f(x).$$
Suppose that $\|z_p-z_m\|\geq\frac{1}{m}$ for some $p>m.$ By the definition of $z_m,$ we have
\begin{equation}\label{zm}
f(z_p)>f(z_m).
\end{equation}
Since $\|z_m-z_p\|\geq\frac{1}{m}>\frac{1}{p},$ we have
$$f(z_m)>f(z_p)$$
by the definition of $z_p.$ This contradicts equation \eqref{zm}. Thus, $\|z_p-z_m\|<\frac{1}{m}$ for each $p>m.$ This gives us that $\{z_m\}_{m=1}^\infty$ is a Cauchy sequence that converges to some $\bar{x}\in\operatorname{\mathbb{R}}^n.$ It remains to be shown that $\bar{x}$ is the strong minimizer of $f.$ Since $f$ is lsc, we have
\begin{align*}
f(\bar{x})&\leq\liminf_{m\rightarrow\infty} f(z_m)\\
&\leq\liminf_{m\rightarrow\infty}
\left(\inf\limits_{\|x-z_m\|\geq\frac{1}{m}}f(x)\right)\\
&\leq\inf\limits_{x\in\operatorname{\mathbb{R}}^n\setminus\{\bar{x}\}}f(x).
\end{align*}
Let $\{y_k\}_{k=1}^\infty\subseteq\operatorname{\mathbb{R}}^n$ be such that $f(y_k)\rightarrow f(\bar{x}),$ and suppose that $y_k\not\rightarrow\bar{x}.$ Dropping to a subsequence if necessary, there exists $\varepsilon>0$ such that $\|y_k-\bar{x}\|\geq\varepsilon$ for all $k.$ Thus, there exists $p\in\operatorname{\mathbb{N}}$ such that $\|y_k-z_p\|\geq\frac{1}{p}$ for all $k\in\operatorname{\mathbb{N}}.$ Hence,
$$f(\bar{x})\leq f(z_p)<\inf\limits_{\|x-z_p\|\geq\frac{1}{p}}f(x)\leq f(y_k)$$
for all $k\in\operatorname{\mathbb{N}},$ a contradiction to the fact that $f(y_k)\rightarrow f(\bar{x}).$ Therefore, $\bar{x}$ is the strong minimizer of $f.$
\end{proof}
\begin{thm}\label{cor1}
Let $f\in\bigcap\limits_{m\in\operatorname{\mathbb{N}}}E_m.$ Then $e_1f$ attains a strong minimum on $\operatorname{\mathbb{R}}^n,$ so $f$ attains a strong minimum on $\operatorname{\mathbb{R}}^n.$
\end{thm}
\begin{proof} Let $f\in\bigcap\limits_{m\in\operatorname{\mathbb{N}}}E_m.$ Since $e_1f\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ and $f\in E_m$ means precisely that $e_1f\in U_m,$ we have $e_1f\in\bigcap\limits_{m\in\operatorname{\mathbb{N}}}U_m,$ so Proposition \ref{thm6} shows that $e_{1}f$ has
a strong minimizer on $\operatorname{\mathbb{R}}^n.$ Then
Proposition \ref{thm5} gives us that $f$ has the same strong minimizer.
\end{proof}
\subsection{The set of strongly convex functions is dense, but of the first category}
Next, we turn our attention to the set of strongly convex functions. The objectives here are to show that the set is contained in both $U_m$ and $E_m,$ dense in $(\Gamma_0(\operatorname{\mathbb{R}}^n),d),$ and meagre in $(\Gamma_0(\operatorname{\mathbb{R}}^n),d).$
\begin{thm}\label{thm8}
Let $f:\operatorname{\mathbb{R}}^n\rightarrow\overline{\operatorname{\mathbb{R}}}$ be strongly convex. Then $f\in U_m$ and $f\in E_m$ for all $m\in\operatorname{\mathbb{N}}.$
\end{thm}
\begin{proof} Since $f$ is strongly convex, $f$ has a unique minimizer $z.$ By Lemma \ref{lem5}, $z$ is a strong minimizer, so that for any sequence $\{x_k\}$ such that $f(x_k)\rightarrow f(z),$ we must have $x_k\rightarrow z.$ We want to show that
\begin{equation}\label{geqm}
\inf\limits_{\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0.
\end{equation}
For any $m\in\operatorname{\mathbb{N}},$ equation \eqref{geqm} is true by Theorem~\ref{thm7}.
Therefore, $f\in U_m$ for all $m\in\operatorname{\mathbb{N}}.$ By Lemma \ref{lem3}, $e_1f$ is strongly convex. Therefore, by the same reasoning as above, $f\in E_m$ for all $m\in\operatorname{\mathbb{N}}.$
\end{proof}
We will need the following characterizations of strongly convex functions in later proofs. Note that
\ref{strong1}$\Rightarrow$\ref{strong3} has been done by Rockafellar \cite{monops}.
\begin{lem}\label{l:strongchar1}
Let $f\in \Gamma_{0}(\operatorname{\mathbb{R}}^n)$.
The following are equivalent:
\begin{enumerate}
\item\label{strong1} $f$ is strongly convex.
\item \label{strong2} $\operatorname{Prox}_{f}^{1}=k\operatorname{Prox}_{g}^{1}$ for some $0\leq k<1$ and $g\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$.
\item\label{strong3} $\operatorname{Prox}_{f}^{1}=k N$ for some $0\leq k<1$ and $N:\operatorname{\mathbb{R}}^n\rightarrow \operatorname{\mathbb{R}}^n$ nonexpansive.
\end{enumerate}
\end{lem}
\begin{proof}
\ref{strong1}$\Rightarrow$\ref{strong2}: Assume that $f$ is strongly convex. Then
$f=g+\sigma q$ where $g\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$, $q=\tfrac{1}{2}\|\cdot\|^2$, and $\sigma>0$. We have
\begin{align}
\operatorname{Prox}_{f}^{1} &=((1+\sigma)\operatorname{Id}+\partial g)^{-1}=\bigg((1+\sigma)\big(\operatorname{Id}+\frac{\partial g}{1+\sigma}\big)\bigg)^{-1}\\
&=\bigg(\operatorname{Id}+\frac{\partial g}{1+\sigma}\bigg)^{-1}\bigg(\frac{\operatorname{Id}}{1+\sigma}\bigg).
\end{align}
Define $\tilde{g}(x)=(1+\sigma)g(x/(1+\sigma))$. Then $\tilde{g}\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$,
$\partial\tilde{g}=\partial g\circ \big(\frac{\operatorname{Id}}{1+\sigma}\big)$, so
\begin{align}
\operatorname{Prox}_{\tilde{g}}^{1} & =\bigg(\operatorname{Id}+\partial g\circ \bigg(\frac{\operatorname{Id}}{1+\sigma}\bigg)\bigg)^{-1}
=\bigg((1+\sigma)\bigg(\operatorname{Id}+\frac{\partial g}{1+\sigma}\bigg)\circ\bigg(\frac{\operatorname{Id}}{1+\sigma}\bigg)\bigg)^{-1}\\
&=(1+\sigma)\bigg(\operatorname{Id}+\frac{\partial g}{1+\sigma}\bigg)^{-1}\circ\bigg(\frac{\operatorname{Id}}{1+\sigma}\bigg)\\
&=(1+\sigma)\operatorname{Prox}_{f}^{1}.
\end{align}
Therefore, $\operatorname{Prox}_{f}^{1}=\tfrac{1}{1+\sigma}\operatorname{Prox}_{\tilde{g}}^{1}$.
\ref{strong2}$\Rightarrow$\ref{strong1}: Assume $\operatorname{Prox}_{f}^{1}=k\operatorname{Prox}_{g}^{1}$ for some $0\leq k<1$ and $g\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$.
If $k=0$, then $f=\iota_{\{0\}}$, and $f$ is obviously strongly convex. Let us assume $0<k<1$.
The assumption
$(\operatorname{Id}+\partial f)^{-1}=k(\operatorname{Id}+\partial g)^{-1}$ gives
$\operatorname{Id} +\partial f=(\operatorname{Id}+\partial g)\circ (\operatorname{Id}/k)=\operatorname{Id}/k+\partial g \circ (\operatorname{Id}/k)$, so
$$\partial f=(1/k-1)\operatorname{Id}+\partial g\circ(\operatorname{Id}/k).$$
Since $1/k>1$ and $\partial g\circ(\operatorname{Id}/k)$ is monotone, we have that $\partial f$ is strongly monotone,
which implies that $f$ is strongly convex.
\ref{strong2}$\Rightarrow$\ref{strong3}: This is clear because $\operatorname{Prox}_{g}^{1}$ is nonexpansive,
see, e.g., \cite[Proposition 12.27]{convmono}.
\ref{strong3}$\Rightarrow$\ref{strong2}: Assume $\operatorname{Prox}_{f}^{1}=kN$ where $0\leq k<1$ and $N$ is nonexpansive.
If $k=0$, then $\operatorname{Prox}_{f}^{1}=0=0\cdot \operatorname{Prox}_{\iota_{\{0\}}}^{1}$, so \ref{strong2} holds because $\operatorname{Prox}_{\iota_{\{0\}}}^{1}=0$.
If $0<k<1$, then
$N=1/k \operatorname{Prox}_{f}^{1}.$
As $$\operatorname{Prox}_{f}^{1}=(\operatorname{Id}+\partial f)^{-1}=\nabla (q+f)^{*}=\nabla e_{1}(f^*),$$
we have $N=\nabla (e_{1}(f^*)/k)$. This means that $N$ is nonexpansive and the gradient of a differentiable
convex function. By the Baillon-Haddad theorem \cite{baillonhaddad} or \cite[Corollary 18.16]{convmono},
$N=\operatorname{Prox}_{g}^{1}$ for some $g\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$.
Therefore, $\operatorname{Prox}_{f}^{1}=k \operatorname{Prox}_{g}^{1}$, i.e., \ref{strong2} holds true.
\end{proof}
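\begin{ex}
As a simple illustration of Lemma~\ref{l:strongchar1}, take $f=\frac{\sigma}{2}\|\cdot\|^2$ with $\sigma>0.$ Minimizing $y\mapsto\frac{\sigma}{2}\|y\|^2+\frac{1}{2}\|x-y\|^2$ gives
$$\operatorname{Prox}_f^1(x)=\frac{x}{1+\sigma},$$
so $\operatorname{Prox}_f^1=kN$ with $k=\frac{1}{1+\sigma}<1$ and $N=\operatorname{Id}$ nonexpansive, as \ref{strong1}$\Rightarrow$\ref{strong3} predicts. For \ref{strong2} one may take $g\equiv0,$ since $\operatorname{Prox}_0^1=\operatorname{Id}.$
\end{ex}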
\begin{thm}\label{thm9}
The set of strongly convex functions is dense in $(\Gamma_0(\operatorname{\mathbb{R}}^n),d).$ Equivalently,
the set of strongly convex functions is dense in $(e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)),\tilde{d})$.
\end{thm}
\begin{proof} Let $0<\varepsilon<1$ and $f\in\Gamma_0(\operatorname{\mathbb{R}}^n).$ It will suffice to find $h\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ such that $h$ is strongly convex and $d(h,f)<\varepsilon.$ For $0<\sigma<1,$ define $g\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ by way of the proximal mapping:
$$\operatorname{Prox}_g^1:=(1-\sigma)\operatorname{Prox}_f^1=(1-\sigma)\operatorname{Prox}_{f}^{1}+\sigma \operatorname{Prox}_{\iota_{\{0\}}}.$$
Such a $g\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$ does exist because $g$ is the proximal average of $f$ and $\iota_{\{0\}}$ by \cite{proxbas}, and
$g$ is strongly convex because of Lemma~\ref{l:strongchar1}.
Define $h\in\Gamma_{0}(\operatorname{\mathbb{R}}^n)$ by
$$h:=g-e_1g(0)+e_1f(0).$$
Then $e_1h=e_1g-e_1g(0)+e_1f(0),$ so that
\begin{equation}\label{eq1}
e_1h(0)=e_1f(0),
\end{equation}
and $\operatorname{Prox}_h^1=\operatorname{Prox}_g^1.$ Fix $N$ large enough that $\sum\limits_{i=N}^\infty\frac{1}{2^i}<\frac{\varepsilon}{2}.$ Then
\begin{equation}\label{eq2}
\sum\limits_{i=N}^\infty\frac{1}{2^i}\frac{\|e_1f-e_1h\|_i}{1+\|e_1f-e_1h\|_i}\leq\sum\limits_{i=N}^\infty\frac{1}{2^i}<\frac{\varepsilon}{2}.
\end{equation}
Choose $\sigma$ such that
\begin{equation}\label{eq3}
0<\sigma<\frac{\varepsilon}{2-\varepsilon}\frac{1}{N(N+\|\operatorname{Prox}_f^1(0)\|)}.
\end{equation}
This gives us that
\begin{equation}\label{eq4}
\frac{\sigma N(N+\|\operatorname{Prox}_f^1(0)\|)}{1+\sigma N(N+\|\operatorname{Prox}_f^1(0)\|)}<\frac{\varepsilon}{2}.
\end{equation}
By equation \eqref{eq1} and the Mean Value Theorem, for some $c\in[x,0]$ we have
\begin{align*}
e_1h(x)-e_1f(x)&=e_1h(x)-e_1f(x)-(e_1h(0)-e_1f(0))\\
&=\langle\nabla e_1h(c)-\nabla e_1f(c),x-0\rangle\\
&=\langle(\operatorname{Id}-\operatorname{Prox}_h^1)(c)-(\operatorname{Id}-\operatorname{Prox}_f^1)(c),x-0\rangle\\
&=\langle-\operatorname{Prox}_h^1(c)+\operatorname{Prox}_f^1(c),x-0\rangle\\
&=\langle-(1-\sigma)\operatorname{Prox}_f^1(c)+\operatorname{Prox}_f^1(c),x\rangle\\
&=\langle\sigma\operatorname{Prox}_f^1(c),x\rangle.
\end{align*}
Using the triangle inequality, the Cauchy-Schwarz inequality, and the fact that $\operatorname{Prox}_f^1$ is nonexpansive, we obtain
\begin{align*}
|e_1h(x)-e_1f(x)|&\leq\sigma\|\operatorname{Prox}_f^1(c)\|\|x\|\\
&=\sigma\|\operatorname{Prox}_f^1(c)-\operatorname{Prox}_f^1(0)+\operatorname{Prox}_f^1(0)\|\|x\|\\
&\leq\sigma(\|\operatorname{Prox}_f^1(c)-\operatorname{Prox}_f^1(0)\|+\|\operatorname{Prox}_f^1(0)\|)\|x\|\\
&\leq\sigma(\|c\|+\|\operatorname{Prox}_f^1(0)\|)\|x\|\\
&\leq\sigma(\|x\|+\|\operatorname{Prox}_f^1(0)\|)\|x\|\\
&\leq\sigma N(N+\|\operatorname{Prox}_f^1(0)\|),
\end{align*}
when $\|x\|\leq N.$ Therefore, $\|e_1h-e_1f\|_N\leq\sigma N(N+\|\operatorname{Prox}_f^1(0)\|).$ Applying equation \eqref{eq4}, this implies that
\begin{equation}\label{eq5}
\frac{\|e_1f-e_1h\|_N}{1+\|e_1f-e_1h\|_N}\leq\frac{\sigma N(N+\|\operatorname{Prox}_f^1(0)\|)}{1+\sigma N(N+\|\operatorname{Prox}_f^1(0)\|)}<\frac{\varepsilon}{2}.
\end{equation}
Now considering the first $N-1$ terms in the definition of the metric $d,$ we have
\begin{align}
\sum\limits_{i=1}^{N-1}\frac{1}{2^i}\frac{\|e_1f-e_1h\|_i}{1+\|e_1f-e_1h\|_i}&\leq\sum\limits_{i=1}^{N-1}\frac{1}{2^i}\frac{\|e_1f-e_1h\|_N}{1+\|e_1f-e_1h\|_N}\nonumber\\
&=\frac{\|e_1f-e_1h\|_N}{1+\|e_1f-e_1h\|_N}\sum\limits_{i=1}^{N-1}\frac{1}{2^i}\nonumber\\
&<\frac{\|e_1f-e_1h\|_N}{1+\|e_1f-e_1h\|_N}.\label{eq6}
\end{align}
\end{align}
When equation \eqref{eq3} holds, combining equations \eqref{eq2}, \eqref{eq5}, and \eqref{eq6} yields $d(h,f)<\varepsilon.$ Hence, for arbitrary $f\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ and $0<\varepsilon<1,$ there exists a strongly convex function $h\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ such that $d(h,f)<\varepsilon.$ That is, the set of strongly convex functions is dense in $(\Gamma_0(\operatorname{\mathbb{R}}^n),d).$
Because $(\Gamma_0(\operatorname{\mathbb{R}}^n),d)$ and $(e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)),\tilde{d})$ are isometric by Corollary~\ref{c:convex:moreau},
it suffices to apply Lemma~\ref{lem3}. The proof is complete.
\end{proof}
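\begin{ex}
To get a feel for the metric used above, let $f=\iota_{\{0\}}$ and $g=\frac{1}{2}\|\cdot\|^2.$ Then $e_1f(x)=\frac{1}{2}\|x\|^2,$ while the infimum defining $e_1g(x)$ is attained at $y=x/2,$ giving $e_1g(x)=\frac{1}{4}\|x\|^2.$ Hence $\|e_1f-e_1g\|_i=\sup\limits_{\|x\|\leq i}\frac{1}{4}\|x\|^2=\frac{i^2}{4},$ and
$$d(f,g)=\sum\limits_{i=1}^\infty\frac{1}{2^i}\frac{i^2/4}{1+i^2/4}=\sum\limits_{i=1}^\infty\frac{1}{2^i}\frac{i^2}{4+i^2}<1.$$
\end{ex}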
\begin{thm}\label{strongconvmeagre}
The set of strongly convex functions is meagre in $(e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),\tilde{d})$ where $\tilde{d}$ is
given by \eqref{e:m:envelope}. Equivalently, in $(\Gamma_{0}(\operatorname{\mathbb{R}}^n),d)$ the set of strongly convex
functions is meagre.
\end{thm}
\begin{proof}
Denote the set of strongly convex functions in $e_1(\Gamma_0(\operatorname{\mathbb{R}}^n))$ by $S.$ Define
$$F_m:=\left\{g\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)):g-\frac{1}{2m}\|\cdot\|^2\mbox{ is convex on }\operatorname{\mathbb{R}}^n\right\}.$$
We show that
\begin{itemize}
\item[a)] $S=\bigcup\limits_{m\in\operatorname{\mathbb{N}}}F_m,$
\item[b)] for each $m\in\operatorname{\mathbb{N}},$ the set $F_m$ is closed in $e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),$ and
\item[c)] for each $m\in\operatorname{\mathbb{N}},$ the set $F_m$ has empty interior.
\end{itemize}
This will show that $S$ is a countable union of closed, nowhere dense sets, hence of the first category.
\begin{itemize}
\item[a)] $(\Rightarrow)$ Let $f\in S.$ Then there exists $\sigma>0$ such that $f-\frac{\sigma}{2}\|\cdot\|^2$ is convex. Note that this means $f-\frac{\tilde{\sigma}}{2}\|\cdot\|^2$ is convex for all $\tilde{\sigma}\in(0,\sigma).$ Since $\sigma>0,$ there exists $m\in\operatorname{\mathbb{N}}$ such that $0<\frac{1}{m}<\sigma.$ Hence, $f-\frac{1}{2m}\|\cdot\|^2$ is convex, and $f\in F_{m}.$ Therefore, $S\subseteq\bigcup\limits_{m\in\operatorname{\mathbb{N}}}F_m.$\\
$(\Leftarrow)$ Let $f\in F_m$ for some $m\in\operatorname{\mathbb{N}}.$ Then $f-\frac{1}{2m}\|\cdot\|^2$ is convex. Thus, taking $\sigma=\frac{1}{m}>0,$ we have that $f-\frac{\sigma}{2}\|\cdot\|^2$ is convex, which is the definition of strong convexity of $f.$ Therefore, $F_m\subseteq S,$ and since this is true for every $m\in\operatorname{\mathbb{N}},$ we have $\bigcup\limits_{m\in\operatorname{\mathbb{N}}}F_m\subseteq S.$
\item[b)] Let $g\not\in F_m.$ Then $g-\frac{1}{2m}\|\cdot\|^2$ is not convex. Equivalently, there exist $\lambda\in(0,1)$ and $x,y\in\operatorname{\mathbb{R}}^n$ such that
\begin{equation}\label{gnotconvex}
\frac{g(\lambda x+(1-\lambda)y)-\lambda g(x)-(1-\lambda)g(y)}{\lambda(1-\lambda)}>-\frac{\|x-y\|^2}{2m}.
\end{equation}
Let $N>\max\{\|x\|,\|y\|\}.$ Given $\tilde{\varepsilon}>0,$ choose $\varepsilon>0$ small enough that $\tilde{d}(f,g)<\varepsilon$ implies $\|f-g\|_N<\tilde{\varepsilon}$ for $f\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)).$ In particular,\footnotesize
\begin{align*}
\frac{f(\lambda x+(1-\lambda)y)-\lambda f(x)-(1-\lambda)f(y)}{\lambda(1-\lambda)}=&\frac{g(\lambda x+(1-\lambda)y)-\lambda g(x)-(1-\lambda)g(y)}{\lambda(1-\lambda)}\\
&+\frac{(f-g)(\lambda x+(1-\lambda)y)-\lambda (f-g)(x)-(1-\lambda)(f-g)(y)}{\lambda(1-\lambda)}\\
>&\frac{g(\lambda x+(1-\lambda)y)-\lambda g(x)-(1-\lambda)g(y)}{\lambda(1-\lambda)}-\frac{4\tilde{\varepsilon}}{\lambda(1-\lambda)}.
\end{align*}\normalsize
Hence, when $\tilde{\varepsilon}$ is sufficiently small, which can be achieved by making $\varepsilon$
sufficiently small, we have
$$\frac{f(\lambda x+(1-\lambda)y)-\lambda f(x)-(1-\lambda)f(y)}{\lambda(1-\lambda)}>-\frac{\|x-y\|^2}{2m}.$$
By the equivalence used to derive \eqref{gnotconvex}, this shows that $f-\frac{1}{2m}\|\cdot\|^2$ is not convex. Thus, $f\not\in F_m,$ so $e_1(\Gamma_0(\operatorname{\mathbb{R}}^n))\setminus F_m$ is open, and therefore $F_m$ is closed.
\item[c)] That $\operatorname{int} F_m=\emptyset$ is equivalent to saying that $e_1(\Gamma_0(\operatorname{\mathbb{R}}^n))\setminus F_m$ is dense. Thus, it suffices to show that for every $\varepsilon>0$ and every $g\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),$ the open ball $\operatorname{\mathbb{B}}_\varepsilon(g)$ contains an element of $e_1(\Gamma_0(\operatorname{\mathbb{R}}^n))\setminus F_m.$\\
If $g\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n))\setminus F_m,$ then there is nothing to prove. Assume that $g\in F_m.$ Then $g-\frac{1}{2m}\|\cdot\|^2$ is convex, so $g$ is strongly convex and has a strong minimizer $\bar{x}$ by Lemma \ref{lem5}. As $g\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),$ $g=e_1f$ for some $f\in\Gamma_0(\operatorname{\mathbb{R}}^n).$ We consider two cases.
\begin{itemize}
\item[Case 1:] Suppose that for every $\frac{1}{k}>0,$ there exists $x_k\neq\bar{x}$ such that $f(x_k)<f(\bar{x})+\frac{1}{k}.$ Define $h_k:=\max\left\{f,f(\bar{x})+\frac{1}{k}\right\}.$ Then
$$\min h_k=f(\bar{x})+\frac{1}{k},\qquad f\leq h_k\leq f+\frac{1}{k},$$
so that $e_1f\leq e_1h_k\leq e_1f+\frac{1}{k}.$ We have $g_k:=e_1h_k\in e_1(\Gamma_0(\operatorname{\mathbb{R}}^n)),$ and $\|g_k-g\|_i<\frac{1}{k}$ for all $i\in\operatorname{\mathbb{N}}.$ Choosing $k$ sufficiently large guarantees that $\tilde{d}(g_k,g)<\varepsilon.$
We see that $h_k$ does not have a strong minimizer: since $f(\bar{x})<f(\bar{x})+\frac{1}{k}$ and $f(x_k)<f(\bar{x})+\frac{1}{k},$ we have $h_k(\bar{x})=h_k(x_k)=f(\bar{x})+\frac{1}{k}=\min h_k,$ so $h_k$ attains its minimum at two distinct points. By Proposition \ref{thm5}, $g_k=e_1h_k$ does not have a strong minimizer either; since every element of $F_m$ is strongly convex and hence has a strong minimizer, we conclude that $g_k\not\in F_m.$
\item[Case 2:] If Case 1 is not true, then there exists $k$ such that $f(x)\geq f(\bar{x})+\frac{1}{k}$ for every $x\neq\bar{x}.$ Then we claim that $f(x)=\infty$ for all $x\neq\bar{x}.$ Suppose for the purpose of contradiction that there exists $x\neq\bar{x}$ such that $f(x)<\infty.$ As $f\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ the function $\phi:[0,1]\rightarrow\operatorname{\mathbb{R}}$ defined by $\phi(t):=f(tx+(1-t)\bar{x})$ is continuous by \cite[Proposition 2.1.6]{convanalgen}. Since $\phi(0)=f(\bar{x}),$ continuity gives a $t\in(0,1)$ with $\phi(t)<f(\bar{x})+\frac{1}{k},$ while $tx+(1-t)\bar{x}\neq\bar{x};$ this contradicts the assumption. Therefore,
$$f(x)=\iota_{\{\bar{x}\}}(x)+f(\bar{x}).$$
Consequently,
$$g(x)=e_1f(x)=f(\bar{x})+\frac{1}{2}\|x-\bar{x}\|^2.$$
Now for every $j\in\operatorname{\mathbb{N}},$ define $f_j:\operatorname{\mathbb{R}}^n\rightarrow\overline{\operatorname{\mathbb{R}}},$
$$f_j(x):=\begin{cases}
f(\bar{x}),&\|x-\bar{x}\|\leq\frac{1}{j},\\\infty,&\mbox{otherwise.}
\end{cases}$$
We have $f_j\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ and
$$g_j(x):=e_1f_j(x)=\begin{cases}
f(\bar{x}),&\|x-\bar{x}\|\leq\frac{1}{j},\\
f(\bar{x})+\frac{1}{2}\left(\|x-\bar{x}\|-\frac{1}{j}\right)^2,&\|x-\bar{x}\|>\frac{1}{j}.
\end{cases}$$
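(This formula can be verified directly: since $f_j=f(\bar{x})+\iota_{\operatorname{\mathbb{B}}_{\frac{1}{j}}[\bar{x}]},$ we have $e_1f_j(x)=f(\bar{x})+\inf\limits_{\|y-\bar{x}\|\leq\frac{1}{j}}\frac{1}{2}\|x-y\|^2=f(\bar{x})+\frac{1}{2}\operatorname{dist}\big(x,\operatorname{\mathbb{B}}_{\frac{1}{j}}[\bar{x}]\big)^2,$ and this distance equals $0$ when $\|x-\bar{x}\|\leq\frac{1}{j}$ and $\|x-\bar{x}\|-\frac{1}{j}$ otherwise.)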
Then $\{g_j(x)\}_{j\in\operatorname{\mathbb{N}}}$ converges pointwise to $e_1f=g,$ by \cite[Theorem 7.37]{rockwets}. Thus, for sufficiently large $j,$ $\tilde{d}(g_j,g)<\varepsilon.$ Since $g_j$ is constant on $\operatorname{\mathbb{B}}_{\frac{1}{j}}(\bar{x}),$ $g_j$ is not strongly convex, so $g_j\not\in F_m.$
\end{itemize}
\end{itemize}
Properties a), b) and c) together show that the set of strongly convex functions is meagre in $(e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)),\tilde{d})$.
Since $(e_{1}(\Gamma_{0}(\operatorname{\mathbb{R}}^n)),\tilde{d})$ and $(\Gamma_{0}(\operatorname{\mathbb{R}}^n),d)$ are isometric by Corollary~\ref{c:convex:moreau}, applying Lemma~\ref{lem3} completes the proof.
\end{proof}
\subsection{The set of convex functions with strong minimizers is of second category}
We present properties of the sets $U_m$ and $E_m,$ and show that the set of convex functions
that attain a strong minimum is a generic set in $(\Gamma_{0}(\operatorname{\mathbb{R}}^n),d)$.
\begin{lem}\label{cor3}
The sets $U_m$ and $E_m$ are dense in $(\Gamma_0(\operatorname{\mathbb{R}}^n),d).$
\end{lem}
\begin{proof} This is immediate by combining Theorems \ref{thm8} and \ref{thm9}.
\end{proof}
To continue, we need the following result, which holds in $\Gamma_{0}(X)$ where $X$ is any Banach space.
\begin{lem}\label{lem1}
Let $f\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ $m\in\operatorname{\mathbb{N}},$ and fix $z\in\operatorname{dom} f.$ Then
$$\inf\limits_{\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0\mbox{ if and only if }\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0.$$
\end{lem}
\begin{proof}
$(\Rightarrow)$ Suppose that for $z$ fixed, $\inf\limits_{\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0.$ Since
$$\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}f(x)-f(z)
\geq\inf\limits_{\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0,$$
we have
$\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0.$
$(\Leftarrow)$ Let $\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0,$ and suppose that
$$\inf\limits_{\|x-z\|\geq\frac{1}{m}}f(x)-f(z)\leq0.$$
Then for each $\frac{1}{k}$ with $k\in\operatorname{\mathbb{N}},$ there exists $y_k$ with $\|y_k-z\|\geq\frac{1}{m}$ such that $f(y_k)\leq f(z)+\frac{1}{k}.$ Take $z_k\in[y_k,z]\cap\left\{x\in\operatorname{\mathbb{R}}^n:\ m\geq\|x-z\|\geq\frac{1}{m}\right\},$ which is nonempty because $\|y_k-z\|\geq\frac{1}{m}.$ Then
$$z_k=\lambda_ky_k+(1-\lambda_k)z$$
for some $\lambda_k\in [0,1].$ By the convexity of $f$, we have
\begin{align*}
f(z_k)&=f(\lambda_ky_k+(1-\lambda_k)z)\leq\lambda_kf(y_k)+(1-\lambda_k)f(z)\\
&\leq\lambda_kf(z)+(1-\lambda_k)f(z)+\frac{\lambda_k}{k}\\
&=f(z)+\frac{\lambda_k}{k}\leq f(z)+\frac{1}{k}.
\end{align*}
Now $\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}f(x)\leq f(z_k)\leq f(z)+\frac{1}{k},$ so when $k\rightarrow\infty$ we obtain
$$\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}f(x)-f(z)\leq0.$$
This contradicts the fact that $\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0.$ Therefore,
$\inf\limits_{\|x-z\|\geq\frac{1}{m}}f(x)-f(z)>0.$
\end{proof}
\begin{lem}\label{thm10}
The set $E_m$ is an open set in $(\Gamma_0(\operatorname{\mathbb{R}}^n),d).$
\end{lem}
\begin{proof} Fix $m\in\operatorname{\mathbb{N}},$ and let $f\in E_m.$ Then there exists $z\in\operatorname{\mathbb{R}}^n$ such that $\inf\limits_{\|x-z\|\geq\frac{1}{m}}e_1f(x)-e_1f(z)>0.$ Hence, by Lemma \ref{lem1},
$$\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1f(x)-e_1f(z)>0.$$
Choose $j$ large enough that $\operatorname{\mathbb{B}}_m[z]\subseteq \operatorname{\mathbb{B}}_j(0).$ Let $g\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ be such that $d(f,g)<\varepsilon,$
where
\begin{equation}\label{eq7}
0<\varepsilon<\frac{\inf\limits_{m\geq\|x-z\|\geq
\frac{1}{m}}e_1f(x)-e_1f(z)}{2^j\left(2+\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1f(x)-e_1f(z)\right)}<\frac{1}{2^{j}}.
\end{equation}
The reason for this bound on $\varepsilon$ will become apparent at the end of the proof. Then
$$\sum\limits_{i=1}^\infty\frac{1}{2^i}\frac{\|e_1f-e_1g\|_i}{1+\|e_1f-e_1g\|_i}<\varepsilon.$$
In particular for our choice of $j,$ we have that $2^{j}\varepsilon<1$ by \eqref{eq7}, and that
\begin{align*}
\frac{1}{2^j}\frac{\|e_1f-e_1g\|_j}{1+\|e_1f-e_1g\|_j}&<\varepsilon,\\
\|e_1f-e_1g\|_j&<2^j\varepsilon(1+\|e_1f-e_1g\|_j),\\
\sup\limits_{\|x\|\leq j}|e_1f(x)-e_1g(x)|(1-2^j\varepsilon)&<2^j\varepsilon,\\
\sup\limits_{\|x\|\leq j}|e_1f(x)-e_1g(x)|&<\frac{2^j\varepsilon}{1-2^j\varepsilon}.
\end{align*}
Define $\alpha:=\frac{2^j\varepsilon}{1-2^j\varepsilon}.$ Then $\sup\limits_{\|x\|\leq j}|e_1f(x)-e_1g(x)|<\alpha.$ Hence,
$$|e_1f(x)-e_1g(x)|<\alpha\mbox{ for all }x\mbox{ with }\|x\|\leq j.$$
In other words,
$$e_1f(x)-\alpha<e_1g(x)<e_1f(x)+\alpha\mbox{ for all }x\mbox{ with }\|x\|\leq j.$$
Since $\operatorname{\mathbb{B}}_m[z]\subseteq \operatorname{\mathbb{B}}_j(0),$ we can take the infimum over $m\geq\|x-z\|\geq\frac{1}{m}$ to obtain
\begin{equation}\label{eq8}
\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1f(x)-\alpha\leq
\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1g(x)\leq\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1f(x)+\alpha.
\end{equation}
Using equation \eqref{eq8} together with the fact that $|e_1g(z)-e_1f(z)|<\alpha$ yields
\begin{align*}
\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1g(x)-e_1g(z)&\geq\left(\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1f(x)-\alpha\right)-(e_1f(z)+\alpha)\\
&=\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1f(x)-e_1f(z)-2\alpha.
\end{align*}
Hence, if
\begin{equation}\label{eq9}
\alpha<\frac{\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1f(x)-e_1f(z)}{2},
\end{equation}
we have
\begin{equation}\label{eq10}
\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1g(x)-e_1g(z)>0.
\end{equation}
Recalling that $\alpha=\frac{2^j\varepsilon}{1-2^j\varepsilon},$ we solve equation \eqref{eq9} for $\varepsilon$ to obtain
$$\varepsilon<\frac{\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1f(x)-e_1f(z)}{2^j\left(2+\inf\limits_{m\geq\|x-z\|\geq\frac{1}{m}}e_1f(x)-e_1f(z)\right)}.$$
Thus, equation \eqref{eq10} is true whenever $d(f,g)<\varepsilon$ for any $\varepsilon$ that respects equation \eqref{eq7}. Applying Lemma \ref{lem1} to equation \eqref{eq10}, we conclude that
$$\inf\limits_{\|x-z\|\geq\frac{1}{m}}e_1g(x)-e_1g(z)>0.$$
Hence, if $g\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ is such that $d(f,g)<\varepsilon,$ then $g\in E_m.$ Therefore, $E_m$ is open.
\end{proof}
We are now ready to present the main results of the paper.
\begin{thm}\label{generictheorem}
In $X:=(\Gamma_0(\operatorname{\mathbb{R}}^n),d),$ the set $S:=\{f\in\Gamma_0(\operatorname{\mathbb{R}}^n):f\mbox{ attains a strong minimum}\}$ is generic.
\end{thm}
\begin{proof}
By Lemmas \ref{cor3} and \ref{thm10}, we have that $E_m$ is open and dense in $X.$ Hence,
$G:=\bigcap\limits_{m\in\operatorname{\mathbb{N}}}E_m$ is a countable intersection of open, dense sets in $X$,
and as such $G$ is generic in $X.$ Let $f\in G.$ By Corollary \ref{cor1},
$f$ attains a strong minimum on $\operatorname{\mathbb{R}}^n.$ Thus, every element of $G$ attains a strong minimum on $\operatorname{\mathbb{R}}^n.$ Since $G$ is generic in $X$ and $G\subseteq S,$ we conclude that $S$ is generic in $X.$
\end{proof}
\begin{thm}\label{t:fullrange}
In $X:=(\Gamma_0(\operatorname{\mathbb{R}}^n),d),$ the set $S:=\{f\in\Gamma_0(\operatorname{\mathbb{R}}^n):f\mbox{ is coercive}\}$ is generic.
\end{thm}
\begin{proof}
Fix $x^*\in\operatorname{\mathbb{R}}^n$ and define the set $\Gamma_1(\operatorname{\mathbb{R}}^n):=\Gamma_0(\operatorname{\mathbb{R}}^n)+\langle x^*,\cdot\rangle,$ in the sense that for any function $f\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ the function $f+\langle x^*,\cdot\rangle\in\Gamma_1(\operatorname{\mathbb{R}}^n).$ Since any such $f+\langle x^*,\cdot\rangle$ is proper, lsc, and convex, we have $\Gamma_1(\operatorname{\mathbb{R}}^n)\subseteq\Gamma_0(\operatorname{\mathbb{R}}^n).$ Conversely, for any $f\in\Gamma_0(\operatorname{\mathbb{R}}^n)$ we have $f-\langle x^*,\cdot\rangle\in\Gamma_0(\operatorname{\mathbb{R}}^n),$ so $f\in\Gamma_0(\operatorname{\mathbb{R}}^n)+\langle x^*,\cdot\rangle=\Gamma_1(\operatorname{\mathbb{R}}^n).$ Therefore, $\Gamma_1(\operatorname{\mathbb{R}}^n)=\Gamma_0(\operatorname{\mathbb{R}}^n).$ By Theorem \ref{generictheorem} applied to this space, for each fixed $x^*$ the set
$$G_{x^*}:=\{f\in\Gamma_0(\operatorname{\mathbb{R}}^n): f+\langle x^*,\cdot\rangle\mbox{ attains a strong minimum}\}$$
contains a dense $G_\delta$ set. If $f\in G_{x^*}$ and $f+\langle x^*,\cdot\rangle$ attains its strong minimum at $x,$ then $0\in\partial(f+\langle x^*,\cdot\rangle)(x),$ that is, $-x^*\in\partial f(x).$ By Fact \ref{separable}, we may choose a countable set $D:=\{-x_i^*\}_{i=1}^\infty$ with $\overline{D}=\operatorname{\mathbb{R}}^n.$ Each set $G_{x_i^*},$ $i\in\operatorname{\mathbb{N}},$ contains a dense $G_\delta$ set, so the set $G:=\bigcap\limits_{i=1}^\infty G_{x_i^*}$ contains a dense $G_\delta$ set as well. Let $f\in G.$ Then for each $i\in\operatorname{\mathbb{N}}$ there exists $x\in\operatorname{\mathbb{R}}^n$ with $-x_i^*\in\partial f(x),$ that is, $-x_i^*\in\operatorname{ran}\partial f.$ Hence $D\subseteq\operatorname{ran}\partial f,$ and $\overline{D}\subseteq\overline{\operatorname{ran}\partial f}.$ Since $\overline{D}=\operatorname{\mathbb{R}}^n,$ we have $\operatorname{\mathbb{R}}^n=\overline{\operatorname{ran}\partial f}.$ By Facts \ref{subdiffmaxmono} and \ref{maxmonoalmostconvex}, $\operatorname{ran}\partial f$ is almost convex: there exists a convex set $C$ such that $C\subseteq\operatorname{ran}\partial f\subseteq\overline{C}.$ Then $\overline{C}=\operatorname{\mathbb{R}}^n.$ As $C$ is convex, by \cite[Theorem 6.3]{convanalrock} we have
the relative interior
$\operatorname{ri}\overline{C}=\operatorname{ri} C,$ so $\operatorname{ri} C=\operatorname{\mathbb{R}}^n.$ Thus, $\operatorname{\mathbb{R}}^n=\operatorname{ri} C\subseteq C,$
which gives us that $C=\operatorname{\mathbb{R}}^n,$ and therefore $\operatorname{ran}\partial f=\operatorname{\mathbb{R}}^n.$ By Fact \ref{inverse},
$\operatorname{ran}\partial f\subseteq\operatorname{dom}(f^*).$ Hence, $\operatorname{dom} f^*=\operatorname{\mathbb{R}}^n.$ By Fact \ref{domfcoercive}, we have that $\lim\limits_{\|x\|\rightarrow\infty}\frac{f(x)}{\|x\|}=\infty.$ Therefore, $f$ is coercive for all $f\in G.$ Since $G$ is generic in $X$ and $G\subseteq S,$ we conclude that $S$ is generic in $X.$
\end{proof}
\begin{thm}\label{t:fulldom}
In $(\Gamma_{0}(\operatorname{\mathbb{R}}^n), d)$, the set
$S:=\{f\in \Gamma_{0}(\operatorname{\mathbb{R}}^n):\ \operatorname{dom} f=\operatorname{\mathbb{R}}^n \}$
is generic.
\end{thm}
\begin{proof} Note that Fenchel conjugation $f\mapsto f^*$ is a bijection from $\Gamma_{0}(\operatorname{\mathbb{R}}^n)$ onto itself, so $(\Gamma_{0}(\operatorname{\mathbb{R}}^n))^*=\Gamma_{0}(\operatorname{\mathbb{R}}^n)$. In $((\Gamma_{0}(\operatorname{\mathbb{R}}^n))^*, d)$,
by Theorem~\ref{t:fullrange}, the set
$$\{f^*\in (\Gamma_{0}(\operatorname{\mathbb{R}}^n))^*:\ f^* \text{ is coercive} \}$$
is generic. Since $f^*$ is coercive if and only if $\operatorname{dom} f=\operatorname{\mathbb{R}}^n$ by Fact~\ref{domfcoercive},
the proof is complete.
\end{proof}
Combining Theorems~\ref{generictheorem}, \ref{t:fullrange} and ~\ref{t:fulldom}, we obtain
\begin{cor} In $(\Gamma_{0}(\operatorname{\mathbb{R}}^n), d)$, the set
$$S:=\{f\in \Gamma_{0}(\operatorname{\mathbb{R}}^n):\ \operatorname{dom} f=\operatorname{\mathbb{R}}^n, \operatorname{dom} f^*=\operatorname{\mathbb{R}}^n, f \text{ has a strong minimizer}\}$$
is generic.
\end{cor}
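\begin{ex}
A concrete member of this generic set is $q:=\frac{1}{2}\|\cdot\|^2:$ we have $\operatorname{dom} q=\operatorname{\mathbb{R}}^n,$ $q^*=q$ so that $\operatorname{dom} q^*=\operatorname{\mathbb{R}}^n,$ and $0$ is a strong minimizer of $q$ because $q(x_k)\rightarrow 0=q(0)$ forces $x_k\rightarrow 0.$
\end{ex}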
\section{Conclusion}\label{sec:conc}
Endowed with the Attouch-Wets metric, based on the Moreau envelope,
the set of proper lower semicontinuous convex functions becomes a complete metric space in which the topology
is the topology of epi-convergence.
We have proved several Baire category results. In particular, we have shown that in $(\Gamma_0(\operatorname{\mathbb{R}}^n),d)$ the set of strongly convex functions is of the first category, the set of functions that attain a strong minimum is of the second category, and the set of coercive functions is of the second category. Several other results about strongly convex functions and functions with strong minima are included. In future work that has already commenced, we will continue to develop the theory of Moreau envelopes,
providing characterizations and illustrative examples of how to calculate them, and extend results in
this paper to convex functions defined on Hilbert spaces or to prox-bounded functions on $\operatorname{\mathbb{R}}^n$.
\bibliographystyle{plain}
\section{Introduction}
The task of guaranteeing given patterns in a sufficiently large set has been a
central problem in different areas of mathematics for a long time.
Perhaps the most
famous example is the celebrated theorem of Szemer\'edi \cite{szemeredi},
which states that any sequence of positive integers with positive upper density
contains arbitrarily long arithmetic progressions.
More closely related to the present paper are the results of
Falconer \cite{falconer}, Keleti \cite{keleti} and Maga \cite{maga},
which state that for any three points in $\mathbb{R}$ or in $\mathbb{R}^2$
there exists a set of full Hausdorff dimension
that contains no similar copy to the three given points.
It is open whether the analogous result holds in higher dimension.
In case of a negative answer it would be natural to ask what Hausdorff
dimension guarantees a similar copy of three given points.
Since the similar copy of a triangle has the same angles as the original
one, the following question arose.
\begin{question}\label{question}
For given $n$ and $\alpha$, what is the smallest $d$ for which
any compact set $A\subset \mathbb{R}^n$ with Hausdorff dimension larger than $d$
contains three points that form an angle $\alpha$?
\end{question}
We use the following terminology.
\begin{defin}
We say that the set $A \subset \mathbb{R}^n$ \emph{contains the angle} $\alpha \in [0,180^\circ]$
if there exist distinct points $x, y, z \in A$ such that the angle between the vectors $y-x$ and $z-x$ is $\alpha$.
\end{defin}
\begin{defin}\label{def:basicdef}
If $n\ge 2$ is an integer and $\alpha \in [0,180^\circ]$, then let
\begin{multline*}
C(n, \alpha)= \sup \{s : \exists A \subset \mathbb{R}^n
\textrm{ compact such that } \\ \mathrm{dim}(A)=s
\textrm{ and } A \textrm{ does not contain the angle } \alpha \}.
\end{multline*}
\end{defin}
Clearly, answering Question~\ref{question} is the same as finding $C(n, \alpha)$.
Somewhat surprisingly our results highly depend on the given angle.
For $90^\circ$ we show (Theorem~\ref{n/2}) that $C(n,90^\circ)\le [(n+1)/2]$
(where $[a]$ denotes the integer part of $a$) while for other angles
we prove (Theorem~\ref{n-1}) only $C(n,\alpha)\le n-1$, which is sharp
for $\alpha=0$ and $\alpha=180^\circ$.
In the other direction, the fifth author (M\'ath\'e) constructed
compact sets of Hausdorff dimension $n/8$ not containing $\alpha$;
this construction is published separately in \cite{mathe}.
He obtains a better result ($n/4$) in the special case
when $\cos^2 \alpha$ is rational,
and an even better one ($n/2$) when $\alpha = 90^\circ$.
Table \ref{table1} shows the best known bounds for $C(n,\alpha)$.
\begin{table}[h]
\caption{Best known bounds for $C(n, \alpha)$}
\label{table1}
\centering
\begin{tabular}{l | l | l}
$\alpha$ & lower bound & upper bound\\
[0.5ex]
\hline
$0, 180^\circ$ & $n-1$ & $n-1$ \\
$90^\circ$ & $n/2$ \cite[Thm 3.1]{mathe} & $[(n+1)/2]$\\
$\cos^2 \alpha \in \mathbb{Q}$ & $n/4$ \cite[Thm 3.2]{mathe} & $n-1$\\
other angles & $n/8$ \cite[Thm 3.4]{mathe} & $n-1$ \\
[1ex]
\end{tabular}
\end{table}
In the present paper for any
$\alpha\in (0,180^\circ)\setminus \{60^\circ, 90^\circ, 120^\circ\}$
we construct (Theorem~\ref{thm:selfsimilar})
a self-similar compact set with Hausdorff dimension
$c(\alpha)\log n$ that does not contain the angle $\alpha$.
Although this is a much weaker result in terms of the dimension of the set,
it has an advantage over M\'ath\'e's construction, namely,
the constructed sets avoid not only $\alpha$,
but also a small neighborhood of $\alpha$.
In light of the above construction it is natural to ask
what can be said if we only want to guarantee an angle
near to a given angle. In Section~\ref{approx} we show that the previously
mentioned special angles $(0, 60^\circ, 90^\circ, 120^\circ, 180^\circ)$
are really very special.
If we fix $\alpha$ and a sufficiently small $\delta$ (but do not fix $n$)
then for all other angles
the above-mentioned self-similar construction (Theorem~\ref{thm:selfsimilar})
gives a compact set with arbitrarily large Hausdorff dimension
that does not contain any angle from the $\delta$-neighborhood of $\alpha$,
while for the special angles this is not the case. More precisely,
we show that any
set with Hausdorff
dimension larger than $1$ contains angles arbitrarily close to the right angle
(Theorem~\ref{Hausdorff1}),
and that any
set with Hausdorff dimension larger than
$\frac{C}{\delta}\log(\frac{1}{\delta})$ (with an absolute constant $C$)
contains angles from the $\delta$-neighborhoods of $60^\circ$ and
$120^\circ$ (Corollary~\ref{cor:60} and Theorem~\ref{thm:120}).
For the angles $0$ and $180^\circ$ it was already known by Erd{\H o}s and F\"uredi
\cite{erdos/furedi} that any infinite set contains angles
arbitrarily close to $0$ and angles arbitrarily close to $180^\circ$.
Note that in the previous results the dimension of
the Euclidean space ($n$) did not play any role.
To sum up the results we introduce the following function $\Ch$.
\begin{defin}\label{def:ch}
If $\alpha \in [0,180^\circ]$ and $\delta>0$, then let
\begin{multline*}
\Ch(\alpha, \delta) = \sup \{\mathrm{dim}(A) :
A \subset \mathbb{R}^n \textrm{ for }\emph{some } n; \\%A \textrm{ is compact};\\
A \mbox{ does not contain any angle from } (\alpha-\delta, \alpha+\delta) \} .
\end{multline*}
\end{defin}
Theorem~\ref{thm:selfsimilar} implies that $\Ch(\alpha, \delta) = \infty$
if $\alpha$ is different from the special angles
$0$, $60^\circ$, $90^\circ$, $120^\circ$, $180^\circ$ and
$\delta$ is smaller than the distance of $\alpha$ from the special angles.
A construction of the first author (Harangi \cite{harangi}) shows that
$\Ch(\alpha, \delta) \geq \frac{c}{\delta\log(1/\delta)}$
for the angles $\alpha = 60^\circ, 120^\circ$.
We summarize the above results in Table~\ref{table2}.
\begin{table}[h]
\caption{Smallest dimensions that guarantee an angle in the $\delta$-neighborhood
of $\alpha$}
\label{table2}
\centering
\begin{tabular}{l | l | l}
$\alpha$ & $\Ch(\alpha, \delta)$ & \\
[0.5ex]
\hline
$0, 180^\circ$ & $= 0$ & \\
$90^\circ$ & $= 1$ & \\
$60^\circ, 120^\circ$ & $\approx 1/\delta$ & apart from a multiplicative error $C\cdot\log(1/\delta)$ \\
other angles & $= \infty$ & provided that $\delta$ is sufficiently small\\
[1ex]
\end{tabular}
\end{table}
We emphasize the difference between the tasks of
finding an angle precisely and finding it approximately.
For example, we can find angles arbitrarily close to $90^\circ$
given that the dimension of the set is greater than $1$,
while if we want to find $90^\circ$ precisely in the set,
we need to know that its dimension is greater than $n/2$.
The following question is also closely related:
How large does the Hausdorff dimension of
a compact subset of $\mathbb{R}^n$ need to be to ensure that the set of angles
contained in the set has positive Lebesgue measure?
In \cite{iosevich} it is proved that larger than $\frac{n+1}2$ is enough and in
\cite{mathe} that $n/6$ is not enough.
\begin{notation}
\label{not:Hausdorff}
We denote the $s$-dimensional Hausdorff measure by $\mathcal{H}^s$.
By \emph{$\dim$} we denote the Hausdorff dimension.
Recall that compact sets having the property $0<\mathcal{H}^s(K)<\infty$ are
called \emph{compact $s$-sets}.
\end{notation}
Using the well-known fact that an
analytic set $A$ with positive $\mathcal{H}^s$ measure contains a compact $s$-set
(see e.g. \cite[2.10.47-48]{federer})
we get that in all of the above-mentioned results instead of compactness
it is enough to assume that the set is analytic (or Borel) and on the
other hand, we can always suppose that the given compact or analytic set
is a compact $s$-set. Thus $C(n,\alpha)$ can be also expressed as
\begin{multline*}
C(n, \alpha)= \sup \{s : \exists A \subset \mathbb{R}^n
\textrm{ analytic such that } \\ \mathrm{dim}(A)=s
\textrm{ and } A \textrm{ does not contain the angle } \alpha \},
\end{multline*}
or
\begin{multline*}
C(n, \alpha) = \sup \{s : \exists K \subset \mathbb{R}^n
\textrm{ compact such that } \\ 0<\mathcal{H}^s(K)<\infty
\textrm{ and } K \textrm{ does not contain the angle } \alpha \}.
\end{multline*}
However, as we prove in the Appendix (Theorem~\ref{thm:transf}), some assumption about the set is
necessary, otherwise the above function would be $n$ for any $\alpha$.
In fact, for any given $n$ and $\alpha$ we construct by transfinite induction
a set in $\mathbb{R}^n$ with full Lebesgue outer measure that
does not contain the angle $\alpha$.
Note that in the definition
of $\Ch(\alpha,\delta)$ (Definition \ref{def:ch}), when we want to
find an angle in an open interval $(\alpha-\delta,\alpha+\delta)$, we have no
assumption about the set $A$. This is simply because the closure of $A$
contains an angle in $(\alpha-\delta,\alpha+\delta)$ if and only if $A$ does,
so in these problems we can always assume that $A$ is closed.
Combining this with the above mentioned well-known fact that any analytic set
with positive $\mathcal{H}^s$ measure contains a compact $s$-set we get
\begin{multline} \label{eq:compactch}
\Ch(\alpha, \delta) = \sup \{s : \exists n\ \exists K \subset \mathbb{R}^n
\textrm{ compact such that } 0<\mathcal{H}^s(K)<\infty \\
\textrm{ and } K \textrm{ does not contain any angle from }
(\alpha-\delta, \alpha+\delta) \} .
\end{multline}
In fact, when we want to find an angle near to a given angle, then we get the
same results if we replace Hausdorff dimension by upper Minkowski dimension,
but this is not as clear as the above observation
(see Corollary~\ref{cor:mink=haus}).
The following theorem,
which is the first statement of \cite[Theorem 10.11]{mattila},
plays an essential role in some of our proofs.
\begin{notation}
The set of $k$-dimensional subspaces of $\mathbb{R}^n$ will be denoted by $G(n,k)$
and the natural probability measure on it by $\gamma_{n,k}$
(see e.g. \cite{mattila} for more details).
\end{notation}
\begin{thm}\label{thm:intersect2}
If $m<s<n$ and $A$ is an $\mathcal{H}^s$ measurable subset of $\mathbb{R}^n$ with $0<\mathcal{H}^s(A)<\infty$,
then
\[
\mathrm{dim}\big(A\cap(W+x)\big)=s-m
\]
for $\mathcal{H}^s \times \gamma_{n,n-m}$ almost all $(x,W) \in A \times G(n,n-m)$.
\end{thm}
In two dimensions it says that for $\mathcal{H}^s$ almost all $x\in A$, almost all lines through $x$
intersect $A$ in a set of dimension $s-1$. One would expect that this theorem also holds for half-lines instead
of lines. Indeed, Marstrand proved it in \cite[Lemma 17]{marstrand}. Although the lemma only says that it holds for lines, he actually proves it for half-lines. Therefore the following theorem is also true.
\begin{thm}\label{thm:intersect_halflines}
Let $1<s<2$ and let $A\subset \mathbb{R}^2$ be $\mathcal{H}^s$ measurable with
$0<\mathcal{H}^s(A)<\infty$. For any $x\in \mathbb{R}^2$ and $\vartheta\in [0,360^\circ)$ let
$L_{x,\vartheta}$ denote the half-line from $x$ at angle $\vartheta$. Then
\[
\mathrm{dim}\big(A\cap L_{x,\vartheta}\big)=s-1
\]
for $\mathcal{H}^s \times \lambda$ almost all
$(x,\vartheta) \in A \times [0,360^\circ)$.
\end{thm}
\section{Finding a given angle}
\label{given}
In this section we give estimates to $C(n,\alpha)$. For $n=2$ we
get the following exact result.
\begin{thm}\label{thm:bigsets_in_the_plane}
For any $\alpha \in [0,180^\circ]$ we have $C(2,\alpha)=1$.
\end{thm}
\begin{proof}
A line has dimension $1$ and it contains only the angles $0$ and $180^\circ$. A circle also has dimension $1$, but does not contain the angles $0$ and $180^\circ$. Therefore $C(2,\alpha)\ge 1$ for all $\alpha \in [0,180^\circ]$.
For the other direction, fix $\alpha \in [0,180^\circ]$ and $s>1.$
We have to
prove that any compact $s$-set $K$ contains the angle $\alpha$. By Theorem \ref{thm:intersect_halflines}, there exists $x\in K$ such that $\mathrm{dim}(K\cap L_{x,\vartheta})=s-1$ for almost all $\vartheta\in [0,360^\circ)$, where
$L_{x,\vartheta}$ denotes the half-line from $x$ at angle $\vartheta$.
Hence we can take $\vartheta_1, \vartheta_2\in [0,360^\circ)$ such that $|\vartheta_1-\vartheta_2|=\alpha$, and $\mathrm{dim}(K\cap L_{x,\vartheta_i})=s-1$ for $i=1,2$. If $x_i\in L_{x,\vartheta_i}\setminus \{x\}$ then the angle between the vectors $x_1-x$ and $x_2-x$ is $\alpha$, so indeed, $K$ contains the angle $\alpha$.
\end{proof}
An analogous theorem holds for higher dimensions.
\begin{thm}\label{n-1}
If $n\ge2$ and $\alpha \in [0,180^\circ]$ then $C(n,\alpha)\le n-1$.
\end{thm}
\begin{proof}
We have already seen the case $n=2$, so we may assume that $n\ge 3$.
It is enough to show that if $s>n-1$ and $K$ is a compact $s$-set, then $K$
contains the angle $\alpha$.
By Theorem \ref{thm:intersect2}, there exist $x \in K$ and
$W \in G(n,2)$ with $\mathrm{dim}(B)=s-n+2>1$ for $B\stackrel{\textrm{\scriptsize def}}{=} K\cap (W+x)$. The set $B$ lies in a two-dimensional plane, so we can think
of $B$ as a subset of $\mathbb{R}^2$. Applying Theorem
\ref{thm:bigsets_in_the_plane} completes the proof.
\end{proof}
Now we are able to give the exact value of $C(n,0)$ and $C(n,180^\circ)$.
\begin{thm}
$C(n,0)=C(n,180^\circ)=n-1$ for all $n\ge 2$.
\end{thm}
\begin{proof}
One of the inequalities was proven in the previous theorem, while the other one is shown by the $(n-1)$-dimensional sphere.
\end{proof}
We prove a better upper bound for $C(n,90^\circ)$.
\begin{thm}
\label{n/2}
If $n$ is even then $C(n,90^\circ)\le n/2$. If $n$ is odd then $C(n, 90^\circ)\le (n+1)/2$.
\end{thm}
\begin{proof}
First suppose that $n$ is even. Let $s>n/2$ and let $K$ be a compact $s$-set. From
Theorem \ref{thm:intersect2} we know that there exists a point $x\in K$ such that
\begin{equation}\label{eq:1}
\mathrm{dim}\big(K \cap (x+W)\big)=s-n/2>0
\end{equation}
for $\gamma_{n,n/2}$ almost all $W\in G(n,n/2)$. There exists a $W\in G(n,n/2)$ such that (\ref{eq:1}) holds both
for $W$ and $W^\bot$. As $(x+W)\cap(x+W^\bot)=\{x\}$, by choosing a $y \in K \cap (x+W)$ and $z \in K \cap (x+W^\bot)$ such that $x\ne y$ and $x\ne z$, we find a right angle at $x$ in the triangle $xyz$.
Now suppose that $n$ is odd, $s>(n+1)/2$ and $K$ is a compact $s$-set. With a similar argument we can conclude that there exist
$x\in K$ and $W\in G(n,(n+1)/2)$ such that $\mathrm{dim}\big(K \cap (x+W)\big)=s-(n+1)/2>0$ and $\mathrm{dim}\big(K \cap (x+W^\bot)\big)=s-(n-1)/2>1$. If $y \in K \cap (x+W) \setminus \{x\}$ and $z \in K \cap (x+W^\bot)\setminus \{x\}$, then there is again a right angle at $x$ in the triangle $xyz$.
\end{proof}
\begin{rem}
By the following result of the fifth author (M\'ath\'e \cite{mathe})
the above estimate is sharp if $n$ is even:
for any $n$ there exists a compact set of Hausdorff dimension $n/2$
in $\mathbb{R}^n$ that does not contain $90^\circ$.
Therefore if $n$ is even, we have $C(n,90^\circ)=n/2$.
The construction uses number theoretic ideas and
even though the set contains angles arbitrarily close to $90^\circ$,
it succeeds to avoid the right angle.
In the next section we will present a different approach
where the constructed sets avoid not only a certain angle $\alpha$
but also a whole neighborhood of $\alpha$.
\end{rem}
\section{A self-similar construction}
\label{construction}
In this section we construct a self-similar set in $\mathbb{R}^n$
with large dimension such that it does not contain a certain angle
$\alpha \in (0,180^\circ)$.
On the negative side, our method does not work for the angles
$60^\circ$, $90^\circ$ and $120^\circ$. On the positive side,
the presented sets will avoid a whole neighborhood of $\alpha$, not only $\alpha$.
We start with two simple lemmas.
\begin{lemma}\label{thm:angles_in_simplices}
Let $P_0,\ldots, P_n$ be the vertices of a regular $n$-dimensional simplex. For any quadruple of indices $(i,j,k,l)$ with $i\ne j$ and $k\ne l$, the angle between the lines $P_iP_j$ and $P_kP_l$ is either $0$, $60^\circ$ or $90^\circ$.
\end{lemma}
\begin{proof}
The set $\{P_i,P_j,P_k,P_l\}$ is the set of vertices of a one-, two-, or three-dimensional regular simplex. In the one-dimensional case the two lines coincide, giving the angle $0$; in the two-dimensional case they are two edges of an equilateral triangle, meeting at $60^\circ$; and in the three-dimensional case they are opposite edges of a regular tetrahedron, which are perpendicular.
\end{proof}
\begin{defin}
A self-similar set $K$ is said to satisfy the
\emph{strong separation condition} if there
exist similarities $S_0,\ldots,S_k$ such
that $K=S_0(K)\cup \cdots \cup S_k(K)$ and the sets
$S_i(K)$ are pairwise disjoint.
We say that the transformation $f:\mathbb{R}^n\to \mathbb{R}^n$ is a \emph{homothety} if $f$
is the identity or if $f$ has exactly one fixed point (say $O$), and there exists a nonzero real number $r$ such that for any point $P$ we have $f(P)-O=r(P-O)$. The point $O$ is called the \emph{center of the homothety}, and $r$ is called the \emph{ratio of magnification}. We call $K$ \emph{homothetic} if $S_i$ is a homothety for $i=0,1,\ldots,k$.
\end{defin}
\begin{lemma}\label{lemma:same-direction}
Let $K$ be a homothetic self-similar set,
in other words suppose that $K=S_0(K)\cup\ldots\cup S_m(K)$, where each
$S_i$ is a homothety.
Then, for any $x_0,x_1\in K$, $x_0\ne x_1$ there exist $y_0,y_1$ and
$i\ne j$ such that
$y_0\in S_i(K)$ and $y_1\in S_j(K)$ and $y_0-y_1$ is parallel to $x_0-x_1$.
\end{lemma}
\begin{proof}
Since $x_0,x_1\in K$, there exist sequences
$i_{0,1}, i_{0,2}, \ldots$ and $i_{1,1}, i_{1,2}, \ldots$ such that
\[
x_0\in S_{i_{0,1}}\Big(S_{i_{0,2}}\big(\cdots S_{i_{0,k}}(K)\big)\Big) \quad\textrm{and}\quad x_1\in S_{i_{1,1}}\Big(S_{i_{1,2}}\big(\cdots S_{i_{1,k}}(K)\big)\Big)
\]
for any positive integer $k$.
Let $k$ be the smallest positive integer such that $i_{0,k}\ne i_{1,k}$ (such a $k$ exists, for otherwise $x_0$ and $x_1$
would coincide). Set
\[S\stackrel{\textrm{\scriptsize def}}{=} S_{i_{0,1}}\Big(S_{i_{0,2}}\big(\cdots S_{i_{0,k-1}}(\cdot)\big)\Big).
\]
There exist $y_0\in S_{i_{0,k}}(K)$ and $y_1\in S_{i_{1,k}}(K)$ such that $x_0=S(y_0)$ and $x_1=S(y_1)$. Since $S$ is also a homothety, $y_0-y_1$ is parallel to $x_0-x_1$.
\end{proof}
\begin{thm}\label{thm:selfsimilar}
For any $\varepsilon > 0$ there exists a constant $c_\varepsilon > 0$ such that
for every $n\ge2$ there exists a compact homothetic self-similar set $K\subset \mathbb{R}^n$
with $\mathrm{dim}(K) \ge c_\varepsilon \log n$ and with the property that
all angles occurring in the set fall into the $\varepsilon$-neighborhood of
the angles $\{0, 60^\circ, 90^\circ, 120^\circ, 180^\circ \}$.
In particular, for any
$\alpha\in (0,180^\circ)\setminus \{60^\circ, 90^\circ, 120^\circ\}$
we construct a compact set of dimension $c(\alpha) \log n$
that does not contain the angle $\alpha$;
moreover, the set even avoids a small neighborhood of $\alpha$.
\end{thm}
\begin{proof}
Our set $K$ will be a modified version of the Sierpi\'nski gasket.
Take a regular $n$-dimensional simplex with unit edge length in
$\mathbb{R}^n$, denote its vertices by $P_0, \ldots, P_n$ and let $K_1\stackrel{\textrm{\scriptsize def}}{=} \mathrm{conv}(\{P_0,\ldots, P_n\})$.
Fix a $0<\delta<1/2$ and denote by $S_i$ the homothety of ratio $\delta$ centered at $P_i$ ($i=0,\ldots,n$). The similarities $S_i$ ($i=0,\ldots,n$) uniquely determine a self-similar set $K$ which can also be written in the following form:
\[
K\stackrel{\textrm{\scriptsize def}}{=} \bigcap_{k=1}^\infty \bigcup_{(i_1,\ldots,i_k)\in \{0,\ldots,n\}^k} S_{i_1}\Big(S_{i_2}\big(\cdots
S_{i_k}(K_1)\big)\Big).
\]
The set $K$ clearly satisfies the strong separation condition. By \cite[Theorem 4.14]{mattila}, the dimension of $K$ is the unique positive number $s$ for which $(n+1)\delta^s=1$, therefore
\[
\mathrm{dim}(K)=\frac{\log(n+1)}{\log \frac{1}{\delta}}.
\]
We say that a \emph{direction} $V\in G(n,1)$ \emph{occurs in a set} $H\subset \mathbb{R}^n$ if there are $x,y\in H$, $x\ne y$ such that $x-y$ is parallel to $V$. We will show that the directions occurring in $K$ are actually \emph{close} to the directions occurring in $\{P_0, \ldots, P_n\}$.
Let $V\in G(n,1)$ which occurs in $K$ and let $x_0,x_1\in K$, $x_0\ne x_1$ such that $x_0-x_1$ is parallel to $V$.
By Lemma~\ref{lemma:same-direction}
there exist $y_0,y_1\in K$, $y_0\ne y_1$ such that $y_0-y_1$ is also parallel to $V$ and there exist $i\ne j$ with $y_0\in S_i(K)$ and $y_1\in S_j(K)$.
We may assume without loss of generality that $y_0\in S_0(K)$, $y_1\in S_1(K)$.
We will show that the angle $\varphi$ between $y_0-y_1$ and $P_0-P_1$ is small, which is equivalent to $\cos \varphi$ being close to 1. Let $h_i = y_i - P_i$. We have $||h_i||\le\delta$ ($i=0,1$), hence
\[
\cos \varphi = \frac{\langle y_0-y_1, P_0-P_1\rangle}{||y_0-y_1||\cdot ||P_0-P_1||}=
\frac{1+\langle h_0-h_1, P_0-P_1\rangle}{||(P_0-P_1)+(h_0-h_1)||} \ge
\frac{1-2\delta}{1+2\delta}.
\]
Set $\varepsilon(\delta)=2\arccos(\frac{1-2\delta}{1+2\delta})$, and note that $\varepsilon(\delta)\to0$ as $\delta\to0^+$. Lemma \ref{thm:angles_in_simplices} implies that the angles occurring in $K$ are in the union of the following intervals: $[0,\varepsilon]$, $[60^\circ-\varepsilon,60^\circ+\varepsilon]$, $[90^\circ-\varepsilon,90^\circ+\varepsilon]$, $[120^\circ-\varepsilon,120^\circ+\varepsilon]$, $[180^\circ-\varepsilon,180^\circ]$. If $\delta$, and therefore $\varepsilon$, is sufficiently small, then none of these intervals contains $\alpha$.
\end{proof}
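\begin{rem}
The proof above is quantitative: any $0<\delta<1/2$ with $2\arccos\left(\tfrac{1-2\delta}{1+2\delta}\right)\leq\varepsilon$ is admissible, and for such a fixed $\delta$ the construction gives $\mathrm{dim}(K)=\frac{\log(n+1)}{\log(1/\delta)}$, so one may take $c_\varepsilon=\frac{1}{\log(1/\delta)}$. For instance, $\delta=10^{-2}$ yields sets of dimension $\frac{\log(n+1)}{\log 100}$.
\end{rem}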
The first author (Harangi \cite{harangi}) improved this result:
he used the same methods to show that there exists a set with the same properties
and with dimension $c_\varepsilon n$.
Moreover, even for the angles $60^\circ$ and $120^\circ$
it is possible to construct large dimensional
homothetic self-similar sets avoiding these angles.
However, as the next theorem shows, one cannot
avoid the right angle with similar constructions.
\begin{thm}
\label{rectangle}
Let $K\subset \mathbb{R}^n$ be a compact self-similar set.
Suppose that we have homotheties $S_0,\ldots,S_k$ with ratios less than 1
such that $K=S_0(K)\cup S_1(K) \cup \cdots \cup S_k(K)$ and
the sets $S_i(K)$ are pairwise disjoint
(that is, the strong separation condition is satisfied).
(that is, the strong separation condition is satisfied).
If, in addition, $\mathrm{dim}(K)>1$, then $K$ contains four points that form a non-degenerate rectangle.
\end{thm}
\begin{proof}
We begin the proof by defining the following map:
\[
D:\enskip K\times K \setminus \{(x,x):x\in K\}\to S^{n-1};\quad (x,y)\mapsto \frac{x-y}{||x-y||}.
\]
We denote the range of $D$ by $\mathrm{Range}(D)$.
The set $\mathrm{Range}(D)$ can be considered as
the set of directions in $K$.
First we prove that if $K$ is such a self-similar set then $\mathrm{Range}(D)$ is closed.
By Lemma~\ref{lemma:same-direction}, for any $x,y\in K$, $x\ne y$ there exist
$x'\in S_i(K)$ and $y'\in S_j(K)$ for some $i\neq j$ such that
$x'-y'$ is parallel to $x-y$.
If $d(\cdot,\cdot)$ denotes the Euclidean distance then
\[
\min_{0\le i<j\le k} d(S_i(K),S_j(K))=c>0,
\]
so $\mathrm{Range}(D)$ actually equals the image of $D$ restricted to the set $K\times K\setminus \{(x,y): d(x,y)<c\}$. As this set is compact and $D$ is continuous on it, the image is also compact, and so $\mathrm{Range}(D)$ is indeed compact.
Next we show that for any $v\in S^{n-1}$ there exist $x,y \in K$, $x\ne y$
such that the vectors $v$ and $D(x,y)$ are perpendicular. If this were not
true, the compactness of $\mathrm{Range}(D)$ would imply that the orthogonal
projection $p$ to a line parallel to $v$ would be a one-to-one map on $K$ with
$p^{-1}$ being a Lipschitz map on $p(K)$.
This would imply $\mathrm{dim}(K)\le1$, which is a contradiction.
For simplifying our notation, let $f\stackrel{\textrm{\scriptsize def}}{=} S_0$, $g\stackrel{\textrm{\scriptsize def}}{=} S_1$.
The homotheties $f \circ g$ and $g \circ f$ have the same ratio $r$ with $0<|r|<1$. Denote their
fixed points by $P$ and $Q$, respectively.
Note that $P=f(g(P))\in f(g(K))\subseteq S_0(K)$ and similarly $Q\in S_1(K)$, so the strong separation condition gives $P\ne Q$.
Hence there are $x,y\in K$, $x\ne y$ such that $x-y$ is
perpendicular to $P-Q$. Writing $(f\circ g)(u)=P+r(u-P)$ and $(g\circ f)(u)=Q+r(u-Q)$, we obtain
\[
f(g(x))-f(g(y))=g(f(x))-g(f(y))=r(x-y), \qquad f(g(x))-g(f(x))=(1-r)(P-Q),
\]
so the points $f(g(x))$, $f(g(y))$, $g(f(y))$ and $g(f(x))$ form a parallelogram whose adjacent sides are nonzero and perpendicular, that is, a non-degenerate rectangle.
\end{proof}
\section{Finding angles close to a given angle}
\label{approx}
Theorem~\ref{thm:selfsimilar}
showed that for any angles $0<\alpha<180^\circ$ and $\delta>0$ such that
$0, 60^\circ, 90^\circ, 120^\circ, 180^\circ \not\in (\alpha-\delta, \alpha+\delta)$
there exist compact
sets of arbitrarily large Hausdorff dimension that do not contain
any angle from $(\alpha-\delta,\alpha+\delta)$. That is, using the notation we
introduced in Definition~\ref{def:ch}, we have $\Ch(\alpha,\delta)=\infty$ if
$\alpha \neq 0, 60^\circ, 90^\circ, 120^\circ, 180^\circ$ and $\delta$ is small enough.
In this section we show that the other claims of Table~\ref{table2} of the
introduction also hold.
We start by proving that a set that does not contain angles near to
$90^\circ$ must be very small, it cannot have Hausdorff dimension bigger
than $1$. This makes $90^\circ$ very special since
the analogous statement would be false for any other angle
$\alpha\in(0,180^{\circ})$ (see Theorem~\ref{thm:selfsimilar} and
Remark~\ref{rem:sharp}).
This result is clearly sharp since a line segment contains only $0$ and
$180^{\circ}$.
\begin{thm}
\label{Hausdorff1}
Any set $A\subset\mathbb{R}^n$ $(n\ge 2)$
with Hausdorff dimension greater than 1 contains
angles arbitrarily close to the right angle.
Thus $\Ch(90^{\circ},\delta)=1$ for any $\delta>0$.
\end{thm}
\begin{proof}
By the equivalent definition (\ref{eq:compactch}) of $\Ch$ we found
in the introduction
we can assume that $A$ is compact and
$0<\mathcal{H}^s(A)<\infty$ for some $s>1$.
Applying Theorem \ref{thm:intersect2} for $m=1$
we obtain that for $\mathcal{H}^s$ almost all $x \in A$
the set $A\cap(W+x)$ has positive dimension
for $\gamma_{n,n-1}$ almost all $W \in G(n,n-1)$.
Let us fix a point $x$ with this property
and let $y \ne x$ be an arbitrary point in $A$.
Since for any fixed $\delta>0$
the subspaces forming an angle at least $90^\circ - \delta$ with $x-y$
have positive measure, and the exceptional set in
Theorem~\ref{thm:intersect2} is of measure zero, the theorem follows.
\end{proof}
Now we prove the same result for upper
Minkowski dimension instead of Hausdorff dimension.
It is well-known that the upper Minkowski dimension
is always greater or equal than the Hausdorff dimension.
Hence the following theorem is stronger than the previous one.
\begin{thm} \label{thm:90_mink}
Any bounded set $A$ in $\mathbb{R}^n$ $(n\ge 2)$
with upper Minkowski dimension greater than 1 contains
angles arbitrarily close to the right angle.
\end{thm}
The upper Minkowski dimension can be defined in many different ways,
we will use the following definition (see \cite[Section 5.3]{mattila} for details).
\begin{defin}\label{def:mink}
By $B(x,r)$ we denote the closed ball with center $x\in \mathbb{R}^n$ and radius $r$.
For a non-empty bounded set $A\subset\mathbb{R}^n$
let $P(A, \varepsilon)$ denote the greatest integer $k$ for which
there exist disjoint balls $B(x_i,\varepsilon)$ with $x_i\in A$, $i=1,\dots,k$.
The \emph{upper Minkowski dimension} of $A$ is defined as
$$ {\overline{\mathrm{dim}}_{\mathrm{M}}}(A)\stackrel{\textrm{\scriptsize def}}{=}\sup\{s:
\limsup_{\varepsilon\rightarrow 0+}P(A,\varepsilon)\varepsilon^s=\infty\} .$$
Note that we get an equivalent definition if
we consider the $\limsup$ for $\varepsilon$'s
only of the form $\varepsilon = 2^{-k}$, $k\in \mathbb{N}$.
\end{defin}
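\begin{rem}
For example, if $A=[0,1]\times\{0\}\subset\mathbb{R}^2$, then placing centers at the points $0, 3\varepsilon, 6\varepsilon,\ldots$ of the segment shows $P(A,\varepsilon)>\frac{1}{3\varepsilon}$, while disjointness of the balls forces $P(A,\varepsilon)\leq\frac{1}{2\varepsilon}+1$. Hence $\limsup_{\varepsilon\rightarrow 0+}P(A,\varepsilon)\varepsilon^s=\infty$ exactly when $s<1$, and ${\overline{\mathrm{dim}}_{\mathrm{M}}}(A)=1$, as expected.
\end{rem}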
The following technical lemma is needed not only for the proof of
Theorem~\ref{thm:90_mink} but also for the result
about finding angles near to $60^{\circ}$.
It roughly says that in a set of large upper Minkowski dimension
one can find many points such that
the distance of each pair is more or less the same.
\begin{lemma} \label{lem:mink}
Suppose that ${\overline{\mathrm{dim}}_{\mathrm{M}}}(A) > t$
for a bounded set $A\subset\mathbb{R}^n$ and a positive real $t$.
Then for infinitely many positive integers $k$
it holds that for any integer $0 < l < k$
there are more than $2^{(k-l)t}$ points in $A$
with the property that the distance of any two of them
is between $2^{-k+1}$ and $2^{-l+2}$.
\end{lemma}
\begin{proof}
Let
$$ r_k=P(A, 2^{-k}) 2^{-kt}. $$
By Definition~\ref{def:mink}, $\limsup_{k\rightarrow\infty} r_k = \infty$.
It follows that there are infinitely many values of $k$
such that $r_k > r_l$ for all $l < k$.
Let us fix such a $k$ and let $0<l<k$ be arbitrary.
By the definition of $r_k$, there are
$r_k2^{kt}$ disjoint balls with radii $2^{-k}$ and centers in $A$.
Let $\mathcal{S}$ denote the set of the centers of these balls.
Clearly the distance of any two of them is at least $2^{-k+1}$.
Similarly, we can find a maximal system of disjoint balls
$B(x_i,2^{-l})$ with $x_i\in A$, $i=1,\dots,r_l2^{lt}$.
Consider the balls $B(x_i,2^{-l+1})$ of doubled radii.
These doubled balls are covering the whole $A$
(otherwise the original system would not be maximal).
By the pigeonhole principle,
one of these doubled balls contains at least
$$\frac{r_k2^{kt}}{r_l2^{lt}}=\frac{r_k}{r_l}2^{(k-l)t} > 2^{(k-l)t}$$
points of $\mathcal{S}$. These points clearly have the desired property.
\end{proof}
Now we are in a position to prove the theorem.
\begin{proof}[Proof of Theorem~\ref{thm:90_mink}]
We may assume that $\mbox{diam}(A)>2$ (angles are invariant under rescaling).
Fix a $t$ such that ${\overline{\mathrm{dim}}_{\mathrm{M}}}(A)>t>1$.
Lemma \ref{lem:mink} tells us that there are arbitrarily large integers $k$
such that for any $l<k$ one can find more than $2^{(k-l)t}$ points in $A$
such that each distance is between $2^{-k+1}$ and $2^{-l+2}$.
Let $\mathcal{S}$ be a set of such points
and pick an arbitrary point $O\in \mathcal{S}$.
Since $\mbox{diam}(A)>2$, there exists a point $P\in A$ with $OP\ge1$.
Now we project the points of $\mathcal{S}$ onto the line $OP$.
There must be two distinct points $Q_1, Q_2 \in \mathcal{S}$
such that the distance of their projections is at most
$$\frac{2^{-l+2}}{2^{(k-l)t}}= 2^{-l+2-(k-l)t} .$$
It follows that
$$\cos \angle(\overrightarrow{Q_1Q_2},\overrightarrow{OP})\le
\frac{2^{-l+2-(k-l)t}}{2^{-k+1}}=2^{-(k-l)(t-1)+1}.$$
Since $Q_1O \le 2^{-l+2}$ and $OP\ge1$,
the angle of the lines $OP$ and $Q_1P$
is at most $C_1 2^{-l}$ with some constant $C_1$.
Combining the previous results we get that
$$ \left| \angle P Q_1 Q_2 - 90^\circ \right| \le
C_1 2^{-l} + C_2 2^{-(k-l)(t-1)} $$
with some constants $C_1, C_2$.
The right hand side can be arbitrarily small
since $t-1$ is positive and both $l$ and $k-l$ can be chosen to be large.
\end{proof}
Now we try to find angles close to $60^\circ$.
We will do that by finding three points forming an \emph{almost regular} triangle
provided that the dimension of the set is sufficiently large.
We will need a simple result from Ramsey theory.
Let $R_r(3)$ denote the least positive integer $k$
for which it holds that no matter how we color
the edges of a complete graph on $k$ vertices with $r$ colors
it contains a monochromatic triangle.
The next inequality can be obtained easily:
$$ R_r(3) \leq r \cdot R_{r-1}(3) - (r-2) .$$
(A more general form of the above inequality
can be found in e.g.~\cite[p.~90, Eq.~2]{ramsey}.)
It readily implies the following upper bound for $R_r(3)$.
\begin{lemma} \label{lem:ramsey}
For any positive integer $r \geq 2$
$$ R_r(3) \leq 3r! ,$$
that is, any complete graph on at least $3r!$ vertices
edge-colored by $r$ colors contains a monochromatic triangle.
\end{lemma}
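Indeed, starting from the classical value $R_2(3)=6=3\cdot 2!$, induction on $r$ via the displayed recurrence gives
$$ R_r(3) \le r\cdot R_{r-1}(3)-(r-2) \le r\cdot 3(r-1)!-(r-2) = 3r!-(r-2) \le 3r! .$$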
Using this lemma we can prove the following theorem.
\begin{thm}\label{thm:tri}
There exists an absolute constant $C$ such that
whenever ${\overline{\mathrm{dim}}_{\mathrm{M}}}(A)> \frac{C}{\delta}\log(\frac{1}{\delta})$
for some bounded set $A\subset\mathbb{R}^n$ and $\delta >0$ the following holds:
$A$ contains three points that form a $\delta$-almost regular triangle,
that is, the ratio of the length of the longest and shortest sides
is at most $1 + \delta$.
\end{thm}
As an immediate consequence, we can find angles close to $60^\circ$.
\begin{cor} \label{cor:60}
Suppose that ${\overline{\mathrm{dim}}_{\mathrm{M}}}(A)> \frac{C}{\delta}\log(\frac{1}{\delta})$
for some bounded set $A\subset\mathbb{R}^n$ and $\delta >0$.
Then $A$ contains angles from the interval $(60^\circ - \delta, 60^\circ]$
and also from $[60^\circ, 60^\circ+\delta)$.
Therefore $\Ch(60^\circ,\delta)\le \frac{C}{\delta}\log(\frac{1}{\delta})$.
\end{cor}
\begin{rem}\label{rem:sharp}
The above theorem and even the corollary are essentially sharp:
the first author (Harangi \cite{harangi}) constructed a set with Hausdorff dimension
$\frac{c}{\delta\log(1/\delta)}$
and without any angles from the interval
$(60^\circ-\delta, 60^\circ+\delta)$, so
we have $\Ch(60^\circ,\delta)\ge \frac{c}{\delta\log(1/\delta)}$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{thm:tri}]
Let $t=\frac{C}{\delta}\log(\frac{1}{\delta})$ and
apply Lemma \ref{lem:mink} for $l=k-1$.
We obtain at least $2^{t}$ points in $A$ such that
each distance is in the interval $[2^{-k+1},2^{-k+3}]$.
Let $a=2^{-k+1}$ and divide $[a,4a]$ into
$N=\lceil\frac{3}{\delta}\rceil$ disjoint intervals of length at most $\delta a$.
Regard these points as the vertices of a complete graph.
Color the edges of this graph with $N$ colors according to
which interval contains the distance of the corresponding points.
Easy computation shows that $2^t>3N!$ (with a suitable choice of $C$).
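(Indeed, $3N!\le 3N^N=3\cdot 2^{N\log_2 N}$, and for small $\delta$ we have $N\log_2 N\le \frac{4}{\delta}\log_2\frac{4}{\delta}$, which stays below $t=\frac{C}{\delta}\log\frac{1}{\delta}$ once $C$ is chosen large enough.)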
Therefore the above graph contains
a monochromatic triangle by Lemma \ref{lem:ramsey}.
It easily follows that the three corresponding points
form a $\delta$-almost regular triangle in $\mathbb{R}^n$.
\end{proof}
\begin{rem}
The same proof yields the following:
for any positive integer $d$ and positive real $\delta$
there is a number $K(d, \delta)$ such that
whenever ${\overline{\mathrm{dim}}_{\mathrm{M}}}(A)>K(d, \delta)$ for some bounded set $A$,
one can find $d$ points in $A$ with the property that
the ratio of the largest and the smallest distance
among these points is at most $1+\delta$.
(One needs to use the fact that the Ramsey number $R_r(d)$ is finite.)
\end{rem}
In order to derive similar results for $120^\circ$ instead of $60^\circ$
we show that if large Hausdorff dimension implies the existence of an angle near $\alpha$,
then it also implies the existence of an angle near $180^\circ-\alpha$.
\begin{prop} \label{prop:suppl_angles}
Suppose that $s=s(\alpha, \delta, n)$ is a positive real number such that
any analytic set $A\subset\mathbb{R}^n$ with $\mathcal{H}^s(A)>0$ contains an angle
from the interval $(\alpha-\delta, \alpha+\delta)$.
Then any analytic set $B\subset\mathbb{R}^n$ with $ \mathcal{H}^s(B)>0$
contains an angle from the interval $(180^\circ-\alpha-\delta',180^\circ-\alpha+\delta')$
for any $\delta' > \delta$.
\end{prop}
\begin{proof}
Again, we can assume that $0<\mathcal{H}^s(B)<\infty$.
It is well-known that for $\mathcal{H}^s$ almost all $x\in B$
the set $B \cap B(x,r)$ has positive $\mathcal{H}^s$ measure for any $r>0$
\cite[Theorem 6.2]{mattila}.
If we omit the exceptional points from $B$,
this will be true for every point of the obtained set.
Thus we may assume that $B$ had this property in the first place.
Then, by the assumptions of the proposition,
any ball around any point of $B$
contains an angle from the $\delta$-neighborhood of $\alpha$.
We define the points $P_m, Q_m, R_m \in B$ recursively
in the following way. Fix a small $\varepsilon$.
First take $P_0, Q_0, R_0$ such that the angle $\angle P_0Q_0R_0$
falls into the interval $(\alpha-\delta, \alpha+\delta)$.
If the points $P_m, Q_m, R_m$ are given, then choose points
$P_{m+1},Q_{m+1}, R_{m+1}$ from the
$\varepsilon\cdot\min(Q_mP_m,Q_mR_m)$-neighborhood of $P_m$ such that
$\angle P_{m+1}Q_{m+1}R_{m+1} \in (\alpha-\delta, \alpha+\delta)$.
We can find two indices $k > l$ such that the
angle enclosed by the vectors $\overrightarrow{Q_lP_l}$ and
$\overrightarrow{Q_kP_k}$ is less than $\varepsilon$.
It is clear that if we choose $\varepsilon$ sufficiently small, then
$\angle Q_lQ_kR_k\in(180^\circ-\alpha-\delta', 180^\circ-\alpha+\delta')$.
\end{proof}
\begin{rem}\label{rem:closed}
Proposition~\ref{prop:suppl_angles} holds for $\delta'=\delta$ as well.
Surprisingly, it even holds for some $\delta'<\delta$.
The reason behind this is the following.
If every analytic set $A\subset \mathbb{R}^n$ with $\mathcal{H}^s(A)>0$ contains an angle
from the interval $(\alpha-\delta, \alpha+\delta)$, then there necessarily exists a closed subinterval
$[\alpha-\gamma, \alpha+\gamma]$ ($\gamma<\delta$) such that
every analytic set $A\subset \mathbb{R}^n$ with $\mathcal{H}^s(A)>0$ contains an angle
from the interval $[\alpha-\gamma, \alpha+\gamma]$. We prove this statement in the Appendix (Theorem~\ref{thm:closed}).
This implies that $\Ch$
satisfies the symmetry property
$$\Ch(\alpha, \delta)=\Ch(180^\circ-\alpha, \delta).$$
\end{rem}
\begin{thm}\label{thm:120}
There exists an absolute constant $C$ such that
any analytic set $A\subset\mathbb{R}^n$ with
$\mathrm{dim}(A) > \frac{C}{\delta}\log(\frac{1}{\delta})$
contains an angle from the $\delta$-neighborhood of $120^\circ$.
Therefore $\Ch(120^\circ,\delta)\le \frac{C}{\delta}\log(\frac{1}{\delta})$.
\end{thm}
\begin{proof}
The claim readily follows from
Corollary \ref{cor:60}, Proposition \ref{prop:suppl_angles}
and the fact that the upper Minkowski dimension is
greater than or equal to the Hausdorff dimension.
\end{proof}
\begin{rem}
In fact, as for the other angles,
in Theorem~\ref{thm:120} it is also enough to assume that
the upper Minkowski dimension is bigger than
$\frac{C}{\delta}\log(\frac{1}{\delta})$. This follows from a more general
result that we prove in the Appendix.
\end{rem}
To find angles arbitrarily close to $0$ and $180^\circ$,
it suffices to have infinitely many points.
\begin{prop} \label{prop:pi_angle}
Any $A\subset \mathbb{R}^n$ of infinite cardinality
contains angles arbitrarily close to $0$ and
angles arbitrarily close to $180^\circ$.
Therefore $\Ch(0,\delta)=\Ch(180^\circ,\delta) = 0$.
\end{prop}
\begin{proof}[Sketch of the proof]
We claim that given $N$ points in $\mathbb{R}^n$
they must contain an angle less than
$\delta_1= \frac{C}{\sqrt[n-1]{N}}$
and an angle greater than $180^\circ-\delta_2$
with $\delta_2=\frac{C}{\sqrt[n-1]{\log N}}$.
The former follows easily from the pigeonhole principle.
The latter is a result of Erd\H os and
F\"uredi \cite[Theorem~4.3]{erdos/furedi}.
\end{proof}
\section{Appendix}
\subsection{A transfinite construction}
We prove the following theorem, which shows
that if we allowed arbitrary sets in Definition~\ref{def:basicdef}
then $C(n,\alpha)$ would be $n$.
\begin{thm}\label{thm:transf}
Let $n\ge 2$.
For any $\alpha \in [0,180^\circ]$ there exists $H\subset \mathbb{R}^n$ such
that $H$ does not contain the angle $\alpha$, and $H$ has full Lebesgue
outer measure; that is, its complement does not contain any measurable set
with positive measure.
In particular, $\mathrm{dim}(H)=n$.
\end{thm}
The proof we present here is shorter than our original one; it
was suggested by Marianna Cs\"ornyei.
We need the following simple lemma, which might be well known even for more
general sets; for completeness we present a proof.
Recall that an algebraic set is the
set of solutions of a system of polynomial equations.
\begin{lemma}\label{l:surfaces}
Less than continuum many proper algebraic subsets of $\mathbb{R}^n$ cannot cover
a Borel set of positive $n$-dimensional Lebesgue measure.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $n$. For $n=1$ this is clear since proper algebraic subsets
of $\mathbb{R}$ are finite and every Borel set of positive Lebesgue measure
has cardinality
continuum.
Suppose that the lemma holds for $n-1$ but it is false for $n$,
so there exists a collection $\mathcal A$ of
less than continuum many proper algebraic subsets of $\mathbb{R}^n$ such that
they cover a Borel set $B\subset\mathbb{R}^n$ with positive Lebesgue measure.
Let $H_t$ denote the ``horizontal'' section
$H_t=\{(x_1,\ldots,x_{n-1}):(x_1,\ldots,x_{n-1},t) \in H\}$ of a set
$H\subset\mathbb{R}^n$ at ``height'' $t\in\mathbb{R}$.
If $A$ is a proper algebraic subset of $\mathbb{R}^n$ then, with finitely many exceptions,
every section $A_t$ is a proper algebraic subset of $\mathbb{R}^{n-1}$.
Therefore, by using the assumption that the lemma holds for $n-1$,
we get that $(\cup\mathcal{A})_t$ can contain Borel sets of positive
$(n-1)$-dimensional Lebesgue measure only for less than continuum many $t$.
Let $f(t)$ denote the $(n-1)$-dimensional Lebesgue measure of the Borel set
$B_t$. Since $B\subset\cup\mathcal{A}$, we obtain that
$\{t: f(t)>0\}$ has cardinality less than continuum.
On the other hand, by Fubini's theorem $f$ is measurable and its integral is
the Lebesgue measure of $B$, so it is positive.
This implies that $\{t: f(t)>0\}$ is a measurable set of positive measure,
so it must have cardinality continuum, a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:transf}]
Take a well-ordering $\{B_\beta : \beta < \mathfrak{c}\}$ of the Borel subsets of
$\mathbb{R}^n$ with positive $n$-dimensional Lebesgue measure.
We will construct a sequence of points $\{x_\beta : \beta < \mathfrak{c} \}$ of $\mathbb{R}^n$
using transfinite induction
so that
\begin{equation}\label{condition}
x_{\beta}\in B_\beta \qquad \textrm{and} \qquad H_\beta= \{x_\delta: \delta\le\beta\}
\textrm{ does not contain the angle } \alpha
\end{equation}
for any $\beta<\mathfrak{c}$.
This will complete the proof since then $H=\{x_\beta:\beta < \mathfrak{c}\}$
will have all the required properties.
Suppose that $\gamma < \mathfrak{c}$ and we have already properly defined $x_\beta$
for all $\beta<\gamma$ so that (\ref{condition}) holds for all
$\beta<\gamma$.
For any $p,q\in \mathbb{R}^n$, $p\neq q$, let $A_{p,q}$
denote the set of those $x\in\mathbb{R}^n$ for which one of the angles of the triangle
$pqx$ is $\alpha$. Note that $A_{p,q}$ can be covered by three proper
algebraic subsets of $\mathbb{R}^n$. Then, by Lemma~\ref{l:surfaces},
the sets $A_{x_\delta,x_{\delta'}}$ $(\delta,\delta'<\gamma, x_\delta\neq x_{\delta'})$ cannot
cover $B_\gamma$, so we can choose a point
$$
x_\gamma\in B_\gamma\setminus \cup\{A_{x_\delta,x_{\delta'}}\ :\
\delta,\delta'<\gamma,\ x_\delta\neq x_{\delta'}\}.
$$
Then (\ref{condition}) also holds for $\beta=\gamma$.
This way we obtain a sequence $(x_\beta)_{\beta<\mathfrak{c}}$, so that
(\ref{condition}) holds for all $\beta<\mathfrak{c}$, which completes the proof.
\end{proof}
\subsection{The size of the neighborhood in the approximative problems}
Now, our goal is to prove the following theorem, which was claimed in Remark~\ref{rem:closed}.
\begin{thm}\label{thm:closed}
Suppose that $s=s(\alpha, \delta, n)$ is a positive real number such that every analytic set $A\subset\mathbb{R}^n$ with $\mathcal{H}^s(A)>0$ contains an angle
from the interval $(\alpha-\delta, \alpha+\delta)$.
Then there exists a closed subinterval
$[\alpha-\gamma, \alpha+\gamma]$ ($\gamma<\delta$) such that
every analytic set $A\subset\mathbb{R}^n$ with $\mathcal{H}^s(A)>0$ contains an angle
from the interval $[\alpha-\gamma, \alpha+\gamma]$.
\end{thm}
To prove this theorem, we need two lemmas. For $r \in (0,\infty]$ let
$$\mathcal{H}^s_r(A)=\inf \left\{ \sum_{i=1}^\infty \mbox{diam}(U_i)^s \,:\, \mbox{diam}(U_i)\le r, \ A\subset \cup_{i=1}^\infty U_i\right\},$$
thus $\mathcal{H}^s(A)=\lim_{r\to 0+} \mathcal{H}^s_r(A)$.
\begin{lemma}\label{lemma:limit}
Let $A_i$ be a sequence of compact sets converging in the Hausdorff metric to a set $A$. Then the following two statements hold.
\begin{itemize}
\item[(i)] $\Hc^s(A) \ge \limsup_{i\to\infty} \Hc^s(A_i).$
\item[(ii)] Suppose that for every $i=1,2,\ldots$ the set $A_i$ does not contain any angle from $[\alpha-\delta+1/i, \ \alpha+\delta-1/i]$. Then $A$ does not contain any angle from $(\alpha-\delta, \,\alpha+\delta)$.
\end{itemize}
\end{lemma}
\begin{proof}The first statement is well-known and easy.
To prove the second, notice that for any three points $x,y,z$ of $A$ there exist three points in $A_i$ arbitrarily close to $x,y,z$, for sufficiently large $i$.
\end{proof}
The next lemma follows easily from \cite[Theorem 2.10.17~(3)]{federer}.
For the sake of completeness, we give a short direct proof.
\begin{lemma}\label{lemma:surusegi}
Let $A\subset\mathbb{R}^n$ be a compact set satisfying $\mathcal{H}^s(A)>0$. Then there exists a ball $B$ such that $\Hc^s(A\cap B) \ge c\,\mbox{diam}(B)^s$,
where $c>0$ depends only on $s$.
\end{lemma}
\begin{proof}
We may suppose without loss of generality that $\mathcal{H}^s(A)<\infty$. (Otherwise we choose a compact subset of $A$ with positive and finite $\mathcal{H}^s$ measure. If the lemma holds for a subset of $A$ then it clearly holds for $A$ as well.)
Choose $r>0$ so that $\mathcal{H}^s_r(A)> \mathcal{H}^s(A)/2$.
Cover $A$ by sets $U_i$ of diameter at most $r/2$ such that $\sum_i \mbox{diam}(U_i)^s \le 2 \mathcal{H}^s(A)$.
Cover each $U_i$ by a ball $B_i$ of radius at most the diameter of $U_i$.
Then the balls $B_i$ cover $A$, have diameter at most $r$, and $\sum_i \mbox{diam}(B_i)^s \le 2^{1+s} \mathcal{H}^s(A)$.
We claim that one of these balls $B_i$ satisfies the conditions of the Lemma for $c=2^{-2-s}$. Otherwise we have
$$\Hc^s(A\cap B_i) < 2^{-2-s} \, \mbox{diam}(B_i)^s$$
for every $i$. Since the sets $A\cap B_i$ have diameter at most $r$, clearly $\mathcal{H}^s_r(A\cap B_i) = \Hc^s(A\cap B_i)$.
Therefore
\begin{multline*}
\mathcal{H}^s_r(A)\le \sum_i \mathcal{H}^s_r(A\cap B_i) <\sum_i 2^{-2-s} \, \mbox{diam}(B_i)^s \\ \le 2^{-2-s} 2^{1+s} \mathcal{H}^s(A) = \mathcal{H}^s(A)/2,
\end{multline*}
which contradicts the choice of $r$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:closed}]
Suppose on the contrary that there exist compact sets $K_i\subset \mathbb{R}^n$ with $\mathcal{H}^s(K_i)>0$ such that $K_i$ does not contain any angle from $[\alpha-\delta+1/i, \ \alpha+\delta-1/i]$.
Choose a ball $B_i$ for each compact set $K_i$ according to Lemma~\ref{lemma:surusegi}. Let $B$ be a ball of diameter $1$. Let $K_i'$ be the image of $K_i\cap B_i$ under a similarity transformation which maps $B_i$ to the ball $B$. Thus $\Hc^s(K_i')\ge c$, and since similarities preserve angles, $K_i'$ does not contain any angle from $[\alpha-\delta+1/i, \ \alpha+\delta-1/i]$ either. Let $K$ denote the limit of a convergent subsequence of the sets $K_i'$. We can apply Lemma~\ref{lemma:limit} to this subsequence and obtain $\Hc^s(K)\ge c$, implying $\mathcal{H}^s(K)>0$. Also, $K$ does not contain any angle from the interval $(\alpha-\delta, \,\alpha+\delta)$, which is a contradiction.
\end{proof}
\subsection{Replacing Hausdorff dimension by upper Minkowski dimension}
Our final goal is to show that in those problems where we only look for angles in
a neighborhood of a given angle, Hausdorff dimension can be replaced by
upper Minkowski dimension. This will follow from the following theorem.
As pointed out by Pablo Shmerkin, this theorem also follows
from a result of Furstenberg \cite{furstenberg}.
His result is much more general, and it is not immediately obvious
that it implies what we need.
Therefore we give a direct self-contained proof.
\begin{thm}\label{t1}
Let $A\subset \mathbb{R}^d$ be a bounded set with upper Minkowski dimension $s>0$.
Then there exists a compact set $K$ of Hausdorff dimension $s$ such that
all finite subsets of $K$ are limits of
homothetic copies of finite subsets of $A$.
(That is, for every finite set $S\subset K$ and $\varepsilon>0$
there exists a set $S'\subset A$ and $r>0$, $t\in \mathbb{R}^d$ such that
the Hausdorff distance of $t+rS'$ and $S$ is at most $\varepsilon$.)
\end{thm}
Applying Theorem~\ref{t1} to a bounded set $A$ that does not contain any angle
from an open interval we get a compact set $K$
with the same property and with $\mathrm{dim}(K)={\overline{\mathrm{dim}}_{\mathrm{M}}}(A)$.
Thus we get the following.
\begin{cor}\label{cor:mink=haus}
For any $\alpha \in [0,180^\circ]$ and $\delta>0$, we have
\begin{multline*}
\Ch(\alpha, \delta) = \sup \{{\overline{\mathrm{dim}}_{\mathrm{M}}}(A) :
A \subset \mathbb{R}^n \textrm{ for some } n; A \textrm{ is bounded};\\
A \mbox{ does not contain any angle from } (\alpha-\delta, \alpha+\delta) \} .
\end{multline*}
\end{cor}
\begin{proof}[Proof of Theorem~\ref{t1}]
We will need to use a slightly different version of
the Hausdorff content $\mathcal{H}^s_\infty(B)$ in this proof.
Instead of covering $B \subset \mathbb{R}^d$ with arbitrary sets,
we will only consider coverings with homothetic copies of
the unit cube $[0,1]^d$. (From now on,
a cube is always assumed to be a homothetic copy of the unit cube.)
For a cube $C$, $\mbox{diam}(C)$ is just a constant multiple ($\sqrt{d}$ times) of
the edge length of $C$, which we denote by $|C|$.
For the sake of simplicity, we will use $|C|$ in our definition:
for any $B \subset \mathbb{R}^d$ and $s>0$ let
$$ \widehat{ \mathcal{H} }^s_\infty(B) \stackrel{\textrm{\scriptsize def}}{=} \inf \left\{ \sum_{i=1}^\infty |C_i|^s : C_i
\mbox{ is a cube for each } i; \ B \subset \bigcup_{i=1}^\infty C_i \right\} .$$
It is easy to see that
$d^{-s/2} \mathcal{H}^s_\infty \leq \widehat{ \mathcal{H} }^s_\infty \leq \mathcal{H}^s_\infty$.
Also note that $ \widehat{ \mathcal{H} }^s_\infty([0,1]^d) = 1$ for any $0 < s \leq d$.
We may assume that $A\subset [0,1]^d$.
For a positive integer $n$ we divide the unit cube into
$n^d$ subcubes of edge length $1/n$.
Let $A_n$ be the union of the subcubes that intersect $A$.
We claim that for any fixed $0< \delta < s/2$,
for infinitely many $n$ (depending on $\delta$)
there exists a cube $C$ such that
\begin{equation}\label{theclaim}
|C| \geq n^{\frac{\delta}{2d}} / n \mbox{ and }
\widehat{ \mathcal{H} }^{s-2\delta}_\infty(C \cap A_n) \geq 2^{-s-2} |C|^{s-2\delta} .
\end{equation}
First we show how the theorem follows from this claim. If \eqref{theclaim} holds for $n$ and $C$, then let $K_n$ be the image of $C \cap A_n$ under the homothety
that maps $C$ to $[0,1]^d$.
Hence $\widehat{ \mathcal{H} }^{s-2\delta}_\infty(K_n)\ge 2^{-s-2}$.
If $S\subset K_n$ is finite, then there exists $S'$
such that the Hausdorff distance of $S$ and $S'$ is
at most $\sqrt{d} n^{-\delta/(2d)}$ and
a homothetic image of $S'$ is in $A$.
For each $\delta=1/l$ choose $n=n_l\ge l^l$ such that the claim holds.
Let $\tilde K$ be the limit of a convergent subsequence of $K_{n_l}$.
By Lemma \ref{lemma:limit}\,(i), applied with exponent $s-2/l$ for every $l$,
the Hausdorff dimension of $\tilde K$ is at least $s$.
Let $K$ be a compact subset of $\tilde K$ of Hausdorff dimension $s$.
It is easy to check
that $K$ satisfies all the required properties.
It remains to prove the claim.
Since ${\overline{\mathrm{dim}}_{\mathrm{M}}}(A) = s$, $A_n$ contains at least
$n^{s-\delta}$ subcubes for infinitely many $n$.
Fix such an $n$ with $n\ge 2^{4/\delta}$. Let
$$ c=\min \left\{\widehat{ \mathcal{H} }^{s-2\delta}_\infty(B) / m:
B \mbox{ is the union of $m$ subcubes of } A_n, \ m\ge 1\right\} .$$
Since the unit cube covers $A_n$, by choosing $B$ as the union of
$m\ge n^{s-\delta}$ subcubes of $A_n$ we get
$c\le \widehat{ \mathcal{H} }^{s-2\delta}_\infty(B)/m \le 1/n^{s-\delta}$.
(On the other hand, one subcube has content $1/n^{s-2\delta}$, hence
the minimum is taken for a set $B$ for which $m$ is at least $n^\delta$.)
Suppose now that $B$ is a set for which the minimum is taken; that is,
$$\widehat{ \mathcal{H} }^{s-2\delta}_\infty(B)=cm,$$
where $B$ consists of $m$ subcubes of $A_n$.
It follows that there exists a covering of $B$ with cubes $C_i$ ($i=1,2,\ldots$) such that
$$\sum_{i=1}^\infty |C_i|^{s-2\delta} \le 2cm.$$
Let $k=n^{\delta/(2d)}$.
We say that a cube $C_i$ is ``bad'' if $|C_i|<k/n$, and ``good'' otherwise.
The total volume of the bad cubes is at most
\begin{align*}
& \sum_{C_i \text{ is bad}} |C_i|^d =
\sum_{C_i \text{ is bad}} |C_i|^{d-s+2\delta} |C_i|^{s-2\delta} \le
(k/n)^{d-s+2\delta} \sum_{i=1}^\infty |C_i|^{s-2\delta} \\
& \le 2cm (k/n)^{d-s+2\delta} \le 2m k^{d-s+2\delta} n^{-\delta-d}
\le 2m k^{d} n^{-\delta-d} = 2m n^{-\frac{\delta}{2}-d}\le \frac{m}2 n^{-d},
\end{align*}
where the last four estimates follow from $c \le 1/n^{s-\delta}$,
$\delta<s/2$, $k=n^{\delta/(2d)}$ and $n\ge 2^{4/\delta}$.
So there are at most $m/2$ subcubes that are fully covered by bad cubes.
Let $B'$ be the union of the remaining (at least $m/2$) subcubes in $B$.
Since each subcube in $B'$ must intersect a good cube $C_i$,
it follows that the cubes $2C_i$ cover $B'$, where
$2C_i$ is the cube with the same center as $C_i$ and double edge length.
Then the definition of $c$ implies that
$$
\sum_{C_i \text{ is good}} \widehat{ \mathcal{H} }^{s-2\delta}_\infty(2C_i \cap A_n) \ge
\widehat{ \mathcal{H} }^{s-2\delta}_\infty(B')\ge c\frac{m}2 .$$
On the other hand, we have
$$ \sum_{C_i \text{ is good}} |2C_i|^{s-2\delta} \le
2^{s-2\delta} \sum_{i=1}^\infty |C_i|^{s-2\delta} \le 2^{s-2\delta} 2cm
\le 2^{s+1}cm.$$
Therefore there exists a good cube $C_i$ such that
$$ \widehat{ \mathcal{H} }^{s-2\delta}_\infty(2C_i \cap A_n) \ge
2^{-s-2} |2C_i|^{s-2\delta} .$$
Thus (\ref{theclaim}) holds for the cube $C = 2 C_i$,
which completes the proof.
\end{proof}
\subsubsection*{Acknowledgement}
We thank Sridhar Arunachalam, David Brandfonbrener, Irmak Guzey, Yixin Lin, Jyo Pari, Abitha Thankaraj, and Austin Wang for their valuable feedback and discussions. This work was supported by awards from Honda, Meta, Hyundai, Amazon, and ONR award N000142112758.
\section*{Appendix}
\section{Behavior Transformers}
\label{sec:appendix_bet}
We use Behavior Transformers from~\cite{bet} as our backbone architecture, building our conditional algorithm on top of it.
In this section, we describe the BeT architecture and the training objective to help the readers understand the details of our algorithm.
\subsection{BeT architecture}
BeT uses a repurposed MinGPT architecture to model multi-modal behavior.
It uses the MinGPT trunk as a sequence-to-sequence model that tries to predict a sequence of actions $a_{t:t+h}$ given a sequence of states or observations $o_{t:t+h}$.
Beyond just predicting the actions, however, BeT tries to model the multi-modal action distribution given the observations.
To create a multi-modal model over the continuous action distribution, BeT uses an action encoder-decoder architecture that can encode each action vector into a discrete latent and a smaller-norm continuous offset.
BeT does so by using an offline action dataset to create a $k$-means model of the actions.
Then, an action is encoded into its associated bin out of the $k$ bins (binning), and a small continuous offset from the center of that bin.
The BeT model, given a sequence of observations $o_{t:t+h}$, predicts a $k$-dimensional multinomial distribution over the $k$ bins, as well as a $k\times |A|$ dimensional matrix of offsets associated with each action bin.
Sampling from the BeT action distribution is done by first sampling a discrete bin, and then adding that bin's center to its associated predicted offset.
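For illustration, here is a minimal Python sketch of this encoding and decoding scheme (our own, not the released implementation; the toy data and all names are ours), assuming a $k$-means model fitted on an offline action dataset:
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

# Toy offline action dataset: N actions with |A| = 2.
actions = np.random.randn(1000, 2)
kmeans = KMeans(n_clusters=24, n_init=10).fit(actions)

def encode(a):
    # Discrete latent: index of the nearest bin center.
    bin_idx = int(kmeans.predict(a[None])[0])
    # Continuous latent: small-norm offset from that center.
    offset = a - kmeans.cluster_centers_[bin_idx]
    return bin_idx, offset

def decode(bin_idx, offset):
    # Reconstruct the action from the two latents.
    return kmeans.cluster_centers_[bin_idx] + offset
\end{verbatim}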
\subsection{BeT training objective}
Given an observation $o$ and its associated ground truth action $a$, we now present a simplified version of how the BeT loss is calculated.
Let us assume the BeT model prediction is $\pi(o)_d \in \mathbb R^k, \pi(o)_c \in \mathbb R^{k \times |A|}$ for the discrete and the continuous parts of the predictions.
Let us also assume that $\lfloor a \rfloor$ is the discrete bin out of the $k$ bins that $a$ belongs to, and $\langle a \rangle = a - \text{BinCenter}(\lfloor a \rfloor)$.
Then, the BeT loss becomes
\begin{equation*}
\mathcal L_{\text{BeT}} = L_{focal}(\pi(o)_d, \lfloor a \rfloor) + \lambda \cdot L_{MT}(\langle a \rangle, \pi(o)_c)
\end{equation*}
where $L_{focal}$ is the Focal loss~\citep{lin2017focal}, a special case of the negative log likelihood loss defined as
\[L_{focal} (p_t) = -(1-p_t)^\gamma\log (p_t)\]
and $L_{MT}$ is the multi-task loss~\citep{girshick2015fast} defined as
\[L_{MT}\left (\mathbf{a}, \left (\langle \hat a^{(j)} \rangle\right )_{j=1}^k\right ) = \sum_{j=1}^k \mathbb I [\lfloor \mathbf{a} \rfloor = j] \cdot \| \langle \mathbf{a} \rangle - \langle \hat a^{(j)} \rangle \|_2^2\]
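The following PyTorch sketch shows how these two terms could be combined (our own simplification, not the official code; tensor shapes and the values of \texttt{gamma} and \texttt{lam} are illustrative assumptions):
\begin{verbatim}
import torch
import torch.nn.functional as F

def bet_loss(logits, offsets, a_bin, a_off, gamma=2.0, lam=1.0):
    # logits: (B, k) bin scores; offsets: (B, k, |A|) per-bin offsets.
    # a_bin: (B,) true bin indices; a_off: (B, |A|) true offsets.
    p_t = logits.softmax(-1).gather(1, a_bin[:, None]).squeeze(1)
    focal = (-(1.0 - p_t) ** gamma * torch.log(p_t)).mean()
    idx = a_bin[:, None, None].expand(-1, 1, offsets.shape[-1])
    picked = offsets.gather(1, idx).squeeze(1)  # offsets of the true bins
    mt = F.mse_loss(picked, a_off)              # penalize only the true bin
    return focal + lam * mt
\end{verbatim}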
\newpage
\section{Implementation Details}
\label{sec:app:hparams}
\subsection{Implementation used}
In our work, we base our C-BeT{} implementation on the official repo published at \url{https://github.com/notmahi/bet}.
For the GCBC, WGCSL, and GoFAR baselines, we use the official repo released by the GoFAR authors \url{https://github.com/JasonMa2016/GoFAR/}.
\subsection{Hyperparameter list}
We present the C-BeT{} hyperparameters in Table~\ref{tab:bet-hparams} below; they mostly follow the defaults from the original \cite{bet} paper:
\begin{table}[ht]
\caption{Environment-dependent hyperparameters in BeT.}
\label{tab:bet-hparams}
\centering
\begin{tabular}{lccc}
Hyperparameter & CARLA & Block-push & Kitchen \\
\hline
Layers & 3 & 4 & 6 \\
Attention heads & 4 & 4 & 6 \\
Embedding width & 256 & 72 & 120 \\
Dropout probability & 0.6 & 0.1 & 0.1 \\
Context size & 10 & 5 & 10 \\
Training epochs & 40 & 350 & 50 \\
Batch size & 128 & 64 & 64 \\
Number of bins $k$ & 32 & 24 & 64 \\
Future conditional frames & 10 & 3 & 10
\end{tabular}
\end{table}
The shared hyperparameters are in Table \ref{tab:bet-shared-hparams}.
\begin{table}[ht]
\centering
\caption{Shared hyperparameters for BeT training}
\label{tab:bet-shared-hparams}
\begin{tabular}{lc}
Name & Value \\ \hline
Optimizer & Adam \\
Learning rate & 1e-4 \\
Weight decay & 0.1 \\
Betas & (0.9, 0.95) \\
Gradient clip norm & 1.0 \\
\end{tabular}
\end{table}
\newpage
\section{Robot Environment Demonstration Trajectories}
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figs/real_kitchen_trajs_0.png}
\caption{Sample demonstration trajectories for the real kitchen environment.}
\label{fig:real_kitchen_trajs_0}
\end{figure}
\newpage
\section{Simulated Environment Rollout Trajectories}
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figs/CARLAs.pdf}
\caption{Sample demonstration trajectories for the CARLA self driving environment, conditioning on going to the right path.}
\label{fig:carlas}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figs/blockpushes.pdf}
\caption{Sample demonstration trajectories for the multi-modal block pushing environment, conditioning on pushing the green block to green square and red block to red square.}
\label{fig:blockpushes}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figs/kitchens.pdf}
\caption{Sample demonstration trajectories for the Franka Kitchen environment, conditioning on completing the microwave, bottom knob, slide cabinet, hinge cabinet tasks.}
\label{fig:kitchens}
\end{figure}
\section{Discussion and Limitations}
In this work, we have presented C-BeT{}, a new approach for conditional behavior generation that can learn from offline play data. Across a variety of benchmarks, both simulated and real, we find that C-BeT{} significantly improves upon prior state-of-the-art work. However, we have noticed two limitations in C-BeT{}, particularly for real-robot behavior learning. First, if the features provided to C-BeT{} do not appropriately capture relevant objects in the scene, the robot execution often fails to interact with those objects in its environment. Second, some tasks, like opening the oven door, have simpler underlying data that is not multimodal, so C-BeT{} yields only meager gains there. A more detailed analysis of these limitations is presented in Section~\ref{sec:ablation}. We believe that future work in visual representation learning can address poor environment features, while the collection of even larger play datasets will provide more realistic offline data for large-scale behavior learning models.
\label{sec:limit}
\section{Introduction}
Machine Learning is undergoing a Cambrian explosion in large generative models for applications across vision~\citep{ramesh2022hierarchical} and language~\citep{brown2020language}.
A shared property across these models is that they are trained on large and uncurated data, often scraped from the internet.
Interestingly, although these models are trained without explicit task-specific labels
in a self-supervised manner, they demonstrate a preternatural ability to generalize
by simply conditioning the model on desirable outputs (e.g. ``prompts'' in text or image generation).
Yet, the success of conditional generation from uncurated data has remained elusive for decision making problems, particularly in robotic behavior generation.
To address this gap in behavior generation, several works~\citep{lynch2019learning,spirl} have studied the use of generative models on \textit{play} data.
Here, play data is a form of offline, uncurated data that comes from either humans or a set of expert policies interacting with the environment.
However, once trained, many of these generative models require significant amounts of additional online training with task-specific rewards~\citep{gupta2019relay,singh2020parrot}.
In order to obtain task-specific policies without online training, a new line of approaches employ offline RL to learn goal-conditioned policies~\citep{levine2020offline,ma2022far}.
These methods often require rewards or reward functions to accompany the data, either specified during data collection or inferred through hand-crafted distance metrics, for compatibility with RL training.
\begin{table}[htbp]
\centering
\caption{Comparison between existing algorithms to learn from large, uncurated datasets: GCBC~\citep{lynch2019learning}, GCSL~\citep{ghosh2019learning}, Offline GCRL~\citep{ma2022far}, Decision Transformer~\cite{chen2021decision}}
\begin{tabular}{@{}lccccc@{}}
\toprule
& GCBC & GCSL & Offline RL & Decision Transformer & C-BeT{} (ours) \\ \midrule
Reward-free & \cmark & \cmark & \xmark & \xmark & \cmark \\
Offline & \cmark & \xmark & \cmark & \cmark & \cmark \\
Multi-modal & \xmark & \xmark & \xmark & \xmark & \cmark \\ \bottomrule
\end{tabular}
\label{tab:intro-table}
\end{table}
Unfortunately, for many real-world applications, data does not readily come with rewards.
This prompts the question: \textit{how do we learn conditional models for behavior generation from reward-free, play data?}
To answer this question, we turn towards transformer-based generative models that are commonplace in text generation. Here, given a prompt, models like GPT-3~\citep{brown2020language} can generate text that coherently follow or satisfy the prompt.
However, directly applying such models to behavior generation requires overcoming two significant challenges.
First, unlike the discrete tokens used in text generation, behavior generation will need models that can output continuous actions while also modeling any multi-modality present in the underlying data.
Second, unlike textual prompts that serve as conditioning for text generation, in behavior generation the condition and the generated output need not come from the same token set, and the conditioning may instead be on future outcomes.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figs/intro_fig.pdf}
\caption{Multiple conditioned roll-outs of visual robot policies learned on our toy kitchen with only 4.5 hours of human play interactions.
Our model learns purely from image and proprioception without human labeling or data curation.
During evaluation, the policy can be conditioned either on a goal observation or a demonstration.
Note that the last three rows contain distractor objects in the environment that were never seen during training.}
\label{fig:intro}
\vspace{-20pt}
\end{figure}
In this work, we present Conditional Behavior Transformers (C-BeT{}), a new model for learning conditional behaviors from offline data.
To produce a distribution over continuous actions instead of discrete tokens, C-BeT{} augments standard text generation transformers with the action discretization introduced in Behavior Transformers (BeT)~\citep{bet}.
Conditioning in C-BeT{} is done by specifying desired future states as input similar to Play-Goal Conditioned Behavior Cloning (Play-GCBC)~\citep{lynch2019learning}.
By combining these two ideas, C-BeT{} is able to leverage the multi-modal generation capabilities of transformer models with the future conditioning capabilities of conditional policy learning.
Importantly, C-BeT{} does not require any online environment interactions during training, nor the specification of rewards or Q functions needed in offline RL.
We experimentally evaluate C-BeT{} on three simulated benchmarks ~(visual self-driving in CARLA~\citep{carla}, multi-modal block pushing~\citep{florence2021implicit}, and simulated kitchen~\citep{gupta2019relay}), and on a real Franka robot trained with play data collected by human volunteers. The main findings from these experiments can be summarized as:
\begin{enumerate}[leftmargin=0.3in]
\item On future-conditioned tasks, C-BeT{} achieves significantly higher performance compared to prior work in learning from play.
\item C-BeT{} demonstrates that competent visual policies for real-world tasks can be learned from fully offline data (rollouts visualized in Figure~\ref{fig:intro}). To the best of our knowledge, C-BeT{} represents the first work to demonstrate this ability from reward-free play data.
\end{enumerate}
\section{Background and Preliminaries}
\label{sec:background}
\noindent \textbf{Play-like data:}
Learning from Demonstrations~\citep{argall2009survey} is one of the earliest frameworks explored for behavior learning algorithms from offline data.
Typically, the datasets used in these frameworks have a built-in assumption that the demonstrations are collected from an expert repeatedly demonstrating a single task in exactly the same way.
On the contrary, play datasets violate many such assumptions, like the expertise of the demonstrator, and the unimodality of the task and the demonstrations.
Algorithms that learn from such datasets sometimes assume that the demonstrations collected are from a rational agent with possibly some latent intent in their behavior~\citep{lynch2019learning}.
Note that, unlike standard offline-RL datasets~\citep{fu2020d4rl}, play-like behavior datasets neither contain fully random behaviors, nor have rewards associated with the demonstrations.
\noindent \textbf{Behavior Transformers (BeT):}
BeT~\citep{bet} is a multi-modal behavior cloning model designed particularly for tackling play-like behavior datasets.
BeT uses a GPT-like transformer architecture to model the probability distribution of the action given a sequence of states, $\pi(a_t \mid s_{t-h:t})$, from a given dataset.
However, unlike previous behavior learning algorithms, BeT does not assume a unimodal prior for the action distribution.
Instead, it uses a $k$-means discretization to bin the actions from the demonstration set into $k$ bins, and then uses the bins to decompose each action into a discrete and continuous component.
This support for multi-modal action distributions makes BeT particularly suited for multi-modal, play-like behavior datasets where unimodal behavior cloning algorithms fail.
However, vanilla BeT only supports unconditional behavior rollouts, which means that it is not possible to choose a targeted mode of behavior during BeT policy execution.
\noindent \textbf{Conditional behavior learning:} Generally, the problem of behavior learning for an agent is considered the task of learning a \textit{policy} $\pi: \mathcal O \rightarrow \mathcal A$ mapping from the environment observations to the agent's actions that elicit some desired behavior.
Conditional behavior learning is concerned with learning a policy $\pi : \mathcal{O} \times \mathcal{G} \rightarrow \mathcal A$ conditioned additionally on a secondary variable $g$ sampled from a distribution $p(g)$.
This condition variable could be specific environment states, latents (such as one-hot vectors), or even image observations.
The success of a conditioned policy can be evaluated either through pre-specified reward functions, distance function between achieved outcome $g'$ and specified outcome $g$, or by discounted visitation probability $d_{\pi(\cdot \mid g)} = \mathbb{E}_{\tau\sim\pi}[\sum^{\infty}_{t=0} \gamma^t \delta(\phi(o_t) = g)]$ if a mapping $\phi$ between states and achieved outcome is defined \citep{eysenbach2022contrastive}.
\noindent \textbf{Goal Conditioned Behavior Cloning (GCBC):}
In GCBC~\citep{lynch2019learning,emmons2021rvs}, the agent is presented with a dataset of (observation, action, goal) tuples $(o, a, g)$, or sequences of such tuples, and the objective of the agent is to learn a goal-conditioned behavior policy.
The simplest way to achieve so is by training a policy $\pi(\cdot \mid o, g)$ that maximizes the probability of the seen data $\pi^* = \argmax_{\pi}\prod_{(o, a, g)}\mathbb P[a \sim\pi(\cdot \mid o, g)]$.
Assuming a unimodal Gaussian distribution for $\pi(a \mid o, g)$ and a model parametrized by $\theta$, this comes down to finding the parameter $\theta$ minimizing the MSE loss, $\theta^* = \argmin_\theta \sum_{(o,a,g)} || a - \pi(o, g; \theta) ||^2$. To make GCBC compatible with play data that inherently does not have goal labels, goal relabeling from future states is often necessary.
A common form of data augmentation in training such models, useful when $\mathcal G \subset \mathcal O$, is hindsight data relabeling~\citep{andrychowicz2017hindsight}, where the dataset $\{(o, a, g)\}$ is augmented with $\{(o_{t}, a, o_{t'}) \mid t' > t\}$ by relabeling any reached state in a future timestep as a goal state and adding it to the dataset.
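As a concrete illustration of such relabeling, consider the following minimal sketch (our own; the trajectory format and \texttt{n\_goals} are assumptions, not part of any cited implementation):
\begin{verbatim}
import random

def hindsight_relabel(traj, n_goals=4):
    # traj: list of (observation, action) pairs from one play sequence.
    # Returns (o_t, a_t, o_t') tuples with future states o_t' as goals.
    relabeled = []
    for t in range(len(traj) - 1):
        o, a = traj[t]
        for _ in range(n_goals):
            t_goal = random.randrange(t + 1, len(traj))
            relabeled.append((o, a, traj[t_goal][0]))
    return relabeled
\end{verbatim}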
\section{Approach}
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-15pt}
\begin{center}
\includegraphics[width=0.48\textwidth]{figs/carla_multipath.pdf}
\caption{Conditional behavior learning from play demonstrations. Here, a policy conditioned on reaching \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}} or \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}} has only one possible course of action, but conditioned on reaching \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {3}}} there are two reasonable paths.}
\label{fig:carla_fork}
\end{center}
\vspace{-15pt}
\end{wrapfigure}
Given a dataset $\{(o, a)\} \in \mathcal O \times \mathcal A$ of sequences of (observation, action) pairs from a play dataset, our goal is to learn a behavior generation model that is capable of handling multiple tasks and multiple ways of accomplishing each task.
At the same time, we wish to be able to extract desired behavior from the dataset in the form of a policy through our model, or, in terms of generative models, ``controllably generate'' our desired behavior (see Figure~\ref{fig:carla_fork}).
Finally, in the process of learning this controllable, conditional generative model, we wish to minimize the amount of additional human annotation or curation required in preparing the dataset.
The method we develop to address these needs is called Conditional Behavior Transformer.
\subsection{Conditional Behavior Transformers (C-BeT)}
\label{sec:cbet}
\noindent \textbf{Conditional task formulation:}\label{par:task_form} First, we formulate the task of learning from a play dataset as learning a conditional behavior policy, i.e. given the current state, we need to model the distribution of actions that can lead to particular future states.
For simplicity, our formulation can be expressed as $\pi: \mathcal O \times \mathcal O \rightarrow \mathcal D( \mathcal A)$ where, given a current observation $o_c$ and a future observation $o_g$, our policy $\pi$ models the distribution of the possible actions that can take the agent from $o_c$ to $o_g$.
Mathematically, given a set of play trajectories $T$, we model the distribution $\pi(a \mid o_c, o_g) \triangleq \mathbb P_{\tau \in T}(a \mid o_c = \tau_t, o_g = \tau_{t'}, t' > t)$.
Next, to make our policy more robust since we operate in the partially observable setting, we replace singular observations with a sequence of observations; namely replacing $o_c$ and $o_g$ with $\bar o_c = o^{(1:N)}_{c}$ and $\bar o_g = o^{(1:N)}_{g}$ for some integer $N$.
Thus, the final task formulation becomes learning a generative model $\pi$ with:
\begin{equation}
\pi\left (a \mid o^{(1:N)}_{c}, o^{(1:N)}_{g}\right) \triangleq \mathbb P_{\tau \in T}\left (a \mid o^{(1:N)}_{c} = \tau_{t:t+N}, o^{(1:N)}_g = \tau_{t':t'+N}, t' > t\right )
\end{equation}
\noindent \textbf{Architecture selection:} Note that the model for our task described in the previous paragraph is necessarily multi-modal, since depending on the sequences $\bar o_c$ and $\bar o_g$, there could be multiple plausible sequences of actions with non-zero probability mass.
As a result, we choose Behavior Transformers (BeT) \citep{bet} as our generative architecture base as it can learn action generation with multiple modes.
We modify the input to the BeT to be a concatenation of our future conditional observation sequence and current observation sequence.
We choose to concatenate the inputs instead of stacking them, as this allows us to independently choose sequence lengths for the current and future conditional observations.
Since BeT is a sequence-to-sequence model, we only consider the actions associated with the current observations as our actions.
We show the detailed architecture of our model in Figure~\ref{fig:cbet_arch}.
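To make the input layout concrete, here is a minimal sketch of the concatenation (ours; the trunk below is a stand-in encoder rather than the causal MinGPT trunk BeT actually uses, and all sizes are illustrative):
\begin{verbatim}
import torch
import torch.nn as nn

B, N, Ng, D = 8, 10, 10, 120
obs = torch.randn(B, N, D)    # current observation sequence
goal = torch.randn(B, Ng, D)  # future conditional (goal) sequence

trunk = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=6, batch_first=True),
    num_layers=2,
)
tokens = torch.cat([goal, obs], dim=1)  # concatenation, not stacking,
out = trunk(tokens)                     # so N and Ng may differ
pred = out[:, -N:]                      # only the current-window outputs
                                        # are used to predict actions
\end{verbatim}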
\noindent \textbf{Dataset preparation:} To train a C-BeT{} model on our play dataset $\{(o, a)\}$, we will need to appropriately prepare the dataset.
We first convert the dataset to hold sequences of observations associated with actions, $\{(o_{t:t+N}, a_{t:t+N})\}$.
Then, during training time, we dynamically augment each pair with a sequence of future observations, functionally converting our dataset into $\{(o_{t:t+N}, a_{t:t+N}, o_{t':t'+N'})\}$ for some $t' > t$, and treat the sequence $o_{t':t'+N'}$ as $\bar o_g$.
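In Python, this dynamic augmentation step could be sketched as follows (our own illustration of the sampling logic; window lengths are assumptions, and the trajectory is assumed long enough):
\begin{verbatim}
import random

def sample_training_tuple(obs, acts, N=10, Np=10):
    # obs, acts: aligned per-timestep lists from one play trajectory
    # (assumes len(obs) > N + Np). Returns a current window and a
    # strictly later future window used as the goal sequence.
    T = len(obs)
    t = random.randrange(0, T - N - Np)
    tp = random.randrange(t + 1, T - Np + 1)
    return obs[t:t+N], acts[t:t+N], obs[tp:tp+Np]
\end{verbatim}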
\begin{figure}
\vspace{-25pt}
\centering
\includegraphics[width=\textwidth]{figs/figure_arch_smaller.pdf}
\caption{End-to-end training and evaluation of C-BeT. (A) Our dataset consists of play data in an environment, which may contain semi-optimal behavior, multi-modal demonstrations, and failures, and does not contain any annotations or task labels. (B) We train our C-BeT{} model by conditioning on current and future states using BeT (Section~\ref{sec:background}) (C) During evaluation, our algorithm can be conditioned by target observations or newly collected demonstrations to generate targeted behavior. }
\label{fig:cbet_arch}
\vspace{-15pt}
\end{figure}
\textbf{Training objective:} We employ the same objective as BeT in training C-BeT{}.
For each of the current observation and future conditional pair, we compute the BeT loss (see appendix~\ref{sec:appendix_bet} for details)
between the ground truth actions and the predicted actions.
We compute the focal loss~\citep{lin2017focal} on the predicted action bins, and the MT-loss~\citep{girshick2015fast} on the predicted action offsets corresponding to the action bins as described in BeT.
\noindent\textbf{Test-time conditioning with C-BeT{}:}
During test time, we again concatenate our future conditional sequence with our current observations, and sample actions from our model according to the BeT framework.
While in this work we primarily condition C-BeT{} on future observations, we also study other ways of training and conditioning it, such as binary latent vectors denoting the modes in a trajectory, and compare their performance to observation-conditioned C-BeT{} in our experiments (see Section~\ref{sec:conditioning}).
\section{Related Work}
\textbf{Outcome-conditioned behavior learning:}
Behavior learning conditioned on particular outcomes, such as reward or goals, is a long studied problem~\citep{Kaelbling93learningto,pmlr-v37-schaul15,veeriah2018many,zhao2019maximum}.
Compared to standard behavior learning, learning conditioned behavior can generally be more demanding since the same model can be expected to learn a multitude of behaviors depending on the outcome, which can make learning long-term behavior harder~\citep{levy2017learning,nachum2018data}.
As a result, a common line of work in outcome-conditioned learning is to use some form of relabeling of demonstrations or experience buffer as a form of data augmentation~\citep{Kaelbling93learningto, andrychowicz2017hindsight, ghosh2019learning} similar to what we do in the paper.
As opposed to goal or state conditioned learning, which we focus on in this paper, reward-conditioned learning using a transformer was recently introduced~\citep{chen2021decision}. However, later work found that it may not work as expected in all environments~\citep{paster2022cant,brandfonbrener2022returnconditioned} and that large transformer models may not be necessary~\citep{emmons2021rvs} for reward-conditioned learning. In this work, we find that using transformers is crucial, particularly when dealing with high dimensional visual observations and multi-modal actions.
\textbf{Learning from play data:}
Our work is most closely related to previous works such as \citet{lynch2019learning,gupta2019relay}, which also focus on learning from play demonstrations that may not be strictly optimal and uniformly curated for a single task.
Learning policies capable of multiple tasks from play data allows knowledge sharing, which is why it may be more efficient compared to learning from demonstrations directly~\citep{zhang2018deep, rahmatizadeh2018vision,duan2017one,pari2021surprising}. \cite{gupta2022bootstrapped} attempts reset-free learning with play data, but requires human annotation and instrumentation in the environment for goal labels.
Beyond learning policies from play data, learning self-supervised visual representations that are not entirely task-specific may also be a fruitful endeavor, as shown by~\cite{young2021playful,nair2022r3m}.
\textbf{Generative modeling of behavior:}
Our method of learning a generative model for behavior learning follows a long line of work, including Inverse Reinforcement Learning or IRL \citep{russell1998learning, ng2000algorithms, ho2016generative}, where given expert demonstrations, a model tries to construct the reward function, which is then used to generate desirable behavior.
Another class of algorithms learn a generative action decoder~\citep{pertsch2020accelerating, singh2020parrot} from interaction data to make downstream reinforcement learning faster and easier, nominally making multi-modal action distribution easier.
Finally, a class of algorithms, most notably ~\citet{liu2020energy, florence2021implicit, kostrikov2021offline, nachum2021provable} do not directly learn a generative model, but instead learn energy based models that need to be sampled to generate behavior, although they do not primarily focus on goal-conditioning.
\textbf{Transformers for behavior learning:} Our work follows earlier notable works in using transformers to learn a behavior model from an offline dataset, such as \cite{chen2021decision, janner2021sequence, bet}.
Our work is most closely related to \cite{bet} as we build on their transformer architecture, while our unimodal baseline is a variant of ~\cite{chen2021decision} that learns outcome conditioned instead of reward conditioned policy.
Beyond these, \cite{dasari2020transformers, mandi2021towards} summarizes historical visual context using transformers, and \cite{clever2021assistive} relies on the long-term extrapolation abilities of transformers as sequence models. The goal of C-BeT{} is orthogonal to these use cases, but can be combined with them for future applications.
\section*{Reproducibility Statement}
As a part of our commitment to reproducibility, we plan to release all our code, data, and roll-out videos on the project website, \rurl{play-to-policy.github.io}.
We have already referred to the codebases that we build upon in Section~\ref{sec:app:hparams}, and included the hyperparameters used to train the models in our paper.
Finally, our simulated experiments are built on free environments and thus our experiments should be easily replicable.
\section{C-BeT{} on Simulated Benchmarks}
\label{sec:exp}
In this section, we discuss our experiments in simulation that are designed to answer the following key questions: How well does C-BeT{} learn behaviors from play? How important is multi-modal action modeling? And finally, how does C-BeT{} compare to other forms of conditioning?
\subsection{Baselines}\label{sec:baselines}
We compare with the following state-of-the-art methods in learning from reward-free offline data:
\begin{itemize}[leftmargin=*]
\item \textbf{Goal Conditioned Behavior Cloning (GCBC):} GCBC~\citep{lynch2019learning,emmons2021rvs} learns a policy through optimizing the probability of the seen action given any state and the end state in a trajectory. It assumes that each trajectory is optimal for reaching its final state.
\item \textbf{Weighted Goal Conditioned Supervised Learning (WGCSL):} GCSL~\citep{ghosh2019learning} is an online algorithm that operates through multiple rounds of collecting online data, relabeling, and training a policy on that data using GCBC.
WGCSL~\citep{yang2022rethinking} improves over GCSL by learning an additional value function used to weigh the GCSL loss.
Here, we compare against an offline variant of the WGCSL algorithm that trains for one round until convergence.
\item \textbf{Offline Goal-Conditioned RL:} While offline RL is generally incompatible with play data without rewards, recently some offline goal-conditioned RL algorithms have emerged that optimize for a proxy reward defined through state occupancy.
We compare against one such algorithm, GoFAR~\citep{ma2022far}, which learns a value function through $f$-Advantage regression and then optimizes a policy that maximizes the goal-conditioned value.
We choose GoFAR since to the best of our knowledge, it is the only offline GCRL algorithm with experiments on a real-robot.
\item \textbf{Learning Motor Primitives from Play (Play-LMP):} Play-LMP~\citep{lynch2019learning} is a behavior generation algorithm that focuses on learning short ($\sim 30$ timesteps) motor primitives from play data.
Play-LMP does so by using a variational-autoencoder (VAE) style architecture to encode action sequences into motor program latents and decoding actions from them.
\item \textbf{Behavior Transformers (BeT):} We include unconditional BeT (Sec.~\ref{sec:background}) in our baseline to understand the improvements made by the C-BeT{} conditioning.
In practice, it acts as a ``random'' baseline that performs the tasks without regard for the goal.
\item \textbf{Unimodal C-BeT{}:} We use our method without the multi-modal head introduced in BeT.
This also corresponds to a variant of Decision Transformer conditioning on outcomes instead of rewards.
\end{itemize}
Note that neither WGCSL nor GoFAR is directly compatible with image states and goals, since they require a proxy reward function $r:\mathcal S \times \mathcal{G} \rightarrow{\mathbb R}$. Thus, we had to design a proxy reward function on the image representations, $\exp{(-(\nicefrac{1}{4}||g-s||)^2)}$, to apply them to image-based environments.
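In code, this proxy reward is a one-liner (a sketch, ours):
\begin{verbatim}
import numpy as np

def proxy_reward(s, g):
    # exp(-(||g - s|| / 4)^2) on the learned image representations.
    return float(np.exp(-(0.25 * np.linalg.norm(g - s)) ** 2))
\end{verbatim}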
\subsection{Simulated Environments and Datasets}
\label{sec:sim_exps}
\input{tables/sim_results}
We run our algorithms and baselines on a collection of simulated environments as a benchmark to select the best algorithms to run on our real robotic setup. The simulated environments are selected to cover a variety of properties that are necessary for the real world environment, such as pixel-based observations, diverse modes in the play dataset, and complex action spaces (see Figure.~\ref{fig:sim_envs}).
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figs/sim_envs.pdf}
\caption{Visualizations of simulated environments that we evaluate our methods on, from left to right: CARLA self-driving (top down view and agent POV), BlockPush, and Franka Kitchen.}
\label{fig:sim_envs}
\end{figure}
\begin{enumerate}[wide, leftmargin=*]
\item \textbf{CARLA self-driving:} CARLA~\citep{carla} is a simulated self-driving environment created using Unreal Engine. In this environment, the observations are RGB pixel values of dimension $(224, 224, 3)$, and actions are two-dimensional (accelerate/brake and steer). We use an environment with a fork in the road (see Figure~\ref{fig:carla_fork}) offering two possible routes to the same goal, collecting 200 demonstrations in total. We condition either on one of the two possible routes to the goal, or on the goal itself, where either of the two modes is a valid choice.
\item \textbf{Multi-modal block-pushing:} We use the multi-modal block-pushing environment from \cite{florence2021implicit} for complicated multi-modal demonstrations. In this environment, an xArm robot pushes two blocks, red and green, into two square targets colored red and green.
All positions are randomized with some noise at episode start.
We use 1,000 demonstrations collected using a deterministic controller, and condition on just the future block positions for each baseline.
\item \textbf{Franka relay kitchen:} Originally introduced in \cite{gupta2019relay}, Relay Kitchen is a robotic environment in a simulated kitchen with seven possible tasks. A Franka Panda robot is used to manipulate the kitchen, and the associated dataset comes with 566 demonstrations collected by humans with VR controllers performing four of the seven tasks in some sequence.
\end{enumerate}
\subsection{How well does C-BeT{} learn behaviors from play?}
On each of these environments, we train conditional behavior generation models and evaluate them on a set of conditions sampled from the dataset. Success is defined as the model performing the same tasks that the future-outcome conditioning specifies.
We see from Table.~\ref{tab:sim-results} that C-BeT{} performs significantly better compared to the baselines on all three tasks.
BeT, as our unconditioned ``random'' baseline, shows the success rate of completing tasks unconditionally, and we see that none of the baselines surpasses it consistently.
Out of the MLP-based baselines, WGCSL performs best in the state-based tasks.
However, GoFAR performs best on the CARLA vision-based environment, where the other two MLP-based baselines fail almost completely.
We note that Play-LMP performs poorly because our tasks are long-horizon and quite far from the intended motor-primitive regime of its short-horizon auto-encoding architecture.
\subsection{How important is multi-modal action modeling?}
While we use a multi-modal behavior model in this work, it is not immediately obvious that it is necessary. Specifically, some previous outcome-conditioned policy learning works~\citep{chen2021decision,emmons2021rvs} implicitly assume that policies are unimodal once conditioned on an outcome.
In Table~\ref{tab:sim-results} the comparison between C-BeT{} and unimodal C-BeT{} shows that this assumption does not hold in all environments: all else being equal, an explicitly multi-modal model helps in learning an outcome-conditioned policy when there may be multiple ways to achieve an outcome.
\subsection{How does C-BeT{} compare to other forms of conditioning?}\label{sec:conditioning}
\begin{wraptable}{r}{0.33\textwidth}
\input{tables/onehot}
\end{wraptable}
We consider the question of how much comparative advantage there is in getting human labels for our tasks.
We do so by adding manual one-hot (CARLA, BlockPush) or binary (Kitchen) labels to our tasks, and training and evaluating C-BeT{} with those labels.
As we see in Table~\ref{tab:onehot}, while C-BeT{} gets close to peak performance without human labels on two of the three environments, on BlockPush adding human labels significantly improves performance.
\section{C-BeT{} on Real-World Robotic Manipulation}
\label{sec:real_exps}
We now discuss our robot experiments, which are geared towards understanding the usefulness of C-BeT{} on real-world play data.
\subsection{Robotic Environment and Dataset}
\textbf{Robot setup:}
Our environment consists of a Franka Emika Panda robot, similar to the simulated Franka Kitchen environment, set up with a children's toy kitchen set (see Figure~\ref{fig:intro}).
The toy kitchen has an oven, a microwave, a pot, and two stove knobs that are relevant to our play dataset.
The action space in this environment contains the seven joint angle deltas normalized within the $[-1, 1]$ range, and a binary gripper control.
\textbf{Play dataset:}
We collected 460 sequences totaling 265 minutes (about 4.5 hours) of play data on the toy kitchen, with volunteers using a Vive VR controller to move the Franka.
While collecting the play data, we did not give the volunteers any explicit instructions about performing particular tasks or a particular number of tasks; we only specified the interactable items and stipulated that the pot goes only on the left stove or in the sink, to prevent dropping it and reaching an unresettable state.
As observations, we save the RGB images from two cameras to the left and right of the setup, as well as the robot's proprioceptive joint angles.
Overall, the dataset contains 45\,287 frames of play interactions and their associated actions.
\textbf{Representation learning:} To simplify the task of learning policies on image space, we decouple image representation learning from policy learning, following \cite{pari2021surprising}.
For each camera, we first fine-tune a pretrained ResNet-18~\citep{he2016deep} encoder on the acquired frames with BYOL self-supervision~\citep{grill2020bootstrap}.
Then, during policy learning and evaluation, instead of the images from the cameras, we pass the two 512-dimensional BYOL embeddings as part of the observation.
For the proprioceptive part of the observation, we repeat the $(\sin, \cos)$ of the seven joint states 74 times to get a 1036-dimensional proprioceptive representation, making our overall observation 2060-dimensional, as illustrated below.
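As an illustration, assembling this observation vector amounts to the following sketch (shapes as described above; the helper is hypothetical):
\begin{verbatim}
import numpy as np

def build_observation(byol_left, byol_right, joint_angles):
    # byol_left, byol_right: (512,) embeddings; joint_angles: (7,)
    # (sin, cos) of 7 joints -> 14 values, tiled 74 times -> 1036 dims
    proprio = np.tile(np.concatenate([np.sin(joint_angles),
                                      np.cos(joint_angles)]), 74)
    # 512 + 512 + 1036 = 2060-dimensional observation
    return np.concatenate([byol_left, byol_right, proprio])
\end{verbatim}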
\subsection{Conditional Behavior Generation on Real Robot}
\paragraph{Behavior generation on single tasks:}
\input{tables/robot_single}
Our first experiment on the real robot concerns extracting single-task policies from the play dataset.
We define our tasks as manipulating the four types of interactable objects one at a time: opening the oven door, opening the microwave door, moving the pot from the stove to the sink, and rotating a knob 90 degrees to the right.
We use appropriate conditioning frames from our observation dataset, and start the robot from the neutral state to complete the four tasks.
The result of this experiment is presented in Table~\ref{tab:single-robot-task}.
We see that on single-task conditionals, C-BeT{} is able to complete all tasks except the knobs consistently, outperforming all our baselines; this shows that C-BeT{} can extract single-task policies from uncurated, real-world play data.
We discuss failures of C-BeT{} on the knob tasks in Section~\ref{sec:ablation}.
While our GoFAR baseline was able to move towards the task targets, it was unable to successfully grasp or interact with any of the target objects.
We believe this may be because, unlike the robot experiment in \cite{ma2022far}, we do not have the underlying environment state, the tasks are much more complicated, and our dataset is an order of magnitude smaller ($45\,$K frames in ours vs.\ $400\,$K in theirs).
\textbf{Behavior generation for longer horizons:}
\input{tables/robot_multi}
Next, we ask how well our models work for longer-horizon conditioning with multiple tasks.
We choose play sequences from the dataset with multiple tasks completed and use their associated states as the conditions for our models.
In our roll-outs, we count how many of the tasks completed in the original sequence are also completed in the conditional roll-outs.
We calculate this metric over $3$ conditioning sequences, and report the results in Table~\ref{tab:multi-robot-task}.
We see that even without any high level controller, C-BeT{} is able to stitch together multiple tasks from play demonstrations to complete long-horizon goals.
\textbf{Generalization to prompt and environment perturbations:}
A major requirement from any robot system deployed in the real world is to generalize to novel scenarios.
We evaluate the generalizability of our learned policies in two different ways.
In the first set of experiments, we collect fresh demonstrations that were not in the training set, and we condition our policies on such trajectories.
We find that across the different tasks, even with unseen conditionings, C-BeT{} retains $67\%$ of the single-task performance, with $16/50$ task successes in total.
In the second set of experiments, we add environmental distractors in the setup (Figure~\ref{fig:intro}, bottom three rows) and run the single- and multi-task conditions on the modified environments.
We see once again that the performance drops to around $67\%$ of the original with two distractors in the scene, but if we keep adding distractors (four or more), the robot is unable to complete any tasks.
\subsection{Analysis of Failure Modes}
\label{sec:ablation}
We see a few failure modes in our experiments that may provide additional insights into learning from real-world play data. We discuss the most salient ones in this section.
\textbf{Failure in knob operation in the real world:} We see that in all of our real-world experiments, the accuracy in operating the knob is consistently lower than for all other tasks.
This is due to a failure of the learned representations.
Inspecting the nearest neighbors of the dataset images in representation space, we see that the BYOL-trained representation cannot identify the knob state better than random chance: the returned nearest neighbor often differs from the query in knob state.
Since the representation cannot identify the knob state properly, conditioning on it naturally fails.
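This inspection can be reproduced with a simple nearest-neighbor query over the precomputed embeddings (a sketch, assuming L2 distance; one then compares the knob state of the returned frame with that of the query):
\begin{verbatim}
import numpy as np

def nearest_neighbor(query, dataset_embeddings):
    # index of the dataset frame closest to the query embedding
    distances = np.linalg.norm(dataset_embeddings - query, axis=1)
    return int(np.argmin(distances))
\end{verbatim}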
\textbf{Importance of a multi-modal policy architecture:}
One of our motivations behind incorporating the BeT architecture in our work is its ability to learn multi-modal action distributions.
In our experiments, we show that for some single-task conditions, such as opening the oven door, having no multi-modality is sufficient (Table~\ref{tab:single-robot-task}); but for more complicated tasks, and when learning from a more interconnected form of play data, a multi-modal architecture consistently prevents our policies from collapsing to sub-optimal solutions (Table~\ref{tab:multi-robot-task}).
\section{Introduction}
The direct simulation Monte Carlo (DSMC) method\cite{Bird94} is a
standard method for solving the Boltzmann equation numerically. In this method
one divides space into cells of volume $V_{k}$ $(k=1,2,3,...)$ and takes a
large number ($N$) of simulated particles ($10^{3}-10^{6}$) to represent
the real gas molecules. The time evolution of the gas for a short time period $%
\Delta t$ is calculated in two steps. In the first step some pairs of
particles in the same cell are chosen randomly and are allowed to collide
without changing their positions. A collision is allowed with a probability
proportional to $u\Sigma $ where $u$ is the relative velocity and $\Sigma $
is the total cross section. In the second step all particles are propagated
without collisions for a time $\Delta t$.
The method was invented by Bird, who introduced it on the basis of
physical arguments. A seminal paper of Bird\cite{Bird70} gives somewhat
heuristic arguments justifying its use to solve the Boltzmann equation. One
variant of the method was derived by Nanbu\cite{Nanbu80} starting from the
Boltzmann equation. It also appears that essentially the same stochastic
algorithms for a homogeneous gas were invented independently by people
interested in using them as a pedagogical tool to demonstrate the evolution of a
gas toward the Maxwell-Boltzmann (MB) distribution.\cite{Novak70,Eger82,Bonomo84}
In order to represent the time evolution of the real gas, such
methods should converge to the true solution of the Boltzmann equation in
the limit of $N\rightarrow \infty ,$ $V_{k}\rightarrow 0,$ $\Delta
t\rightarrow 0.$ Convergence proofs were given by Babovsky\cite{Babovsky1}
and Babovsky and Illner\cite{Babovsky2} for Nanbu's method and by Wagner\cite
{Wagner92} for Bird's method.
The cited convergence proofs are very formal and appear to be written
for mathematicians. In this paper we give a simple derivation of Bird's
no-time-counter algorithm. We also show that, in DSMC, the appropriately normalized
single-particle probability distribution satisfies the Boltzmann equation for
simple gases and the Wang Chang-Uhlenbeck equation for molecular gases and their
mixtures. The language of this development is familiar to the physicist from
the well-known BBGKY hierarchy.
In the next section we develop a general formalism for direct simulation. In
order to demonstrate the usefulness of the formalism we apply it to some simple
money games. In the third section we apply the formalism to homogeneous gases
and show that, if appropriate collision kernels are chosen, the one-particle
probability distribution obeys the Boltzmann equation for simple gases and
the Wang Chang-Uhlenbeck equation for molecular gases and their mixtures. In
the fourth section we derive the DSMC algorithm for inhomogeneous gases.
Finally, in the last section we give a summary and discussion.
\section{Direct simulation as a Markov process}
\subsection{The Master Equation}
Assume that we have an assembly of things we call 'particles'. Particles can
be real particles in a gas or humans or anything you can imagine. There are $%
N$ particles in the assembly where $N$ is a very large number. Each member
of the assembly can be in any one of the 'states' where states are labeled
by the parameter $\mu $. For a real gas $\mu $ can be velocity vectors and
for an assembly of people $\mu $ can be the money in their pocket on bank
account. The $\mu $ can be discrete or continuous and it can stand for a
collection of indices that can be both continuous and discrete. For the rest
of this section we will treat $\mu $ as a continuous index. Integration over
$\mu $ is actually integration over the continuous indices and summation
over the discrete indices that $\mu $ stands for.
We play a stochastic game with this assembly. We randomly pick pairs of
particles and force them to 'collide'. A collision is an event that the
particles change their states with a prescribed probability. Suppose we
picked particles with states $\mu _{A}$ and $\mu _{B}.$ The probability that
they will end up with state labels $\mu _{C}$ and $\mu _{D}$ in the volume $%
d\mu _{C}d\mu _{D}$ is $T(\mu _{A},\mu _{B};\mu _{C},\mu _{D})d\mu _{C}d\mu
_{D}$ where $T(\mu _{A},\mu _{B};\mu _{C},\mu _{D})$ is the collision
kernel. The collision kernel is assumed to be symmetric
\begin{eqnarray}
T(\mu _{A},\mu _{B};\mu _{C},\mu _{D}) &=&T(\mu _{C},\mu _{D};\mu _{A},\mu
_{B}), \label{a10} \\
T(\mu _{A},\mu _{B};\mu _{C},\mu _{D}) &=&T(\mu _{B},\mu _{A};\mu _{D},\mu
_{C}). \label{a20}
\end{eqnarray}
Also the probabilities are normalized
\begin{equation}
\int T(\mu _{A},\mu _{B};\mu _{C},\mu _{D})\,d\mu _{C}\,d\mu _{D}=\int T(\mu
_{A},\mu _{B};\mu _{C},\mu _{D})\,d\mu _{A}\,d\mu _{B}=1. \label{a30}
\end{equation}
We define the $N$-particle probability distribution $f^{(N)}(\mu _{1},\mu
_{2},...,\mu _{N};n)$ such that $f^{(N)}(\mu _{1},\mu _{2},...,\mu
_{N};n)d\mu _{1}d\mu _{2},...,d\mu _{N}$ is the probability of finding the
particles $1,2,...,N$ in the $d\mu _{1}d\mu _{2},...,d\mu _{N}$ phase space
volume after the $n^{th}$ collision. Since the particles are identical the $%
f^{(N)}(\mu _{1},\mu _{2},...,\mu _{N};n)$ is assumed to be completely
symmetric
\begin{equation}
f^{(N)}(\mu _{1},...,\mu _{j},...,\mu _{i},...,\mu _{N};n)=f^{(N)}(\mu
_{1},...,\mu _{i},...,\mu _{j},...,\mu _{N};n). \label{a40}
\end{equation}
We define the reduced $M$-particle distribution as
\begin{equation}
f^{(M)}(\mu _{1},...,\mu _{N};n)=\int f^{(N)}(\mu _{1},...,\mu _{N};n)\,d\mu
_{M+1}\,d\mu _{M+2},...,d\mu _{N}. \label{a44}
\end{equation}
We will denote $f^{(M)}(\mu _{1},...,\mu _{M};n)$ $(M=1,2,...,N)$ by $f^{(M)}(\mu ;n)$ for short. As a convenient notation we also define $f_{ij}^{(M)}(\mu _{A},\mu _{B};n)$ as
\begin{equation}
f_{ij}^{(M)}(\mu _{A},\mu _{B};n)=f^{(M)}(\mu _{1},...,\mu _{i}=\mu
_{A},...,\mu _{j}=\mu _{B},...,\mu _{M};n), \label{a45}
\end{equation}
where $\mu _{i}$ and $\mu _{j}$ are replaced with $\mu _{A}$ and $\mu _{B}$
in $f^{(M)}(\mu _{1},...,\mu _{M};n)$. Examples are
\begin{eqnarray}
f_{31}^{(N)}(\mu _{A},\mu _{B};n) &=&f^{(N)}(\mu _{B},\mu _{2},\mu _{A},\mu
_{4},...,\mu _{N};n) \label{a48} \\
f_{24}^{(N)}(\mu _{A},\mu _{B};n) &=&f^{(N)}(\mu _{1},\mu _{A},\mu _{3},\mu
_{B},\mu _{5},...,\mu _{N};n) \label{a49}
\end{eqnarray}
We are ready to start now. The equation satisfied by the $f^{(N)}(\mu ;n)$
is given by
\begin{equation}
f^{(N)}(\mu ;n+1)=\frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{j\neq i}^{N}\int
f_{ij}^{(N)}(\mu _{A},\mu _{B};n)\,T(\mu _{A},\mu _{B};\mu _{i},\mu
_{j})\,d\mu _{A}\,d\mu _{B}. \label{a66}
\end{equation}
The meaning of this equation is clear. If the last pair we collided consists of
molecules $i$ and $j$, the probability of having the pair $\mu _{i},\mu _{j}$ at the end of
the collision is the probability of having initial states $\mu _{A},\mu _{B}$
(represented by $f_{ij}^{(N)}(\mu _{A},\mu _{B};n)d\mu _{A}d\mu _{B}$)
multiplied by the probability of ending with $\mu _{i},\mu _{j}$
(represented by $T(\mu _{A},\mu _{B};\mu _{i},\mu _{j})$). The sum over $i,j$
and the factor $1/N(N-1)$ take care of the fact that all ordered pairs are
possible, each with probability $1/N(N-1).$ The state of the system after $n+1$
collisions depends only on the state of the system after $n$ collisions, so
the direct simulation game is in fact a Markov process. Eq.(\ref{a66}) is the
master equation for this stochastic process.
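As a concrete illustration, one step of this Markov process can be coded as follows (a minimal sketch; the kernel is supplied as a user-defined sampler that draws $(\mu _{C},\mu _{D})$ from $T(\mu _{A},\mu _{B};\cdot ,\cdot )$):
\begin{verbatim}
import random

def collision_step(states, sample_kernel):
    # ordered pair (i, j), i != j, chosen with probability 1/(N(N-1))
    i, j = random.sample(range(len(states)), 2)
    # redraw the pair's states from the collision kernel T
    states[i], states[j] = sample_kernel(states[i], states[j])
\end{verbatim}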
In order to see clearly how this equation is derived, let us multiply it
by $d\mu _{1}d\mu _{2}...d\mu _{N}$. The left-hand side is
\begin{equation}
f^{(N)}(\mu ;n+1)d\mu _{1}d\mu _{2}...d\mu _{N} \label{ad10}
\end{equation}
and it is the probability of the system being in the phase space volume $%
d\mu _{1}d\mu _{2}...d\mu _{N}$ after the $(n+1)^{th}$ collision. On the
right side we have
\begin{equation}
\frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{j\neq i}^{N}\int f_{ij}^{(N)}(\mu
_{A},\mu _{B};n)T(\mu _{A},\mu _{B};\mu _{i},\mu _{j})d\mu _{A}d\mu _{B}d\mu
_{1}d\mu _{2}...d\mu _{N}. \label{ad20}
\end{equation}
(Here the integration is over $\mu _{A}$ and $\mu _{B}$ only.) In order to
interpret this, let us look at the $i=1$, $j=2$ term:
\begin{eqnarray}
&&\left[ \frac{1}{N(N-1)}\right] \left[ f^{(N)}(\mu _{A},\mu _{B},\mu
_{3},\mu _{4,}...,\mu _{N})d\mu _{A}d\mu _{B}d\mu _{3}d\mu _{4}...d\mu
_{N}\right] \nonumber \\
&&\times \left[ T(\mu _{A},\mu _{B};\mu _{1},\mu _{2})d\mu _{1}d\mu
_{2}\right] \label{ad30}
\end{eqnarray}
integrated over $\mu _{A},$ $\mu _{B}.$ In this form the terms under the
integration are a product of three probabilities: $1/N(N-1)$ is the
probability of choosing the pair $i=1,j=2$; the second bracket is the
probability of finding the system in the phase space volume $d\mu _{A}d\mu _{B}d\mu _{3}d\mu
_{4}...d\mu _{N}$ before the collision; and the last bracket is the
probability of taking particles one and two from $d\mu _{A}d\mu _{B}$ to the
interval $d\mu _{1}d\mu _{2}$ in the collision. When
integrated over $\mu _{A},$ $\mu _{B}$ this term becomes the probability of
arriving in the $d\mu _{1}d\mu _{2}...d\mu _{N}$ phase space volume after the $%
(n+1)^{th}$ collision via a collision between particles one and two. If all
such terms are summed over $i$ and $j$ we find the total probability
of arriving in the $d\mu _{1}d\mu _{2}...d\mu _{N}$ phase space volume after the $%
(n+1)^{th}$ collision, which is the same as eq.(\ref{ad10}).
\subsection{Asymptotic Behavior of the Master Equation}
Let us introduce a short notation for state variables:
\begin{equation}
\begin{array}{ll}
X=(x_{1},x_{2,}...,x_{N}) & dX=dx_{1}dx_{2}...dx_{N} \\
Y=(y_{1},y_{2,}...,y_{N}) & dY=dy_{1}dy_{2}...dy_{N} \\
Z=(z_{1},z_{2,}...,z_{N}) & dZ=dz_{1}dz_{2}...dz_{N}
\end{array}
. \label{ab10}
\end{equation}
Then the Master equation can be written in the form
\begin{equation}
f(X;n+1)=\int P(X,Y)f(Y;n)dY. \label{ab20}
\end{equation}
The $P(X,Y)$ has $N(N-1)$ terms and each one of the terms contains $N-2$
delta functions. For example $i=1$, $j=2$ term reads as
\begin{equation}
\frac{1}{N(N-1)}T(x_{1},x_{2};y_{1},y_{2})\delta (x_{3}-y_{3})...\delta
(x_{N}-y_{N}). \label{ab30}
\end{equation}
The general expression for $P(X,Y)$ is
\begin{equation}
P(X,Y)=\frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{j\neq i}^{N}\left(
T(x_{i},x_{j};y_{i},y_{j})\prod_{k\neq i,j}^{N}\delta (x_{k}-y_{k})\right)
\label{ab40}
\end{equation}
The $P(X,Y)dX$ is the probability that the system jumps from $Y$ to $dX$
phase space volume after a collision. As can be seen directly from eq.(\ref
{ab40}) it is also symmetric: $P(X,Y)=P(Y,X).$ As a probability density it
satisfies the normalization condition
\begin{equation}
\int P(X,Y)dX=\int P(X,Y)dY=1. \label{ab50}
\end{equation}
We will need convolution of $P(X,Y)$ shortly. Let us define $W(X,Y)$ as
\begin{equation}
W(X,Y)=\int P(X,Z)P(Y,Z)dZ \label{ab60}
\end{equation}
It is easily seen that $W(X,Y)$ is symmetric ($W(X,Y)=W(Y,X)$) and it also
satisfies a normalization condition
\begin{equation}
\int W(X,Y)dX=\int W(X,Y)dY=1. \label{ab65}
\end{equation}
Now we are ready to discuss the asymptotic behavior of the master equation. Let
us form $\int f^{2}(X;n+1)dX$ as
\begin{eqnarray}
\int f^{2}(X;n+1)dX &=&\int dX\left( \int P(X,Y)f(Y;n)dY\right) \left( \int
P(X,Z)f(Z;n)dZ\right) \label{ab70} \\
&=&\int W(Y,Z)f(Y;n)f(Z;n)dYdZ \label{ab80}
\end{eqnarray}
We can also write $\int f^{2}(X;n)dX$ as
\begin{equation}
\int f^{2}(X;n)dX=\int W(Y,Z)f^{2}(Y;n)dYdZ=\int W(Y,Z)f^{2}(Z;n)dYdZ
\label{ab90}
\end{equation}
which follows from eq.(\ref{ab65}). Using these two relations we can write
the following
\begin{eqnarray}
\int f^{2}(X;n+1)dX-\int f^{2}(X;n)dX &=&\int W(Y,Z)f(Y;n)f(Z;n)dYdZ
\label{ab100} \\
&&-\frac{1}{2}\int W(Y,Z)f^{2}(Y;n)dYdZ \nonumber \\
&&-\frac{1}{2}\int W(Y,Z)f^{2}(Z;n)dYdZ \nonumber
\end{eqnarray}
The right side can be written as
\begin{equation}
\int f^{2}(X;n+1)dX-\int f^{2}(X;n)dX=-\frac{1}{2}\int W(Y,Z)\left(
f(Y;n)-f(Z;n)\right) ^{2}dYdZ. \label{ab110}
\end{equation}
Since $W(Y,Z)$ is always nonnegative, the expression on the right is always
negative or zero. This means $\int f^{2}(X;n)dX$ decreases after each
collision. The decrease stops when $f(Y;n)-f(Z;n)=0$ for all $Y$ and $Z$,
which means $f(X;n)$ must be a constant. Equilibrium is reached when $f(X;n)$
is the microcanonical distribution.
There is a final point to be discussed here. The above argument proves that
the probability density in the direct simulation always converges towards the
microcanonical distribution. If the phase space is divided into separate
regions such that collisions cannot take the system from one region to
another, then the above argument must be modified. If $Y$ and $Z$ belong to
different regions then $W(Y,Z)=0$ and $f(Y;n)-f(Z;n)=0$ is not required. But
if $Y$ and $Z$ belong to the same region then $W(Y,Z)\neq 0$ and $%
f(Y;n)-f(Z;n)=0$ is required. This means that $f(X;n)$ must asymptotically be a
constant in each region, but the constants can differ between regions. For direct
simulation of a gas, total energy and total momentum are conserved and the
system stays on a shell of constant total energy and total momentum. Asymptotically
the $f(X;n)$ will be constant on each shell, but the constants will differ
from shell to shell.
\subsection{The hierarchy of Reduced probability distributions}
If we integrate the master equation over $d\mu _{M+1}\,d\mu _{M+2}\cdots d\mu _{N}$ we obtain the equation
\begin{eqnarray}
f^{(M)}(\mu \mathbf{;}n+1) &=&\frac{(N-M)(N-M-1)}{N(N-1)}\,\,f^{(M)}(\mu
\mathbf{;}n) \label{a60} \\
&&+\frac{2(N-M)}{N(N-1)}\sum_{i=1}^{M}\int f_{i,M+1}^{(M+1)}(\mu _{A},\mu
_{B};n)\,T(\mu _{A},\mu _{B};\mu _{i},\mu _{C})\,d\mu _{A}\,d\mu _{B}\,d\mu
_{C} \nonumber \\
&&+\frac{1}{N(N-1)}\sum_{i=1}^{M}\sum_{j\neq i}^{M}\int
f_{i,j}^{(M)}(\mu _{A},\mu _{B};n)\,T(\mu _{A},\mu _{B};\mu _{i},\mu
_{j})\,d\mu _{A}\,d\mu _{B}. \nonumber
\end{eqnarray}
The $f^{(M)}(\mu \mathbf{;}n+1)$ depends on $f^{(M+1)}(\mu ;n)$ and this
represents a hierarchy of equations similar to the well-known BBGKY hierarchy%
\cite{Huang}.
The first equation in the hierarchy is
\begin{eqnarray}
f^{(1)}(\mu \mathbf{;}n+1) &=&(1-2/N)\,f^{(1)}(\mu \mathbf{;}n) \label{a70}
\\
&&+\frac{2}{N}\int f^{(2)}(\mu _{A},\mu _{B};n)\,T(\mu _{A},\mu _{B};\mu
_{C},\mu )\,d\mu _{A}\,d\mu _{B}\,d\mu _{C}. \nonumber
\end{eqnarray}
If we make the assumption of molecular chaos (AMC)
\begin{equation}
f^{(2)}(\mu _{A},\mu _{B};n)=f^{(1)}(\mu _{A};n)\,f^{(1)}(\mu _{B};n),
\label{a80}
\end{equation}
we obtain a nonlinear equation for $f^{(1)}(\mu ;n)$ similar to the
Boltzmann equation.
From now on we will suppress the superscript $(1)$ in $f^{(1)}(\mu \mathbf{;}%
\tau )$ wherever it does not cause confusion. Using the relation
\begin{equation}
f(\mu ,n)=\int f(\mu ,n)\,f(\mu _{C},n)\,T(\mu _{A},\mu _{B};\mu _{C},\mu
)\,d\mu _{A}\,d\mu _{B}\,d\mu _{C}, \label{a90}
\end{equation}
which follows from Eq.(\ref{a30}) and the normalization of $f(\mu _{C})$ and
imposing the assumption of molecular chaos we can write eq.(\ref{a70}) as
\begin{equation}
f(\mu \mathbf{;}n+1)=f(\mu \mathbf{;}n)+\frac{2}{N}\int [f,f]\,T(\mu
_{A},\mu _{B};\mu _{C},\mu )\,d\mu _{A}\,d\mu _{B}\,d\mu _{C} \label{a101}
\end{equation}
\begin{equation}
\lbrack f,f]=f(\mu _{A},n)\,f(\mu _{B},n)-f(\mu _{C},n)\,f(\mu ,n)
\label{a103}
\end{equation}
A second simplification occurs for large $N$. The $2/N$ appearing in eq.(%
\ref{a101}) is a small number and we can take $\tau =2n/N$ as a continuous
parameter, which we call the collision time. Then $\Delta \tau =2/N$ and $%
\left[ f(\mu \mathbf{;}n+1)-f(\mu \mathbf{;}n)\right] /\Delta \tau $ can be
taken as $\partial f(\mu \mathbf{,}\tau )/\partial \tau $. The eq.(\ref{a101}%
) can be written in either of the following forms:
\begin{eqnarray}
\frac{\partial f(\mu ,\tau \mathbf{)}}{\partial \tau } &=&\int [f,f]\,T(\mu
_{A},\mu _{B};\mu _{C},\mu )\,d\mu _{A}\,d\mu _{B}\,d\mu _{C}. \label{a130}
\\
\frac{\partial f(\mu ,\tau \mathbf{)}}{\partial \tau } &=&-f(\mu )+\int
f(\mu _{A})f\,(\mu _{B})T(\mu _{A},\mu _{B};\mu _{C},\mu )\,d\mu _{A}\,d\mu
_{B}\,d\mu _{C}. \label{a131}
\end{eqnarray}
For brevity, we will call the first equation in the hierarchy 'the first equation'
for the rest of the paper, and we will call the integral on the right side of
eq.(\ref{a130}) 'the collision integral'. From now on we will also suppress
the collision time $\tau $ in $f(\mu \mathbf{,}\tau )$ wherever it is convenient.
\subsection{Justification of assumption of molecular chaos}
The only thing in this paper that is not fully rigorous is the assumption of
molecular chaos. In order for the assumption of molecular chaos to be valid from
the beginning, we must start from an uncorrelated state
\begin{equation}
f^{(N)}(\mu _{1},\mu _{2},...,\mu _{N};n=0)=h(\mu _{1})\,h(\mu
_{2})....h(\mu _{N}), \label{a83}
\end{equation}
which is what is mostly done in direct simulations. The master equation,
eq.(\ref{a66}), can then be used to justify the AMC. For finite $N$ the AMC is not
strictly valid, but it should get better and better as $N\rightarrow
\infty $. For $M/N\ll 1$, eq.(\ref{a60}) is written as
\begin{eqnarray}
f^{(M)}(\mu \mathbf{;}n+1) &=&(1-2M/N)\,\,f^{(M)}(\mu \mathbf{;}n)+O(1/N^{2})
\label{a133} \\
&&+\frac{2}{N}\sum_{i=1}^{M}\int f_{i,M+1}^{(M+1)}(\mu _{A},\mu _{B};n)
\nonumber \\
&&\times \,T(\mu _{A},\mu _{B};\mu _{i},\mu _{C})\,d\mu _{A}\,d\mu
_{B}\,d\mu _{C} \nonumber
\end{eqnarray}
where $O(1/N^{2})$ denotes terms of order $1/N^{2}$. If we invoke the collision
time $\tau =2n/N$ again, write $\left[ f^{(M)}(\mu \mathbf{;}%
n+1)-f^{(M)}(\mu \mathbf{;}n)\right] /\Delta \tau =\partial f^{(M)}(\mu
\mathbf{;}\tau )/\partial \tau $, and take the limit $N\rightarrow \infty $,
we obtain
\begin{eqnarray}
\frac{\partial f^{(M)}(\mu \mathbf{;}\tau )}{\partial \tau }
&=&-Mf^{(M)}(\mu \mathbf{;}\tau ) \label{a134} \\
&&+\sum_{i=1}^{M}\int f_{i,M+1}^{(M+1)}(\mu _{A},\mu _{B};\tau )\,T(\mu
_{A},\mu _{B};\mu _{i},\mu _{C})\,d\mu _{A}\,d\mu _{B}\,d\mu _{C} \nonumber
\end{eqnarray}
where $M=1,2,...,\infty $. This is an infinite chain of coupled differential
equations. If we invoke
\begin{equation}
f^{(M)}(\mu _{1},\mu _{2},...,\mu _{M};\tau )=f^{(1)}(\mu _{1}\mathbf{;}\tau
)\,f^{(1)}(\mu _{2}\mathbf{;}\tau )....f^{(1)}(\mu _{M}\mathbf{;}\tau ).
\label{a135}
\end{equation}
in eq.(\ref{a134}), all the equations in the infinite chain are satisfied
provided $f^{(1)}(\mu \mathbf{;}\tau )$ satisfies eq.(\ref{a130}). This
proves that in the limit $N\rightarrow \infty $ the AMC remains valid for
all $\tau $ if we start from an uncorrelated initial state.
What happens if we start from a correlated state that does not satisfy the AMC?
For finite $N$ there are always some correlations to every order. We know that
the system evolves towards the microcanonical distribution, and in the limit $%
N\rightarrow \infty $ the microcanonical distribution obeys the AMC. This means that even
if we start from a correlated state, the system will satisfy the AMC better and
better as it evolves towards equilibrium for large $N.$ Collisions
destroy correlations, and it should take only a few collisions per particle
to destroy the initial correlations. Moreover, in practical applications of
DSMC in gas dynamics, $N$ is almost always large and the initial state is
chosen to be almost uncorrelated from the beginning. Therefore using the first
equation to determine the single-particle probability density is a
justifiable procedure.
\subsection{Collision invariants and the H-theorem}
We now show that the expectation value $\left\langle g(\mu \mathbf{)}%
\right\rangle $ of a collision invariant $g(\mu )$ is conserved. The $g(\mu
)$ is a collision invariant if
\begin{equation}
\Delta g=g(\mu )+g(\mu _{C})-g(\mu _{A})-g(\mu _{B})=0. \label{a140}
\end{equation}
Multiplying eq.(\ref{a130}) by $g(\mu )$ and integrating over $\mu $ we obtain
\begin{equation}
\frac{d}{d\tau }\int f(\mu \mathbf{)\,}g(\mu \mathbf{)\,}d\mu =\int
[f,f]\,\,g(\mu \mathbf{)}T(\mu _{A},\mu _{B};\mu _{C},\mu )\,\,d\mu
_{A}\,d\mu _{B}\,d\mu _{C}\,d\mu . \label{a150}
\end{equation}
Using symmetries of $T(\mu _{A},\mu _{B};\mu _{C},\mu )$ and relabeling
integration variables among themselves we can write this as
\begin{equation}
\frac{d}{d\tau }\left\langle g(\mu \mathbf{)}\right\rangle =\frac{1}{4}\int
[f,f]\,\Delta g\,T(\mu _{A},\mu _{B};\mu _{C},\mu )\,d\mu _{A}\,d\mu
_{B}\,d\mu _{C}\,d\mu . \label{a160}
\end{equation}
The integral is zero because of eq.(\ref{a140})$.$
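For the reader's convenience, the relabelings can be spelled out; this is a standard manipulation. Abbreviating $T=T(\mu _{A},\mu _{B};\mu _{C},\mu )$ and suppressing the integration measure, one finds
\[
\int [f,f]\,g(\mu )\,T=\int [f,f]\,g(\mu _{C})\,T=-\int [f,f]\,g(\mu _{A})\,T=-\int [f,f]\,g(\mu _{B})\,T,
\]
where the first equality follows from eq.(\ref{a20}) after renaming $\mu _{A}\leftrightarrow \mu _{B}$, $\mu _{C}\leftrightarrow \mu $, and the remaining ones follow from eq.(\ref{a10}) after renaming $(\mu _{A},\mu _{B})\leftrightarrow (\mu _{C},\mu )$, which flips the sign of $[f,f]$. Averaging the four equal expressions yields eq.(\ref{a160}).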
We can derive an H-theorem for the first equation. Defining $H(\tau )$ as
\begin{equation}
H(\tau )=\int f(\mu )\,\ln (f(\mu ))\,d\mu \mathbf{,} \label{b10}
\end{equation}
and using the eqs. (\ref{a10},\ref{a20}) and eq.(\ref{a130}) we can express $%
dH/d\tau $ as
\begin{equation}
\frac{dH}{d\tau }=-\frac{1}{4}\int \Phi [f]\,T(\mu _{A},\mu _{B};\mu
_{C},\mu )\,\,d\mu _{A}\,d\mu _{B}\,d\mu _{C}\,d\mu \mathbf{,} \label{b20}
\end{equation}
where
\begin{equation}
\Phi [f]=\left[ f(\mu _{A})\,\,f(\mu _{B})-f(\mu )\,\,f(\mu _{C})\right]
\left[ \ln f(\mu _{A})\,\,f(\mu _{B})-\ln f(\mu )\,\,f(\mu _{C})\right] .
\label{b31}
\end{equation}
The $\Phi [f]$ can be shown to be always nonnegative, as is done in all kinetic
theory books, and $T(\mu _{A},\mu _{B};\mu _{C},\mu )$ is intrinsically
nonnegative. Therefore $dH/d\tau $ is nonpositive. There are two possibilities
here: either $H$ keeps decreasing toward negative infinity, or it approaches an
absolute minimum asymptotically and the system approaches an
equilibrium distribution. Following the usual arguments of the H-theorem,
the decrease of $H$ stops only when
\begin{equation}
\ln f(\mu _{A})+\ln f(\mu _{B})=\ln f(\mu _{C})+\ln f(\mu ), \label{b40}
\end{equation}
is satisfied, which implies that $\ln f(\mu )$ is a collision invariant. If
we choose the $T(\mu _{A},\mu _{B};\mu _{C},\mu )$ such that there are
collision invariants $g_{i}(\mu )$ $(i=1,2,...,L)$, then $\ln f(\mu )$ must
be expressible as a linear combination of these collision invariants:
\begin{equation}
\ln f(\mu )=c_{1}g_{1}(\mu )+c_{2}g_{2}(\mu )+...+c_{L}g_{L}(\mu ),
\label{b50}
\end{equation}
where $c_{1},...,c_{L}$ are parameters describing the equilibrium.
There is at least one trivial collision invariant: the number of
particles entering and exiting the collision, which corresponds to $g_{1}(\mu
)=1$. When there are additional collision invariants, $H$ usually has a lower
bound. For real gases, momentum and energy are collision
invariants and this makes $H$ bounded from below.
\subsection{Example: A game of discrete money gambling}
Here we give a simple example of a direct simulation money game with a finite
number of discrete states. Suppose everybody is given some random amount of
money at the beginning: everybody in the assembly has one, two or three
dollars in their pocket. The random assignment of initial money ensures the
assumption of molecular chaos from the beginning. The collisions take place
as follows: the two chosen players share their total money such that nobody
gets more than three dollars and both players get at least one dollar. All
the possibilities satisfying these conditions have equal probabilities. If
they have total two dollars (one dollar each) then the only possibility is
that they will have one dollar each at the end with unity probability. If
they have total three dollars then the possible outcomes are (1,2) and (2,1)
with equal 1/2 probabilities. If they have total four dollars then possible
outcomes are (1,3), (3,1), (2,2) with 1/3 probability each. If they have
total five dollars then possible outcomes are (2,3) and (3,2) with 1/2
probability each. Finally if they have total six dollars (three dollars
each) then the only possibility is (3,3) with unity probability.
For this game the money is conserved in collisions, and only transitions between
states with equal total money are possible. For $N$ particles
the total money can take values between $N$ and $3N$, so there are a total of
$2N+1$ separate regions in phase space. One cannot cross from one of these
regions to another by making collisions.
Now that we have defined the game, how does the single particle distribution evolve
as we make collisions? The state variable $\mu $ is the amount of money
in a person's pocket and it takes the values 1, 2, 3. Let $P_{\mu }(\tau )$
be the probability that a chosen person has the money $\mu $ at
collision time $\tau .$ From eq.(\ref{a131}) the $P_{\mu }(\tau )$ satisfies
\begin{eqnarray}
\frac{dP_{1}}{d\tau } &=&-P_{1}+P_{1}^{2}\,\,T(1,1;1,1)+P_{1}P_{2}\,%
\,T(1,2;2,1) \label{c10} \\
&&+P_{2}P_{1}\,\,T(2,1;2,1)+P_{1}P_{3}\,\,T(1,3;3,1) \nonumber \\
&&+P_{3}P_{1}\,\,T(3,1;3,1)+P_{2}P_{2}\,\,T(2,2;3,1), \nonumber
\end{eqnarray}
\begin{eqnarray}
\frac{dP_{2}}{d\tau } &=&-P_{2}+P_{2}^{2}\,\,T(2,2;2,2) \label{c20} \\
&&+P_{1}P_{2}\,\,T(1,2;1,2)+P_{2}P_{1}\,\,T(2,1;1,2) \nonumber \\
&&+P_{1}P_{3}\,\,T(1,3;2,2)+P_{3}P_{1}\,\,T(3,1;2,2) \nonumber \\
&&+P_{2}P_{3}\,\,T(2,3;3,2)+P_{3}P_{2}\,\,T(3,2;3,2), \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\frac{dP_{3}}{d\tau } &=&-P_{3}+P_{1}P_{3}\,\,T(1,3;1,3)+P_{3}P_{1}\,%
\,T(3,1;1,3) \label{c30} \\
&&+P_{3}^{2}\,\,T(3,3;3,3)+P_{2}P_{3}\,\,T(2,3;2,3) \nonumber \\
&&+P_{3}P_{2}\,\,T(3,2;2,3)+P_{2}^{2}\,\,T(2,2;1,3). \nonumber
\end{eqnarray}
Inserting the $T$ values this can be written as
\begin{eqnarray}
\frac{dP_{1}}{d\tau } &=&-P_{1}+P_{1}^{2}+P_{1}P_{2}+\frac{2}{3}P_{1}P_{3}+%
\frac{1}{3}P_{2}^{2}, \label{c40} \\
\frac{dP_{2}}{d\tau } &=&-P_{2}+\frac{1}{3}P_{2}^{2}+P_{1}P_{2}+\frac{2}{3}%
P_{1}P_{3}+P_{2}P_{3}, \label{c41} \\
\frac{dP_{3}}{d\tau } &=&-P_{3}+\frac{2}{3}P_{1}P_{3}+P_{2}P_{3}+P_{3}^{2}+%
\frac{1}{3}P_{2}^{2}. \label{c42}
\end{eqnarray}
This is a complicated set of nonlinear differential equations. But there are
simplifying features because we know the collision invariants $g_{1}(\mu )=1$
and $g_{2}(\mu )=\mu $. Summing the eqs.(\ref{c40},\ref{c41},\ref{c42}) we
obtain
\begin{equation}
\frac{d}{d\tau }\left( P_{1}+P_{2}+P_{3}\right) =\left(
P_{1}+P_{2}+P_{3}-1\right) \left( P_{1}+P_{2}+P_{3}\right) , \label{c46}
\end{equation}
and
\begin{equation}
\frac{d}{d\tau }\left( P_{1}+2P_{2}+3P_{3}\right) =\left(
P_{1}+P_{2}+P_{3}-1\right) \left( P_{1}+2P_{2}+3P_{3}\right) . \label{c46b}
\end{equation}
The first equation tells us that since $P_{1}+P_{2}+P_{3}=1$ at the
beginning it always remains unity and probability is conserved. The second
equation tells us that since $P_{1}+P_{2}+P_{3}-1=0$ always the expectation
value $\left\langle \mu \right\rangle =P_{1}+2P_{2}+3P_{3}$ is conserved.
We denote the expected money in the pocket by $m$. We have two equations
\begin{eqnarray}
P_{1}+P_{2}+P_{3} &=&1, \label{c47} \\
P_{1}+2P_{2}+3P_{3} &=&m, \label{c48}
\end{eqnarray}
from which we solve $P_{2}$ and $P_{3}$ as
\begin{eqnarray}
P_{2} &=&-2P_{1}+3-m, \label{c49} \\
P_{3} &=&P_{1}+m-2. \label{c50}
\end{eqnarray}
Inserting $P_{2}$ and $P_{3}$ in the eq.(\ref{c40}) we obtain
\begin{equation}
\frac{dP_{1}}{d\tau }=P_{1}^{2}+(m-\frac{10}{3})P_{1}+\frac{1}{3}(3-m)^{2}.
\label{ek10}
\end{equation}
Calculating roots of the quadratic term on the right we write this as
\begin{equation}
\frac{dP_{1}}{d\tau }=\left( P_{1}-r_{1}\right) \left( P_{1}-r_{2}\right) ,
\label{ek20}
\end{equation}
where $r_{1}$ and $r_{2}$ are
\begin{eqnarray}
r_{1} &=&\frac{1}{6}\left( 10-3m+\sqrt{1+3(m-1)(3-m)}\right) , \label{ek30}
\\
r_{2} &=&\frac{1}{6}\left( 10-3m-\sqrt{1+3(m-1)(3-m)}\right) . \label{ek40}
\end{eqnarray}
Notice that since $1\leq m\leq 3$ the term under the square root is always
greater than or equal to unity.
Solving eq.(\ref{ek20}) is straightforward and we obtain
\begin{equation}
P_{1}(\tau )=\frac{r_{2}(p_{0}-r_{1})-r_{1}(p_{0}-r_{2})\mathrm{e}^{-\lambda
\tau }}{(p_{0}-r_{1})-(p_{0}-r_{2})\mathrm{e}^{-\lambda \tau }},
\label{ek50}
\end{equation}
where $p_{0}=P_{1}(\tau =0)$ and $\lambda =r_{1}-r_{2}$. It is easy to
verify that $P_{1}(\infty )=r_{2}$ and that $P_{1}(\tau )$ approaches this limit
exponentially fast. One can check from eq.(\ref{ek40}) that $r_{2}=1$ at $%
m=1$ and $r_{2}=0$ at $m=3$, so it behaves as expected.
The conditions $0\leq P_{2}\leq 1$ and $0\leq P_{3}\leq 1$ together with
eqs.(\ref{c49},\ref{c50}) give conditions that $P_{1}(\tau )$ must satisfy.
These conditions are expressed as $2-m\leq P_{1}\leq (3-m)/2$ when $m\leq 2$
and $0\leq P_{1}\leq (3-m)/2$ when $m>2$. Therefore the initial value $P_{1}(\tau =0)$
should obey these limitations.
To find the equilibrium distribution directly without solving the
differential equation we set $dP_{\mu }/d\tau =0$ for $\mu =1,2,3$ in eqs.(%
\ref{c40},\ref{c41},\ref{c42}) and we obtain a set of algebraic nonlinear
equations. Setting $P_{1}=a$, $P_{2}=ab$, $P_{3}=ab^{2}$ all three equations
are satisfied provided the normalization condition
\begin{equation}
a(1+b+b^{2})=1, \label{c59}
\end{equation}
holds. We were able to guess this solution from the H-theorem. There are two
collision invariants $g_{1}=1$ and $g_{2}(\mu )=\mu $. The second one is a
result of conservation of money in the collisions. Therefore according to
the H-theorem we must have $\ln P_{\mu }=C_{1}+C_{2}\mu $ and this gives the
solution $P_{\mu }=ab^{\mu -1}$. We need one more relation to determine both
$a$ and $b$. This comes from expected money in the pocket:
\begin{equation}
m=a\left( 1+2b+3b^{2}\right) , \label{c60}
\end{equation}
which is a conserved quantity during the 'time' evolution and is set by
the initial conditions. Solving these two equations we obtain
\begin{eqnarray}
a &=&\frac{1}{6}\left( 10-3m-\sqrt{1+3(m-1)(3-m)}\right) , \label{c65} \\
b &=&\left( m-2+\sqrt{1+3(m-1)(3-m)}\right) /2(3-m). \nonumber
\end{eqnarray}
Notice that $a=r_{2}$ and this agrees with solution of the differential
equation.
The H-function
\begin{equation}
H=P_{1}\ln P_{1}+P_{2}\ln P_{2}+P_{3}\ln P_{3}, \label{c70}
\end{equation}
is bounded from below for this problem since the function $x\ln x$ is
bounded from below and $0\leq P_{\mu }\leq 1$. We minimize $H$ subject to the
constraints that the expected money is fixed and the probabilities are
normalized. The constraints can be incorporated with Lagrange multipliers. Taking
the auxiliary function
\begin{eqnarray}
\Psi &=&P_{1}\ln P_{1}+P_{2}\ln P_{2}+P_{3}\ln P_{3} \label{c80} \\
&&-\lambda _{1}(P_{1}+P_{2}+P_{3}-1)-\lambda _{2}(P_{1}+2P_{2}+3P_{3}-m),
\nonumber
\end{eqnarray}
and setting $\partial \Psi /\partial P_{1}=\partial \Psi /\partial
P_{2}=\partial \Psi /\partial P_{3}=0$ we obtain the same solution $P_{\mu
}=ab^{\mu -1}$ where $a$ and $b$ satisfy eqs.(\ref{c59},\ref{c60}).
The minimum value of $H$ becomes
\begin{equation}
H=a\ln a+ab\ln ab+ab^{2}\ln ab^{2}=\ln (ab^{m-1}). \label{c90}
\end{equation}
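These results are easy to check by actually playing the game; the following minimal Monte Carlo sketch is our own illustration:
\begin{verbatim}
import random
from collections import Counter

def share(a, b):
    # redistribute a+b dollars; each player keeps between 1 and 3 dollars
    outcomes = [(x, a + b - x) for x in (1, 2, 3) if 1 <= a + b - x <= 3]
    return random.choice(outcomes)

N = 10_000
money = [random.choice((1, 2, 3)) for _ in range(N)]  # uncorrelated start
for _ in range(50 * N):                               # ~100 collisions/player
    i, j = random.sample(range(N), 2)
    money[i], money[j] = share(money[i], money[j])

counts = Counter(money)
print({k: counts[k] / N for k in (1, 2, 3)})
\end{verbatim}
The printed frequencies approach $a$, $ab$, $ab^{2}$ with $m$ fixed by the initial assignment; for the uniform start above $m\approx 2$, which gives $a=1/3$ and $b=1$, i.e. the uniform distribution.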
\subsection{Example 2: A game of continuous money gambling}
Here we give another example of a direct simulation money game, this time with
continuous states. In this case we were not even able to solve for the time
evolution of the one-particle probability distribution. We just find the
equation for the one-particle distribution and guess the stationary
distribution from the H-theorem. We then show that it satisfies the stationary
form of the single-particle probability equation.
This time we initially give the players a random amount of money between zero
and, say, ten dollars. Suppose we pick a pair to collide: player 1 has $\mu
_{1}$ and player 2 has $\mu _{2}$ amount of money. A computer produces a
random number $p$ between zero and one; player 1 takes $p(\mu _{1}+\mu _{2})$
and player 2 takes $(1-p)(\mu _{1}+\mu _{2})$ amounts of money, and we pick
another pair to collide. What is the final distribution when the system
comes to equilibrium?
The probability distribution that a person has money $\mu $ satisfies
eq.(\ref{a131}):
\begin{equation}
\frac{\partial f(\mu \mathbf{)}}{\partial \tau }=-f(\mu )+\int_{0}^{\infty
}\!\!\int_{0}^{\infty }\!\!\int_{0}^{\infty }f(a)\,f(b)\,T(a,b,\mu ,\nu )\,da\,db\,d\nu ,
\label{d10}
\end{equation}
where the collision kernel is
\begin{equation}
T(a,b,\mu ,\nu )=\frac{1}{a+b}\delta (a+b-\mu -\nu )\Theta (a)\,\Theta
(b)\,\Theta (\mu )\,\Theta (\nu ). \label{d21}
\end{equation}
Here $\Theta (x)$ is the standard step function
\begin{equation}
\Theta (x)=\left\{
\begin{array}{ll}
0\quad & {}x<0 \\
1\quad & x\geq 0
\end{array}
\right. . \label{d30}
\end{equation}
If we insert the $T(a,b,\mu ,\nu )$ given in the eq.(\ref{d21}) into the eq.(%
\ref{d10}) and perform the $\nu $ integral we obtain
\begin{equation}
\frac{\partial f(\mu \mathbf{)}}{\partial \tau }=-f(\mu )+\int_{0}^{\infty
}da\int_{0}^{\infty }db\,\Theta (a+b-\mu )\,\frac{f(b)\,f(a)}{a+b}.
\label{d40}
\end{equation}
This can be further simplified by changing variables $x=a+b$ and $y=a$ which
yields
\begin{equation}
\frac{\partial f(\mu \mathbf{)}}{\partial \tau }=-f(\mu )+\int_{\mu
}^{\infty }dx\int_{0}^{x}dy\,\frac{f(y)\,f(x-y)}{x}. \label{d50}
\end{equation}
The H-theorem ensures that this equation will converge to an equilibrium
distribution as $\tau \rightarrow \infty $. Since we have money conservation
in the collisions there are two collision invariants $g_{1}(\mu )=1$ and $%
g_{2}(\mu )=\mu $. Then the equilibrium distribution is
\begin{equation}
f_{eq}(\mu )=A\,e^{-B\mu }. \label{d60}
\end{equation}
If the average money initially given to each person is $m$, the $f(\mu )$
should satisfy two conditions
\begin{eqnarray}
\int_{0}^{\infty }f(\mu )\,d\mu &=&1, \label{d80} \\
\int_{0}^{\infty }\mu \,f(\mu )\,d\mu &=&m, \label{d81}
\end{eqnarray}
and they fix the values of $A$ and $B$ in the eq.(\ref{d60}). The solution
is
\begin{equation}
f_{eq}(\mu )=\frac{1}{m}\,e^{-\mu /m}. \label{d90}
\end{equation}
If we insert this solution into eq.(\ref{d50}) we can easily check that
the right side of the equation becomes zero, which confirms that $f_{eq}(\mu )$
is the equilibrium distribution.
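The result is again easy to verify numerically; a minimal sketch (our own illustration):
\begin{verbatim}
import random

N, m = 10_000, 5.0
money = [random.uniform(0.0, 2 * m) for _ in range(N)]  # initial mean m
for _ in range(50 * N):
    i, j = random.sample(range(N), 2)
    p = random.random()
    total = money[i] + money[j]
    money[i], money[j] = p * total, (1 - p) * total

# a histogram of `money` approaches (1/m) exp(-mu/m); for instance the
# sample mean stays near m while the sample variance approaches m**2
print(sum(money) / N)
\end{verbatim}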
\section{Application of the direct simulation formalism to homogeneous gases}
\subsection{Center of mass frame}
In the following sections we will need some results from studying the
collision in the center of mass frame. Instead of deriving them for each
case separately we derive the relevant results once for the most general
case in this subsection and refer to formulae derived here as needed in the
following subsections. In the rest of the paper bold letters denote vector
quantities.
Particles with states $\mu _{A}=\mathbf{v}_{A}$ and $\mu _{B}=\mathbf{v}_{B}$
enter the collision and particles with states $\mu _{C}=\mathbf{v}_{C}$ and
$\mu _{D}=\mathbf{v}$ exit the collision. We define the
center of mass (CM) coordinates as
\begin{eqnarray}
\mathbf{H} &=&(m_{A}\mathbf{v}_{A}+m_{B}\mathbf{v}_{B})/(m_{A}+m_{B})
\label{e3} \\
\mathbf{H}^{\prime } &=&(m_{A}\mathbf{v}_{C}+m_{B}\mathbf{v})/(m_{A}+m_{B}),
\label{e3b}
\end{eqnarray}
and
\begin{equation}
\begin{array}{lll}
\mathbf{u=v}_{A}-\mathbf{v}_{B},\;\;\; & u=\left| \mathbf{u}\right| , &
\mathbf{n}=\mathbf{u}/u \\
\mathbf{u}^{\prime }\mathbf{=v}_{C}-\mathbf{v}, & u^{\prime }=\left| \mathbf{%
u}^{\prime }\right| ,\;\;\; & \mathbf{n}^{\prime }=\mathbf{u}^{\prime
}/u^{\prime }
\end{array}
\label{e4}
\end{equation}
where $m_{A}$ is the mass of particles $A$ and $C$ and $m_{B}$ is the mass
of particles $B$ and $D$. For one kind of gas all masses are equal and
formulae for CM velocities $\mathbf{H}$ and $\mathbf{H}^{\prime }$ reduce to
\begin{equation}
\begin{array}{ll}
\mathbf{H}=(\mathbf{v}_{A}+\mathbf{v}_{B})/2,\;\;\; & \mathbf{H}^{\prime }=(%
\mathbf{v}_{C}+\mathbf{v})/2.
\end{array}
\label{e5}
\end{equation}
Integrations over $\mathbf{v}_{A}$ and $\mathbf{v}_{B}$ can be carried out
in the variables $\mathbf{H}$ and $\mathbf{u}$. The transformation between
these two sets of variables is linear and the Jacobian is unity. Therefore
\begin{equation}
\int f(\mathbf{v}_{A},\mathbf{v}_{B})d^{3}\mathbf{v}_{A}d^{3}\mathbf{v}%
_{B}=\int f(\mathbf{H},\mathbf{u})d^{3}\mathbf{H}d^{3}\mathbf{u.}
\label{e34}
\end{equation}
In the following subsections we will deal with integrations over $\mathbf{v}%
_{A}$, $\mathbf{v}_{B}$, $\mathbf{v}_{C}$. Integrations over $\mathbf{v}_{A}$%
, $\mathbf{v}_{B}$ will be converted to integration over $\mathbf{H}$ and $%
\mathbf{u}$ in the CM\ frame. In each case there will be a Dirac delta
function removing the integral over $\mathbf{H}$. Integration over $\mathbf{v%
}_{C}$ will be converted to integration over $\mathbf{u}^{\prime }$ since $%
\mathbf{v}_{C}=\mathbf{u}^{\prime }\mathbf{+v}$ and there is no integration
over $\mathbf{v}.$ Furthermore, integrations over $\mathbf{u}^{\prime }$ will
be carried out in spherical coordinates as
\begin{equation}
\int f(\mathbf{u}^{\prime })d^{3}\mathbf{u}^{\prime }=\int f(\mathbf{u}%
^{\prime })(u^{\prime })^{2}du^{\prime }d\mathbf{n}^{\prime } \label{e36}
\end{equation}
and in each case there will be a Dirac delta function removing the
integration over $u^{\prime }.$ In the final expressions only the integrations
over the solid angle $\mathbf{n}^{\prime }$ and over $\mathbf{u}$ remain.
In order to evaluate the integrals we will encounter in the following
subsections we must express $\mathbf{v}_{A},\mathbf{v}_{B},\mathbf{v}_{C}$
in terms of the variables $\mathbf{v,u,n}^{\prime }.$ This is a simple
exercise in collision kinematics. We will do this for inelastic collisions
with unequal masses, which is the most general case we will deal with in this
paper. We will assume that the molecules have internal energies $\epsilon (A),$ $%
\epsilon (B)$ and $\epsilon (C),$ $\epsilon (D)$. Let $\epsilon =\epsilon
(A)+\epsilon (B)$ and $\epsilon ^{\prime }=\epsilon (C)+\epsilon (D)$. From
energy conservation we have $u^{\prime }(u)=\sqrt{u^{2}+2(\epsilon -\epsilon
^{\prime })/m_{r}}$ where $m_{r}=m_{A}m_{B}/(m_{A}+m_{B})$ is the reduced
mass and $m_{A}$, $m_{B}$ are masses of the colliding particles. We can
write $\mathbf{u}^{\prime }=u^{\prime }(u)\mathbf{n}^{\prime }$ and $\mathbf{%
v}_{C}=\mathbf{v+}u^{\prime }(u)\mathbf{n}^{\prime }$. From CM velocity
conservation we have
\begin{equation}
m_{A}\mathbf{v}_{A}+m_{B}\mathbf{v}_{B}=m_{A}\mathbf{v}_{C}+m_{B}\mathbf{v=}%
(m_{A}+m_{B}\mathbf{)v}+m_{A}u^{\prime }(u)\mathbf{n}^{\prime } \label{e93}
\end{equation}
and we also have $\mathbf{v}_{A}-\mathbf{v}_{B}=\mathbf{u}.$ We solve $%
\mathbf{v}_{A}$, $\mathbf{v}_{B}$, $\mathbf{v}_{C}$ from these as
\begin{eqnarray}
\mathbf{v}_{A} &=&\mathbf{v}+\frac{m_{A}}{m_{A}+m_{B}}u^{\prime }(u)\mathbf{n%
}^{\prime }+\frac{m_{B}}{m_{A}+m_{B}}\mathbf{u} \label{e100} \\
\mathbf{v}_{B} &=&\mathbf{v}+\frac{m_{A}}{m_{A}+m_{B}}u^{\prime }(u)\mathbf{n%
}^{\prime }-\frac{m_{A}}{m_{A}+m_{B}}\mathbf{u} \label{e101} \\
\mathbf{v}_{C} &=&\mathbf{v+}u^{\prime }(u)\mathbf{n}^{\prime } \label{e102}
\\
u^{\prime }(u) &=&\sqrt{u^{2}+2(\epsilon -\epsilon ^{\prime })/m_{r}}
\label{e103}
\end{eqnarray}
For one kind of gas ($m_{A}=m_{B}=m$ ) without internal states ($\epsilon
(A)=\epsilon (B)=\epsilon (C)=\epsilon (D)=0$) these equations reduce to
\begin{eqnarray}
\mathbf{v}_{A} &=&\mathbf{v}+(u\mathbf{n}^{\prime }+\mathbf{u)/2}
\label{e105} \\
\mathbf{v}_{B} &=&\mathbf{v}+(u\mathbf{n}^{\prime }-\mathbf{u)/2}
\label{e106} \\
\mathbf{v}_{C} &=&\mathbf{v+}u\mathbf{n}^{\prime } \label{e107}
\end{eqnarray}
Again for one kind of gas ($m_{A}=m_{B}=m$ and $m_{r}=m/2$) with internal
states eqs.(\ref{e100},\ref{e101},\ref{e102},\ref{e103}) reduce to
\begin{eqnarray}
\mathbf{v}_{A} &=&\mathbf{v}+\left[ u^{\prime }(u)\mathbf{n}^{\prime }+%
\mathbf{u}\right] \mathbf{/}2 \label{g75} \\
\mathbf{v}_{B} &=&\mathbf{v}+\left[ u^{\prime }(u)\mathbf{n}^{\prime }-%
\mathbf{u}\right] /2 \label{g76} \\
\mathbf{v}_{C} &=&\mathbf{v+}u^{\prime }(u)\mathbf{n}^{\prime } \label{g77}
\\
u^{\prime }(u) &=&\sqrt{u^{2}+4(\epsilon -\epsilon ^{\prime })/m}
\label{g78}
\end{eqnarray}
For a mixture of gases without internal states eqs.(\ref{e100},\ref{e101},%
\ref{e102},\ref{e103}) reduce to
\begin{eqnarray}
\mathbf{v}_{A} &=&\mathbf{v}+\frac{m_{A}}{m_{A}+m_{B}}\,u\mathbf{n}^{\prime
}+\frac{m_{B}}{m_{A}+m_{B}}\,\mathbf{u,} \label{f80} \\
\mathbf{v}_{B} &=&\mathbf{v}+\frac{m_{A}}{m_{A}+m_{B}}\,u\mathbf{n}^{\prime
}-\frac{m_{A}}{m_{A}+m_{B}}\,\mathbf{u,} \label{f81} \\
\mathbf{v}_{C} &=&\mathbf{v+}u\mathbf{n}^{\prime }. \label{f82}
\end{eqnarray}
And for a mixture of gases with internal states, eqs.(\ref{e100},\ref{e101},%
\ref{e102},\ref{e103}) are the relevant formulae.
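For completeness, these kinematic relations translate directly into code; the sketch below implements eqs.(\ref{e100},\ref{e101},\ref{e102},\ref{e103}), assuming the transition is energetically allowed, i.e. $u^{2}+2(\epsilon -\epsilon ^{\prime })/m_{r}\geq 0$:
\begin{verbatim}
import numpy as np

def collision_kinematics(v, u, n_prime, m_A, m_B, eps, eps_prime):
    # express v_A, v_B, v_C in terms of (v, u, n'); v plays the role of v_D
    m_r = m_A * m_B / (m_A + m_B)                      # reduced mass
    u_prime = np.sqrt(np.dot(u, u) + 2.0 * (eps - eps_prime) / m_r)
    v_C = v + u_prime * n_prime
    v_A = v + (m_A * u_prime * n_prime + m_B * u) / (m_A + m_B)
    v_B = v + (m_A * u_prime * n_prime - m_A * u) / (m_A + m_B)
    return v_A, v_B, v_C
\end{verbatim}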
\subsection{One kind of gas without internal degrees of freedom}
The state of a particle is defined by the three components of the velocity
vector $\mathbf{v}$.
Bird's original algorithm to keep track of time in the simulation was the
'time counter' method. Later Bird introduced the 'no time counter' (NTC) method
and declared the time counter method 'obsolete' in his book.\cite{Bird94} The time
counter method is more difficult (if not impossible) to formulate in the
direct simulation formalism given in this paper, and since NTC is the
algorithm currently in use, we will derive only NTC algorithms in this paper.
Here the state index $\mu $ refers to the velocity vector and the integration
over $\mu $ stands for the three integrations over the velocity components. The
NTC\ kernel $S(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v}%
)=S_{1}+S_{2}$ is given by
\begin{eqnarray}
S_{1} &=&\frac{2}{R}\delta \left( \mathbf{H}-\mathbf{H}^{\prime }\right)
\,\delta \left( u^{2}-(u^{\prime })^{2}\right) \,\sigma (\mathbf{n},\mathbf{n%
}^{\prime }) \label{e20} \\
S_{2} &=&\left( 1-\frac{u\Sigma }{R}\right) \,\delta \left( \mathbf{v}_{C}-%
\mathbf{v}_{A}\right) \,\delta \left( \mathbf{v}-\mathbf{v}_{B}\right)
\label{e30}
\end{eqnarray}
Here $\sigma (\mathbf{n},\mathbf{n}^{\prime })$ is the differential cross
section and $\Sigma \,$is the total cross section which is given by
\begin{equation}
\Sigma =\int \sigma (\mathbf{n},\mathbf{n}^{\prime })\,d\mathbf{n}^{\prime },
\label{e40}
\end{equation}
where $d\mathbf{n}^{\prime }$ is the solid angle in the direction of $%
\mathbf{n}^{\prime }$. The $\sigma (\mathbf{n},\mathbf{n}^{\prime })$
depends on the angle $\theta $ between $\mathbf{n}$ and $\mathbf{n}^{\prime
} $ ($\mathbf{n}^{\prime }\cdot \mathbf{n}=\cos \theta $). Hence $\sigma (%
\mathbf{n},\mathbf{n}^{\prime })=\sigma (\mathbf{n}^{\prime },\mathbf{n})$
and the kernel is obviously symmetric. The term $\delta (u^{2}-(u^{\prime
})^{2})=\delta (u-u^{\prime })/2u$ represents energy conservation and $%
\delta \left( \mathbf{H}-\mathbf{H}^{\prime }\right) $ represents
conservation of center of mass (CM) velocity which is the same thing as the
conservation of momentum. The kernel satisfies the normalization condition
\begin{equation}
\int S(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})\,d^{3}%
\mathbf{v}_{C}\,d^{3}\mathbf{v=\int }S(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{%
v}_{C},\mathbf{v})\,d^{3}\mathbf{H}^{\prime }\,d^{3}\mathbf{u}^{\prime }=1.
\label{e50}
\end{equation}
Here the integral is taken in the CM coordinates. The Jacobian of the CM
transformation is unity and $d^{3}\mathbf{u}^{\prime }=(u^{\prime
})^{2}du^{\prime }d\mathbf{n}^{\prime }$.
The $S_{2}$ part of the kernel directly transfers the initial velocities to the
final velocities with probability $\left( 1-u\Sigma /R\right) $ and hence
causes a null collision. A null collision is a collision in which the particles do
not change their states. The probability of making a real collision is
\begin{equation}
\int S_{1}(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})\,d^{3}%
\mathbf{v}_{C}\,d^{3}\mathbf{v=}\frac{u\Sigma }{R} \label{e60}
\end{equation}
where the integral is calculated in the CM coordinates.
Inserting $S(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})$ in
eq.(\ref{a130}) we obtain
\begin{equation}
\frac{\partial f(\mathbf{v})}{\partial \tau }=\int [f,f]\,\,S_{1}(\mathbf{v}%
_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})\,d^{3}\mathbf{v}_{A}\,d^{3}%
\mathbf{v}_{B}\,d^{3}\mathbf{v}_{C}. \label{e70}
\end{equation}
where
\begin{equation}
\lbrack f,f]=f(\mathbf{v}_{A})\,f(\mathbf{v}_{B})-f(\mathbf{v}_{C})\,f(%
\mathbf{v}), \label{e72}
\end{equation}
The $S_{2}\,$part of the kernel gives zero contribution to the collision
integral:
\begin{equation}
\int [f,f]\,\,\delta \left( \mathbf{v}_{C}-\mathbf{v}_{A}\right) \,\delta
\left( \mathbf{v}-\mathbf{v}_{B}\right) \,d^{3}\mathbf{v}_{A}\,d^{3}\mathbf{v%
}_{B}\,d^{3}\mathbf{v}_{C}=0. \label{e80}
\end{equation}
We evaluate the integral in eq.(\ref{e70}) in the CM coordinates. We write $%
d^{3}\mathbf{v}_{A}\,d^{3}\mathbf{v}_{B}=d^{3}\mathbf{H}\,d^{3}\mathbf{u}$
and $d^{3}\mathbf{v}_{C}=d^{3}\mathbf{u}^{\prime }=(u^{\prime
})^{2}\,du^{\prime }\,d\mathbf{n}^{\prime }$. When we do the integral we
obtain
\begin{equation}
\frac{\partial f(\mathbf{v})}{\partial \tau }=\frac{1}{R}\int
[f,f]\,u\,\sigma (\mathbf{n},\mathbf{n}^{\prime })\,d^{3}\mathbf{u}\,d%
\mathbf{n}^{\prime }, \label{e90}
\end{equation}
where $\mathbf{v}_{A},\mathbf{v}_{B},\mathbf{v}_{C}$ are expressed in terms
of the variables $\mathbf{v,u,n}^{\prime }$ in eqs.(\ref{e105},\ref{e106},%
\ref{e107}).
Equation (\ref{e90}) is essentially the Boltzmann equation, with the
difference that the Boltzmann equation is written for the density in physical
space. To obtain the Boltzmann equation we write this equation for $F(%
\mathbf{v})=\left( N/V\right) f(\mathbf{v})$, where $V\,$is the volume of the
gas. Then we obtain
\begin{equation}
\frac{\partial F(\mathbf{v})}{\partial \tau }=\frac{1}{R}\left( \frac{V}{N}%
\right) \int \left[ F(\mathbf{v}_{A})F(\mathbf{v}_{B})-F(\mathbf{v}_{C})F(%
\mathbf{v})\right] \,u\,\sigma (\mathbf{n},\mathbf{n}^{\prime })\,d^{3}%
\mathbf{u}\,d\mathbf{n}^{\prime } \label{e110}
\end{equation}
Now, if we change to the variable $t=\tau V/RN=2nV/RN^{2}$ we obtain the
Boltzmann equation for a homogeneous gas
\begin{equation}
\frac{\partial F(\mathbf{v})}{\partial t}=\int \left[ F(\mathbf{v}_{A})\,F(%
\mathbf{v}_{B})-F(\mathbf{v}_{C})\,F(\mathbf{v})\right] \,u\,\sigma (\mathbf{%
n},\mathbf{n}^{\prime })\,d^{3}\mathbf{u}\,d\mathbf{n}^{\prime }
\label{e120}
\end{equation}
Here $t$ must be interpreted as the physical time, and the formula $t=2nV/RN^{2}$
connects the physical time $t$ to the number of collision attempts $n$.
Let us state the algorithm for a homogeneous gas. We choose a number $R$ large
enough that $u\Sigma /R$ exceeds unity for only very few (say, fewer than one
in a thousand) pairs. We make $n=RN^{2}t/2V$ collision attempts to
reach the desired time. For each pair we take a random number $r$ and we
allow the collision to happen if $r<u\Sigma /R$. If the collision is
allowed, we choose the direction of scattering $\mathbf{n}^{\prime }$
according to the probability density $\sigma (\mathbf{n},\mathbf{n}^{\prime
})/\Sigma $ (a few more random numbers are used for that). Then we
calculate and store the final velocities for the colliding pair and pick
another pair. We keep picking and colliding pairs until we reach the desired
time. A sketch of this loop is given below.
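The following is a minimal sketch of this loop for a homogeneous hard-sphere gas, for which the scattering is isotropic so that $\sigma /\Sigma $ is uniform over the sphere (parameter names are ours; the post-collision velocities follow from eqs.(\ref{e105},\ref{e106},\ref{e107})):
\begin{verbatim}
import numpy as np

def ntc_homogeneous(v, Sigma, R, n_attempts, rng):
    # v: (N, 3) velocities; rng: np.random.Generator
    # R is chosen so that u*Sigma/R <= 1 for almost all pairs
    N = len(v)
    for _ in range(n_attempts):
        i, j = rng.choice(N, size=2, replace=False)
        u = np.linalg.norm(v[i] - v[j])
        if rng.random() < u * Sigma / R:      # accept with prob. u*Sigma/R
            n_prime = rng.normal(size=3)
            n_prime /= np.linalg.norm(n_prime)  # isotropic direction n'
            H = 0.5 * (v[i] + v[j])             # CM velocity, conserved
            v[i] = H + 0.5 * u * n_prime        # |u'| = u (elastic spheres)
            v[j] = H - 0.5 * u * n_prime
    return v
\end{verbatim}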
Suppose the formula $n=RN^{2}t/2V$ yields 234.783 collision attempts. How do you
make 0.783 of an attempt? The way to do this in practice is to make 234
attempts first, then draw a random number $r$ and, if $r<0.783$, make one
more collision attempt. This can be justified from the formula
\begin{equation}
f(\mu \mathbf{;}n+1)=f(\mu \mathbf{;}n)+\frac{2}{N}\int [f,f]\,T(\mu
_{A},\mu _{B};\mu _{C},\mu )\,d\mu _{A}\,d\mu _{B}\,d\mu _{C}. \label{e131}
\end{equation}
After making $n$ collision attempts with the NTC\ kernel $S(\mathbf{v}_{A},%
\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})\,$we can change the kernel to
\begin{equation}
P(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})=q\,S(\mathbf{v}%
_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})+(1-q)\,\delta \left( \mathbf{v%
}_{C}-\mathbf{v}_{A}\right) \,\delta \left( \mathbf{v}-\mathbf{v}_{B}\right)
. \label{e140}
\end{equation}
This kernel makes an NTC collision attempt with probability $q$ (which
was $0.783$ in the above example), and a null collision happens with
probability $1-q$. We use this kernel for the $(n+1)^{th}$ collision attempt
(it is permissible to change the kernel), and this produces a further
collision-time increase of $\Delta \tau =2q/N$ and a real-time increase of
$\Delta t=q(2V/RN^{2})$.
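In code this simply amounts to rounding the attempt count stochastically (a sketch):
\begin{verbatim}
import random

def stochastic_round(x):
    # return int(x) or int(x)+1 so that the expectation equals x
    n = int(x)
    return n + (random.random() < x - n)

# e.g. n_attempts = stochastic_round(R * N**2 * t / (2 * V))
\end{verbatim}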
\subsection{Mixture of gases without internal degrees of freedom}
The state of a particle is defined by the three components of the velocity
vector $\mathbf{v}$ and one kind index, for which we will use the characters
$p,q,r,s$. We have $M$ kinds of gas without internal states in the mixture,
and there are $N_{p}$ molecules of the $p^{th}$ kind. The mass of a $%
p^{th}$-kind molecule is $m_{p}$. The probability density $f(\mu )=f(\mathbf{%
v},p)$ will be written as $f^{p}(\mathbf{v})$.
Particles with states $\mu _{A}=(\mathbf{v}_{A},s)$ and $\mu
_{B}=(\mathbf{v}_{B},r)$ enter the collision and particles with states $\mu
_{C}=(\mathbf{v}_{C},q)$ and $\mu _{D}=(\mathbf{v,}p\mathbf{)}$ exit the
collision. The integration over $\mu $ such as $\int f^{p}(\mathbf{v})d\mu $
stands for three integrations over $\mathbf{v}$ and summation over $p$. The
center of mass (CM) coordinates are defined in eqs.(\ref{e3},\ref{e3b},\ref
{e4}).
The NTC\ kernel $G_{pq}^{rs}(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},%
\mathbf{v})=G_{1}+G_{2}$ is given by
\begin{eqnarray}
G_{1} &=&\frac{2}{R}\,\delta \left( \mathbf{H}-\mathbf{H}^{\prime }\right)
\,\,\delta \left( u^{2}-(u^{\prime })^{2}\right) \,\sigma _{pq}(\mathbf{n},%
\mathbf{n}^{\prime })\,\delta _{pr}\,\delta _{qs}, \label{f20} \\
G_{2} &=&\left( 1-\frac{u\Sigma _{pq}}{R}\right) \,\delta \left( \mathbf{v}%
_{C}-\mathbf{v}_{A}\right) \,\delta \left( \mathbf{v}-\mathbf{v}_{B}\right)
\,\delta _{pr}\,\delta _{qs}. \label{f30}
\end{eqnarray}
Here $\sigma _{pq}(\mathbf{n},\mathbf{n}^{\prime })$ is the differential
cross section between gases of the $p^{th}$ and $q^{th}$ kind and $\Sigma
_{pq}\,$is the total cross section which is given by
\begin{equation}
\Sigma _{pq}=\int \sigma _{pq}(\mathbf{n},\mathbf{n}^{\prime })\,d\mathbf{n}%
^{\prime }, \label{f40}
\end{equation}
where $d\mathbf{n}^{\prime }$ is the solid angle in the direction of $%
\mathbf{n}^{\prime }$. The $\delta _{pr}\delta _{qs}$ term in the kernel
ensures that particles do not lose their identities during the collisions.
Again $\sigma _{pq}(\mathbf{n},\mathbf{n}^{\prime })=\sigma _{rs}(\mathbf{n,n%
}^{\prime })\,$due to the $\delta _{pr}\delta _{qs}$ term, and we also have
the symmetry $\sigma _{pq}(\mathbf{n},\mathbf{n}^{\prime })=\sigma _{qp}(%
\mathbf{n}^{\prime },\mathbf{n})$. The kernel is obviously symmetric. The
terms $\delta (u^{2}-(u^{\prime })^{2})$ and $\delta \left( \mathbf{H}-%
\mathbf{H}^{\prime }\right) $ have the same meanings as before and the
kernel satisfies the normalization condition
\begin{equation}
\sum_{p=1}^{M}\sum_{q=1}^{M}\int G_{pq}^{rs}(\mathbf{v}_{A},\mathbf{v}_{B};%
\mathbf{v}_{C},\mathbf{v})\,d^{3}\mathbf{v}_{C}\,d^{3}\mathbf{v}=1.
\label{f50}
\end{equation}
Again the $G_{2}$ part of the kernel directly transfers the initial velocities
to the final velocities with probability $1-(u\Sigma _{rs})/R$ and hence
causes a null collision. The probability of making a real collision is
\begin{equation}
\sum_{p=1}^{M}\sum_{q=1}^{M}\int (G_{1})_{pq}^{rs}(\mathbf{v}_{A},\mathbf{v}%
_{B};\mathbf{v}_{C},\mathbf{v})\,d^{3}\mathbf{v}_{C}\,d^{3}\mathbf{v=}\frac{%
u\Sigma _{rs}}{R}, \label{f60}
\end{equation}
where the integral is calculated in the CM coordinates.
Inserting $G_{pq}^{rs}(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v%
})$ in eq.(\ref{a130}) and doing the summations over $r,s$ and doing the
integrals in the CM coordinates we obtain
\begin{eqnarray}
\frac{\partial f^{p}(\mathbf{v})}{\partial \tau } &=&\sum_{q=1}^{M}\int
G_{pq}^{pq}(\mu _{A},\mu _{B};\mu _{C},\mu )\,[f^{q},f^{p}]\,\,d^{3}\mathbf{v%
}_{A}\,d^{3}\mathbf{v}_{B}\,d^{3}\mathbf{v}_{C}, \label{f70} \\
&=&\frac{1}{R}\sum_{q=1}^{M}\int [f^{q},f^{p}]\,\,u\,\sigma _{pq}(\mathbf{n},%
\mathbf{n}^{\prime })\,d^{3}\mathbf{u\,}d\mathbf{n}^{\prime }, \label{f71}
\end{eqnarray}
where
\begin{equation}
\lbrack f^{q},f^{p}]=f^{q}(\mathbf{v}_{A})\,f^{p}(\mathbf{v}_{B})-f^{q}(%
\mathbf{v}_{C})\,f^{p}(\mathbf{v}) \label{f74}
\end{equation}
Again we write this equation for $F^{p}(\mathbf{v})=\left( N/V\right) f^{p}(%
\mathbf{v})$ and take $t=2nV/RN^{2}$ to obtain the Boltzmann equation for a
mixture of homogeneous gases without internal states
\begin{equation}
\frac{\partial F^{p}(\mathbf{v})}{\partial t}=\sum_{q=1}^{M}\int \left[
F^{q}(\mathbf{v}_{A})\,F^{p}(\mathbf{v}_{B})-F^{q}(\mathbf{v}_{C})\,F^{p}(%
\mathbf{v})\right] \,u\,\sigma _{pq}(\mathbf{n},\mathbf{n}^{\prime })\,d^{3}%
\mathbf{u\,}d\mathbf{n}^{\prime }. \label{f90}
\end{equation}
Here $\mathbf{v}_{A},\mathbf{v}_{B},\mathbf{v}_{C}$ are expressed in terms
of the variables $\mathbf{v,u,n}^{\prime }$ in eqs.(\ref{f80},\ref{f81},\ref
{f82}).
The algorithm is the same. We take $n=RN^{2}t/2V$ pairs and allow each
collision with a probability $(u\Sigma _{rs})/R.$ If the collision is
allowed we choose the scattering angle according to the $\sigma _{rs}(%
\mathbf{n},\mathbf{n}^{\prime })/\Sigma _{rs}$ probability distribution.
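A minimal sketch of one such collision attempt, assuming a symmetric table \texttt{Sigma[r,s]} of total cross sections and, for simplicity, isotropic scattering (these names and the isotropy are our assumptions):
\begin{verbatim}
import numpy as np

def mixture_attempt(vel, kind, mass, Sigma, R, rng):
    # vel: (N,3) velocities; kind: (N,) species indices; mass[p]: species masses.
    N = len(vel)
    i, j = rng.choice(N, size=2, replace=False)
    r, s = kind[i], kind[j]
    u = np.linalg.norm(vel[i] - vel[j])
    if rng.random() >= u * Sigma[r, s] / R:
        return                                    # null collision
    mi, mj = mass[r], mass[s]
    H = (mi * vel[i] + mj * vel[j]) / (mi + mj)   # center-of-mass velocity
    mu = 2.0 * rng.random() - 1.0                 # isotropic n'
    phi = 2.0 * np.pi * rng.random()
    sq = np.sqrt(1.0 - mu * mu)
    n_prime = np.array([sq * np.cos(phi), sq * np.sin(phi), mu])
    vel[i] = H + (mj / (mi + mj)) * u * n_prime   # elastic: |u'| = |u|,
    vel[j] = H - (mi / (mi + mj)) * u * n_prime   # species labels unchanged
\end{verbatim}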
Note that the normalization of $f^{p}(\mathbf{v})\,$is given by
\begin{equation}
\sum_{p=1}^{M}\int f^{p}(\mathbf{v})\,d^{3}\mathbf{v}=1. \label{f100}
\end{equation}
The integral $\int f^{p}(\mathbf{v})d^{3}\mathbf{v}$ is conserved during the
simulation. From eq.(\ref{f70}) its rate of change is
\begin{eqnarray}
\frac{d}{d\tau }\int f^{p}(\mathbf{v})\,d^{3}\mathbf{v} &=&\int \frac{%
\partial f^{p}(\mathbf{v})}{\partial \tau }\,d^{3}\mathbf{v=}%
\sum_{q=1}^{M}\int G_{pq}^{pq}(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},%
\mathbf{v}) \label{f105} \\
&&\times \ \left[ f^{q}(\mathbf{v}_{A})\,f^{p}(\mathbf{v}_{B})-f^{q}(\mathbf{%
v}_{C})\,f^{p}(\mathbf{v})\right] \,d^{3}\mathbf{v}_{A}\,d^{3}\mathbf{v}%
_{B}\,d^{3}\mathbf{v}_{C}\,d^{3}\mathbf{v.} \nonumber
\end{eqnarray}
From normalization of probabilities in eqs.(\ref{a30},\ref{f50}) we have
\begin{eqnarray}
\int G_{pq}^{pq}(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v}%
)\,d^{3}\mathbf{v}_{C}\,d^{3}\mathbf{v} &=&1 \label{f111} \\
\int G_{pq}^{pq}(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v}%
)\,d^{3}\mathbf{v}_{A}\,d^{3}\mathbf{v}_{B} &=&1. \label{f111a}
\end{eqnarray}
Using these relations the integral on the right side of eq.(\ref{f105}) can
be written as
\begin{eqnarray}
\frac{d}{d\tau }\int f^{p}(\mathbf{v})d^{3}\mathbf{v} &=&\sum_{q=1}^{M}\int
f^{q}(\mathbf{v}_{A})\,\,f^{p}(\mathbf{v}_{B})\,d^{3}\mathbf{v}_{A}\,d^{3}%
\mathbf{v}_{B} \label{f112} \\
&&-\sum_{q=1}^{M}\int f^{q}(\mathbf{v}_{C})\,\,f^{p}(\mathbf{v})d^{3}\mathbf{%
v}_{C}\,d^{3}\mathbf{v.} \nonumber
\end{eqnarray}
These two terms are equal and they cancel each other yielding constancy of $%
\int f^{p}(\mathbf{v})d^{3}\mathbf{v}$.
The number of molecules of the $p^{th}\,$kind is
\begin{equation}
N_{p}=N\int f^{p}(\mathbf{v})\,d^{3}\mathbf{v,} \label{f120}
\end{equation}
and it remains constant, as it should. Hence $F^{p}(\mathbf{v})$ is
normalized as
\begin{equation}
\int F^{p}(\mathbf{v})\,d^{3}\mathbf{v\,}d^{3}\mathbf{x=}N_{p}, \label{f125}
\end{equation}
where $\mathbf{x}$ is the position of the molecule.
\subsection{One kind of gas with internal degrees of freedom}
For a homogeneous gas with internal states $\mu $ stands for the velocity $%
\mathbf{v}$ and a discrete index (for which we use $\alpha ,\beta ,i,j$)
defining the internal quantum state of the molecule. The mass of the
molecules is $m$. Particles with states $\mu _{A}=(\mathbf{v}_{A}\mathbf{,}%
\beta )$ and $\mu _{B}=(\mathbf{v}_{B}\mathbf{,}\alpha )$ enter the
collision and particles with states $\mu _{C}=(\mathbf{v}_{C}\mathbf{,}j)$
and $\mu _{D}=(\mathbf{v,}i)$ exit the collision. The integral over $\mu $
stands for integration over $\mathbf{v}$ and summation over the internal
state index. The internal energy of a molecule in state $\gamma $ is $%
E_{\gamma }$, and $\epsilon =E_{\alpha }+E_{\beta }$ and $\epsilon ^{\prime
}=E_{i}+E_{j}$. The center of mass (CM) coordinates are defined in eqs.(\ref
{e4},\ref{e5}).
Let us define the no time counter (NTC) kernel $K_{ij}^{\alpha \beta }(%
\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})=K_{1}+K_{2}$ where
\begin{equation}
K_{1}=\frac{1}{R}\delta (\mathbf{H}-\mathbf{H}^{\prime })\,\delta \left[
\frac{2}{m_{r}}\epsilon +u^{2}-\frac{2}{m_{r}}\epsilon ^{\prime }-(u^{\prime
})^{2}\right] \,\frac{2u}{u^{\prime }}\,\,\sigma _{ij}^{\alpha \beta }(%
\mathbf{n},\mathbf{n}^{\prime }), \label{g21}
\end{equation}
and
\begin{equation}
K_{2}=\left( 1-\frac{1}{R}\sum_{i}\sum_{j}u\Sigma _{ij}^{\alpha \beta
}\right) \delta (\mathbf{v}_{C}-\mathbf{v}_{A})\,\delta (\mathbf{v}-\mathbf{v%
}_{B})\,\delta _{i\alpha }\,\delta _{j\beta }. \label{g30}
\end{equation}
Here $m_{r}=m/2$ is the reduced mass, where $m$ is the mass of the molecules,
and $R$ is a chosen parameter. The $\sigma _{ij}^{\alpha \beta }(\mathbf{n},%
\mathbf{n}^{\prime })$ is the differential cross section and the $\Sigma _{ij}^{\alpha \beta }$
is the total cross section into the internal states $i,j$
\begin{equation}
\Sigma _{ij}^{\alpha \beta }=\int \sigma _{ij}^{\alpha \beta }(\mathbf{n},%
\mathbf{n}^{\prime })\,d\mathbf{n}^{\prime }, \label{g40}
\end{equation}
where $d\mathbf{n}^{\prime }$ is the solid angle in the direction of $%
\mathbf{n}^{\prime }$. This kernel is symmetric due to the reciprocity
relation of the inelastic scattering cross sections\cite{Reciprocity}
\begin{equation}
u^{2}\,\sigma _{ij}^{\alpha \beta }(\mathbf{n},\mathbf{n}^{\prime
})=(u^{\prime })^{2}\,\sigma _{\alpha \beta }^{ij}(\mathbf{n}^{\prime },%
\mathbf{n}), \label{g50}
\end{equation}
because $(u/u^{\prime })\,\sigma _{ij}^{\alpha \beta }=(u^{\prime
}/u)\,\sigma _{\alpha \beta }^{ij}$.
The $K_{2}$ part of $K_{ij}^{\alpha \beta }(\mathbf{v}_{A},\mathbf{v}_{B};%
\mathbf{v}_{C},\mathbf{v})$ directly transfers initial state to the final
state and causes a null collision. The probability of making a real
collision into the states $(i,j)$ is
\begin{equation}
P_{ij}=\int K_{1}\,d\mathbf{v}_{C}\,d\mathbf{v=}\frac{u\,\Sigma
_{ij}^{\alpha \beta }}{R}. \label{g60}
\end{equation}
Therefore the total probability of making a real collision is $%
(\sum_{i}\sum_{j}\,u\Sigma _{ij}^{\alpha \beta })/R$.
Inserting the $K_{ij}^{\alpha \beta }(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v%
}_{C},\mathbf{v})$ into the eq.(\ref{a130}) and doing the integrals in the
CM\ coordinates we obtain
\begin{equation}
\frac{\partial f_{i}(\mathbf{v})}{\partial \tau }=\frac{1}{R}\sum_{\alpha
}\sum_{\beta }\sum_{j}\int \left[ f_{\beta }(\mathbf{v}_{A})\,f_{\alpha }(%
\mathbf{v}_{B})-f_{j}(\mathbf{v}_{C})\,f_{i}(\mathbf{v})\right] \,u\,\sigma
_{ij}^{\alpha \beta }(\mathbf{n},\mathbf{n}^{\prime })\,d^{3}\mathbf{u}\,d%
\mathbf{n}^{\prime }. \label{g71}
\end{equation}
Here the $K_{2}$ part does not contribute to the collision integral as
before.
Again defining time as $t=\tau V/RN=2nV/RN^{2}$ and defining the new
functions $F_{i}(\mathbf{v})=(N/V)f_{i}(\mathbf{v})$ this is expressed as
\begin{equation}
\frac{\partial F_{i}}{\partial t}=\sum_{\alpha }\sum_{\beta }\sum_{j}\int
\left[ F_{\beta }(\mathbf{v}_{A})\,F_{\alpha }(\mathbf{v}_{B})-F_{j}(\mathbf{%
v}_{C})\,F_{i}(\mathbf{v})\right] \,\,u\,\sigma _{ij}^{\alpha \beta }(%
\mathbf{n},\mathbf{n}^{\prime })\,d^{3}\mathbf{u}\,d\mathbf{n}^{\prime },
\label{g80}
\end{equation}
where $\mathbf{v}_{A},\mathbf{v}_{B},\mathbf{v}_{C}$ are expressed in terms
of the variables $\mathbf{v,u,n}^{\prime }$ in eqs.(\ref{g75},\ref{g76},\ref
{g77},\ref{g78}). These equations are the Wang Chang-Uhlenbeck equations for
a gas with internal degrees of freedom. Here the states are assumed
nondegenerate for simplicity. Generalization to degenerate states is also
very straightforward.
Again we choose a number $R$ big enough that only for very few pairs (say
fewer than one in a thousand) will $(\sum_{i}\sum_{j}u\Sigma _{ij}^{\alpha
\beta })/R$ exceed unity. We choose $n=RN^{2}t/2V$ random pairs. For
each pair we take a random number $r$ and we allow the collision to happen
if $r<(\sum_{i}\sum_{j}u\Sigma _{ij}^{\alpha \beta })/R$. If the collision is
allowed we choose the final state $(i,j)$ with probability $\Sigma
_{ij}^{\alpha \beta }/(\sum_{i}\sum_{j}\Sigma _{ij}^{\alpha \beta })$, using
another random number. Finally we choose
the direction of scattering $\mathbf{n}^{\prime }$ according to the
probability density $\sigma _{ij}^{\alpha \beta }(\mathbf{n},\mathbf{n}%
^{\prime })/\Sigma _{ij}^{\alpha \beta }$, using a few more random numbers.
Then we calculate and store the final velocities
and state indices for the colliding pair and go on to choose the next pair.
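The selection of the final channel can be sketched as follows, with a hypothetical array \texttt{Sigma[alpha,beta,i,j]} of channel total cross sections and internal energies \texttt{E}; the post-collision relative speed follows from the energy delta function in eq.(\ref{g21}), $(u^{\prime })^{2}=u^{2}+\frac{2}{m_{r}}(\epsilon -\epsilon ^{\prime })$:
\begin{verbatim}
import numpy as np

def choose_channel(u, alpha, beta, Sigma, E, m_r, rng):
    # Pick (i,j) with probability Sigma[alpha,beta,i,j] / sum_ij Sigma[alpha,beta,i,j].
    # Energetically closed channels are assumed to carry zero cross section.
    table = Sigma[alpha, beta]
    p = (table / table.sum()).ravel()
    idx = rng.choice(p.size, p=p)
    i, j = np.unravel_index(idx, table.shape)
    u2 = u**2 + (2.0 / m_r) * (E[alpha] + E[beta] - E[i] - E[j])
    return i, j, np.sqrt(u2)   # final states and post-collision relative speed
\end{verbatim}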
\subsection{Mixture of gases with internal degrees of freedom}
This case is a combination of the previous two cases and is
straightforward, although there are many indices. The state of
a particle is defined by the three components of the velocity vector $\mathbf{v}$,
one kind index for which we use $p,q,r,s$, and one internal state index
for which we use $i,j,\alpha ,\beta $. We have $M$ kinds of gas with internal
states in the mixture and there are $N_{p}$ molecules of the $%
p^{th}$ kind. The internal energy of the $i^{th}$ internal state of a $p^{th}$
kind molecule is $E_{i}^{p}$. The probability density $f(\mu )=f(\mathbf{v,}%
i,p)$ will be written as $f_{i}^{p}(\mathbf{v})$.
Particles with states $\mu _{A}=(\mathbf{v}_{A},\beta ,s)$ and $%
\mu _{B}=(\mathbf{v}_{B},\alpha ,r)$ enter the collision and particles with
states $\mu _{C}=(\mathbf{v}_{C},j,q)$ and $\mu _{D}=(\mathbf{v,}i,p\mathbf{)%
}$ exit the collision. We also define $\epsilon =E_{\beta }^{s}+E_{\alpha
}^{r}$ and $\epsilon ^{\prime }=E_{j}^{q}+E_{i}^{p}$. The integration over $%
\mu $ such as $\int f_{i}^{p}(\mathbf{v})d\mu $ stands for three
integrations over $\mathbf{v}$ and summations over $i$ and $p$. The center
of mass (CM) coordinates are defined in eqs.(\ref{e3},\ref{e3b},\ref{e4}).
The NTC kernel is $Q_{ij,pq}^{\alpha \beta ,rs}(\mathbf{v}_{A},\mathbf{v}%
_{B};\mathbf{v}_{C},\mathbf{v})=Q_{1}+Q_{2}$ where $Q_{1}$ and $Q_{2}$ are
defined as
\begin{equation}
Q_{1}=\frac{1}{R}\delta (\mathbf{H}-\mathbf{H}^{\prime })\,\delta \left[
\frac{2}{m_{r}}\epsilon +u^{2}-\frac{2}{m_{r}}\epsilon ^{\prime }-(u^{\prime
})^{2}\right] \frac{2u}{u^{\prime }}\,\,\sigma _{ij,pq}^{\alpha \beta ,pq}(%
\mathbf{n},\mathbf{n}^{\prime })\,\delta _{pr}\,\delta _{qs}, \label{h21}
\end{equation}
and
\begin{equation}
Q_{2}=\left( 1-\frac{1}{R}\sum_{i}\sum_{j}u\Sigma _{ij,pq}^{\alpha \beta
,pq}\right) \delta (\mathbf{v}_{C}-\mathbf{v}_{A})\,\delta (\mathbf{v}-%
\mathbf{v}_{B})\,\delta _{i\alpha }\,\delta _{j\beta }\,\delta _{pr}\,\delta
_{qs}. \label{h30}
\end{equation}
The delta functions $\delta _{pr}\delta _{qs}$ ensure that the molecules do
not change identities during the collision. Here $%
m_{r}=m_{A}m_{B}/(m_{A}+m_{B})$ is the reduced mass and $R$ is a chosen
parameter. The $\sigma _{ij,pq}^{\alpha \beta ,pq}(\mathbf{n},\mathbf{n}%
^{\prime })$ is the differential cross section between species of the $%
p^{th}$ kind in the state $\alpha $ and the $q^{th}$ kind in the state $\beta $,
and $\Sigma _{ij,pq}^{\alpha \beta ,pq}$ is the total cross section into the
channel $(i,j)$
\begin{equation}
\Sigma _{ij,pq}^{\alpha \beta ,pq}=\int \sigma _{ij,pq}^{\alpha \beta ,pq}(%
\mathbf{n},\mathbf{n}^{\prime })\,d\mathbf{n}^{\prime } \label{h40}
\end{equation}
where $d\mathbf{n}^{\prime }$ is the solid angle in the direction of $%
\mathbf{n}^{\prime }$. The $Q_{ij,pq}^{\alpha \beta ,rs}(\mathbf{v}_{A},%
\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})$ is also symmetric due to eq.(\ref
{g50}). The $Q_{2}$ directly transfers initial states to the final states
and causes a null collision. The probability of making a real collision into
the states $(i,j)$ is
\begin{equation}
P_{ij}=\int Q_{1}\,d\mathbf{v}_{C}\,d\mathbf{v=}\frac{u\,\Sigma
_{ij,pq}^{\alpha \beta ,pq}}{R} \label{h50}
\end{equation}
Therefore the total probability of making a real collision is $%
(\sum_{i}\sum_{j}u\Sigma _{ij,pq}^{\alpha \beta ,pq})/R$.
Inserting the $Q_{ij,pq}^{\alpha \beta ,rs}(\mathbf{v}_{A},\mathbf{v}_{B};%
\mathbf{v}_{C},\mathbf{v})$ into the eq.(\ref{a130}) and doing the integrals
in the CM\ coordinates we obtain
\begin{equation}
\frac{\partial f_{i}^{p}(\mathbf{v})}{\partial \tau }=\sum_{q=1}^{M}\sum_{%
\alpha }\sum_{\beta }\sum_{j}\int [f^{q},f^{p}]_{ij}^{\alpha \beta
}\,Q_{ij,pq}^{\alpha \beta ,pq}(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},%
\mathbf{v})\,d^{3}\mathbf{v}_{A}\,d^{3}\mathbf{v}_{B}\,d^{3}\mathbf{v}_{C},
\label{h65}
\end{equation}
where
\begin{equation}
\lbrack f^{q},f^{p}]_{ij}^{\alpha \beta }=f_{\beta }^{q}(\mathbf{v}%
_{A})\,f_{\alpha }^{p}(\mathbf{v}_{B})-f_{j}^{q}(\mathbf{v}_{C})\,f_{i}^{p}(%
\mathbf{v}). \label{h66}
\end{equation}
After inserting $Q_{ij,pq}^{\alpha \beta ,pq}$ we obtain
\begin{equation}
\frac{\partial f_{i}^{p}(\mathbf{v})}{\partial \tau }=\frac{1}{R}
\sum_{q=1}^{M}\sum_{\alpha }\sum_{\beta }\sum_{j}\int
[f^{q},f^{p}]_{ij}^{\alpha \beta }\,u\,\sigma _{ij,pq}^{\alpha \beta ,pq}(%
\mathbf{n},\mathbf{n}^{\prime })\,d^{3}\mathbf{u\,}d\mathbf{n}^{\prime }.
\label{h67}
\end{equation}
The $Q_{2}$ part does not contribute to the collision integral as before.
Expressions for $\mathbf{v}_{A},\mathbf{v}_{B},\mathbf{v}_{C}$ in terms of $%
\mathbf{v},\mathbf{u,}\mathbf{n}^{\prime }$ are given in eqs.(\ref{e100},\ref
{e101},\ref{e102},\ref{e103}).
Again defining time as $t=\tau V/RN=2nV/RN^{2}$ and defining the new
functions $F_{i}^{p}(\mathbf{v})=(N/V)\,f_{i}^{p}(\mathbf{v})$ this is
expressed as
\begin{eqnarray}
\frac{\partial F_{i}^{p}(\mathbf{v})}{\partial t} &=&\sum_{q=1}^{M}\sum_{%
\alpha }\sum_{\beta }\sum_{j}\int \left( F_{\beta }^{q}(\mathbf{v}%
_{A})\,F_{\alpha }^{p}(\mathbf{v}_{B})-F_{j}^{q}(\mathbf{v}_{C})\,F_{i}^{p}(%
\mathbf{v})\right) \label{h70} \\
&&\times u\,\sigma _{ij,pq}^{\alpha \beta ,pq}(\mathbf{n},\mathbf{n}^{\prime
})\,d^{3}\mathbf{u\,}d\mathbf{n}^{\prime }. \nonumber
\end{eqnarray}
These equations are the Wang Chang-Uhlenbeck equations for a mixture of
gases with internal degrees of freedom. Here the states are assumed
nondegenerate for simplicity again.
Again we choose a number $R$ big enough that only for very few pairs (say
fewer than one in a thousand) will $(\sum_{i}\sum_{j}u\Sigma _{ij,pq}^{\alpha
\beta ,pq})/R$ exceed unity. We choose $n=RN^{2}t/2V$ random pairs. For
each pair we take a random number $r$ and we allow the collision to happen
if $r<(\sum_{i}\sum_{j}u\Sigma _{ij,pq}^{\alpha \beta ,pq})/R$. If the collision
is allowed we choose the final state $(i,j)$ with probability $\Sigma
_{ij,pq}^{\alpha \beta ,pq}/(\sum_{i}\sum_{j}\Sigma _{ij,pq}^{\alpha \beta
,pq})$, using another random number. Finally
we choose the direction of scattering $\mathbf{n}^{\prime }$ according to
the probability density $\sigma _{ij,pq}^{\alpha \beta ,pq}(\mathbf{n},%
\mathbf{n}^{\prime })/\Sigma _{ij,pq}^{\alpha \beta ,pq}$, using a few more
random numbers. Then we calculate and store the final
velocities and state indices for the colliding pair and go on to choose the
next pair.
Note that the normalization of $f_{i}^{p}(\mathbf{v})\,$is given by
\begin{equation}
\sum_{p}\sum_{i}\int f_{i}^{p}(\mathbf{v})\,d^{3}\mathbf{v}=1. \label{h80}
\end{equation}
The expression $\sum_{i}\int f_{i}^{p}(\mathbf{v})d^{3}\mathbf{v}$ is
conserved during the simulation. From eq.(\ref{h65}) its rate of change is
\begin{eqnarray}
\frac{d}{d\tau }\sum_{i}\int f_{i}^{p}(\mathbf{v})d^{3}\mathbf{v}
&=&\sum_{i}\int \frac{\partial f_{i}^{p}(\mathbf{v})}{\partial \tau }\,d^{3}%
\mathbf{v}=\sum_{q=1}^{M}\sum_{\alpha }\sum_{\beta }\sum_{i}\sum_{j}
\label{h91} \\
&&\int [f^{q},f^{p}]_{ij}^{\alpha \beta }\,Q_{ij,pq}^{\alpha \beta ,pq}(%
\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})\,d^{3}\mathbf{v}%
_{A}\,d^{3}\mathbf{v}_{B}\,d^{3}\mathbf{v}_{C}\,d^{3}\mathbf{v} \nonumber
\end{eqnarray}
From symmetry and normalization of the kernel given in eqs.(\ref{a10},\ref
{a20},\ref{a30}) we have
\begin{eqnarray}
\sum_{i}\sum_{j}\int Q_{ij,pq}^{\alpha \beta ,pq}(\mathbf{v}_{A},\mathbf{v}%
_{B};\mathbf{v}_{C},\mathbf{v})\,d^{3}\mathbf{v}_{C}\,d^{3}\mathbf{v} &=&1
\label{h120} \\
\sum_{\alpha }\sum_{\beta }\int Q_{ij,pq}^{\alpha \beta ,pq}(\mathbf{v}_{A},%
\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})\,d^{3}\mathbf{v}_{A}\,d^{3}\mathbf{%
v}_{B} &=&1 \label{h121}
\end{eqnarray}
Using this, we express eq.(\ref{h91}) as
\begin{eqnarray}
\frac{d}{d\tau }\sum_{i}\int f_{i}^{p}(\mathbf{v})\,d^{3}\mathbf{v}
&=&\sum_{q=1}^{M}\sum_{\alpha }\sum_{\beta }\int f_{\beta }^{q}(\mathbf{v}%
_{A})\,f_{\alpha }^{p}(\mathbf{v}_{B})\,d^{3}\mathbf{v}_{A}\,d^{3}\mathbf{v}%
_{B} \label{h130} \\
&&-\sum_{q=1}^{M}\sum_{i}\sum_{j}\int f_{j}^{q}(\mathbf{v}_{C})\,f_{i}^{p}(%
\mathbf{v})\,d^{3}\mathbf{v}_{C}\,d^{3}\mathbf{v} \nonumber
\end{eqnarray}
These two terms are equal and they cancel each other, yielding constancy of $%
\sum_{i}\int f_{i}^{p}(\mathbf{v})\,d^{3}\mathbf{v}$. The number of molecules of
the $p^{th}\,$kind is
\begin{equation}
N_{p}=N\sum_{i}\int f_{i}^{p}(\mathbf{v})\,d^{3}\mathbf{v} \label{h140}
\end{equation}
and, as the above argument shows, it remains constant as it should. Hence
$F_{i}^{p}(\mathbf{v})$ is normalized as
\begin{equation}
\sum_{i}\int F_{i}^{p}(\mathbf{v})\,d^{3}\mathbf{v\,}d^{3}\mathbf{x=}N_{p},
\label{h150}
\end{equation}
where $\mathbf{x}$ is the position of the molecule.
\subsection{Relation to Kac's work}
Fifty years ago M. Kac \cite{Kac} introduced a master equation similar to
ours and derived the Boltzmann equation for a homogeneous gas from it. Here
we summarize his work and point out the similarities. We will use a notation
different from his.
Suppose we have $N$ particles of a gas contained in a volume $V$. Collisions
are assumed to take place randomly within the gas. Again we have a
probability distribution $f^{(N)}(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v%
}_{N};t)$ for their velocities. For brevity we will write this as $f^{(N)}(%
\mathbf{v};t)$ wherever convenient. The probability that the $i^{th}$ and $%
j^{th}$ particles having velocities $\mathbf{v}_{A}$ and $\mathbf{v}_{B}$
will collide and emerge with velocities $\mathbf{v}_{C}\,$and $\mathbf{v}%
_{D}$ in the phase space $d^{3}\mathbf{v}_{C}d^{3}\mathbf{v}_{D}$ in a time
interval $dt$ is $R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v}%
_{D})d^{3}\mathbf{v}_{C}d^{3}\mathbf{v}_{D}dt.$ Here $R(\mathbf{v}_{A},%
\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v}_{D})$ is a function connected to
the differential cross section, but we will not need the precise relation
until later. The total collision probability in a time interval $dt$ is $S(\mathbf{v}%
_{A},\mathbf{v}_{B})dt$ where $S(\mathbf{v}_{A},\mathbf{v}_{B})\,$is given
by
\begin{equation}
S(\mathbf{v}_{A},\mathbf{v}_{B})=\int R(\mathbf{v}_{A},\mathbf{v}_{B};%
\mathbf{v}_{C},\mathbf{v}_{D})d^{3}\mathbf{v}_{C}d^{3}\mathbf{v}_{D}.
\label{kac10}
\end{equation}
As usual we assume some symmetries for the $R(\mathbf{v}_{A},\mathbf{v}_{B};%
\mathbf{v}_{C},\mathbf{v}_{D})$ function:
\begin{eqnarray}
R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v}_{D}) &=&R(\mathbf{v%
}_{C},\mathbf{v}_{D};\mathbf{v}_{A},\mathbf{v}_{B}), \label{kac20} \\
R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v}_{D}) &=&R(\mathbf{v%
}_{B},\mathbf{v}_{A};\mathbf{v}_{D},\mathbf{v}_{C}). \label{kac30}
\end{eqnarray}
The $f^{(N)}(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{N};t)$ satisfies
the master equation
\begin{equation}
\frac{\partial f^{(N)}(\mathbf{v})}{\partial t}=-f^{(N)}(\mathbf{v}%
)\sum_{i=1}^{N}\sum_{j\neq i}^{N}S(\mathbf{v}_{i},\mathbf{v}%
_{j})+\sum_{i=1}^{N}\sum_{j\neq i}^{N}\int f_{ij}^{(N)}(\mathbf{v}_{A},%
\mathbf{v}_{B})R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{i},\mathbf{v}%
_{j})d^{3}\mathbf{v}_{A}d^{3}\mathbf{v}_{B} \label{kac40}
\end{equation}
In order to see where this comes from we write it for an infinitesimal time
interval $dt$:
\begin{eqnarray}
f^{(N)}(\mathbf{v;}t+dt) &=&f^{(N)}(\mathbf{v;}t)\left(
1-dt\sum_{i=1}^{N}\sum_{j\neq i}^{N}S(\mathbf{v}_{i},\mathbf{v}_{j})\right)
\label{kac50} \\
&&+dt\left( \sum_{i=1}^{N}\sum_{j\neq i}^{N}\int f_{ij}^{(N)}(\mathbf{v}_{A},%
\mathbf{v}_{B})R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{i},\mathbf{v}%
_{j})d^{3}\mathbf{v}_{A}d^{3}\mathbf{v}_{B}\right) . \nonumber
\end{eqnarray}
Let us multiply both sides by $d^{3}\mathbf{v}_{1}...d^{3}\mathbf{v}_{N}.$
Then $f^{(N)}(\mathbf{v;}t+dt)d^{3}\mathbf{v}_{1}...d^{3}\mathbf{v}_{N}$ is
the probability that the velocities are in the phase space volume $d^{3}%
\mathbf{v}_{1}...d^{3}\mathbf{v}_{N}$ at time $t+dt$. The first term on the
right is
\begin{equation}
\left( f^{(N)}(\mathbf{v;}t)d^{3}\mathbf{v}_{1}...d^{3}\mathbf{v}_{N}\right)
\left( 1-dt\sum_{i=1}^{N}\sum_{j\neq i}^{N}S(\mathbf{v}_{i},\mathbf{v}%
_{j})\right) . \label{kac60}
\end{equation}
The first parenthesis is the probability that the system was in the $d^{3}%
\mathbf{v}_{1}...d^{3}\mathbf{v}_{N}$ phase space volume at time $t$ and the
second parenthesis is the probability that no collisions occurred in the $dt$
time interval. Their product is the probability of arriving in the $d^{3}\mathbf{v}%
_{1}...d^{3}\mathbf{v}_{N}$ phase space volume at $t+dt$ without making a
collision. The second term on the right side gives the probabilities of arriving
in $d^{3}\mathbf{v}_{1}...d^{3}\mathbf{v}_{N}$ by making collisions between
different pairs. For example, let us write the $i=1,$ $j=2$ term:
\begin{equation}
\int \left( f^{(N)}(\mathbf{v}_{A},\mathbf{v}_{B},\mathbf{v}_{3},...,\mathbf{%
v}_{N})d^{3}\mathbf{v}_{A}d^{3}\mathbf{v}_{B}d^{3}\mathbf{v}_{3}...d^{3}%
\mathbf{v}_{N}\right) \left( R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{1},%
\mathbf{v}_{2})d^{3}\mathbf{v}_{1}d^{3}\mathbf{v}_{2}dt\right) .
\label{kac70}
\end{equation}
The first parenthesis under the integral is the probability that the system
was in the phase space volume $d^{3}\mathbf{v}_{A}d^{3}\mathbf{v}_{B}d^{3}%
\mathbf{v}_{3}...d^{3}\mathbf{v}_{N}$ at time $t$ and the second parenthesis
is the probability that the collision between particles one and two took
them to $d^{3}\mathbf{v}_{1}d^{3}\mathbf{v}_{2}$ phase space volume. If we
integrate this product over $\mathbf{v}_{A},\mathbf{v}_{B}$ we obtain the
probability of arriving in $d^{3}\mathbf{v}_{1}...d^{3}\mathbf{v}_{N}$ at
time $t+dt$ via a collision between particles one and two. To obtain the total
probability of arriving in $d^{3}\mathbf{v}_{1}...d^{3}\mathbf{v}_{N}$ at
time $t+dt$ via a collision we sum such terms over all possible pairs. This
argument shows clearly how the master equation is derived.
Writing $S(\mathbf{v}_{i},\mathbf{v}_{j})$ as
\begin{equation}
S(\mathbf{v}_{i},\mathbf{v}_{j})=\int R(\mathbf{v}_{A},\mathbf{v}_{B};%
\mathbf{v}_{j},\mathbf{v}_{i})d^{3}\mathbf{v}_{A}d^{3}\mathbf{v}_{B}
\label{kac80}
\end{equation}
the master equation can be written in a more symmetric form
\begin{equation}
\frac{\partial f^{(N)}(\mathbf{v})}{\partial t}=\sum_{i=1}^{N}\sum_{j\neq
i}^{N}\int \left( f_{ij}^{(N)}(\mathbf{v}_{A},\mathbf{v}_{B})-f^{(N)}(%
\mathbf{v})\right) R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{i},\mathbf{v}%
_{j})d^{3}\mathbf{v}_{A}d^{3}\mathbf{v}_{B} \label{kac90}
\end{equation}
All of the results we obtained from our master equation can be obtained for
this master equation too. Kac \cite{Kac} showed that the distribution goes to
the microcanonical distribution as $t\rightarrow \infty $. A hierarchy of
reduced probability equations can be obtained for this master equation too.
Kac \cite{Kac} showed that in the limit $N\rightarrow \infty $, if one starts
from an uncorrelated state at $t=0$, the system always remains uncorrelated. His
arguments were different from ours.
The first equation in the hierarchy (obtained by integrating over $\mathbf{v}%
_{2},\mathbf{v}_{3},...,\mathbf{v}_{N}$) is
\begin{equation}
\frac{\partial f^{(1)}(\mathbf{v})}{\partial t}=2N\int \left( f^{(2)}(%
\mathbf{v}_{A},\mathbf{v}_{B})-f^{(2)}(\mathbf{v,v}_{C})\right) R(\mathbf{v}%
_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})d^{3}\mathbf{v}_{A}d^{3}%
\mathbf{v}_{B}d^{3}\mathbf{v}_{C} \label{kac100}
\end{equation}
If we introduce the AMC this equation becomes
\begin{equation}
\frac{\partial f(\mathbf{v})}{\partial t}=2N\int \left( f(\mathbf{v}_{A})f(%
\mathbf{v}_{B})-f(\mathbf{v})f(\mathbf{v}_{C})\right) R(\mathbf{v}_{A},%
\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})d^{3}\mathbf{v}_{A}d^{3}\mathbf{v}%
_{B}d^{3}\mathbf{v}_{C}. \label{kac110}
\end{equation}
Here the superscript (1) is dropped and time $t$ is suppressed in $f^{(1)}(%
\mathbf{v;}t)$.
Now we go to the center of mass frame (eqs.(\ref{e4},\ref{e5})). In the CM
coordinates the $R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})$
is expressed as
\begin{equation}
R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v})=\frac{1}{V}\delta
\left( \mathbf{H}-\mathbf{H}^{\prime }\right) \,\delta \left(
u^{2}-(u^{\prime })^{2}\right) \,\sigma (\mathbf{n},\mathbf{n}^{\prime })
\label{kac140}
\end{equation}
where $V$ is the volume of the gas and $\sigma (\mathbf{n},\mathbf{n}%
^{\prime })$ is the differential cross section. Inserting this into eq.(\ref
{kac110}) and doing the integrals over the center of mass frame we obtain
\begin{equation}
\frac{\partial f(\mathbf{v})}{\partial t}=\frac{N}{V}\int \left[ f(\mathbf{v}%
_{A})\,f(\mathbf{v}_{B})-f(\mathbf{v}_{C})\,f(\mathbf{v})\right] \,u\,\sigma
(\mathbf{n},\mathbf{n}^{\prime })\,d^{3}\mathbf{u}\,d\mathbf{n}^{\prime },
\label{kac150}
\end{equation}
where $\mathbf{v}_{A},\mathbf{v}_{B},\mathbf{v}_{C}$ are expressed in terms
of the variables $\mathbf{v,u,n}^{\prime }$ in eqs.(\ref{e105},\ref{e106},%
\ref{e107}). If we write this equation for $F(\mathbf{v})=(N/V)f(\mathbf{v})$,
which is the velocity distribution normalized to the number of particles per unit
volume, we obtain the Boltzmann equation for a homogeneous gas
\begin{equation}
\frac{\partial F(\mathbf{v})}{\partial t}=\int \left[ F(\mathbf{v}_{A})\,F(%
\mathbf{v}_{B})-F(\mathbf{v}_{C})\,F(\mathbf{v})\right] \,u\,\sigma (\mathbf{%
n},\mathbf{n}^{\prime })\,d^{3}\mathbf{u}\,d\mathbf{n}^{\prime }.
\label{kac160}
\end{equation}
Although both master equations have similar structures, their philosophies
are different. In Kac's work the collisions happen randomly and
spontaneously in the gas, whereas in direct simulation we take pairs and
force them to collide. Direct simulation has applications to systems other
than gases, as we showed in the money games examples. In these systems there
are no physical processes driving the collisions; instead we make the
collisions ourselves. Kac's motivation was to describe the Boltzmann equation
for gases as a stochastic equation, and the DSMC method had not been invented
yet. Just as in our work, Kac's method can be generalized to molecular gases
and gas mixtures, and one can obtain Boltzmann equations for these cases by
defining a suitable $R(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C},\mathbf{v%
})$ for each case.
\section{Direct simulation for an inhomogeneous gas}
In this section we study an NTC algorithm of the DSMC method for an
inhomogeneous gas. We will not actually derive Bird's algorithm; instead we
will define a similar algorithm to simulate an inhomogeneous gas. We will show
that the single particle probability distribution of our algorithm satisfies
the Boltzmann equation for an inhomogeneous gas. Then we will argue that both
algorithms give the same results in the limit $N\rightarrow \infty $.

We divide the physical space into cells as in Bird's method. In our
algorithm we take pairs not from the same cell but from the whole volume,
and we let a pair make a collision attempt only if both particles are in the
same cell.
Let the $k^{th}$ cell have volume $V_{k}$. Now let us define the functions
\begin{equation}
\Delta _{k}(\mathbf{x})=\left\{
\begin{array}{ll}
1\quad & \mathbf{x}\in V_{k} \\
0\quad & \mathbf{x}\notin V_{k}
\end{array}
\right. . \label{k10}
\end{equation}
We will also need the function
\begin{equation}
\Gamma (\mathbf{x},\mathbf{x}^{\prime })=\sum_{k}\frac{\Delta _{k}(\mathbf{x}%
)\Delta _{k}(\mathbf{x}^{\prime })}{V_{k}}. \label{k20}
\end{equation}
This function is zero when $\mathbf{x}$ and $\mathbf{x}^{\prime }$ are not
in the same cell and $1/V_{k}$ when they are in the same cell. Its integral
over $\mathbf{x}$ or $\mathbf{x}^{\prime }$ is unity
\begin{equation}
\int \Gamma (\mathbf{x},\mathbf{x}^{\prime })d^{3}\mathbf{x}^{\prime }=\int
\Gamma (\mathbf{x},\mathbf{x}^{\prime })d^{3}\mathbf{x}=1. \label{k30}
\end{equation}
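For a uniform one-dimensional grid with cell width $\Delta x$ (so $V_{k}=\Delta x$), $\Gamma $ can be sketched as follows (a toy illustration of ours; the three-dimensional case is identical with a vector cell index):
\begin{verbatim}
import numpy as np

def Gamma(x, x_prime, dx):
    # 1/V_k when x and x' share a cell of width dx, zero otherwise.
    same_cell = int(np.floor(x / dx)) == int(np.floor(x_prime / dx))
    return 1.0 / dx if same_cell else 0.0
\end{verbatim}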
At the end of this section we will take the limit $V_{k}\rightarrow 0$. In
this limit $\Gamma (\mathbf{x},\mathbf{x}^{\prime })=0$ for $\mathbf{x}\neq
\mathbf{x}^{\prime }$ and $\Gamma (\mathbf{x},\mathbf{x}^{\prime })=\infty $
for $\mathbf{x}=\mathbf{x}^{\prime }$ and eq.(\ref{k30}) is still satisfied.
These are properties of the Dirac delta function and we have the limit
\begin{equation}
\lim_{V_{k}\rightarrow 0}\Gamma (\mathbf{x},\mathbf{x}^{\prime })=\delta (%
\mathbf{x}-\mathbf{x}^{\prime }) \label{k40}
\end{equation}
Now we can start the discussion. We will treat the simplest case for
clarity. We develop our arguments for one kind of gas without internal
degrees of freedom. The generalization to the other cases is very
straightforward and will be summarized at the end of the section.
The state index $\mu $ represents the position $\mathbf{x}$ and the velocity
$\mathbf{v}$ of the particle. The collision kernel is $Z=Z_{1}+Z_{2}$ where $%
Z_{1}$ and $Z_{2}$ are
\begin{eqnarray}
Z_{1}(\mathbf{x}_{A}\mathbf{v}_{A},\mathbf{x}_{B}\mathbf{v}_{B};\mathbf{x}%
_{C}\mathbf{v}_{C},\mathbf{x}_{D}\mathbf{v}_{D}) &=&S(\mathbf{v}_{A},\mathbf{%
v}_{B};\mathbf{v}_{C}\mathbf{,v}_{D})\, \label{k50} \\
&&\times \Gamma (\mathbf{x}_{A},\mathbf{x}_{B})\,\Omega \,\delta (\mathbf{x}%
_{C}-\mathbf{x}_{A})\,\delta (\mathbf{x}_{D}-\mathbf{x}_{B})\,, \nonumber
\end{eqnarray}
and
\begin{eqnarray}
Z_{2}(\mathbf{x}_{A}\mathbf{v}_{A},\mathbf{x}_{B}\mathbf{v}_{B};\mathbf{x}%
_{C}\mathbf{v}_{C},\mathbf{x}_{D}\mathbf{v}_{D}) &=&\,\,\left( 1-\Omega
\,\Gamma (\mathbf{x}_{A},\mathbf{x}_{B})\right) \delta (\mathbf{x}_{C}-%
\mathbf{x}_{A})\,\,\, \label{k61} \\
&&\,\,\times \delta (\mathbf{x}_{D}-\mathbf{x}_{B})\,\delta (\mathbf{v}_{C}-%
\mathbf{v}_{A})\,\delta (\mathbf{v}_{D}-\mathbf{v}_{B}). \nonumber
\end{eqnarray}
Here
\begin{equation}
\Omega \,=\left( \sum_{k}\frac{1}{V_{k}}\right) ^{-1}, \label{k70}
\end{equation}
is a constant chosen to ensure that the probability of making a collision in any
cell is less than unity. The $S(\mathbf{v}_{A},\mathbf{v}_{B};\mathbf{v}_{C}%
\mathbf{,v}_{D})$ is given in eqs.(\ref{e20},\ref{e30}). The $Z_{2}$ does
not change the states of the particles, and the pair is not allowed
to make a collision attempt with probability $\left( 1-\Omega \,\Gamma (%
\mathbf{x}_{A},\mathbf{x}_{B})\right) .$ The probability of a collision
attempt is $\Omega \Gamma (\mathbf{x}_{A},\mathbf{x}_{B})$, and in a real
collision the positions of the particles do not change because of the $\delta (%
\mathbf{x}_{C}-\mathbf{x}_{A})\,\delta (\mathbf{x}_{D}-\mathbf{x}_{B})$ term
in the $Z$. The $Z(\mathbf{x}_{A}\mathbf{v}_{A},\mathbf{x}_{B}\mathbf{v}_{B};%
\mathbf{x}_{C}\mathbf{v}_{C},\mathbf{x}_{D}\mathbf{v}_{D})$ is symmetric and
satisfies the normalization condition
\begin{eqnarray}
\int Z(\mathbf{x}_{A}\mathbf{v}_{A},\mathbf{x}_{B}\mathbf{v}_{B};\mathbf{x}%
_{C}\mathbf{v}_{C},\mathbf{x}_{D}\mathbf{v}_{D})d^{3}\mathbf{v}_{A}d^{3}%
\mathbf{v}_{B}d^{3}\mathbf{x}_{A}d^{3}\mathbf{x}_{B} &=&1, \label{k81} \\
\int Z(\mathbf{x}_{A}\mathbf{v}_{A},\mathbf{x}_{B}\mathbf{v}_{B};\mathbf{x}%
_{C}\mathbf{v}_{C},\mathbf{x}_{D}\mathbf{v}_{D})d^{3}\mathbf{v}_{C}d^{3}%
\mathbf{v}_{D}d^{3}\mathbf{x}_{C}d^{3}\mathbf{x}_{D} &=&1. \label{k82}
\end{eqnarray}
The $\Omega \Gamma (\mathbf{x}_{A},\mathbf{x}_{B})$ vanishes unless $\mathbf{%
x}_{A}$ and $\mathbf{x}_{B}$ are in the same cell, and $\Omega \Gamma (%
\mathbf{x}_{A},\mathbf{x}_{B})=\Omega /V_{k}$ when $\mathbf{x}_{A}$ and $%
\mathbf{x}_{B}$ are in the cell $V_{k}$. The probability of having both
particles in the cell $V_{k}$ is $(N_{k}/N)^{2}$ where $N_{k}$ is the number
of particles in the cell $V_{k}$ during the collisions part of the
simulation. Therefore the probability of a pair making a collision attempt
in the $k^{th}$ cell is $p_{k}=(\Omega /V_{k})(N_{k}/N)^{2}$. The $1/V_{k}\,$%
term looks awkward in this probability but it is absolutely necessary, as the
following argument shows. Suppose the physical density is uniform, so that
$N_{k}/N=V_{k}/V$ where $V$ is the total volume. When the density is
uniform we expect the probability of having a collision in $V_{k}$ to be
proportional to $V_{k}.$ When $N_{k}/N=V_{k}/V$ is inserted in $p_{k}\,$we
find $p_{k}=\Omega V_{k}/V^{2}$, which is proportional to $V_{k}$ as expected.
Now we insert the kernel $Z\,$in the eq.(\ref{a130}) to obtain
\begin{eqnarray}
\frac{\partial f(\mathbf{xv},\tau \mathbf{)}}{\partial \tau } &=&\int
[f,f]\,Z(\mathbf{x}_{A}\mathbf{v}_{A},\mathbf{x}_{B}\mathbf{v}_{B};\mathbf{x}%
_{C}\mathbf{v}_{C},\mathbf{xv}) \label{k90} \\
&&\times d^{3}\mathbf{v}_{A}\,d^{3}\mathbf{v}_{B}\,d^{3}\mathbf{v}_{C}\,d^{3}%
\mathbf{x}_{A}\,d^{3}\mathbf{x}_{B}\,d^{3}\mathbf{x}_{C} \nonumber
\end{eqnarray}
where $[f,f]\,$is
\begin{equation}
\lbrack f,f]\,=f(\mathbf{x}_{A}\mathbf{v}_{A},\tau )f(\mathbf{x}_{B}\mathbf{v%
}_{B},\tau )-f(\mathbf{x}_{C}\mathbf{v}_{C},\tau )f(\mathbf{xv},\tau ).
\label{k100}
\end{equation}
The $Z_{2}$ part of the collision kernel does not contribute to the
collision integral. After doing the delta function integrals over positions $%
\mathbf{x}_{A}\,,\mathbf{x}_{B}$ we obtain
\begin{eqnarray}
\frac{\partial f(\mathbf{xv},\tau \mathbf{)}}{\partial \tau } &=&\Omega \int
\left[ f(\mathbf{x}^{\prime }\mathbf{v}_{A},\tau )f(\mathbf{xv}_{B},\tau )-f(%
\mathbf{x}^{\prime }\mathbf{v}_{C},\tau )f(\mathbf{xv},\tau )\right]
\label{k110} \\
&&\times \Gamma (\mathbf{x},\mathbf{x}^{\prime })\,S(\mathbf{v}_{A},\mathbf{v%
}_{B};\mathbf{v}_{C}\mathbf{,v})d^{3}\mathbf{v}_{A}\,d^{3}\mathbf{v}%
_{B}\,d^{3}\mathbf{v}_{C}\,d^{3}\mathbf{x}^{\prime }. \nonumber
\end{eqnarray}
Now we insert $S=S_{1}+S_{2}$ from eqs.(\ref{e20},\ref{e30}) in this
equation. The $S_{2}$ part gives no contribution to the integral as before.
Doing the integrals over $\mathbf{v}_{A}\,,\mathbf{v}_{B}\,,\mathbf{v}_{C}\,$
in the center of mass coordinates we obtain
\begin{eqnarray}
\frac{\partial f(\mathbf{xv},\tau \mathbf{)}}{\partial \tau } &=&\frac{%
\Omega }{R}\int \left[ f(\mathbf{x}^{\prime }\mathbf{v}_{A},\tau )f(\mathbf{%
xv}_{B},\tau )-f(\mathbf{x}^{\prime }\mathbf{v}_{C},\tau )f(\mathbf{xv},\tau
)\right] \label{k113} \\
&&\times \Gamma (\mathbf{x},\mathbf{x}^{\prime })\,\sigma (\mathbf{n,n}%
^{\prime })\,d^{3}\mathbf{u}\,d\mathbf{n}^{\prime }\,d^{3}\mathbf{x}^{\prime
}, \nonumber
\end{eqnarray}
where $\mathbf{v}_{A},\mathbf{v}_{B},\mathbf{v}_{C}$ are given in eqs.(\ref
{e105},\ref{e106},\ref{e107}). In order to have complete correspondence with
the Boltzmann equation we define the new function $F(\mathbf{xv},\tau )=Nf(%
\mathbf{xv},\tau \mathbf{)}$ and we also define the new variable $t=\Omega
\tau /RN=2\Omega n/RN^{2}$ to obtain
\begin{equation}
\frac{\partial F(\mathbf{xv},t\mathbf{)}}{\partial t}=\widehat{L}_{C}F(%
\mathbf{xv,}t\mathbf{)} \label{k122}
\end{equation}
where the operator $\widehat{L}_{C}$ is defined as
\begin{eqnarray}
\widehat{L}_{C}F(\mathbf{xv,}t\mathbf{)} &=&\ \int \left[ F(\mathbf{x}%
^{\prime }\mathbf{v}_{A},t)F(\mathbf{xv}_{B},t)-F(\mathbf{x}^{\prime }%
\mathbf{v}_{C},t)F(\mathbf{xv},t)\right] \label{k150} \\
&&\times \Gamma (\mathbf{x},\mathbf{x}^{\prime })\,\sigma (\mathbf{n,n}%
^{\prime })\,d^{3}\mathbf{u}\,d\mathbf{n}^{\prime }\,d^{3}\mathbf{x}^{\prime
}. \nonumber
\end{eqnarray}
Here $t$ is interpreted as the physical time.
In the collisions part of the DSMC\ method we make collision attempts for a
time $\Delta t$, where $\Delta t$ is a small time interval. This corresponds
to a collision time passage of $\Delta \tau =RN\Delta t/\Omega $, or $\Delta
n=RN^{2}\Delta t/2\Omega $ pairs chosen. From eq.(\ref{k122}), after making $%
\Delta n$ collision attempts $F(\mathbf{xv},t\mathbf{)}$ becomes $F^{*}(%
\mathbf{xv},t\mathbf{)}$%
\begin{equation}
F^{*}(\mathbf{xv},t\mathbf{)=}(1+\Delta t\widehat{L}_{C})F(\mathbf{xv},t%
\mathbf{)+}O((\Delta t)^{2}) \label{m10}
\end{equation}
where $O((\Delta t)^{2})$ is an error term of order $(\Delta t)^{2}$.
Next we perform the free propagation step, where the transformation $\mathbf{x\rightarrow x+}\Delta t%
\mathbf{v}$ and $\mathbf{v\rightarrow v+}\Delta t\mathbf{a}$
is made for each particle. Here $\mathbf{a=F/}m$ is the acceleration of the
particle due to the force $\mathbf{F}$, and it can depend on both the position
and the velocity of the particle. This changes the $N$ particle distribution
function $f^{(N)}(\mathbf{x}_{1},\mathbf{v}_{1};\mathbf{x}_{2},\mathbf{v}%
_{2};...,\mathbf{x}_{N},\mathbf{v}_{N})$ to
\begin{equation}
f^{(N)}(\mathbf{x}_{1}-\Delta t\mathbf{v}_{1},\mathbf{v}_{1}-\Delta t\mathbf{%
a}_{1};...;\mathbf{x}_{N}-\Delta t\mathbf{v}_{N},\mathbf{v}_{N}-\Delta t%
\mathbf{a}_{N}). \label{m20}
\end{equation}
The Jacobian of the transformation is unity with a correction of order $(\Delta t)^{2}$ and therefore this expression is correct with an error of
the same order. Integrating this over $\mathbf{x}_{2},\mathbf{v}_{2};...,%
\mathbf{x}_{N},\mathbf{v}_{N}$ we find that the single particle probability
distribution $f^{(1)}(\mathbf{x},\mathbf{v)}$ changes to $f^{(1)}(\mathbf{x-}%
\Delta t\mathbf{v},\mathbf{v-}\Delta t\mathbf{a)}$ with an error term of
order $(\Delta t)^{2}$. Therefore $F^{*}(\mathbf{x},\mathbf{v},t\mathbf{)}$
becomes $F^{*}(\mathbf{x-}\Delta t\mathbf{v},\mathbf{v-}\Delta t\mathbf{a,}t%
\mathbf{)}$ which is taken as $F(\mathbf{x},\mathbf{v},t+\Delta t\mathbf{)}$%
. Hence
\begin{equation}
F(\mathbf{x},\mathbf{v},t+\Delta t\mathbf{)}=F^{*}(\mathbf{x-}\Delta t%
\mathbf{v},\mathbf{v-}\Delta t\mathbf{a,}t\mathbf{).} \label{m31}
\end{equation}
Using eq.(\ref{m10}) and expanding $F(\mathbf{x-}\Delta t\mathbf{v},\mathbf{%
v-}\Delta t\mathbf{a,}t\mathbf{)}$ up to first order terms in $\Delta t$ we
obtain
\begin{equation}
F(\mathbf{x},\mathbf{v},t+\Delta t\mathbf{)}=\left( 1-\Delta t\mathbf{v}%
\frac{\partial }{\partial \mathbf{x}}-\Delta t\mathbf{a}\frac{\partial }{%
\partial \mathbf{v}}+\Delta t\widehat{L}_{C}\right) F(\mathbf{x},\mathbf{v}%
,t)+O((\Delta t)^{2}) \label{m35}
\end{equation}
where $O((\Delta t)^{2})$ is the error terms of order $(\Delta t)^{2}$.
Taking the limit $\Delta t\rightarrow 0$ we obtain
\begin{equation}
\frac{\partial F(\mathbf{x},\mathbf{v},t\mathbf{)}}{\partial t}+\mathbf{%
v\cdot }\frac{\partial F(\mathbf{x},\mathbf{v},t\mathbf{)}}{\partial \mathbf{%
x}}+\frac{\mathbf{F}}{m}\mathbf{\cdot }\frac{\partial F(\mathbf{x},\mathbf{v}%
,t\mathbf{)}}{\partial \mathbf{v}}=\widehat{L}_{C}F(\mathbf{x},\mathbf{v},t%
\mathbf{)} \label{m40}
\end{equation}
This equation is similar to the Boltzmann equation but it is not the same.
Already when treating $\tau =2n/N$ as a continuous parameter we took the $%
N\rightarrow \infty $ limit implicitly. The remaining limit is $%
V_{k}\rightarrow 0$, and we know that $\Gamma (\mathbf{x},\mathbf{x}^{\prime
})\,\rightarrow \delta (\mathbf{x}-\mathbf{x}^{\prime })$ in this limit.
After setting $\Gamma (\mathbf{x},\mathbf{x}^{\prime })\,=\delta (\mathbf{x}-%
\mathbf{x}^{\prime })$ and performing the $\mathbf{x}^{\prime }$ integral, the
operator $\widehat{L}_{C}$ reduces to
\begin{equation}
\widehat{L}_{C}F(\mathbf{xv)}=\int \left[ F(\mathbf{xv}_{A},t)F(\mathbf{xv}%
_{B},t)-F(\mathbf{xv}_{C},t)F(\mathbf{xv},t)\right] \sigma (\mathbf{n,n}%
^{\prime })\,d^{3}\mathbf{u}\,d\mathbf{n}^{\prime }\,. \label{m51}
\end{equation}
With this form of $\widehat{L}_{C}$, eq.(\ref{m40}) is the Boltzmann
equation.
Hence we have shown that in the direct simulation algorithm for an inhomogeneous
gas the one particle probability distribution satisfies the Boltzmann
equation. Now, how do we connect this to Bird's NTC\ algorithm? Clearly
they are not the same. In fact our algorithm is not practical, since the great
majority of chosen pairs will not be in the same cell and therefore will not
make collisions.
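For completeness, here is a minimal Python sketch of one step of this (intentionally impractical) whole-volume pairing algorithm for a hard-sphere gas; \texttt{cell\_of}, \texttt{cell\_volume} and \texttt{accel} are hypothetical helpers supplying the cell index of a position, the cell volumes $V_{k}$ and the acceleration $\mathbf{F}/m$:
\begin{verbatim}
import numpy as np

def step(pos, vel, dt, Omega, R, Sigma, cell_of, cell_volume, accel, rng):
    # Collision part: Delta_n = R*N^2*dt/(2*Omega) pairs from the whole gas.
    N = len(vel)
    for _ in range(int(round(R * N**2 * dt / (2.0 * Omega)))):
        i, j = rng.choice(N, size=2, replace=False)
        ki, kj = cell_of(pos[i]), cell_of(pos[j])
        if ki != kj:
            continue                              # Gamma = 0: no attempt
        if rng.random() >= Omega / cell_volume[ki]:
            continue                              # attempt prob. Omega/V_k
        u = np.linalg.norm(vel[i] - vel[j])
        if rng.random() < u * Sigma / R:          # NTC acceptance
            mu = 2.0 * rng.random() - 1.0
            phi = 2.0 * np.pi * rng.random()
            s = np.sqrt(1.0 - mu * mu)
            n_prime = np.array([s * np.cos(phi), s * np.sin(phi), mu])
            H = 0.5 * (vel[i] + vel[j])
            vel[i] = H + 0.5 * u * n_prime
            vel[j] = H - 0.5 * u * n_prime
    # Free propagation: x -> x + dt*v, v -> v + dt*a.
    pos += dt * vel
    vel += dt * accel(pos, vel)
    return pos, vel
\end{verbatim}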
In the time interval $\Delta t\,$we choose $\Delta n=RN^{2}\Delta
t/2\Omega $ pairs. The probability that each pair will make a collision
attempt in the $k^{th}$ cell is $p_{k}=(\Omega /V_{k})(N_{k}/N)^{2}.$ Let $%
n_{k}$ be the number of collision attempts that take place in $V_{k}$. The
expected value of $n_{k}$ is
\begin{equation}
\overline{n}_{k}=\Delta n\cdot p_{k}=\frac{RN_{k}^{2}}{2V_{k}}\Delta t.
\label{m60}
\end{equation}
This is the same as the number of collision attempts in $V_{k}$ in Bird's
algorithm. The difference is that in Bird's algorithm the number of collision
attempts in each cell is fixed at $n_{k}=RN_{k}^{2}\Delta t/2V_{k}$, whereas
in our algorithm $n_{k}$ has a probability distribution with mean
value $RN_{k}^{2}\Delta t/2V_{k}$. The probability distribution for $n_{k}$
is given by
\begin{equation}
P(n_{k})=\frac{(\Delta n)!}{(\Delta n-n_{k})!\,(n_{k})!}%
(p_{k})^{n_{k}}(1-p_{k})^{\Delta n-n_{k}}. \label{m71}
\end{equation}
In the limit $V_{k}\rightarrow 0$ we have $p_{k}\rightarrow 0$ and $%
P(n_{k})$ becomes the Poisson probability distribution
\begin{equation}
P(n_{k})=\frac{(\overline{n}_{k})^{n_{k}}}{(n_{k})!}\exp (-\overline{n}_{k}).
\label{m80}
\end{equation}
The width of distributions in eqs.(\ref{m71},\ref{m80}) is of order $\sqrt{%
\overline{n}_{k}}$. For large values of $\overline{n}_{k}$ we have $n_{k}/%
\overline{n}_{k}=1+O(1/\sqrt{\overline{n}_{k}})$ where $O(1/\sqrt{\overline{n%
}_{k}})$ is a term of order $1/\sqrt{\overline{n}_{k}}$.
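A quick numerical check of this binomial-to-Poisson limit, holding the mean $\overline{n}_{k}$ fixed while $p_{k}\rightarrow 0$ (the numbers are our illustration only):
\begin{verbatim}
from math import comb, exp, factorial

def binom_pmf(n, trials, p):
    return comb(trials, n) * p**n * (1.0 - p)**(trials - n)

def poisson_pmf(n, mean):
    return mean**n * exp(-mean) / factorial(n)

mean = 5.0                      # nbar_k held fixed
for trials in (10, 100, 10000):
    p = mean / trials           # p_k -> 0 as the number of attempts grows
    err = max(abs(binom_pmf(n, trials, p) - poisson_pmf(n, mean))
              for n in range(25))
    print(trials, err)          # the maximum difference shrinks
\end{verbatim}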
Now we take the limit $N_{k}\rightarrow \infty $ and the $O(1/\sqrt{\overline{n}%
_{k}})$ error term vanishes. In more mathematical language, the probability
that $n_{k}/\overline{n}_{k}=1$ is unity. Hence both methods approach each
other in the limit $N_{k}\rightarrow \infty $, and the single particle
probability distribution in Bird's method too should satisfy the Boltzmann
equation (eq.(\ref{m40})) in this limit.
There is an important distinction in the limits taken for the two methods to
satisfy the Boltzmann equation. In our algorithm we take the $N\rightarrow
\infty ,$ $\Delta t\rightarrow 0$ and $V_{k}\rightarrow 0$ limits. This does
not mean that the number of particles in each cell ($N_{k}$) goes to
infinity. For example, for a uniform density we have $N_{k}=(N/V)V_{k}.$ Here
the $N\rightarrow \infty $ and $V_{k}\rightarrow 0$ limits do not imply
anything about $N_{k}$; $NV_{k}$ can remain finite or even go to zero,
and still our algorithm satisfies the Boltzmann equation. Bird's
algorithm, however, requires $N_{k}\rightarrow \infty $ to satisfy the Boltzmann
equation, and this is a more stringent requirement.
We did this analysis for the simplest case of one kind of gas without
internal degrees of freedom for clarity. It is very simple to generalize
it to the other cases by replacing the kernel $S$ in eq.(\ref{k50}) with $%
G_{pq}^{rs}$ in eq.(\ref{f20}), with $K_{ij}^{\alpha \beta }$ in eq.(\ref
{g21}), or with $Q_{ij,pq}^{\alpha \beta ,rs}$ in eq.(\ref{h21}). Then the
Boltzmann equation is replaced by the Wang Chang-Uhlenbeck equation but
all of the arguments remain the same.
\section{Conclusions}
Let us list our contributions in this paper.
\begin{itemize}
\item In this paper we introduced a general formalism for direct simulation
processes. We defined direct simulation as a Markov process with a
master equation and we found the master equation given in eq.(\ref{a51}).
To our knowledge, a definition of the DSMC\ algorithm as a stochastic process
governed by a master equation does not exist in the literature of the DSMC\
method.
\item Starting from the master equation we showed that the N-particle
probability density evolves towards the microcanonical distribution as the
number of collisions goes to infinity.
\item We derived a hierarchy of equations, similar to the BBGKY hierarchy,
for the reduced probability densities given in eq.(\ref{a60}).
\item We showed that if the AMC\ approximation is employed the single particle
probability distribution satisfies the equation given in eq.(\ref{a101}). In
the limit $N\rightarrow \infty $ this reduces to eq.(\ref{a130}), which is an
equation similar to the Boltzmann equation.
\item We found the equations of the hierarchy in the limit $N\rightarrow
\infty $ (eq.(\ref{a134})) and showed that the ansatz $f^{(M)}(\mu
_{1},\mu _{2},...,\mu _{M};\tau )=f^{(1)}(\mu _{1}\mathbf{;}\tau
)\,f^{(1)}(\mu _{2}\mathbf{;}\tau )....f^{(1)}(\mu _{M}\mathbf{;}\tau )$
satisfies all the equations in the hierarchy provided that $f^{(1)}(\mu
\mathbf{;}\tau )$ satisfies eq.(\ref{a130}). This ensures that in the
limit $N\rightarrow \infty $ the AMC is satisfied for all times if one
starts from an uncorrelated initial state.
\item We gave two simple examples from direct simulation money games. The
discrete money game example has the nice feature that it is exactly solvable,
and we observe from the solution that the approach to equilibrium is
exponentially fast.
\item We obtained the H-theorem and the conservation of expectation values of
collision invariants. These results are familiar to most readers from
standard treatments of the Boltzmann equation. But it is worth repeating
them here because, although the equations are similar, in the direct
simulation setting they are applied to a wide variety of different problems,
not just to gases.
\item We applied the formalism to the direct simulation Monte Carlo method
for real homogeneous gases, which is a standard method to solve the Boltzmann
equation. Introducing appropriate kernels we obtained the NTC\ algorithm for a
homogeneous gas and we showed that the appropriately normalized single
particle probability distribution satisfies the Boltzmann equation for simple
homogeneous gases and the Wang Chang-Uhlenbeck equations for homogeneous
molecular gases and their mixtures. The derivation of the conservation of
$\int f^{p}(\mathbf{v})\,d^{3}\mathbf{v}$ for a mixture of gases without
internal degrees of freedom and of $\sum_{i}\int f_{i}^{p}(\mathbf{v})\,d^{3}%
\mathbf{v}$ for a mixture of gases with internal degrees of freedom should
also be familiar to the reader from standard treatments of the Boltzmann
equation. The novel feature of our derivation is the significant
simplification that the normalization of $T(\mu _{A},\mu _{B};\mu _{C},\mu
_{D})$ given in equations (\ref{a30},\ref{f111},\ref{f111a},\ref{h120},\ref
{h121}) provides in obtaining the result. If we tried to obtain the same
result from the Boltzmann equation we would have to use the argument that the
integrals in (\ref{f111},\ref{f111a},\ref{h120},\ref{h121}) are functions of
the collision invariants.
\item We introduced a new algorithm to do the DSMC\ calculations for an
inhomogeneous gas. To prevent any misunderstanding we stress that our
algorithm is not intended as a practical scheme to implement DSMC
calculations, since it wastes the great majority of the chosen pairs; we
introduced it as a tool to study the convergence of Bird's method. Bird's
algorithm does not easily fit in the direct simulation formalism presented in
this paper, whereas the algorithm we presented does. We showed that in our
algorithm the single particle probability distribution satisfies the
Boltzmann equation in the limits $N\rightarrow \infty ,$ $\Delta
t\rightarrow 0$ and $V_{k}\rightarrow 0$. We also showed that Bird's
algorithm converges to our algorithm if $N_{k}\rightarrow \infty $ is taken
in addition to the limits $\Delta t\rightarrow 0$ and $V_{k}\rightarrow 0$,
so Bird's algorithm has the more stringent requirements. Therefore we proved
indirectly that Bird's algorithm satisfies the Boltzmann equation in the
limits $N_{k}\rightarrow \infty $, $\Delta t\rightarrow 0$ and $%
V_{k}\rightarrow 0$.
\end{itemize}
The meaning of convergence here should be interpreted according to the
ensemble theory of statistical mechanics. We imagine a practically infinite
number of identical systems (computers with human operators) doing the same
direct simulation and call this the ensemble. The $f^{(1)}(\mu ;\tau )d\mu $
represents the ratio of the number of particles in $d\mu $ to the total number
of particles, averaged over the ensemble. When you perform a direct
simulation on a computer you are just one member of the ensemble. Your
results will show statistical fluctuations. But when you do the same
simulation many times, with different initial states chosen according to an
uncorrelated probability distribution $f^{(N)}(\mu _{1},\mu _{2},...,\mu
_{N};n=0)=\,h(\mu _{1})h(\mu _{2})....h(\mu _{N})$, you form your own
ensemble and averages over it will nicely follow the $f^{(1)}(\mu ;\tau )$
obtained by solving eq.(\ref{a130}) with the initial value $f^{(1)}(\mu
;\tau =0)=h(\mu ) $.
This work can be generalized to chemical reactions and radiative processes in a
more or less straightforward fashion. But there are enough subtleties that we
leave them to future publications.
A simplified version of this paper \cite{AJP}, containing only one kind of
homogeneous gas without internal degrees of freedom, has been published in the
American Journal of Physics. The material in that paper makes up a small
fraction of the material in this paper; the present paper contains much new
material and the overlap between the two papers is small.
\section{Introduction}
In this paper we derive exact solutions for a diffusive predator-prey system \cite{pet},
\begin{eqnarray}
u_t &=& u_{xx}-\beta u+(1+\beta)u^2-u^3-uv \nonumber \\
v_t &=& v_{xx}+kuv-mv-\delta v^3
\label{qqq1}
\end{eqnarray}
where $k$, $\delta$, $m$ and $\beta$ are positive parameters. Subscripts $x$ and $t$ denote partial derivatives. The equations are expressed in dimensionless variables, where scalings of space and time have been introduced so as to put the equations in a simple form. The biological meaning of each term has been discussed in \cite{pet}, which we briefly review. The model is of the predator-prey kind, with $u$ and $v$ being the densities of prey and predator. Spatial redistribution of the population is governed by diffusive dynamics, with both species having the same diffusivities \cite{okubo}. In the absence of the predator, the temporal dynamics of the prey density is of the Allee type \cite{all,cour}, small populations not being viable. Presence of the predator population negatively affects the prey population. In turn, the predator population is totally dependent on prey availability, as the only term in the predator equation that represents the growth of the population is the $kuv$ term, where $k$ quantifies the gain in natality due to prey consumption. The parameters $\beta$ and $m$ represent the per capita mortality rates of prey and predator respectively, in the linear, small-populations limit. Finally, the $\delta v^3$ term is a closure relation taking into account the effects of higher trophic levels \cite{petr,steele}.
To investigate the dynamics of the above diffusive predator-prey system the authors of Ref. \cite{pet} have assumed the following relations between the parameters, namely $m=\beta$ and $k+\frac{1}{\sqrt{\delta}}=\beta+1$. In other words it has been assumed that the per capita mortality rates of prey and predator are equal and that the rate of biomass production at the lower level is consistent with the rate of biomass assimilation at the upper level of the web \cite{dub, she, owe}. Under this assumption Eq. (\ref{qqq1}) reads
\begin{eqnarray}
\label{qqq6}
u_t &=& u_{xx}-\beta u+(k+\frac{1}{\sqrt{\delta}})u^2-u^3-uv \nonumber \\
v_t &=& v_{xx}+kuv-\beta v-\delta v^3.
\end{eqnarray}
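For orientation, a minimal explicit finite-difference sketch of system (\ref{qqq6}) is given below (our illustration only; the aim of this paper is exact solutions, not numerics):
\begin{verbatim}
import numpy as np

def step(u, v, dt, dx, beta, k, delta):
    # One explicit Euler step for system (qqq6) with periodic boundaries.
    lap = lambda w: (np.roll(w, 1) + np.roll(w, -1) - 2.0 * w) / dx**2
    a = k + 1.0 / np.sqrt(delta)
    du = lap(u) - beta * u + a * u**2 - u**3 - u * v
    dv = lap(v) + k * u * v - beta * v - delta * v**3
    return u + dt * du, v + dt * dv
\end{verbatim}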
In our work we also consider only the predator-prey system (\ref{qqq6}). We construct exact analytic solutions to equation (\ref{qqq6}) in order to understand the properties of this model for different parametric values. To do so we employ the $\big(\frac{G'}{G}\big)$ expansion method \cite{bek,dai,wan,zay,zha}. This method has been applied to several nonlinear evolutionary equations. Here we demonstrate the utility of this method in exploring the dynamics of a diffusive predator-prey system. While implementing the $\big(\frac{G'}{G}\big)$ expansion method for equation (\ref{qqq6}) we obtain exact solutions for two different wave speeds (vide Eqs. (\ref{qqq17}) and (\ref{qqq18})). In both cases we give three different types of exact solutions.
The plan of the paper is as follows. To begin with, in Sec. 2, we describe the $\big(\frac{G'}{G}\big)$ expansion method. In Sec. 3, we consider Eq. (\ref{qqq6}) and derive exact solutions of it. Finally, we present our conclusion in Sec. 4.
\section{The $(\frac{G^{\prime}}{G})$-expansion method }
\label{sec:1}
In this section we briefly describe the method of finding exact solutions of a system of nonlinear partial differential equations (PDEs) using the $(\frac{G^{\prime}}{G})$-expansion method.
Suppose that the system of nonlinear PDEs is of the form
\begin{eqnarray}
\label{aaa1}
P(u,v,u_t,u_x,v_x,u_{tt},u_{xt},u_{xx},...)=0,\nonumber\\
Q(u,v,v_t,u_x,v_x,u_{tt},u_{xt},u_{xx},...)=0,
\end{eqnarray}
where $u=u(x,t)$ and $v=v(x,t)$ are two unknown functions and $P$ and $Q$ are polynomials in $u=u(x,t)$ and
$v=v(x,t)$ and their partial
derivatives. The $(\frac{G^{\prime}}{G})$ expansion method involves the following four steps.
${\bf Step\;1:}$ Let us introduce the travelling wave reduction
\begin{equation}
\label{aaa2}
u(x,t)=u(\xi), \;\; v(x,t)=v(\xi), \qquad \xi=x-ct,
\end{equation}
in the PDEs (\ref{aaa1}) so that the latter become
\begin{eqnarray}
\label{aaa3}
P(u,v,-cu',u',v',c^2u'',-cu'',u'',...)=0,\nonumber\\
Q(u,v,-cv',u',v',c^2u'',-cu'',u'',...)=0,
\end{eqnarray}
where prime denotes differentiation with respect to the new variable $\xi$.
${\bf Step\;2:}$ Suppose that the solution of $(\ref{aaa3})$ can be expressed by a polynomial in $(\frac{G'}{G})$, that is
\begin{eqnarray}
\label{aaa4}
u(\xi)&=& \alpha_m\Big(\frac{G'}{G}\Big)^m+\alpha_{m-1}\Big(\frac{G'}{G}\Big)^{m-1}
+\alpha_{m-2}\Big(\frac{G'}{G}\Big)^{m-2}+...,\nonumber\\
v(\xi)&=& \beta_n\Big(\frac{G'}{G}\Big)^n+\beta_{n-1}\Big(\frac{G'}{G}\Big)^{n-1}
+\beta_{n-2}\Big(\frac{G'}{G}\Big)^{n-2}+...,
\end{eqnarray}
where $G=G(\xi)$ is a solution of the second order linear damped harmonic oscillator equation
\begin{equation}
\label{aaa5}
G''+\lambda G'+\mu G=0.
\end{equation}
In the above $\alpha_m, \beta_n, \alpha_{m-1}, \beta_{n-1},\ldots, \alpha_0, \beta_0, \lambda,$ and $\mu$ are constants
and $\alpha_m \neq 0$, $\beta_n \neq 0$. The positive integers $m$ and $n$ can be determined by substituting (\ref{aaa4}) in (\ref{aaa3}) and considering the homogeneous
balance between the highest order derivative and nonlinear terms appearing in (\ref{aaa3}).
${\bf Step\;3:}$ Substituting (\ref{aaa4}) in (\ref{aaa3}) and eliminating $G''$ in the resultant equations by using (\ref{aaa5}), one gets two polynomial equations in $(\frac{G'}{G})$. Equating the coefficient of each power of $(\frac{G'}{G})$ to zero, one obtains a set of algebraic equations for the parameters $\alpha_m, \beta_n, \alpha_{m-1}, \beta_{n-1},\ldots, \alpha_0, \beta_0, \lambda,$ and $\mu$. Solving these algebraic equations one can get exact values for these coefficients.
${\bf Step\;4:}$ Substituting the values of $\alpha_m, \beta_n, \alpha_{m-1}, \beta_{n-1},\ldots, \alpha_0, \beta_0, \lambda, \mu$ and $c$, together with the general solution of (\ref{aaa5}), in (\ref{aaa4}), one obtains three different types of travelling wave solutions of the given system of nonlinear PDEs.
\section {Diffusive predator-prey system}
In this section, we apply the method described in the previous section to the nonlinear PDEs (\ref{qqq6})
and construct exact solutions. Substituting (\ref{aaa2}) into (\ref{qqq6}) we get the following system of ordinary differential equations (ODEs), namely
\begin{eqnarray}
\label{qqq9}
u''+cu'-\beta u+\big(k+\frac{1}{\sqrt{\delta}}\big) u^2-u^3-uv=0, \nonumber \\
v''+cv'+kuv-\beta v-\delta v^3=0.
\end{eqnarray}
Suppose that the solution of the ODEs (\ref{qqq9}) can be expressed by a polynomial in $(\frac{G'}{G})$ of the form (\ref{aaa4}).
Substituting (\ref{aaa4}) and its derivatives in (\ref{qqq9}) and performing the homogeneous balance between $u''$ and $u^3$, and between $v''$ and $v^3$, in the resultant equations, we find $m=1$ and $n=1$ (each derivative raises the degree in $\frac{G'}{G}$ by one, so $u''\sim(\frac{G'}{G})^{m+2}$ balances $u^3\sim(\frac{G'}{G})^{3m}$ when $m+2=3m$). So we fix the polynomials (\ref{aaa4}) to be of the form
\begin{eqnarray}
\label{qqq13}
u(\xi)=\alpha_1 \Big(\frac{G'}{G}\Big)+\alpha_0, \;\;
v(\xi)=\beta_1 \Big(\frac{G'}{G}\Big)+\beta_0, \;\; \alpha_1, \beta_1 \neq 0.
\end{eqnarray}
Substituting the expressions (\ref{qqq13}) and their derivatives in (\ref{qqq9}) and rearranging the resultant equations in descending powers of $\Big(\frac{G'}{G}\Big)$ we arrive at
\begin{align}
\label{qqq15}
[2\alpha_1-\alpha_1^3]\Big(\frac{G'}{G}\Big)^3+[3\alpha_1\lambda-c\alpha_1+k\alpha_1^2+\frac{\alpha_1^2}{\sqrt{\delta}}-3\alpha_1^2\alpha_0-\alpha_1\beta_1]\Big(\frac{G'}{G}\Big)^2 && \nonumber \\
+[(2\mu+\lambda^2)\alpha_1-c\lambda \alpha_1-\beta\alpha_1+2k\alpha_0\alpha_1+\frac{2\alpha_1\alpha_0}{\sqrt{\delta}}-3\alpha_0^2\alpha_1-\alpha_1\beta_0-\alpha_0\beta_1]\Big(\frac{G'}{G}\Big) && \nonumber \\
+(\mu \alpha_1\lambda-c\mu \alpha_1-\beta \alpha_0+k\alpha_0^2+\frac{\alpha_0^2}{\sqrt{\delta}}-\alpha_0^3-\alpha_0\beta_0) = 0, \qquad \qquad \qquad
\end{align}
\begin{eqnarray}
\label{qqq16}
[2\beta_1-\delta\beta_1^3]\Big(\frac{G'}{G}\Big)^3+[3\beta_1\lambda-c\beta_1+k\alpha_1\beta_1-3\delta\beta_1^2\beta_0]\Big(\frac{G'}{G}\Big)^2 && \nonumber \\
+[(2\mu+\lambda^2)\beta_1-c\lambda \beta_1-\beta\beta_1+k\alpha_0\beta_1+k\alpha_1\beta_0-3\delta\beta_0^2\beta_1]\Big(\frac{G'}{G}\Big) && \nonumber \\
+(\mu \beta_1\lambda-c\mu \beta_1-\beta \beta_0+k\alpha_0\beta_0-\delta\beta_0^3) = 0. \qquad \qquad
\end{eqnarray}
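These coefficient polynomials can be generated mechanically. The following is a minimal \texttt{sympy} sketch (our own illustration, not taken from Ref. \cite{pet}; all names are ours), built on the identity $(G'/G)'=-\mu-\lambda(G'/G)-(G'/G)^2$, which follows directly from (\ref{aaa5}):
\begin{verbatim}
import sympy as sp

xi = sp.symbols('xi')
a1, a0, b1, b0 = sp.symbols('alpha_1 alpha_0 beta_1 beta_0')
lam, mu, c, k, beta = sp.symbols('lambda mu c k beta')
delta = sp.symbols('delta', positive=True)

F = sp.Function('F')(xi)         # F stands for G'/G
Fp = -mu - lam*F - F**2          # F' follows from G'' + lam*G' + mu*G = 0

u = a1*F + a0
v = b1*F + b0

def D(expr):
    # d/dxi, eliminating F' in favour of F via Fp
    return sp.diff(expr, xi).subs(sp.Derivative(F, xi), Fp)

eq1 = D(D(u)) + c*D(u) - beta*u + (k + 1/sp.sqrt(delta))*u**2 - u**3 - u*v
eq2 = D(D(v)) + c*D(v) + k*u*v - beta*v - delta*v**3

for eq in (eq1, eq2):
    for coeff in sp.Poly(sp.expand(eq), F).all_coeffs():
        print(sp.expand(coeff), '= 0')
\end{verbatim}
The printed coefficients of $(\frac{G'}{G})^3,\ldots,(\frac{G'}{G})^0$ reproduce (\ref{qqq15}) and (\ref{qqq16}).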
Equating the coefficients of $(\frac{G'}{G})^j$, $j=0,1,2,3,$ to zero in equations (\ref{qqq15}) and (\ref{qqq16}) we get the following set of algebraic equations:
\begin{align}
\label{qqa17}
& 2\alpha_1-\alpha_1^3 =0, \nonumber \\
&3\alpha_1\lambda-c\alpha_1+k\alpha_1^2+\frac{\alpha_1^2}{\sqrt{\delta}}-3\alpha_1^2\alpha_0-\alpha_1\beta_1=0, \nonumber \\
&(2\mu+\lambda^2)\alpha_1-c\lambda \alpha_1-\beta\alpha_1+2k\alpha_0\alpha_1+\frac{2\alpha_1\alpha_0}{\sqrt{\delta}}-3\alpha_0^2\alpha_1-\alpha_1\beta_0-\alpha_0\beta_1 =0, \nonumber \\
&\mu \alpha_1\lambda-c\mu \alpha_1-\beta \alpha_0+k\alpha_0^2+\frac{\alpha_0^2}{\sqrt{\delta}}-\alpha_0^3-\alpha_0\beta_0 =0.\\
\label{qqa18}
&2\beta_1-\delta\beta_1^3 =0, \nonumber \\
&3\beta_1\lambda-c\beta_1+k\alpha_1\beta_1-3\delta\beta_1^2\beta_0 =0, \nonumber \\
&(2\mu+\lambda^2)\beta_1-c\lambda \beta_1-\beta\beta_1+k\alpha_0\beta_1+k\alpha_1\beta_0-3\delta\beta_0^2\beta_1 =0, \nonumber \\
&\mu \beta_1\lambda-c\mu \beta_1-\beta \beta_0+k\alpha_0\beta_0-\delta\beta_0^3 =0.
\end{align}
Solving the above system of algebraic equations (\ref{qqa17}) and (\ref{qqa18}) we obtain two sets of values for the constants $\alpha_1$, $\alpha_0$, $\beta_1$, $\beta_0$ and $c$:
\begin{eqnarray}
\label{qqq17}
(a)\quad \alpha_1=\pm \sqrt{2}, \; \beta_0=\frac{\alpha_0}{\sqrt{\delta}}, \; \beta_1=\pm \sqrt{\frac{2}{\delta}}, \; c=\mp \frac{k}{\sqrt{2}},\; \lambda=\mp \frac{k-2\alpha_0}{\sqrt{2}},\nonumber \\ \beta=k\alpha_0-\alpha_0^2+2\mu. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\end{eqnarray}
\begin{eqnarray}
\label{qqq18}
(b)\quad \alpha_1=\pm \sqrt{2},\; \beta_0=\frac{\alpha_0}{\sqrt{\delta}}, \; \beta_1=\pm \sqrt{\frac{2}{\delta}}, \; \lambda=\pm \frac{\alpha_0^2+2\mu}{\sqrt{2}\alpha_0},\qquad \qquad \quad \nonumber \\ \quad c=\pm \frac{1}{\sqrt{2}}\big(2k-3\alpha_0+\frac{6\mu}{\alpha_0}\big),\; \;\beta=-\frac{(\alpha_0^2-2\mu)(-k\alpha_0+\alpha_0^2-2\mu)}{\alpha_0^2}.
\end{eqnarray}
Since each of the two sets separately satisfies the algebraic equations $(\ref{qqa17})$ and $(\ref{qqa18})$, each forms a compatible solution, and from each set we derive an exact solution of the nonlinear PDEs (\ref{qqq6}).
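As an independent check (our own, assuming \texttt{sympy}), the snippet below substitutes set (a), with the upper signs taken throughout, into the eight algebraic equations $(\ref{qqa17})$--$(\ref{qqa18})$ and confirms that every residual vanishes identically:
\begin{verbatim}
import sympy as sp

a0, k = sp.symbols('alpha_0 k', real=True)
delta, mu = sp.symbols('delta mu', positive=True)

a1   = sp.sqrt(2)                    # set (a), upper signs
b1   = sp.sqrt(2/delta)
b0   = a0/sp.sqrt(delta)
c    = -k/sp.sqrt(2)
lam  = -(k - 2*a0)/sp.sqrt(2)
beta = k*a0 - a0**2 + 2*mu

eqs = [
    2*a1 - a1**3,
    3*a1*lam - c*a1 + k*a1**2 + a1**2/sp.sqrt(delta) - 3*a1**2*a0 - a1*b1,
    (2*mu + lam**2)*a1 - c*lam*a1 - beta*a1 + 2*k*a0*a1
        + 2*a1*a0/sp.sqrt(delta) - 3*a0**2*a1 - a1*b0 - a0*b1,
    mu*a1*lam - c*mu*a1 - beta*a0 + k*a0**2 + a0**2/sp.sqrt(delta)
        - a0**3 - a0*b0,
    2*b1 - delta*b1**3,
    3*b1*lam - c*b1 + k*a1*b1 - 3*delta*b1**2*b0,
    (2*mu + lam**2)*b1 - c*lam*b1 - beta*b1 + k*a0*b1 + k*a1*b0
        - 3*delta*b0**2*b1,
    mu*b1*lam - c*mu*b1 - beta*b0 + k*a0*b0 - delta*b0**3,
]
assert all(sp.simplify(e) == 0 for e in eqs)
print('set (a) satisfies all eight equations')
\end{verbatim}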
To begin with, let us take the values given in $(\ref{qqq17})$, with which the solution $(\ref{qqq13})$ reads
\begin{eqnarray}
\label{qqq19}
u(\xi)=\pm \sqrt{2}\bigg(\frac{G'}{G}\bigg)+\alpha_0, \;\;\;
v(\xi)=\pm \sqrt{\frac{2}{\delta}}\bigg(\frac{G'}{G}\bigg)+\frac{\alpha_0}{\sqrt{\delta}}.
\end{eqnarray}
It is known that the linear damped harmonic oscillator equation (\ref{aaa5}) admits three different types of solutions depending on the sign of $\lambda^2-4\mu$, namely
Case 1: $\;\lambda^2-4\mu > 0$
\begin{equation}
\label{qa17}
G(\xi) = e^{(-\lambda/2)\xi}\Big(c_1\sinh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi+ c_2\cosh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi\Big)
\end{equation}
Case 2:$\;\lambda^2-4\mu < 0$
\begin{equation}
\label{qa18}
G(\xi) = e^{(-\lambda/2)\xi}\Big(c_1\cos \frac{\sqrt{4\mu-\lambda^2}}{2}\xi+ c_2\sin \frac{\sqrt{4\mu-\lambda^2}}{2}\xi\Big)
\end{equation}
Case 3: $\lambda^2-4\mu = 0$
\begin{equation}
\label{qa19}
G(\xi) = (c_1+c_2\xi)e^{(-\lambda/2)\xi}.\qquad \qquad \qquad \qquad \qquad \qquad \quad
\end{equation}
Substituting (\ref{qa17})-(\ref{qa19}) into (\ref{qqq19}) we arrive at the following form of solutions, namely
Case 1: $\lambda^2-4\mu > 0$
\begin{align}
\label{qqq21}
\quad u(\xi)=\pm \sqrt{2}\Bigg(-\frac{\lambda}{2}+\frac{\sqrt{\lambda^2-4\mu}}{2}\Bigg(\frac{c_1\cosh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi+c_2\sinh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi}{c_1\sinh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi+ c_2\cosh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi}\Bigg)\Bigg)+\alpha_0,\nonumber \\
\quad v(\xi)=\pm \sqrt{\frac{2}{\delta}}\Bigg(-\frac{\lambda}{2}+\frac{\sqrt{\lambda^2-4\mu}}{2}\Bigg(\frac{c_1\cosh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi+c_2\sinh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi}{c_1\sinh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi+ c_2\cosh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi}\Bigg)\Bigg)+\frac{\alpha_0}{\sqrt{\delta}}.
\end{align}
\begin{figure}
\begin{center}
\includegraphics[width=.70\linewidth]{prey1}
\end{center}
\caption{The densities of prey (solid line) and predator (dashed line) as given by the exact solution (\ref{qqq21}), shown for time $t=0$, when $\lambda^2>4\mu$. Parameters are $\alpha_0=1.2$, $k=5.9$, $\delta=3$, $\mu=0.2$, $c_1=20$, $c_2=10$.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.70\linewidth]{prey2}
\end{center}
\caption{The densities of prey (solid line) and predator (dashed line) as given by the exact solution (\ref{qqq23}), shown for time $t=50$, when $\lambda^2<4\mu$. Parameters are $\alpha_0=3$, $k=12.2$, $\delta=2$, $\mu=5$, $c_1=20$, $c_2=-10$.}
\label{s2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.65\linewidth]{prey3}
\end{center}
\caption{The densities of prey (solid line) and predator (dashed line) as given by the exact solution (\ref{qqq25}), shown for time $t=10$, when $\lambda^2=4\mu$. Parameters are $\delta=3$, $k=2.03$, $\mu=1$, $c_1=20$, $c_2=10$.}
\end{figure}
Case 2: $\lambda^2-4\mu < 0$
\begin{align}
\label{qqq23}
\quad u(\xi)=\pm \sqrt{2}\Bigg(-\frac{\lambda}{2}+\frac{\sqrt{4\mu-\lambda^2}}{2}\Bigg(\frac{-c_1\sin \frac{\sqrt{4\mu-\lambda^2}}{2}\xi+c_2\cos \frac{\sqrt{4\mu-\lambda^2}}{2}\xi}{c_1\cos \frac{\sqrt{4\mu-\lambda^2}}{2}\xi+ c_2\sin \frac{\sqrt{4\mu-\lambda^2}}{2}\xi}\Bigg)\Bigg)+\alpha_0, \nonumber \\
\quad v(\xi)=\pm \sqrt{\frac{2}{\delta}}\Bigg(-\frac{\lambda}{2}+\frac{\sqrt{4\mu-\lambda^2}}{2}\Bigg(\frac{-c_1\sin \frac{\sqrt{4\mu-\lambda^2}}{2}\xi+c_2\cos \frac{\sqrt{4\mu-\lambda^2}}{2}\xi}{c_1\cos \frac{\sqrt{4\mu-\lambda^2}}{2}\xi+ c_2\sin \frac{\sqrt{4\mu-\lambda^2}}{2}\xi}\Bigg)\Bigg)+\frac{\alpha_0}{\sqrt{\delta}}.
\end{align}
Case 3: $\lambda^2-4\mu=0$
\begin{eqnarray}
\label{qqq25}
\quad u(\xi)=\pm \sqrt{2}\Bigg(\frac{c_2}{c_1+c_2\xi}-\frac{\lambda}{2}\Bigg)+\alpha_0, \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \nonumber \\
\quad v(\xi)=\pm \sqrt{\frac{2}{\delta}}\Bigg(\frac{c_2}{c_1+c_2\xi}-\frac{\lambda}{2}\Bigg)+\frac{\alpha_0}{\sqrt{\delta}}.\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
\end{eqnarray}
where $\alpha_0=-\frac{2\sqrt{2\mu}+k}{2}$,
$\xi=x\pm\Big(\frac{k}{\sqrt{2}}\Big)t$, and $c_1$ and $c_2$ are arbitrary constants.
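A minimal numerical sketch (ours; \texttt{numpy} and \texttt{matplotlib} assumed, with parameter values taken from the caption of Fig. 1 and upper signs in set (a)) evaluates the Case 1 profiles (\ref{qqq21}) at $t=0$; the profiles are singular wherever the denominator $c_1\sinh+c_2\cosh$ vanishes, in line with the remarks in the Conclusion:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

alpha0, k, delta, mu = 1.2, 5.9, 3.0, 0.2    # Fig. 1 caption
c1, c2 = 20.0, 10.0
lam = -(k - 2*alpha0)/np.sqrt(2)             # from (qqq17), upper sign
r = np.sqrt(lam**2 - 4*mu)/2                 # Case 1: lam^2 - 4*mu > 0

xi = np.linspace(-10, 10, 2001)              # xi = x at t = 0
GpG = -lam/2 + r*(c1*np.cosh(r*xi) + c2*np.sinh(r*xi)) \
              / (c1*np.sinh(r*xi) + c2*np.cosh(r*xi))
u = np.sqrt(2)*GpG + alpha0                  # prey density
v = np.sqrt(2/delta)*GpG + alpha0/np.sqrt(delta)   # predator density

plt.plot(xi, u, 'k-', label='prey u')
plt.plot(xi, v, 'k--', label='predator v')
plt.xlabel('x'); plt.legend(); plt.show()
\end{verbatim}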
For the second set of values we end up with the same form of solution, the only difference being the values of the parameters $c$, $\beta$ and $\lambda$:
Case 1: $\lambda^2-4\mu > 0$
\begin{align}
\label{qqq22}
\quad u(\xi)=\pm \sqrt{2}\Bigg(-\frac{\lambda}{2}+\frac{\sqrt{\lambda^2-4\mu}}{2}\Bigg(\frac{c_1\cosh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi+c_2\sinh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi}{c_1\sinh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi+ c_2\cosh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi}\Bigg)\Bigg)+\alpha_0,\nonumber \\
\quad v(\xi)=\pm \sqrt{\frac{2}{\delta}}\Bigg(-\frac{\lambda}{2}+\frac{\sqrt{\lambda^2-4\mu}}{2}\Bigg(\frac{c_1\cosh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi+c_2\sinh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi}{c_1\sinh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi+ c_2\cosh \frac{\sqrt{\lambda^2-4\mu}}{2}\xi}\Bigg)\Bigg)+\frac{\alpha_0}{\sqrt{\delta}}.
\end{align}
Case 2: $\lambda^2-4\mu < 0$
\begin{align}
\label{qqq24}
\quad u(\xi)=\pm \sqrt{2}\Bigg(-\frac{\lambda}{2}+\frac{\sqrt{4\mu-\lambda^2}}{2}\Bigg(\frac{-c_1\sin \frac{\sqrt{4\mu-\lambda^2}}{2}\xi+c_2\cos \frac{\sqrt{4\mu-\lambda^2}}{2}\xi}{c_1\cos \frac{\sqrt{4\mu-\lambda^2}}{2}\xi+ c_2\sin \frac{\sqrt{4\mu-\lambda^2}}{2}\xi}\Bigg)\Bigg)+\alpha_0, \nonumber \\
\quad v(\xi)=\pm \sqrt{\frac{2}{\delta}}\Bigg(-\frac{\lambda}{2}+\frac{\sqrt{4\mu-\lambda^2}}{2}\Bigg(\frac{-c_1\sin \frac{\sqrt{4\mu-\lambda^2}}{2}\xi+c_2\cos \frac{\sqrt{4\mu-\lambda^2}}{2}\xi}{c_1\cos \frac{\sqrt{4\mu-\lambda^2}}{2}\xi+ c_2\sin \frac{\sqrt{4\mu-\lambda^2}}{2}\xi}\Bigg)\Bigg)+\frac{\alpha_0}{\sqrt{\delta}}.
\end{align}
Case 3: $\lambda^2-4\mu=0$
\begin{eqnarray}
\label{qqq26}
\quad u(\xi)=\pm \sqrt{2}\Bigg(\frac{c_2}{c_1+c_2\xi}-\frac{\lambda}{2}\Bigg)+\alpha_0, \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \nonumber \\
\quad v(\xi)=\pm \sqrt{\frac{2}{\delta}}\Bigg(\frac{c_2}{c_1+c_2\xi}-\frac{\lambda}{2}\Bigg)+\frac{\alpha_0}{\sqrt{\delta}}. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
\end{eqnarray}
where $\alpha_0=\sqrt{2\mu}$,
$\xi=x\mp \frac{1}{\sqrt{2}}\Big(2k-3\alpha_0+\frac{6\mu}{\alpha_0}\Big)t$, and $c_1$ and $c_2$ are arbitrary constants.
\section{Discussion and Conclusion}
In this paper, we have constructed exact solutions for a diffusive predator-prey system which is modeled by a system of two coupled nonlinear PDEs. Using the $\big(\frac{G'}{G}\big)$ expansion method, we have derived exact solutions for two different wave speeds.
The solutions that we have obtained are singular and cannot be taken at face value as describing actual situations in ecology. Notwithstanding this, they present a very interesting property: depending on the sign of $\lambda^2-4\mu$, the nature of the solution changes from a single structure to a periodic one, which is akin to pattern-forming systems.
Expressing $\lambda^2-4\mu$ in terms of the original parameters of the equation, we get $\lambda^2-4\mu = \frac{k^2}{2}-2\beta$. Therefore, if $k^2>4\beta$ we have a single structure, as in Fig.(1), and if $k^2<4\beta$ we have a periodic structure. The parameter $k$ measures the gain in natality obtained by the predator, and $\beta$ is its mortality; it follows that periodic structure formation comes from the strength of mortality. More intuitively, $\beta$ is the inverse of the typical time for the predator population to decay in the absence of prey. If this time is short, we have a periodic pattern, the population continuously decaying and recovering; if this time is long, a smoother dynamics shows up.
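The discriminant relation $\lambda^2-4\mu=\frac{k^2}{2}-2\beta$ quoted above is straightforward to verify symbolically; a short \texttt{sympy} check (ours), using $\lambda$ and $\beta$ from set (a):
\begin{verbatim}
import sympy as sp

a0, k, mu = sp.symbols('alpha_0 k mu', real=True)
lam  = -(k - 2*a0)/sp.sqrt(2)    # set (a)
beta = k*a0 - a0**2 + 2*mu
print(sp.simplify(lam**2 - 4*mu - (k**2/2 - 2*beta)))   # prints 0
\end{verbatim}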
The equations for which we could find new solutions, besides those previously known \cite{pet}, have the obvious drawback that matching of coefficients is necessary. This is the situation in most cases in the subject of exact solutions of reaction-diffusion equations, beyond the specific cases of interest in biology \cite{we}. However, the broad insight gained remains of interest, as the systems considered contain many elements of more realistic, non-solvable ones.
\section*{Acknowledgements}
RAK and MS wish to thank CNPq (Brazil) and DST (India) for the financial support through major research projects.
\usepackage[nosort]{cite}
\usepackage{collect}
\renewcommand{\vec}[1]{{\bf #1}}
\newcommand{\beal}[1]{\begin{eqnarray}\label{#1}}
\newcommand{\eea}{\end{eqnarray}}
\newcommand{\be}{\begin{equation}}
\newcommand{\bel}[1]{\begin{equation}\label{#1}}
\newcommand{\ee}{\end{equation}}
\newcommand{\rf}[1]{Eq.~(\ref{#1})}
\newcommand{\f}[2]{\frac{#1}{#2}}
\newcommand{\cor}[1]{\left\langle{#1}\right\rangle}
\newcommand{\todo}[1]{{}\bigskip\noindent{\tt To do: #1}\bigskip}
\newcommand{\tilde{\Pi}}{\tilde{\Pi}}
\makeatother
\begin{document}
\maketitle
\thispagestyle{empty}
\begin{abstract}It has recently been understood that the hydrodynamic series
generated by the M\"uller-Israel-Stewart theory is divergent, and that this
large-order behaviour is consistent with the theory of
resurgence. Furthermore, it was observed that the physical origin of this divergence
is the presence of a purely damped nonhydrodynamic mode. It is very
interesting to ask whether this picture persists in cases where the spectrum
of nonhydrodynamic modes is richer. We take the first step in this
direction by considering the simplest hydrodynamic theory which, instead of
the purely damped mode, contains a pair of nonhydrodynamic modes of complex
conjugate frequencies. This mimics the pattern of black brane quasinormal
modes which appear on the gravity side of the AdS/CFT description of
${\mathcal N}=4$ SYM\ plasma. We find that the resulting hydrodynamic series is divergent
in a way consistent with resurgence and precisely encodes information about
the nonhydrodynamic modes of the theory.
\end{abstract}
\newpage
\section{Introduction}
\label{sec:intro}
Recent years have seen significant advances in the formulation of relativistic
hydrodynamic theories \cite{Baier:2007ix,Romatschke:2009im}. This is the
result of great interest in the heavy ion collision program, which aims to
establish bulk properties of nuclear matter in extreme conditions
\cite{Shuryak:2014zxa}. Relativistic hydrodynamic models have been essential
in uncovering the basic features observed in experiments at RHIC and LHC. As a
result, it has become clear that the hydrodynamic approach can be viewed in
the same spirit as the effective field theory paradigm of quantum field
theory. This has led to posing (and sometimes answering) foundational
questions about the meaning of hydrodynamics.
The point of departure is the idea that the expectation value of the
energy-momentum tensor can be expanded in gradients of hydrodynamic
variables. It has recently been shown in some specific cases that this
hydrodynamic gradient series is divergent
\cite{Heller:2013fn,Heller:2015dha}. Furthermore, the precise way in which the
series diverges encodes information about the nonhydrodynamic modes which are
not included explicitly in the hydrodynamic description. This pattern is
reminiscent of what has been noted for divergent perturbation
expansions in other settings.
The first example of such behaviour of the hydrodynamic gradient expansion was
the case of ${\mathcal N}=4$\ supersymmetric Yang-Mills
theory (SYM), where the series was calculated to high order using the
AdS/CFT correspondence \cite{Heller:2013fn}. To
make the problem manageable, the specific case of boost-invariant
flow \cite{Bjorken:1982qr,Janik:2005zt} was considered. The gradient expansion was computed up
to order $242$, and it was observed that the series diverges in factorial
fashion. Applying the Borel transform revealed singularities in the Borel
plane occurring precisely at the locations corresponding to the complex
frequencies of the leading quasinormal modes of the dual black brane
geometry. The singularities are related by complex
conjugation and are off the real axis, which means that the corresponding
degrees of freedom have oscillatory as well as decaying features.
The second example where this kind of behaviour was observed is the
hydrodynamic series generated by M\"uller-Israel-Stewart (MIS) theory
\cite{Muller:1967zza,Israel:1979wp}. In this case the series is again
divergent, but the singularities in the Borel plane lie on the real axis,
which would correspond to purely decaying quasinormal modes. Since in this
example one has full control of the problem it is possible to resum the
series using ideas from the theory of resurgence \cite{Heller:2015dha}. This
example is very interesting from the point of view of Borel summation, since
the naive application of the inverse Borel transform leads to an imaginary
ambiguity. Proper accounting of the nonhydrodynamic degrees of freedom in a
way precisely consistent with resurgence theory yields a real and
unambiguous result
(up to a constant of integration).
This result was shown to
be consistent with an attractor
solution in the original MIS equation, which constitutes a natural definition
of the meaning of hydrodynamics beyond the gradient expansion. This gives a
strong indication that in cases where a numerical solution is not available,
resurgence techniques may be used to mine the hydrodynamic gradient expansion
for universal features at times well before hydrodynamic behaviour is
typically expected.
The pattern seen in this problem is actually rather typical of the way
resurgence theory clarified the role and meaning of perturbative expansions in
physics. The role of resurgence in the cancellation of ambiguities in quantum
mechanics is well known. In the perturbative study of observables such as the
ground state energy of the anharmonic oscillator
\cite{Bender:1969si,Bender:1990pd}, the observable studied was seen to be
``non-Borel summable'' along the positive real line, since this was a so-called
Stokes line with singularities in the corresponding Borel plane. Nevertheless,
once all nonperturbative sectors associated with higher multi-instanton
corrections were taken into consideration, a process called median resummation
was seen to provide a real and unambiguous result (see, e.g.,
\cite{Bogomolny:1980ur,ZinnJustin:1981dx,ZinnJustin:1982td,ZinnJustin:1983nr,Jentschura:2004ib,Jentschura:2004cg,Ambrozinski:2012zw}
for examples of ambiguity cancelation in the context of quantum
mechanics,\footnote{It was seen that in many quantum mechanical systems a
very simple exact quantisation condition can be derived for the energy
eigenvalues \cite{Dunne:2013ada,Dunne:2014bca}, relating perturbative and
nonperturbative phenomena, which complements the usual large-order relations
coming from resurgence.} and Refs. \cite{Argyres:2012ka,Dunne:2012ae} for its
generalisation to quantum field theory). This process consists in the proper
summation of all existing sectors (perturbative and nonperturbative) for a
given observable, in what is called a transseries. The use of median
resummation for the case of transseries with one and two real instanton
actions was studied in detail in Ref. \cite{Aniceto:2013fka}, and examples of
recent applications are Refs. \cite{Heller:2015dha} and
\cite{Aniceto:2015rua,Dorigoni:2015dha}.
The cancellation of nonperturbative ambiguities is just one application of a
much larger structure behind the asymptotic behaviour of perturbative
series. In fact, resurgent analysis and transseries give us a straightforward,
systematic path of determining the analytic properties of the observables, the
Stokes phenomena associated with singular directions, and the resummation
properties leading to unambiguous results.\footnote{See for example
\cite{Candelpergher:1993np,Delabaere:2006ed,Seara:2003ss,sauzin14,Dorigoni:2014hea,Edgar:2008ga}
for reviews on resurgent analysis and transseries, and
\cite{Marino:2008ya,Aniceto:2011nu,Marino:2012zq,Upcoming:2015} for
introductions to resurgence in physical settings. For recent applications to
topological strings, supersymmetric quantum mechanics and QFTs, see also
\cite{Santamaria:2013rua,Couso-Santamaria:2014iia,Hatsuda:2015owa,Couso-Santamaria:2015hva,Basar:2013eka,Behtash:2015kva,Cherman:2014ofa,Dunne:2015ywa}. }
The crucial role of resurgence in the study of the analytic properties and
Stokes phenomena within physical contexts is exemplified in
Ref. \cite{Couso-Santamaria:2015wga}, where from a large-$N$ expansion one can
retrieve the properties of the corresponding transseries solution not only for
real, finite $N$, but also as an analytic function in the variable $N$ (see
also Ref. \cite{Cherman:2014xia} for another toy example of strong-weak coupling
interpolation).
One can ask moreover about the usefulness of the transseries and resurgence in
cases where the resummation procedure in the direction of interest is not
singular.
When our interest is in the result along the positive real line,
and the singular directions (Stokes lines) are in the complex plane away from
this axis, one could be led to believe that only the perturbative
series would be necessary. But as is known from resurgence theory, and as
evidenced, for example, in Ref. \cite{Grassi:2014cla}, the existence of
Stokes lines with a positive real component will introduce nonperturbative
sectors which need to be added to the original asymptotic perturbative series,
in the form of a transseries, in order to obtain a consistent
result.
It would clearly be interesting to apply the ideas of resurgence theory to the
case of ${\mathcal N}=4$\ SYM. Applications of resurgence for supersymmetric gauge
theories were already seen in the case of localisable observables
\cite{Aniceto:2014hoa} and relations to quantum mechanical systems
\cite{Basar:2015xna}. Here we turn to a hydrodynamic model which shares some
of the simplicity of MIS theory, but contains a richer spectrum of
nonhydrodynamic modes in a way which resembles some aspects of what is known
about ${\mathcal N}=4$\ SYM.
It is important to recall here the
philosophy behind Ref. \cite{Heller:2015dha}: models like MIS are regarded as
means of generating the hydrodynamic gradient expansion which is then
analysed as if it came from a microscopic theory.
Specifically, we will
study a hydrodynamic theory which
contains analogs of quasinormal modes whose frequencies possess both real and
imaginary parts, as is the case for ${\mathcal N}=4$\ SYM (and unlike MIS). In such a case
one would expect that the singularities of the Borel transform would be off the
real axis. The simplest
such example is one of the models put forth in \cite{Heller:2014wfa}, where
nonhydrodynamic modes corresponding to quasinormal modes of ${\mathcal N}=4$\ SYM were
incorporated into a MIS-like theory.
This model generates the same hydrodynamic expansion
as MIS theory up to second order in gradients (higher orders differ, of
course).
We show that also in this
case one can identify attractor behaviour which sets in well before the
hydrodynamic limit of large times.
We study the hydrodynamic series in this model in the spirit of
Ref.~\cite{Heller:2015dha} and find a similar picture, albeit with novel
elements. The hydrodynamic series is divergent and its summation requires
exponentially suppressed corrections reflecting in a quantitative manner the
spectrum of nonhydrodynamic modes present. These exponential corrections to
the hydrodynamic series can be viewed as a completion to a transseries. By
using the formalism
elaborated
in \cite{Aniceto:2011nu} we show that the
divergent series satisfy relations expected on the basis of resurgence
theory. From this perspective there is a novel aspect: the ``actions'' are
complex, as is the leading nonanalyticity exponent. This introduces some
technical difficulties in applying convergence acceleration. The physical
reason for these features is, however, entirely clear: they correspond to the
fact that the nonhydrodynamic modes present in this theory are not purely
decaying (i.e., the quasinormal mode frequencies are not purely imaginary).
The structure of this paper is as follows. We start by reviewing the important
aspects of hydrodynamic theories in Section~\ref{sec:hydro}, followed by more specific
properties of the MIS causal hydrodynamic theory in
Section~\ref{sec:mis}. Section~\ref{sec:misres}
then presents the natural contact between resurgence and the ambiguity
cancellation of the MIS theory (reviewing the results of
Ref.~\cite{Heller:2015dha} in light of
Refs.~\cite{Aniceto:2011nu,Aniceto:2013fka}) as a warmup example toward the
extended theories of hydrodynamics.
Section~\ref{sec:ext} introduces the hydrodynamic model which we will
consider in the main part of this article.
The application of resurgence techniques to this theory is the main focus of
our work and is described in Section~\ref{sec:extres}. We will close with a
brief summary and ideas for the future in Section~\ref{sec:sum}.
\section{Hydrodynamics}
\label{sec:hydro}
Phenomenological equations of hydrodynamics are designed to
reproduce the gradient expansion of the energy-momentum tensor in some
microscopic theory up to some order (typically 1 or 2). The evolution
equations are the conservation equations
\bel{cons}
\nabla_\mu T^{\mu\nu} = 0
\ee
of the energy-momentum tensor
expressed in terms of the {\em hydrodynamics variables}. Specifically, the
energy-momentum tensor in the hydrodynamic theories considered here can be
presented as
\bel{hydro}
T^{\mu \nu} = {\cal E} \, u^{\mu} u^{\nu} + {\cal P} ({\cal E}) (
\eta^{\mu \nu} + u^{\mu} u^{\nu} ) + \Pi^{\mu \nu},
\ee
where $\Pi^{\mu \nu}$ is the shear stress tensor (discussed in detail below),
${\cal E}$ is the energy density and $ {\cal P}$ is the pressure, expressed
in terms of the energy density by an assumed equation of state. In conformal
theories in $d=4$ dimensions it takes the form
\be
{\cal P} ({\cal E})=\f{1}{3} {\cal E} \, .
\ee
The energy density ${\cal E}$ is often expressed in terms of the
``effective temperature'' $T\sim{\cal E}^{1/4}$. The field $u$ is the flow
velocity, defined as a timelike eigenvector of the energy-momentum tensor.
The spacetime dependent energy density (or effective temperature) and flow
velocity are the hydrodynamic variables, the evolution of which one wishes to
describe.
Their precise definition away from equilibrium is what constitutes a choice of
hydrodynamic frame (see, e.g., Ref.~\cite{Bhattacharya:2011tra}). We adopt the
Landau frame, which means that
we impose the condition that the shear stress tensor is transverse to the
flow:
\be
u_\mu \Pi^{\mu \nu} = 0\, .
\ee
The hydrodynamic gradient expansion
is the approximation of $\Pi^{\mu \nu}$ by a series of terms, graded
by the number of spacetime gradients of the hydrodynamic fields $u^\mu$ and $T$.
To proceed, it is highly advantageous to exploit to the fullest the constraints
imposed by conformal symmetry. This desire has led to the development of the
so-called Weyl-covariant
formulation \cite{Loganayagam:2008is} of conformal relativistic hydrodynamics,
in which the evolution equations assume a very compact form.
We will not review this formalism here, but we will mention some of
its basic features. The essential idea is to introduce a (nondynamical)
``Weyl connection''
\be
{\cal A}_{\mu} = u^{\lambda} \nabla_{\lambda} u_{\mu} - \frac{1}{3}
\nabla_{\lambda} u^{\lambda} u_{\mu} \, ,
\ee
to define a derivative operator, denoted here by
${\mathcal D}_\mu$, which is covariant under Weyl transformations (spacetime dependent
rescalings) of the metric.
The action of the Weyl-covariant derivative depends on the tensor on which it
acts. A general formula can be found in Ref.~\cite{Loganayagam:2008is}.
It will also be convenient to define
\be
{\mathcal D}\equiv u^{\mu} {\mathcal D}_{\mu} \ .
\ee
and
\be
\sigma^{\mu\nu} = {\mathcal D}^\mu u^\nu + {\mathcal D}^\nu u^\mu , \qquad \omega^{\mu\nu} =
{\mathcal D}^\mu u^\nu - {\mathcal D}^\nu u^\mu \, .
\ee
These objects are transverse and transform homogeneously under Weyl
transformations~\cite{Loganayagam:2008is}.
The Landau-Lifschitz formulation of relativistic viscous
hydrodynamics~\cite{LLfluid} asserts that
\bel{PiLL}
\Pi^{\mu \nu} = - \eta \sigma^{\mu \nu} \, ,
\ee
where $\eta$ is the shear viscosity.
Unfortunately, the resulting theory does not have a well-posed initial value
problem due to superluminal signal
propagation~\cite{Hiscock:1985zz,PhysRevD.62.023003}.
The same problem will occur if on the right-hand side of \rf{PiLL} one
includes any finite number of terms graded
by the number of derivatives of $T$ and $u^{\mu}$. In principle these
problems appear at short distances, where hydrodynamics is not expected to
apply~\cite{Geroch:1995bx,Geroch:2001xs}, but for practical applications this
is no consolation because
acausality leads to numerical instabilities. For practical
purposes it is therefore necessary to replace \rf{PiLL} by a prescription
which effectively generates all orders in the gradient expansion.
\section{MIS causal hydrodynamics}
\label{sec:mis}
MIS theory resolves the causality problem by promoting the shear stress tensor
$\Pi^{\mu\nu}$ to an independent dynamical field which satisfies a relaxation
type differential equation \cite{Muller:1967zza,Israel:1979wp} chosen to
augment the conservation law \rf{cons}. Consistency with the gradient
expansion requires that terms of at least second order be included, since the
derivative of the shear stress tensor is of second order.
If all terms admitted by symmetry are incorporated, the leading
terms in the gradient expansion of the shear stress tensor can be written as
\bel{st2ord}
\Pi^{\mu\nu} = -\eta \sigma^{\mu\nu} + \eta\tau_\Pi {\mathcal D}\sigma^{\mu\nu} +
\lambda_1 {\sigma^{<\mu}}_\lambda \sigma^{\nu>\lambda} + \lambda_2
{\sigma^{<\mu}}_\lambda \omega^{\nu>\lambda} + \lambda_3
{\omega^{<\mu}}_\lambda \omega^{\nu>\lambda} \, ,
\ee
where $<\dots>$ denotes symmetrization and subtracting the trace, and
$\tau_{\Pi}$ and $\lambda_i$ are phenomenological
parameters~\cite{Baier:2007ix} (the second-order transport coefficients).
If the energy-momentum tensor is calculated in some microscopic conformal
theory and expressed in terms of the hydrodynamic variables up to second order
in gradients, the result will be of the form of \rf{st2ord} with
some specific values of the transport coefficients. It has, for example,
been obtained as the long-wavelength effective description of strongly coupled
${\cal N} = 4$ SYM plasma in the framework of the AdS/CFT
correspondence~\cite{Janik:2005zt,Heller:2007qt,Baier:2007ix,Bhattacharyya:2008jc}.
The main idea of treating hydrodynamics as an effective theory is to write
down an evolution equation, the gradient expansion of which generates
\rf{st2ord}, together with additional terms which
are of third order and above. This can be done by eliminating
$\sigma^{\nu\lambda}$ in the second-order terms in favor of $\Pi^{\mu\nu}$
using \rf{PiLL}. The result can be written as
\bel{mis2}
(\tau_\Pi {\mathcal D} + 1) \Pi^{\mu\nu} = -\eta \sigma^{\mu\nu} +
\f{\lambda_1}{\eta^2} {\Pi^{<\mu}}_\lambda \Pi^{\nu>\lambda}
- \f{\lambda_2}{\eta}
{\Pi^{<\mu}}_\lambda \omega^{\nu>\lambda} + \lambda_3
{\omega^{<\mu}}_\lambda \omega^{\nu>\lambda} \, .
\ee
Solving this iteratively yields \rf{st2ord} up to higher-order terms, as
desired. The coefficients of these higher-order terms will
all be expressed in terms of the second-order transport coefficients which appear
explicitly in \rf{mis2}.
Linearization of the resulting theory reveals a single, purely decaying,
nonhydrodynamic mode in
addition to hydrodynamic modes \cite{Baier:2007ix}.
This mode (which we refer to as the MIS mode) decays on a scale set by $\tau_\Pi$.
Furthermore, the resulting theory is causal as long as $T \tau_{\Pi}\geq \eta/s$.
This approach has enjoyed great success in describing the evolution of
quark-gluon plasma~\cite{Luzum:2008cw}.
In Ref.~\cite{Heller:2015dha}, the special case of Bjorken
flow~\cite{Bjorken:1982qr} was
considered, and we do the same in our work.
Due to a very high degree of symmetry imposed, the hydrodynamic equations
reduce to a set
of ordinary differential equations.
The symmetry in question, boost invariance, can be taken to mean that in
proper time-rapidity coordinates
$\tau, y$ (related to Minkowski coordinates $t, z$ by the relations $t = \tau
\cosh y$ and
$z = \tau \sinh y$) the energy density, flow velocity and shear stress tensor
depend only on the proper time $\tau$. The MIS equations \rf{mis2} then
reduce to
\beal{miseqn}
\tau \dot{\epsilon} &=& - \frac{4}{3}\epsilon + \phi\nonumber\, , \\
\tau_\Pi \dot{\phi} &=&
\frac{4 \eta}{3 \tau }
- \frac{\lambda_1\phi^2}{2 \eta^2}
- \frac{4 \tau_\Pi\phi}{3 \tau }
- \phi \, ,
\eea
where the dot denotes a proper time derivative
and $\phi\equiv-\Pi^{y}_{y}$,
the single independent component of the shear stress
tensor.
In a conformal theory $\epsilon \sim T^4$ and
the transport coefficients satisfy
\bel{contra}
\tau_\Pi = \frac{ C_{\tau \Pi }}{T}, \qquad \lambda_1 = C_{\lambda_1}
\frac{\eta}{T}, \qquad \eta = C_\eta\ s \, ,
\ee
where $s$ is the entropy density and $C_{\tau \Pi }, C_{\lambda_1}, C_\eta$ are
dimensionless constants. In the case of
${\mathcal N}=4$ SYM\ their values are known from fluid-gravity duality~\cite{Bhattacharyya:2008jc}:
\bel{symvalues}
C_{\tau \Pi } = \frac{2-\log (2)}{2 \pi} , \qquad C_{\lambda_1} = \frac{1}{2
\pi}, \qquad C_\eta = \frac{1}{4 \pi} \, .
\ee
To simplify the discussion we will consider the case $C_{\lambda_1} = 0$.
This choice does not modify the nonhydrodynamic sector in a
qualitative way, so our study of resurgence is not
affected. The hydrodynamic theory still matches ${\mathcal N}=4$ SYM\ at the
level of first-order (viscous) hydro, which is physically by far the most
significant point.
Using \rf{contra} one can turn the system of equations \rf{miseqn} into a
single second-order differential equation for the proper time
dependence of the temperature $T(\tau)$.
It proves fruitful to introduce the dimensionless variables
\bel{wfdef}
w =T\tau, \quad f = \f{\tau}{w}\f{dw}{d\tau} \, .
\ee
In terms of these, the second-order ordinary differential equation for $T(\tau)$
implies a first-order equation for $f(w)$,
\bel{eq:first-order-MIS}
C_{\tau\Pi}f\,f' +
4C_{\tau\Pi}f^{2}+\left(w-\frac{16C_{\tau\Pi}}{3}\right)f - \frac{4C_{\eta}}{9}
+ \frac{16C_{\tau\Pi}}{9}-\frac{2w}{3}=0 \, ,
\ee
where $f'$ stands for the derivative of $f(w)$ with respect to $w$.
It is this equation which is the starting point for our analysis of MIS
theory.
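As a minimal numerical sketch (ours; \texttt{scipy} assumed), one can integrate \rf{eq:first-order-MIS} directly, here with the ${\mathcal N}=4$ SYM\ values (\ref{symvalues}) and $C_{\lambda_1}=0$. Solutions with different initial conditions rapidly merge into a single curve, the attractor discussed at the end of Section~\ref{sec:misres}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Ct = (2 - np.log(2))/(2*np.pi)    # C_tauPi
Ce = 1/(4*np.pi)                  # C_eta

def fprime(w, f):
    # (eq:first-order-MIS) solved for f'(w)
    rhs = -(4*Ct*f**2 + (w - 16*Ct/3)*f - 4*Ce/9 + 16*Ct/9 - 2*w/3)
    return rhs/(Ct*f)

w = np.linspace(0.1, 5.0, 500)
for f0 in (0.3, 0.6, 1.0):        # three different initial conditions
    sol = solve_ivp(fprime, (w[0], w[-1]), [f0], t_eval=w,
                    rtol=1e-10, atol=1e-12)
    print(f0, sol.y[0, -1])       # late-w values coincide: the attractor
\end{verbatim}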
The late proper time behaviour of the system is governed by hydrodynamics. In
terms of the dimensionless variable $w$ this translates to the limit
$w\rightarrow\infty$. One can easily determine the coefficients of the series
solution valid for large $w$:
\bel{mishydro}
f(w) = \f{2}{3} + \frac{4 C_\eta}{9 w}+\frac{8 C_\eta C_{\tau \Pi}}{27
w^2}+ {\mathcal O}\!\left(\frac{1}{w^3}\right) \, .
\ee
This expansion corresponds to the hydrodynamic gradient expansion
\cite{Heller:2011ju}. By examining the behaviour of the coefficients in
\rf{mishydro} one can show that the series is divergent. This fact reflects
the presence of the nonhydrodynamic MIS mode. As shown in
\cite{Heller:2015dha} the series solution can be summed using Borel techniques
by incorporating exponential corrections to the hydrodynamic expansion. The
result is a transseries, as described in detail in the following section.
\section{Resurgence and ambiguity cancellation}
\label{sec:misres}
As an introduction to the methods of resurgence theory we first
consider the case of MIS reviewed in the previous section. Unlike the analysis
presented in Ref.~\cite{Heller:2015dha}, we will not discuss ambiguity cancellation
at the level of the analytic continuation of the Borel transform.
We will instead make use of the consistency conditions derived from alien
calculus (for a recent review and derivations of the formulae used in this
Section see Refs.~\cite{Aniceto:2011nu,Dorigoni:2014hea,Upcoming:2015}).
As was already seen in Ref.~\cite{Heller:2015dha}, this
example presents some remarkable resurgent properties, while being a
very simple application of the expressions derived in
Ref.~\cite{Aniceto:2011nu,Aniceto:2013fka}.
In order to capture the full
solution to the first-order differential equation \rf{eq:first-order-MIS} from
a perturbative expansion,
one needs a transseries Ansatz with one parameter
\be
f(w,\sigma) = \sum_{n=0}^{+\infty} \sigma^{n} \mathrm{e}^{-nAw}
\Phi_{n} \left(w\right) \, ,
\ee
where $\sigma$ is a parameter to be fixed by the physical properties
of our system -- in our case these are reality and initial
conditions. The constant $A$ is often referred to as the
instanton action, due to its interpretation in applications to perturbation
expansions in quantum field theory \cite{Bogomolny:1980ur,ZinnJustin:1981dx,ZinnJustin:1982td,ZinnJustin:1983nr,Jentschura:2004ib,Jentschura:2004cg,Ambrozinski:2012zw}.
The $\Phi_{n}\left(w\right)$ are perturbative
expansions around the nonperturbative, exponentially suppressed sectors
with contributions weighted by $\mathrm{e}^{-nAw}$
\begin{equation}
\Phi_{n}\left(w\right)=w^{\beta_{n}}\sum_{k=0}^{+\infty}a_{k}^{(n)}w^{-k}
\, .
\label{eq:trans-series-one-param}
\end{equation}
The ``instanton'' action $A$ and coefficients
$\beta_{n}$ (the former associated with
the position of the cuts appearing
in the Borel plane, and the latter with the type of these
branch cuts) were determined in Ref.~\cite{Heller:2015dha} to be
\be
A=\frac{3}{2}C_{\tau\Pi}, \qquad \beta_{n}\equiv
n\beta=-n\frac{C_{\eta}}{C_{\tau\Pi}} \, .
\ee
These coefficients, as well as the perturbative coefficients $a_{k}^{(n)}$
can be determined iteratively by substituting the transseries Ansatz
(\ref{eq:trans-series-one-param}) into the differential equation
(\ref{eq:first-order-MIS}). For the perturbative series $\Phi_0 (w)$ this
leads to
\beal{eq:mishydro}
a_{0}^{(0)} &=& \f{2}{3} \nonumber\\
a_{1}^{(0)} &=& \f{4 C_\eta}{9}\nonumber\\
a_{k+1}^{(0)} &=& C_{\tau\Pi} \left(\f{16}{3} a_{k}^{(0)} - \sum_{n=0}^{k}
(4-n) a_{k-n}^{(0)} a_{n}^{(0)} \right) ,\quad k\ge 1 \, .
\eea
This recursion relation
makes it manifest that the series is indeed divergent.
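A few lines of \texttt{python} (ours) iterate this recursion and exhibit the divergence quantitatively: the ratio $a_{k+1}^{(0)}A/(a_{k}^{(0)}k)$ tends to $1$, consistent with the factorial growth $a_{k}^{(0)}\sim\Gamma(k+\beta)/A^{k}$. We use the ${\mathcal N}=4$ SYM\ values of the constants:
\begin{verbatim}
import numpy as np

Ce = 1/(4*np.pi)
Ct = (2 - np.log(2))/(2*np.pi)
A = 3*Ct/2

a = [2/3, 4*Ce/9]                 # a_0, a_1 from (eq:mishydro)
for k in range(1, 120):
    a.append(Ct*(16/3*a[k]
                 - sum((4 - n)*a[k - n]*a[n] for n in range(k + 1))))

for k in (20, 60, 118):
    print(k, a[k + 1]*A/(a[k]*k))  # -> 1 + O(1/k)
\end{verbatim}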
The expansions $\Phi_{n}\left(w\right)$ are asymptotic, and
the coefficients $a_{k}^{(n)}$ were seen to grow factorially for
large enough order $k$. The corresponding Borel transforms, schematically
of the form\footnote{The rule is to substitute $w^{-\alpha}\rightarrow s^{\alpha-1}/\Gamma\left(\alpha\right)$,
but one needs to remove any initial terms with $\alpha<0$, and add
them separately at a later stage: hence the $m_{\mathrm{min}}$ introduced in
\rf{eq:Borel-one-param-sectors}. This does not change the
asymptotic nature of the series.}
\be
\mathcal{B}\left[\Phi_{n}\right]\left(s\right)=\sum_{m=m_{\mathrm{min}}}^{+\infty}a_{m}^{(n)}
\frac{s^{m-n\beta-1}}{\Gamma\left(m-n\beta\right)} \, ,
\label{eq:Borel-one-param-sectors}
\ee
have a nonzero radius of convergence. The radius of convergence is in fact
given by the position of the first branch cut in the Borel plane, which is at
a distance $s=A$. Following the analysis presented in Ref.~\cite{Aniceto:2011nu}
(see Section 2 of this paper for more details), we know that in the case of a
one-parameter transseries with real positive
instanton action $A$, the
sectors $\Phi_{0},\, \Phi_{1}$ will have cuts starting at positions $s=\ell A$
for $\ell \ge 1$ in the positive real axis, while the sectors $\Phi_{n}$ with
$n \ge 2$ will have cuts both in the negative and positive real lines on the
Borel plane: a finite number in the negative real axis at $s=\ell_{1} A$ with
$\ell_{1}=1,\cdots,n-1$, and an infinite number in the positive real axis at
$s=\ell_{2}A$ with $\ell_{2}\ge1$.
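This singularity structure is easy to probe numerically. The following Borel--Pad\'e sketch (ours; \texttt{mpmath} assumed) builds the truncated Borel transform of $\Phi_0$ from the recursion (\ref{eq:mishydro}) and locates the poles of a diagonal Pad\'e approximant; they accumulate along the positive real axis starting near $s=A$, mimicking the cut:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 40

Ce = 1/(4*mp.pi); Ct = (2 - mp.log(2))/(2*mp.pi)
a = [mp.mpf(2)/3, 4*Ce/9]
for k in range(1, 80):
    a.append(Ct*(mp.mpf(16)/3*a[k]
                 - mp.fsum((4 - n)*a[k - n]*a[n] for n in range(k + 1))))

# B[Phi_0](s) = sum_m a_{m+1} s^m / m!   (the constant a_0 is dropped)
b = [a[m + 1]/mp.factorial(m) for m in range(61)]
p, q = mp.pade(b, 30, 30)          # diagonal Pade approximant
poles = mp.polyroots(q[::-1], maxsteps=200, extraprec=200)

print('A =', 3*Ct/2)
print(sorted(poles, key=abs)[:4])  # the smallest pole sits close to s = A
\end{verbatim}
The root finding is numerically delicate; increasing the working precision and the Pad\'e order sharpens the picture.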
Given the Borel transforms, \rf{eq:Borel-one-param-sectors}, one can use
suitable Pad\'{e} approximants (see Ref.~\cite{Aniceto:2011nu}) and resum each
sector via the Laplace transform. The resummation can be easily performed in
directions $\theta$ in the complex plane where the Borel transforms
$\mathcal{B}\left[\Phi_{n}\right]$ do not have singular behaviour:
\begin{equation}
\mathcal{S}_{\theta}\Phi_{n}\left(w\right)=\int_{0}^{+\infty\,\mathrm{e}^{\mathrm{i}\theta}}ds\,
\mathrm{e}^{-sw} \mathcal{B}\left[\Phi_{n}\right] \left(s\right) \, .
\label{eq:resummation-one-param-sector}
\end{equation}
\noindent
This resummation can then be trivially analytically continued up to the
singular directions in the Borel plane, also known as Stokes lines. The
resummed transseries can then be defined by
\begin{equation}
\mathcal{S}_{\theta}f\left(w,\sigma\right)=\sum_{n=0}^{+\infty}
\sigma^{n}\mathrm{e}^{-nAw} \mathcal{S}_{\theta} \Phi_{n}\left(w\right) \, .
\label{eq:resummed-one-param-trans-series}
\end{equation}
The transseries parameter $\sigma$ is free at this stage. Its (in general
complex) value can be determined by
enforcing some physical constraints on the transseries -- in this case
suitable initial and reality conditions for \rf{vcp}.
For physical reasons, we are ultimately interested in real and positive values
of the expansion parameter $w$. For the particular
case in this Section, we have one further problem: the Borel transforms
of the perturbative and nonperturbative sectors have branch cuts
in the positive real axis starting at positions $s=\ell A$. This means
that the positive real axis is a Stokes line, a singular direction
in the complex plane where
the Stokes phenomenon occurs.
The Laplace transform
in (\ref{eq:resummation-one-param-sector}) is ill defined, because
of these branch cuts, and we have to define lateral resummations by
avoiding these singularities either from above or from below the real
axis:
\begin{equation}
\mathcal{S}_{\pm}\Phi_{n}\left(w\right)=\int_{0}^{+\infty\mathrm{e}^{\pm\mathrm{i}\epsilon}}
ds\,\mathrm{e}^{-sw}\mathcal{B}\left[\Phi_{n}\right]\left(s\right) \, .
\label{eq:lateral-resummations}
\end{equation}
This introduces a nonperturbative ambiguity: if the coefficients $a_{m}^{(n)}$ are real, then the difference between
these two lateral resummations for each sector is pure imaginary and of
the order of $\mathrm{e}^{-Aw}$. For example, starting with
the lateral resummation above the real axis, we can define its real
and imaginary contribution
\be
\mathcal{S}_{+}\Phi=\frac{1}{2}\left(\mathcal{S}_{+}+\mathcal{S}_{-}\right)
\Phi +\frac{1}{2} \left(\mathcal{S}_{+}-\mathcal{S}_{-}\right) \Phi \equiv
\mathcal{S}_{R}\Phi + \mathrm{i}\mathcal{S}_{I}\Phi \, ,
\ee
where $\mathcal{S}_{I}\Phi_{n}\left(w\right)\sim\mathrm{e}^{-Aw}$.
One is interested in having a nonambiguous real transseries solution,
i.e., a transseries of the type (\ref{eq:resummed-one-param-trans-series})
but where now the resummation should be thought of as one of the lateral
resummations $\mathcal{S}\rightarrow\mathcal{S}_{\pm}$. Because $\sigma$
is a complex number, the real and imaginary contributions from each
sector $\mathcal{S}_{R,I}\Phi_{n}$ will mix with the real and imaginary
part of the parameter $\sigma\equiv\sigma_{R}+\mathrm{i}\sigma_{I}$,
and one can determine the total real and imaginary contributions to
the transseries coming from every sector. This was done in detail in \cite{Aniceto:2013fka}.
Due to the resurgent properties of the transseries, choosing the parameter
$\sigma$ properly will cancel the imaginary ``ambiguous''
contribution to the transseries, leaving us with a nonambiguous real
result. This procedure coincides with the so-called median resummation.
In \cite{Aniceto:2013fka} it was seen that for a one-parameter transseries with real
coefficients and singularities in the Borel plane lying in the positive
real axis,\footnote{Recall that higher nonperturbative sectors $\Phi_n$ with
$n\ge2$ will naturally have singularities also in the negative real line, as
it is expected from a one-parameter transseries. This is a natural part of
the resurgent analysis and its consequences are already integrated in the
analysis presented in Refs.~\cite{Aniceto:2011nu,Aniceto:2013fka}.} the median
resummation was achieved by setting the imaginary
part of the transseries parameter to
\be
\mathrm{i}\sigma_{I}=-\frac{1}{2}S_{1} \, .
\ee
Here $S_{1}$ is the so-called Stokes constant associated with the Stokes transition
across the positive real axis. The real part of the parameter $\sigma$
does not get fixed by these requirements, and
remains as an integration constant, to be fixed by some initial condition.
The nonambiguous real transseries
solution to this problem is therefore given by
\be
\mathcal{S}_{R}f=\mathcal{S}_{+}f\left(w,\sigma_{R}-\frac{1}{2}S_{1}\right)
= \sum_{n=0}^{+\infty}\left(\sigma_{R}-\frac{1}{2}
S_{1}\right)^{n}\mathrm{e}^{-nAw}\mathcal{S}_{+} \Phi_{n}\left(w\right) \, .
\ee
The Stokes constant can be determined directly by using resurgence
formulae predictions for the large-order behaviour of the perturbative
series (as well as higher sectors).
From the resurgent analysis of the one-parameter transseries, it was seen
in Ref.~\cite{Aniceto:2011nu} that the discontinuity of the sectors $\Phi_{k}$
in the positive real direction $w=\left|w\right|\mathrm{e}^{\mathrm{i}\theta}$
with $\theta=0$ is given by
\begin{equation}
\mathrm{Disc}_{0}\Phi_{k}=-\sum_{\ell=1}^{+\infty}
\frac{(k+\ell)!}{k!\,\ell!}\left(S_{1}\right)^{\ell} \mathrm{e}^{-\ell
Aw}\Phi_{k+\ell} \left(w\right) \, .
\label{eq:discontinuity-zero-one-param-sectors}
\end{equation}
For the particular case of the perturbative expansion we need only
to set $k=0$. For $k\ge2$ the sectors $\Phi_k$ will also have discontinuities
in the direction $\theta=\pi$. The full expressions for these discontinuities
were also derived in Ref.~\cite{Aniceto:2011nu} (in Section 2), but given the
length of such expressions we refer the reader to that reference for more
details. The fact that our transseries is resurgent translates
directly to the existence of large-order relations between the coefficients
$a_{k}^{(n)}$ and $a_{k'}^{(m)}$ of neighbouring sectors $n,m$.
These large-order relations can be derived using Cauchy's theorem,
\begin{equation}
F\left(w\right)=\oint_{z=w}\frac{dz}{2\pi\mathrm{i}}\frac{F\left(z\right)}{z-w}
= \sum_{\theta-\mathrm{sing}}\int_{0}^{+\infty\mathrm{e}^{\mathrm{i}\theta}}
\frac{dz}{2\pi\mathrm{i}} \frac{\mathrm{Disc}_{\theta}F\left(z\right)}{z-w} +
\oint_{\infty} \frac{dz}{2\pi\mathrm{i}}\frac{F\left(z\right)}{z-w} \, ,
\label{eq:Cauchy-thm}
\end{equation}
where the sum is over all singular directions $\theta$ of the asymptotic
expansion $F(w)$. On the rhs, we have deformed the contour of integration to
encircle all the discontinuities (associated with the
singular directions $\theta$) and the contribution at infinity.
Under certain conditions \cite{Bender:1990pd,Collins:1977dw},
the contribution at infinity can be
seen to vanish by scaling arguments and we are left with the integration over
the discontinuities of $F\left(w\right)$.
As an example choose the perturbative sector $\Phi_{0}$ and use variables
$x=w^{-1}\ll1$. We can easily write
\be
x\Phi_{0}\left(x\right) = \oint_{z=x}\frac{dz}{2\pi\mathrm{i}}
\frac{z\Phi_{0}\left(z\right)}{z-x} = \int_{0}^{+\infty}
\frac{dz}{2\pi\mathrm{i}} \frac{z\,\mathrm{Disc}_{0}\Phi_{0}(z)}{z-x} \, .
\ee
In the above formula we used that the perturbative series has only
a discontinuity in the direction $\theta=0$.\footnote{Recall again that for
$F(w)=\Phi_{k}(w)$ with $k=0,1$ we have only one singular direction
$\theta=0$, while for $k\ge2$ we have two singular directions
$\theta=0,\pi$.} We can now use
(\ref{eq:discontinuity-zero-one-param-sectors})
and expand both sides for small $x$. Comparing equal powers of $x$
we find
\be
a_{m}^{(0)}\simeq-\sum_{k=1}^{+\infty}
\frac{\left(S_{1}\right)^{k}}{2\pi\mathrm{i}}
\frac{\Gamma\left(m+k\beta\right)}{(kA)^{m+k\beta}}
\sum_{h=0}^{+\infty}a_{h}^{(k)} \prod_{\ell=1}^{h}
\frac{k\,A}{(m+k\beta-\ell)},\:m\gg1 \, .
\ee
This is a large-order relation which connects coefficients of the perturbative
series $a_{m}^{(0)}$ at large order $m$ with coefficients of the
one-instanton series $a_{h}^{(1)}$ at low order, up to contributions of the
two-instanton series $a_{h}^{(2)}$ exponentially
suppressed by $2^{-m}$, and so on.
Note that each of the sums appearing above is asymptotic. Similar
expressions have been derived for the coefficients of the other sectors, and
can be found in Ref.~\cite{Aniceto:2011nu}. For the coefficients of the higher
sectors $\Phi_k,\, k\ge2$ other Stokes constants will appear in the
large-order relations, because of the discontinuity in the direction
$\theta=\pi$
(see Section 2 of Ref.~\cite{Aniceto:2011nu} for more details).
Returning to the
perturbative series, we can write the large-order relations more explicitly
in the following way:
\begin{eqnarray}
\label{eq:ratio-one-param-pert}
\frac{2\pi\mathrm{i}\,A^{m+\beta}}{\Gamma\left(m+\beta\right)}a_{m}^{(0)} &
\simeq &
-S_{1}\sum_{h=0}^{+\infty}a_{h}^{(1)}\prod_{\ell=1}^{h}\frac{A}{(m+\beta-\ell)}+\mathcal{O}\left(2^{-m}\right)
\\
& \simeq &
-S_{1}\left(a_{0}^{(1)}+ \frac{A}{m}a_{1}^{(1)} + \frac{A^{2}a_{2}^{(1)} -
A\left(\beta-1\right)a_{1}^{(1)}}{m^{2}} + \cdots\right) +
\mathcal{O}\left(2^{-m}\right) \, . \nonumber
\eea
To obtain the second line we expanded the first line for large $m$.
Now it becomes clear how one can determine the Stokes constant.
Having calculated the coefficients $a_{m}^{(0)}$ and $a_{n}^{(1)}$
iteratively from the differential equation, we can now analyse the
convergence of the lhs to the Stokes constant (times the value
of $a_{0}^{(1)}$), and thus determine $S_{1}$.
To carry out this calculation we make use of the accelerated convergence of
the Richardson transforms \cite{Marino:2007te,Garoufalidis:2010ya,Schiappa:2013opa} (see
Fig.~\ref{fig:stokes-one-param}). This leads to the determination of the
Stokes constant as $S_1=-0.00547029853 \,\mathrm{i}$, which matches the result
in Ref.~\cite{Heller:2015dha}.
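For concreteness, a compact implementation of this extraction (ours; \texttt{mpmath} assumed, and the normalisation $a_{0}^{(1)}=1$ taken for the one-instanton sector):
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 60

Ce = 1/(4*mp.pi); Ct = (2 - mp.log(2))/(2*mp.pi)
A, beta = 3*Ct/2, -Ce/Ct

a = [mp.mpf(2)/3, 4*Ce/9]
for k in range(1, 120):
    a.append(Ct*(mp.mpf(16)/3*a[k]
                 - mp.fsum((4 - n)*a[k - n]*a[n] for n in range(k + 1))))

# lhs of (eq:ratio-one-param-pert); S[m] -> -S_1 a_0^(1) for large m
S = [mp.mpc(0)] + [2*mp.pi*mp.mpc(0, 1)*A**(m + beta)*a[m]/mp.gamma(m + beta)
                   for m in range(1, len(a))]

def richardson(seq, N):
    # N-th order Richardson transform, evaluated on the tail of seq
    m0 = len(seq) - N - 1
    return sum(seq[m0 + j]*mp.mpf(m0 + j)**N*(-1)**(j + N)
               / (mp.factorial(j)*mp.factorial(N - j)) for j in range(N + 1))

print(richardson(S, 5))   # ~ 0.00547i, i.e. S_1 ~ -0.00547i
\end{verbatim}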
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{iStokes-0-2-5.png}
\caption{Convergence of the large-order perturbative series (in blue) to the
Stokes constant, using Richardson transforms of order 2 (green) and 5 (red). }
\label{fig:stokes-one-param}
\end{figure}
It is important to note that we can check the
predictions obtained by resurgence techniques for the large-order
behaviour of the perturbative series even without the knowledge of
the Stokes constant. To do so, we analyse the convergence of the ratio
of coefficients to the predicted values
\bel{rat1}
R_m \equiv \frac{a_{m+1}^{(0)}A}{a_{m}^{(0)}m} \, .
\ee
On the basis of the analysis we have presented above, we expect
\be
R_m \simeq\
\left(1+\frac{\beta}{m}\right)\left(1-\frac{A}{m}\frac{a_{1}^{(1)}}{a_{0}^{(1)}}+\frac{A^{2}
\left(a_{1}^{(1)}/a_{0}^{(1)}\right)^{2}+\left(2\beta-1\right)
A\,a_{1}^{(1)}/a_{0}^{(1)}-2A^{2}a_{2}^{(1)}/a_{0}^{(1)}}{m^{3}} +
\cdots\right) \, .
\ee
This quantity is clearly of the form
\be
R_m \simeq \sum_{k=0}^{+\infty} \frac{c_k}{m^k} \, ,
\ee
where the coefficients $c_k$ are directly determined from the large $m$
expansion of the large-order relation given above.
This makes it possible to use Richardson transforms to accelerate
convergence. As seen in Fig.~\ref{fig:ratio-one-param}, the ratio
\rf{rat1} converges to $c_{0}=1$ rather quickly. If the Richardson
transform (of order 10) is used, already at $m=20$ this ratio
differs from unity no more than one part in $10^8$ (and at $m=100$ the Richardson
transform of order 10 has an error of $10^{-16}$).
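The analogous check of \rf{rat1} takes only a few lines (ours; \texttt{mpmath} assumed, repeating the ingredients of the previous sketch so that it runs standalone):
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 40

Ce = 1/(4*mp.pi); Ct = (2 - mp.log(2))/(2*mp.pi); A = 3*Ct/2
a = [mp.mpf(2)/3, 4*Ce/9]
for k in range(1, 120):
    a.append(Ct*(mp.mpf(16)/3*a[k]
                 - mp.fsum((4 - n)*a[k - n]*a[n] for n in range(k + 1))))

R = [mp.mpf(0)] + [a[m + 1]*A/(a[m]*m) for m in range(1, len(a) - 1)]

def richardson(seq, N):
    m0 = len(seq) - N - 1
    return sum(seq[m0 + j]*mp.mpf(m0 + j)**N*(-1)**(j + N)
               / (mp.factorial(j)*mp.factorial(N - j)) for j in range(N + 1))

print(R[-1], richardson(R, 5), richardson(R, 10))   # all close to 1
\end{verbatim}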
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{R0-0-2-5.png}
\caption{Convergence of the ratio (\ref{eq:ratio-one-param-pert}), in blue, to
the leading term in the large-order relation $c_0 = 1$ (light blue). The accelerated
convergence is shown using Richardson transforms of
order 2 (green) and 5 (red).}
\label{fig:ratio-one-param}
\end{figure}
Finally, note that one can
easily verify the consistency of any coefficient $c_k$
predicted by resurgence by examining the convergence of
\be
\widetilde{R}_m (k) \equiv \left(R_m-\sum_{r=0}^{k-1}\frac{c_r}{m^r}\right)\,
m^k \simeq c_k+\mathcal{O}(m^{-1}) \, .
\ee
In Fig. \ref{fig:ratio-c5-one-param} this convergence can be seen to the
predicted value of $c_5=-31.1456818997329$.
For a Richardson transform of order 5, the error of the predicted value is
$10^{-8}$.
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{R5-0-2-5.png}
\caption{Convergence of the ratio (\ref{eq:ratio-one-param-pert}), in blue, to
the large-order relation coefficient $c_5$ (light blue). The accelerated
convergence is shown using Richardson transforms of
order 2 (green) and 5 (red).}
\label{fig:ratio-c5-one-param}
\end{figure}
This Section has served as a warmup for some of the resurgence
techniques that we will be using in what follows. We needed to resolve a
nonperturbative ambiguity due to a singularity on the positive real axis
and obtained results consistent with Ref.~\cite{Heller:2015dha}. The
following Section applies resurgence techniques to an extended
hydrodynamic theory which involves a second-order differential equation with two
Stokes lines in the complex plane, both away from the positive real axis. Even
though there will be no ambiguities in the resummation procedure
along the positive real axis, resurgence plays a decisive role in the
construction of the full transseries answer. The methods of resurgence that
will be used to
study this problem are a generalisation of what was presented in this Section
and in Ref.~\cite{Aniceto:2011nu} (a review of nonlinear resurgent transseries
with two and more parameters can be found in Ref.~\cite{Upcoming:2015}).
Note finally that one can of course solve \rf{eq:first-order-MIS}
numerically. These exact solutions
rapidly converge to an attractor which determines the universal behaviour that
emerges after the nonhydrodynamic degrees of freedom decay. The transseries
solution captures this behaviour: adding just the lowest order in the
transseries makes it possible to match the attractor solution by choosing the
real part of the transseries parameter $\sigma$ appropriately
\cite{Heller:2015dha}.
While this work was in preparation, the authors became aware of
Ref.~\cite{Basar:2015ava} which has some overlap with this Section.
\section{Extended theories of hydrodynamics}
\label{sec:ext}
As reviewed in Section \ref{sec:mis}, MIS theory contains a single purely
damped nonhydrodynamic mode, the presence of which is reflected in the
divergence of the gradient expansion. The occurrence of this mode is enough to
furnish a causal hydrodynamic
theory close to local equilibrium. This theory has been very successful in
describing the evolution of quark-gluon plasma produced in heavy ion
collisions, beginning with proper times of less than a fermi/c. It has,
however, been noted by many authors
\cite{Chesler:2009cy,Heller:2011ju,Jankowski:2014lna} that the pressure
anisotropy at these early times is still very large, and the system is not
close to equilibrium. It is natural to suspect that the nonhydrodynamic MIS
mode not only regulates the causality and stability issues of
Navier-Stokes hydrodynamics, but contributes in a very nontrivial way to the
physical implications of this model. This provides strong motivation to try to
understand better the role of nonhydrodynamic modes, and
how they can be matched with a microscopic description. For modeling early
nonequilibrium dynamics one would expect that incorporating further
nonhydrodynamic degrees of freedom should provide a better description.
A significant step in this direction was taken in
Ref.~\cite{Heller:2014wfa}, where extended hydrodynamic theories were
formulated in
the context of ${\mathcal N}=4$ SYM. These theories attempted to match the effective theory
to the pattern of the least damped black brane quasinormal modes which govern
the approach to hydrodynamics.
In this paper we focus on the simplest model discussed in
Ref.~\cite{Heller:2014wfa} in which there
is a pair of nonhydrodynamic modes which are not purely decaying.
The familiar relaxation equation of MIS theory, \rf{mis2}, is
replaced by
\bel{eqpi2s}
\left(\left(\f{1}{T} {\mathcal D}\right)^2 + 2\Omega_I \f{1}{T} {\mathcal D} + |\Omega|^2\right)
\Pi^{\mu \nu} = - \eta |\Omega|^2 \sigma^{\mu\nu} - c_\sigma \f{1}{T} {\mathcal D}\left(\eta
\sigma^{\mu\nu}\right) + \ldots \, ,
\ee
where the ellipsis denotes contributions of second and higher order in
gradients. The parameter
\bel{omega}
\Omega \equiv \Omega_R + i \Omega_I
\ee
is the complex ``quasinormal mode'' frequency. The coefficient $c_\sigma$ affects
the region of stability in parameter space
\cite{Heller:2014wfa}. By solving \rf{eqpi2s} in the gradient
expansion one can also check that $c_\sigma$ contributes to second order
transport coefficients.
However, in our work this coefficient does not play a qualitative role
and we will set it to zero.
The appearance of the second derivative in \rf{eqpi2s} is what leads to
nonhydrodynamic modes which are not purely decaying.
Indeed, the linearization of equations \rf{cons} and \rf{eqpi2s} around flat
space reveals a pair of
nonhydrodynamic modes with complex frequencies $\Omega$ and $-\bar{\Omega}$. In
the case of ${\mathcal N}=4$ SYM\ the leading quasinormal mode frequencies have the values
\cite{Nunez:2003eq}
\bel{omegas}
\Omega_{R} \approx 9.800 , \qquad \Omega_{I} \approx 8.629 \, ,
\ee
and these are the values we assume in our calculations\footnote{The values in
\rf{omegas} differ from those in
Table 1 of Ref.~\cite{Nunez:2003eq} (corresponding to an operator of conformal
weight $\Delta=4$) by a factor of $2 \pi$.}.
As in the case of MIS theory, upon imposing boost invariance the hydrodynamic
equations reduce to an ordinary differential equation for the
temperature.
Of course, in the present case, the equation is of third order.
Introducing new variables as in \rf{wfdef} one can rewrite it as a
second-order differential equation for the function $f(w)$:
\bel{vcp}
w f^2 f''+\alpha f f'+12 f^2 f'+w f f'^2+\frac{\beta +
\gamma f + \delta f^2 + 12 f^3}{w} = 0 \, ,
\label{eq:bifex}
\ee
where $f'$ and $f''$ are the first and second derivatives of $f(w)$
with respect to $w$, and
\begin{eqnarray}
\alpha &\equiv& -8 + 2 w \Omega_I , \nonumber\\
\beta &\equiv& -\f{128}{27} - \f{32}{27} C_\eta C_{\tau\Pi} - \f{4}{9} w (C_\eta |\Omega|^2 -
8 \Omega_I) - \f{2}{3} w^2 |\Omega|^2 , \nonumber\\
\gamma &\equiv& \f{176}{9} + \f{4}{3} C_\eta C_{\tau\Pi} - \f{32}{3} w \Omega_I + w^2 |\Omega|^2, \nonumber\\
\delta &\equiv& - \f{80}{3} + 8 w \Omega_I \, .
\eea
This is the analog of \rf{eq:first-order-MIS} of MIS theory.
For physical reasons it is clear that at late times (large $w$) the solution
must tend to $2/3$,
which corresponds to ideal fluid behaviour. It is easy to see
analytically that this is indeed the asymptotic solution. One can easily
determine the large-$w$ expansion of solutions:
\be
f(w) = \f{2}{3} + \f{4 C_\eta}{9} \f{1}{w} + \f{8 C_\eta (C_{\tau\Pi} + 2
\Omega_I)}{27 |\Omega|^2} \f{1}{w^2} + \dots \, .
\ee
As expected, the first two terms coincide with what one obtains in
MIS theory (see \rf{eq:mishydro}), whereas the third term is
different. This series can be calculated
up to essentially any order and can be shown to be divergent, as discussed in
much detail in the following section.
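The coefficients of this expansion can be generated order by order with a
computer algebra system. The following schematic sympy fragment (our own
illustration; the truncation order $N$ and the symbols standing for $C_\eta$,
$C_{\tau\Pi}$, $\Omega_{R}$, $\Omega_{I}$ are placeholders) substitutes a
truncated series into \rf{vcp} and solves the resulting conditions order by
order; it reproduces the coefficient $4C_\eta/9$ of the $1/w$ term:
\begin{verbatim}
import sympy as sp

w, x = sp.symbols('w x', positive=True)
Ce, Ct, OI, OR = sp.symbols('C_eta C_tau Omega_I Omega_R', positive=True)
Om2 = OR**2 + OI**2                       # |Omega|^2

N = 5                                     # truncation order (illustrative)
a = [sp.Rational(2, 3)] + list(sp.symbols(f'a1:{N}'))
f = sum(a[k] * w**(-k) for k in range(N))

alpha = -8 + 2*w*OI
beta = (-sp.Rational(128, 27) - sp.Rational(32, 27)*Ce*Ct
        - sp.Rational(4, 9)*w*(Ce*Om2 - 8*OI) - sp.Rational(2, 3)*w**2*Om2)
gamma = (sp.Rational(176, 9) + sp.Rational(4, 3)*Ce*Ct
         - sp.Rational(32, 3)*w*OI + w**2*Om2)
delta = -sp.Rational(80, 3) + 8*w*OI

eq = (w*f**2*sp.diff(f, w, 2) + alpha*f*sp.diff(f, w)
      + 12*f**2*sp.diff(f, w) + w*f*sp.diff(f, w)**2
      + (beta + gamma*f + delta*f**2 + 12*f**3)/w)

poly = sp.expand(eq.subs(w, 1/x) * x)     # turn the 1/w series into powers of x
sol = {}
for k in range(N - 1):                    # the lowest orders fix a1, a2, ...
    ck = sp.expand(poly.coeff(x, k).subs(sol))
    new = sorted(ck.free_symbols & set(a[1:]), key=str)
    if new:
        sol[new[0]] = sp.solve(ck, new[0])[0]
print(sp.simplify(sol[a[1]]))             # -> 4*C_eta/9
\end{verbatim}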
At early times, which correspond to small values of $w$, one finds a
unique real
power series solution of the form
\be
f(w) = \f{8}{9} + \f{9 C_\eta |\Omega|^2- 8 \Omega_I}{3(20 + 9 C_\eta
C_{\tau\Pi})} w + \dots \, .
\ee
By examining numerical solutions of \rf{eq:bifex} it is
clear that (similarly to the case of MIS theory) this is the small $w$
behaviour of an attractor solution
valid in the entire range of $w$.
Since \rf{vcp} is of second order, one must
specify both $f$ and $f'$ at the initial value of $w$.
As seen in Fig.~\ref{fig:attract}, setting
initial conditions at various values of $w$ shows that the numerical solutions
converge to the attractor.
However, unlike in the MIS case, the numerical solutions do not decay monotonically but
oscillate around the attractor.
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{attract1.png}
\caption{Numerical solutions converge (nonmonotonically) to the numerical
attractor (magenta).
}
\label{fig:attract}
\end{figure}
\section{Resurgence in extended hydrodynamics}
\label{sec:extres}
We are interested
in solving \rf{eq:bifex} as an expansion at large $w\gg1$.
If we use a transseries Ansatz of the type (\ref{eq:trans-series-one-param})
and substitute it into this equation, we easily find two complex conjugate
values for the instanton action
\be
A_{\pm}=\frac{3}{2}\left(\Omega_{I}\pm\mathrm{i}\Omega_{R}\right) \, .
\ee
Equivalently, we can write $A_{+}=\frac{3}{2}\mathrm{i}\bar{\Omega}$,
while $A_{-}=-\frac{3}{2}\mathrm{i}\Omega$. We then have two types
of nonperturbative contributions and thus, following
Refs.~\cite{Aniceto:2011nu,Upcoming:2015},
we find that we need a two-parameter transseries to fully describe
the solutions to this equation,
\begin{equation}
f\left(w,\sigma_{\pm}\right)=\sum_{n,m=0}^{+\infty}\sigma_{+}^{n}\sigma_{-}^{m}
\mathrm{e}^{-(nA_{+}+mA_{-})w}\Phi_{(n|m)}\left(w\right) \, ,
\label{eq:trans-series-two-param}
\end{equation}
where $\Phi_{(n|m)}(w)$ are the perturbative expansions in $w^{-1}$
around each sector. The perturbative sector is given by taking $n=m=0$.
These expansions are of the form
\begin{equation}
\Phi_{(n|m)}\left(w\right)=w^{\beta_{n,m}}\sum_{k=0}^{+\infty}a_{k}^{(n|m)}w^{-k}
\, ,
\label{eq:exp-sectors-two-param}
\end{equation}
where the coefficients $\beta_{n,m}$
reflect the type of branch cut
singularities in the Borel plane,
and the $a_{k}^{(n|m)}$ are the expansion
coefficients which
can be determined iteratively by substituting Eq.~(\ref{eq:trans-series-two-param})
into Eq.~(\ref{eq:bifex}). Assuming furthermore
that $\beta_{n,m}=n\beta_{+}+m\beta_{-}$, we find
\be
\beta_{\pm}=C_{\eta}\left(\Omega_{I}\pm\mathrm{i}\Omega_{R}\right) \, ,
\ee
together with recursion equations for the coefficients $a_{k}^{(n|m)}$.
Because $A_{\pm}$ are complex conjugate, as well as $\beta_{\pm}$,
and given that the coefficients of Eq.~(\ref{eq:bifex})
are all real, we
see that the coefficients $a_{k}^{(n|m)}$ will be complex conjugates
of $a_{k}^{(m|n)}$ (and consequently all $a_{k}^{(n|n)}$ will be real).
By studying numerically the behaviour of the coefficients of the perturbative series
we see that these grow factorially for large enough order $k$. This is
directly related to the behaviour of the Borel transforms.
If we define the Borel transform for each sector
\be
\mathcal{B}\left[\Phi_{(n|m)}\right]\left(s\right)=\sum_{k=k_{\mathrm{min}}}a_{k}^{(n|m)}\frac{s^{k-\beta_{n,m}-1}}{\Gamma\left(k-\beta_{n,m}\right)}
\, ,
\ee
we find a nonzero radius of convergence and branch cuts starting
at positions $s_{\pm,\ell}=\ell \, A_{\pm}$. Note that $k_{\mathrm{min}}$ is
the minimum value of $k$ such that every power of $s$ appearing in the Borel
transform is non-negative (this does not change the asymptotic properties of
the series). In Fig.~\ref{fig:Borelpoles}, we see this behaviour
for the Borel transform of the perturbative series
\be
\mathcal{B}\left[\Phi_{(0|0)}\right]\left(s\right) = \sum_{k=0}a_{k+1}^{(0|0)}
\frac{s^{k}}{\Gamma\left(k+1\right)} \, .
\ee
In order to analyse the singularities of the Borel transform, we use the
method of Pad\'{e} approximants, where the series above is approximated by a
ratio of polynomials.\footnote{In the diagonal case used here, the numerator and
denominator polynomials are of the same order, equal to half the number of
coefficients computed for the original series (see for example Ref.~\cite{Aniceto:2011nu} for more details).}
The positions of the zeros of the denominator polynomial
reflect the singular behaviour of the Borel transform: the poles condense along
certain directions, indicating cuts in the Borel plane.
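To make the procedure concrete, a minimal sketch of this pole analysis
(assuming the coefficients $a_{k+1}^{(0|0)}$ have already been generated from
the recursion relations, and using SciPy's Pad\'{e} routine) reads:
\begin{verbatim}
from math import factorial
import numpy as np
from scipy.interpolate import pade

def borel_pade_poles(a, M):
    """a[k] = a_{k+1}^{(0|0)}; returns the poles of the (near-)diagonal
    Pade approximant of the Borel transform, denominator order M."""
    borel = [a[k] / factorial(k) for k in range(2 * M)]
    p, q = pade(borel, M)          # numerator p, denominator q (poly1d)
    return q.roots                 # pole positions in the Borel s-plane

# The poles should condense along rays through the instanton actions
# A_pm = 1.5*(8.629 +/- 9.800j), cf. the red dots in the figure below.
\end{verbatim}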
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{BorelPolesv2.png}
\caption{Poles of the diagonal Borel-Pad\'{e}
approximant
of order 300 associated with $\Phi_{(0|0)}$ in the Borel $s$-plane.
The red dots indicate values of the multiple instanton actions $\ell \,
A_{\pm},\,\ell\ge1$.
}
\label{fig:Borelpoles}
\end{figure}
We check the resurgent properties of the transseries
\rf{eq:trans-series-two-param} by determining the large-order behaviour
predicted by resurgence for the perturbative series $\Phi_{(0|0)}$ (the
procedure can then
be generalised for higher sectors). We first need to determine the associated
discontinuities, and then
make use of Cauchy's theorem (\ref{eq:Cauchy-thm}), in the same manner
as for the one-parameter example previously studied.\footnote{Note that for higher sectors there will naturally be additional
singular directions in the Borel plane, associated with different
combinations of the two instanton actions $A_{\pm}$, much in the same way as,
for the one-parameter transseries, the sectors $\Phi_n$, $n\ge2$, have
discontinuities on both the positive and negative real axes.}
In the present case we have two singular directions defined by the
two actions $A_{\pm}$:
\be
\theta_{\pm}=\pm\arctan\left(\frac{\Omega_{R}}{\Omega_{I}}\right)\equiv\pm\theta_{A}.
\ee
Each of these directions will have a different Stokes constant associated
with it, which we will call $S_{\pm}$.
Following the ideas of \cite{Aniceto:2011nu,Upcoming:2015}
we can write down the discontinuities associated with the singular directions as
\begin{eqnarray*}
\mathrm{Disc}_{\theta_{+}}\Phi_{(0|0)}\left(w\right) & = &
-\sum_{\ell=1}^{+\infty}\left(S_{+}\right)^{\ell}\mathrm{e}^{-\ell
A_{+}w}\Phi_{(\ell|0)}\left(w\right),\\
\mathrm{Disc}_{\theta_{-}}\Phi_{(0|0)}\left(w\right) & = &
-\sum_{\ell=1}^{+\infty}\left(S_{-}\right)^{\ell}\mathrm{e}^{-\ell
A_{-}w}\Phi_{(0|\ell)}\left(w\right) \, .
\end{eqnarray*}
Rewriting these results for the variable $x=w^{-1}$,
making use of Cauchy's theorem (\ref{eq:Cauchy-thm}) for the function
$x\Phi_{(0|0)}\left(x\right)$, and expanding for small
$x$, we arrive at the large-order predictions ($m\gg1$)
\begin{eqnarray}
a_{m}^{(0|0)} & \simeq &
-\sum_{k\ge1}\frac{\left(S_{+}\right)^{k}}{2\pi\mathrm{i}}\frac{\Gamma\left(m+k\beta_{+}\right)}{\left(kA_{+}\right)^{m+k\beta_{+}}}\sum_{h\ge0}a_{h}^{(k|0)}\frac{\Gamma\left(m+k\beta_{+}-h\right)}{\Gamma\left(m+k\beta_{+}\right)}\left(kA_{+}\right)^{h}
\nonumber\\
& &
-\sum_{k\ge1}\frac{\left(S_{-}\right)^{k}}{2\pi\mathrm{i}}\frac{\Gamma\left(m+k\beta_{-}\right)}{\left(kA_{-}\right)^{m+k\beta_{-}}}\sum_{h\ge0}a_{h}^{(0|k)}\frac{\Gamma\left(m+k\beta_{-}-h\right)}{\Gamma\left(m+k\beta_{-}\right)}\left(kA_{-}\right)^{h} \, .
\eea
Given that all the coefficients $a_{m}^{(0|0)}$ are real, and that
the pairs $\beta_{\pm},\,A_{\pm}$ and $a_{h}^{(k|0)},a_{h}^{(0|k)}$
are complex conjugate, we can easily see that $\frac{S_{+}}{2\pi\mathrm{i}}$
has to be complex conjugate of $\frac{S_{-}}{2\pi\mathrm{i}}$, and
so the Stokes constants are related by
\be
S_{-}=-\overline{S}_{+} \, .
\ee
It will be convenient to define
$A_{\pm}=\left|A\right|\mathrm{e}^{\pm\mathrm{i}\theta_{A}}$,
$\beta_{\pm}=\left|\beta\right|\mathrm{e}^{\pm\mathrm{i}\theta_{\beta}}=\beta_{R}\pm\mathrm{i}\beta_{I}$,
$S_{\pm}=\pm\left|S\right|\mathrm{e}^{\pm\mathrm{i}\theta_{S}}$.
The leading
behaviour of the large-order relations written above is
dictated by the sectors $a_{h}^{(1|0)}$ and $a_{h}^{(0|1)}$,
\be
a_{m}^{(0|0)} \simeq -\frac{S_{+}}{2\pi\mathrm{i}}
\frac{\Gamma\left(m+\beta_{+}\right)}{A_{+}^{m+\beta_{+}}}
\sum_{h\ge0}a_{h}^{(1|0)} \prod_{\ell=1}^{h}
\frac{A_{+}}{(m+\beta_{+}-\ell)}+h.c.+\mathcal{O} \left(2^{-m}\right) \, ,
\label{eq:large-order-pert}
\ee
where $h.c.$ stands for the Hermitian conjugate. Unlike the one-parameter case
previously studied, in these large-order relations there will always be a
dependence on the Stokes constant $S_{+}$. Thus, before proceeding with deeper
tests of these relations, we need to numerically determine the Stokes
constants. This can be done as follows. Defining
\be
Q_m = - \frac{1}{2\pi\mathrm{i}} \frac{\Gamma\left(m+\beta_{+}\right)}{A_{+}^{m+\beta_{+}}}
\sum_{h\ge0}a_{h}^{(1|0)} \prod_{\ell=1}^{h} \frac{A_{+}}{(m+\beta_{+}-\ell)}
\label{eq:def-Qm}
\ee
one has
\be
a_{m}^{(0|0)} \simeq S_{+} Q_m + \mathrm{h.c.}+\mathcal{O}\left(2^{-m}\right)
\, .
\label{eq:conv-pert-series-Qm}
\ee
If we can determine a resummed value of $Q_m\equiv
\left|Q_m\right|\mathrm{e}^{\mathrm{i}\theta_{Q}(m)}$ for each $m$, then it
easily follows that
\be
\f{a_{m+1}^{(0|0)}}{a_{m}^{(0|0)}} \simeq \f{|Q_{m+1}|}{|Q_{m}|}
\f{\cos\left(\theta_Q (m+1) +\theta_S\right)}{\cos\left(\theta_Q (m) +
\theta_S\right)} \, .
\label{eq:large-ord-argS-v1}
\ee
Note that this relation is still a large-order relation, i.e., it is
valid for large values of $m$. The argument of the Stokes constant can then be
found by rewriting this large-order relation \rf{eq:large-ord-argS-v1},
\be
\tan\theta_S = \frac{g(m) \cos\theta_Q(m) - \cos\theta_Q(m+1)}{g(m)
\sin\theta_Q(m) - \sin\theta_Q(m+1)} \, ,
\label{eq:large-ord-argS-v2}
\ee
where
\be
g(m)\equiv \f{a_{m+1}^{(0|0)}}{a_{m}^{(0|0)}} \f{|Q_m|}{|Q_{m+1}|} \, .
\ee
To determine the resummed values of $Q_m$, first notice that the sum present
in (\ref{eq:def-Qm}) is asymptotic for large $m$:
\be
\eta(m)\equiv\sum_{h\ge0}a_{h}^{(1|0)} \prod_{\ell=1}^{h}
\frac{A_{+}}{(m+\beta_{+}-\ell)} \, \simeq \,
\sum_{k=0}^{+\infty}\,\frac{\eta_k}{m^k} \, .
\label{eq:asympt-large-m}
\ee
The coefficients $\eta_k$ are fully determined by the value of
$A_{+},\,\beta_{+}$ and the coefficients $a_h^{(1|0)}$. The latter were
determined from the recurrence relations coming from the original differential
equation, up to $h=100$. The above sum can be computed via the
Borel-Pad\'{e} resummation method (see Ref.~\cite{Aniceto:2011nu} for more
details):\footnote{This sum could also be approximated by performing an
optimal truncation for each value of $m$.}
\begin{itemize}
\item We first determine the Borel transform corresponding to the asymptotic
sum $\eta(m)$, \rf{eq:asympt-large-m}.
\item We approximate this Borel transform by a diagonal Pad\'{e}
approximant of order $N=50$, denoted by $\mathrm{BP}_{50}\left[\eta\right]$.
\item The resummed series $\mathcal{S}\eta (m)$ is then determined via the
usual Laplace transform along the positive real axis, since we want
$m\in\mathbb{N}$ (see the sketch after this list):
\be
\mathcal{S}\eta(m) \,= \,\int_{0}^{+\infty}
ds\,\mathrm{e}^{-s\, m}\,\mathrm{BP}_{50}\left[\eta\right]\left(s\right) \, .
\ee
This was performed for
$m=1,\cdots,100$.\footnote{For
this sum, the positive real axis is
not a Stokes line, and there is no ambiguity associated with the
resummation.}
\end{itemize}
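A compact sketch of these three steps (with hypothetical array names; since
the coefficients $\eta_k$ are complex, the real and imaginary parts of the
Laplace integral are treated separately) might look as follows:
\begin{verbatim}
from math import factorial
import numpy as np
from scipy.interpolate import pade
from scipy.integrate import quad

def resum_eta(eta, m, M=50):
    """Borel-Pade resummation of eta(m) ~ sum_k eta[k]/m**k at integer m."""
    borel = [eta[k + 1] / factorial(k) for k in range(2 * M)]  # skip eta_0
    p, q = pade(borel, M)                 # diagonal Pade of order M
    f = lambda s: np.exp(-s * m) * p(s) / q(s)
    re = quad(lambda s: f(s).real, 0, np.inf, limit=200)[0]
    im = quad(lambda s: f(s).imag, 0, np.inf, limit=200)[0]
    return eta[0] + re + 1j * im          # the constant term resums trivially
\end{verbatim}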
We can finally rewrite the resummed $Q_m$ as
\be
\mathcal{S}Q_m = - \frac{1}{2\pi\mathrm{i}} \frac{\Gamma\left(m+\beta_{+}\right)}{A_{+}^{m+\beta_{+}}}
\mathcal{S}\eta(m) \, .
\label{eq:resummed-Qm}
\ee
With this result we can determine the argument of the Stokes constant
$\theta_S\equiv\arg\left(S_{+}\right)$ for each value of $m$ via the relation
(\ref{eq:large-ord-argS-v2}), by substituting
the resummed value $\mathcal{S}Q_m$ for the $Q_m$.
The result is illustrated in
Fig. \ref{fig:stokesarg}: the phase becomes essentially independent of $m$ and
is given by $\theta_S = -1.710276$ (the estimated error is of order
$10^{-6}$).
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{argsplusv2.png}
\caption{Convergence of the phase of the Stokes constant $\theta_S \equiv
\arg\left(S_{+}\right)$. We plot (\ref{eq:large-ord-argS-v2}) for each $m$
using the resummed $Q_m$.
}
\label{fig:stokesarg}
\end{figure}
One can similarly calculate the modulus of the Stokes constant.
From \rf{eq:conv-pert-series-Qm} it follows that for large $m$
the modulus of the Stokes constant $\left|S\right|\equiv\left|S_{+}\right|$
should converge to
\be
\left|S\right|\simeq\frac{a_m^{(0|0)}}{2\left|Q_m\right|\cos\left(\theta_S +
\theta_Q (m)\right)} \, ,
\label{eq:large-ord-modS}
\ee
with the $Q_m$
replaced by the resummed ones given in \rf{eq:resummed-Qm}.
This convergence can be seen in
Fig. \ref{fig:stokesmod},
implying the value $\left|S\right|=4.728045$ (with error of order $10^{-6}$).
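In code, the extraction of $\theta_S$ and $\left|S\right|$ from the resummed
$\mathcal{S}Q_m$ is a direct transcription of \rf{eq:resummed-Qm},
\rf{eq:large-ord-argS-v2} and \rf{eq:large-ord-modS} (a sketch with our own
array names; the branch of the arctangent must be fixed consistently, and for
very large $m$ it is safer to work with the logarithm of the Gamma function):
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def resummed_Qm(m, beta_p, A_p, Seta):   # eq. (resummed-Qm)
    return -gamma(m + beta_p) / A_p**(m + beta_p) * Seta / (2j * np.pi)

def stokes_constant(a0, SQ):             # a0[m-1] = a_m^{(0|0)}, numpy arrays
    thQ = np.angle(SQ)
    g = (a0[1:] / a0[:-1]) * np.abs(SQ[:-1]) / np.abs(SQ[1:])
    thS = np.arctan2(g * np.cos(thQ[:-1]) - np.cos(thQ[1:]),
                     g * np.sin(thQ[:-1]) - np.sin(thQ[1:]))
    modS = a0[:-1] / (2 * np.abs(SQ[:-1]) * np.cos(thS + thQ[:-1]))
    return thS, modS                     # both should flatten at large m
\end{verbatim}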
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{modsplusv2.png}
\caption{Convergence of the large-order relation Eq.~(\ref{eq:large-ord-modS})
to the modulus of the Stokes constant $\left|S\right| \equiv
\left|S_{+}\right|$.}
\label{fig:stokesmod}
\end{figure}
This concludes the numerical calculation of the Stokes constant. Using its
value we can now check the resurgence large-order relations
(\ref{eq:large-order-pert}).
Let us define
a new quantity by
\be
\Omega\left(m\right) \equiv \frac{2 \pi}{\left|S\right|}
\frac{\left|A\right|^{m+\beta_R} \mathrm{e}^{-\beta_{I}
\theta_A}}{m^{\beta_R}\,\Gamma(m)} \, a_m^{(0|0)} \, .
\ee
Making use of the asymptotic expansion (\ref{eq:asympt-large-m}), as well as
the
following large $m$
expansion\footnote{As an example we present the first
three terms in this expansion: $\gamma_0=1$, $\gamma_1=\frac{1}{2}\,\beta_{+}\left(\beta_{+}-1\right)$, and
$\gamma_2=\frac{1}{24}\,\beta_{+}\left(\beta_{+}-1\right)\left(\beta_{+}-2\right)\left(3\beta_{+}-1\right)$.}
\be
\gamma(m) \equiv
\frac{\Gamma(m+\beta_{+})}{m^{\beta_R}\,\Gamma(m)}=\sum_{k=0}^{+\infty}\frac{\gamma_k}{m^k}
\, ,
\ee
we can easily find the large-order behaviour of $\Omega(m)$ for $m\gg1$:
\be
\Omega (m) \, \simeq \,\mathrm{e}^{\mathrm{i}\,\Theta (m)} \sum_{k=0}^{+\infty} \frac{c_k}{m^k}+h.c.
\, = \, 2 \sum_{k=0}^{+\infty} \frac{\left|c_k\right|}{m^k} \,
\cos\left(\Theta(m) + \theta_c(k)\right) \, .
\label{eq:large-order-check}
\ee
The coefficients $c_k \equiv
\left|c_k\right|\mathrm{e}^{\mathrm{i}\theta_c (k)}$ appearing in this
expression are defined by
\be
\sum_{k \ge 0} c_k\,m^{-k} \equiv \gamma(m) \, \eta(m)
\ee
and the angle $\Theta(m)$ is
\be
\Theta (m) \equiv \frac{\pi}{2} + \theta_S - \theta_A
\left(m+\beta_R\right)-\beta_I \log\left|A\right| \, .
\ee
Note that the coefficients $c_k$ are known numerically, because both
expansions $\eta(m),\,\gamma(m)$ are known.
Analysing the relation (\ref{eq:large-order-check}) we find that to leading
order in $m$ (since $c_0=1$)
\be
\Omega(m) \simeq 2 \cos\left(\Theta(m)\right) + \mathcal{O}\left(m^{-1}\right)
\, .
\ee
In Figs. \ref{fig:large-ord-coeff0-5-100} and \ref{fig:large-ord-coeff0} we
can see the convergence of the numerical results to the predicted behaviour
for two different ranges of $m$: for $m<100$ we see a slow
convergence, which becomes more accurate at higher values of $m$; in the range
$500<m<600$ there is already complete consistency between numerical results
and predicted behaviour.
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{largeordc0-5-100.png}
\caption{Convergence of the large-order relation (\ref{eq:large-order-check})
to the expected behaviour predicted by the coefficient $c_0(m)\equiv 2
\cos\left(\Theta(m) \right) $.}
\label{fig:large-ord-coeff0-5-100}
\end{figure}
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{largeordc0.png}
\caption{
Consistency of the large-order relation (\ref{eq:large-order-check})
with the expected behaviour predicted by the coefficient $c_0(m)\equiv 2
\cos\left(\Theta(m) \right) $ at large $m$.
}
\label{fig:large-ord-coeff0}
\end{figure}
We can also study the convergence of the large-order relations to a general
coefficient $c_k$ for some specific $k$ by subtracting the first $k$ terms of
the series and multiplying the result by $m^k$.
For example, we can check the convergence to
the term $k=2$ in the relation (\ref{eq:large-order-check}) by plotting
\be
\left(\Omega(m) - 2 \sum_{k=0}^{1} \frac{\left|c_k\right|}{m^k} \,
\cos\left(\Theta(m) + \theta_c(k)\right)\right)m^2 \simeq 2 \left|c_2\right|
\, \cos\left(\Theta(m) + \theta_c(2)\right) +\mathcal{O}\left(m^{-1}\right).
\label{eq:large-ord-coeff-c2}
\ee
This convergence can be seen in Fig.~\ref{fig:large-ord-coeff2}: for the
range of large $m$ presented, we find consistency between the numerical and
predicted results.
\begin{figure}[ht]
\center
\includegraphics[height=0.4\textheight]{largeordc2.png}
\caption{
Consistency of the large-order relation
\rf{eq:large-ord-coeff-c2} with the expected behaviour predicted by the
coefficient $c_2(m)\equiv 2
\left|c_2\right| \, \cos\left(\Theta(m) + \theta_c(2)\right) $ at large $m$.
}
\label{fig:large-ord-coeff2}
\end{figure}
It is important to note that the convergence to higher coefficients $c_k$ is
highly nontrivial, and is based on the assumption that the transseries is
resurgent
and that the value of the Stokes constant
has been correctly determined. If either of these assumptions had failed,
we would not have found convergence of the numerical results to the
higher-order behaviour predicted by resurgence.
It is also important to point out the slight deviation of the numerical
data from the predicted values
in Fig. \ref{fig:large-ord-coeff2}. The main reason for this is that
we have determined
(based on the same numerical data)
the values of the Stokes constants with an
error of $10^{-6}$; this error will eventually cause such a
deviation. In order to get more
accurate results in the convergence to higher
sectors, one would need to determine more coefficients of the sectors
$\Phi_{(1|0)}$ and $\Phi_{(0|1)}$ and use them to lower the numerical error
of the Stokes constant calculation.
Now that we have confirmed the resurgent properties of the perturbative
series,\footnote{The resurgent properties of higher nonperturbative sectors
can also be checked, once higher sectors are determined via recursion
relations and resummations of the lower sectors are performed.}
we turn to the central question: how to resum our two-parameter transseries
(\ref{eq:trans-series-two-param}).
We want to resum our transseries for positive real coupling. Because
the singularities in the Borel plane are away from this direction,
we can perform the integration of the Laplace transform (\ref{eq:resummation-one-param-sector}),
where now the sectors being resummed are the $\Phi_{(n|m)}$. There is no
ambiguity involved in this calculation, and the resummed transseries
(for the positive real line $\theta=0$) is
given by
\begin{equation}
\mathcal{S}_{0}f\left(w,\sigma_{\pm}\right) = \sum_{n,m=0}^{+\infty}
\sigma_{+}^{n} \sigma_{-}^{m}\mathrm{e}^{-(nA_{+}+mA_{-})w} \mathcal{S}_{0}
\Phi_{(n|m)} \left(w\right) \, .
\label{eq:resummed-trans-series-two-param}
\end{equation}
We could be tempted to set the transseries
parameters $\sigma_{\pm}=0$, which would leave us with only the perturbative
series. But the nonperturbative sectors will give real, exponentially
suppressed contributions that we should not neglect, as they play
a role in the correct final answer.\footnote{If the actions had a negative real
part, then one should in fact set
the parameters to zero when considering the transseries on the positive
real axis, so as not to have exponentially enhanced contributions.} In fact,
this was already seen in other problems of resummation
\cite{Grassi:2014cla,Couso-Santamaria:2015wga}.
Consequently, we should allow for nonzero $\sigma_{\pm}$.
For real values of $w$
we expect
a real solution, and we know that the sectors $\Phi_{(n|m)}$ are complex
conjugate to $\Phi_{(m|n)}$, with the instanton actions also being
complex conjugates. Therefore
in order to have a real solution we need to have
$\sigma_{+}^{n}\sigma_{-}^{m}$ be complex conjugate
to $\sigma_{+}^{m}\sigma_{-}^{n}$ for any $m,n$. In particular, putting
$m=0$, $n=1$ we find
\be
\sigma_{+}=\overline{\sigma}_{-}\equiv\sigma \, .
\ee
Writing the first few terms of the resummed transseries
(\ref{eq:resummed-trans-series-two-param}),
we have ($A_{\pm}=A_{R}\pm\mathrm{i}A_{I}$)
\begin{eqnarray*}
\mathcal{S}_{0}f(w,\sigma) & = &
\sum_{n=0}^{+\infty}\mathrm{e}^{-nA_{R}w} \sum_{m=0}^{n}\sigma^{n-m}
\bar{\sigma}^{m}\mathrm{e}^{-\mathrm{i}(n-2m)A_{I}w}\mathcal{S}_{0}\Phi_{(n|m)}\left(w\right)\\
& = &
\mathcal{S}_{0}\Phi_{(0|0)}\left(w\right)+\mathrm{e}^{-A_{R}w}2\,\mathrm{Re}\left(\sigma\mathrm{e}^{-\mathrm{i}A_{I}w}\mathcal{S}_{0}\Phi_{(1|0)}\left(w\right)\right)+\\
& &
+\mathrm{e}^{-2A_{R}w}\left[2\,\mathrm{Re}\left(\sigma^{2}\mathrm{e}^{-2\mathrm{i}A_{I}w}\mathcal{S}_{0}\Phi_{(2|0)}\left(w\right)\right)+\left|\sigma\right|^{2}\mathcal{S}_{0}\Phi_{(1|1)}\left(w\right)\right]+\mathcal{O}\left(\mathrm{e}^{-3A_{R}w}\right) \, .
\end{eqnarray*}
Note that the complex number $\sigma$ is not determined by the above
analysis. This freedom corresponds exactly to the two integration constants expected
for a solution of a second-order ordinary differential equation and can be
fixed by imposing suitable initial conditions.
\section{Summary and conclusions}
\label{sec:sum}
The equations of hydrodynamics constitute a physically well-motivated coarse
grained description of a wide range of phenomena. It has recently become clear
that they provide a new area of application for resurgence ideas. We have
tried to describe a mature version of these ideas in the context of MIS
theory, which provides the simplest example of an infinite hydrodynamic
series. This series is divergent in a way which encodes information about the
nonhydrodynamic mode present in MIS theory.
The main point of this paper was to apply these methods to a hydrodynamic
model which aims to describe a richer spectrum of nonhydrodynamic modes,
inspired by what is seen in ${\mathcal N}=4$ SYM. We have shown in some detail that also in
this theory the hydrodynamic solution is the leading term in a transseries
expansion. These results confirm general expectations concerning the nature of
gradient expansions \cite{Dunne}. They also provide an
interesting example of resurgent transseries, where the nonperturbative
sectors not only have the expected exponentially suppressed behaviour at late
times ($w\gg1$), but also an oscillatory one. This oscillatory behaviour
becomes more pronounced at early times, when these sectors are no longer
suppressed -- even though there are no ambiguities in this problem, the full
transseries is still needed to account for this behaviour. From the point of
view of resurgence theory, this oscillatory behaviour also brought novel
features. Because the large-order relations cannot be disentangled from the
Stokes constants, and one cannot use normal convergence acceleration methods
due to the oscillations, we needed to introduce a Borel-Pad\'{e} resummation
of the first nonperturbative sectors to accurately determine both the modulus
and the argument of the Stokes constant. This then allowed us to check the large-order
relations with high accuracy.
From a physical perspective, one would like to understand cases where the
series expansion is generated directly from some underlying microscopic
quantum theory, such as strongly coupled ${\mathcal N}=4$ SYM. In this case there is an
infinite sequence of nonhydrodynamic modes corresponding to the black brane
quasinormal modes \cite{Kovtun:2005ev,Nunez:2003eq}. To include more than a
single pair of complex conjugate quasinormal modes would involve
multiparameter transseries, with each quasinormal mode defining (in principle)
a separate Stokes line.
In practice one would first aim at understanding the effects of the leading
modes -- those with the longest relaxation times. In the case of
boost-invariant flow in ${\mathcal N}=4$ SYM, the hydrodynamic series has already been
computed to
high order, and the calculation of at least a few terms of the 1-instanton
sector series is feasible. In conjunction with the methods developed in the
study presented here, this opens up the possibility of at least checking
consistency with resurgence in this very important case.
\vskip 2em
{\bf Acknowledgments:} we would like to thank Michał Heller
and Ricardo Schiappa for useful comments
on the manuscript.
I.A. was supported by the National Science Centre grant
2012/06/A/ST2/00396.
M.S. was supported by the National Science Centre
grant 2012/07/B/ST2/03794.
\bibliographystyle{utphys}
\section{INTRODUCTION} \label{INTRODUCTION}
Gas fueling to a galactic center is very important for the activity of
active galactic nuclei, the growth of supermassive black holes (SMBHs),
nuclear starbursts, the formation of super star clusters in a galactic
central region, and other interesting phenomena. Our galaxy is very
interesting for this study for the following reasons. First,
the center of our galaxy is the closest galactic center. Its
distance is about $8.0\hspace{+.2em}\mathrm{kpc}$
(\citealt{eisenhauer03:_geomet_deter_of_distan_to_galac_center}).
Therefore, there is a wealth of high-resolution observational data over a
wide wavelength range, which is easy to compare with numerical
simulations. Secondly, there is evidence of recent mass supply in our
galactic center. There are young massive compact star clusters (the
Arches, Quintuplet, and Central clusters) in the Galactic center.
These clusters are located within 30 pc from the Galactic center and
have a number of OB stars (\citealt{figer:02}). Because of the short
age of the young stars, such massive star formation occurred within
the last several million years in the central region of the Galaxy
(\citealt{mezger96:_galac_center}). Formation of these clusters
requires a large amount of gas. The circumnuclear gas disk (CND),
which is dense ($10^{5}\hspace{+.2em}\mathrm{cm^{-3}}$), clumpy, and turbulent with large
line widths ($\geq 40\hspace{+.2em}\mathrm{km\sep s^{-1}}$) (\citealt{CH:99}), has a radius of a few
pc and a mass of $\approx 10^{6}\hspace{+.2em}\mathrm{M_{\odot}}$ (\citealt{christopher:05}).
\citet{CH:99} found a gas stream from the giant molecular cloud (the
$20\hspace{+.2em}\mathrm{km\sep s^{-1}}$ cloud) near the Galactic center to the CND. This may be
gas inflow to the CND.
It is expected that a vast amount of gas is supplied from the central
molecular zone (CMZ), which is a ring-like gas distribution and
extends over the range of Galactic longitude $-1.5^{\circ}\le l \le
2^{\circ}$, to the Galactic center
(\citealt{serabyn:95,morris96:_galac_center_envir}). The size of the
CMZ is $\sim 200\hspace{+.2em}\mathrm{pc}$. It is certainly formed by the large-scale bar
(\citealt{binney:91,morris96:_galac_center_envir,sawada:04}) and has a
large amount of molecular gas $5$-$10\times 10^{7}\hspace{+.2em}\mathrm{M_{\odot}}$
(\citealt{serabyn:95}). However, it is unclear how the gas is
transported further in. Secondary effects such as dissipation
(e.g. \citealt{heller01:_doubl_bars_in_disk_galax}),
gravitational instabilities (e.g. \citealt{fukuda00:_effec_of_self_gravit_of}),
or magnetic viscosity (e.g. \citealt{morris96:_galac_center_envir}) can then
drive the gas further in, but at a slower rate.
For the gas feeding, many authors have shown the important role of a bar
(e.g. \citealt{athanassoula:92}). Nested bars, which consist of an
outer bar and an inner bar, may play an important role in the gas
feeding to galactic centers. This idea was first proposed by
\citet{SFB:89} as a mechanism for fueling AGNs. Inspired by the idea
of \citet{SFB:89}, many numerical studies have been performed
(\citealt{FM:93,friedli:96,maciejewski97:_regul_orbit_and_period_loops,MS:00,heller01:_doubl_bars_in_disk_galax,shlosman02:_nested_bars_in_disk_galax,RSL:02,maciejewski:02,ES:03,ES:04,HSA:06,DS:07,shen07:_obser_proper_of_doubl_barred}).
\citet{FM:93} performed three dimensional simulations of gas and stars
and showed that an inner bar can drive gas infall to a galactic
center.
Nested bars are observed in nearby barred galaxies in a large fraction
($\sim 30\%$)
(\citealt{wozniak:95,FW:96,jungwiert:97,erwin:02,erwin:04}). The
large fraction indicates that nested bars are dynamically stable or
recurrent structures. Nested bars are expected to be dynamically
decoupled, since the relative orientations of the two bars are random
(\citealt{buta93:_metric_charac_of_nuclear_rings}). Dynamical
decoupling of these was also reported in many numerical studies
(\citealt{FM:93,MS:00,RSL:02,ES:04,DS:07}).
An increasing number of observational studies show the effect of nested
bars on gas flows in the central regions of galaxies. \citet{fathi:06}
observed the central region of the double-barred galaxy, NGC 1097,
with high resolution, using GMOS-IFU and HST-ACS. They show clear
evidence of radial streaming motion down to about 10 pc from the
nucleus by mapping the gas velocity fields.
\citet{schinnerer06:_molec_gas_dynam_in_ngc,schinnerer07:_bar_driven_mass_build_up}
observed molecular emissions in the central region of the nearby
double-barred spiral galaxy NGC 6946 with very high spatial
resolution ($\lesssim 1''$) with the IRAM Plateau de Bure
interferometer. They showed that there are nuclear massive gas clumps
and straight dust lanes inside the inner bar. They concluded that the
inner bar is closely related with the pile-up of molecular gas to the
nucleus. \citet{meier08:_nuclear_bar_catal_star_format} observed the
central region of the barred galaxy Maffei 2 with BIMA and OVRO. They
found a nuclear ring, of radius $\sim 80\hspace{+.2em}\mathrm{pc}$ and mass
$6.9\times 10^{6}\hspace{+.2em}\mathrm{M_{\odot}}$, well inside the bar, and showed that the overall
morphology of the gas, including the nuclear ring, can be explained by a
nuclear bar, by comparing the position-velocity diagram of molecular
gas with orbits of molecular clouds in their nuclear bar model.
These studies support the important role of inner bars in transporting
gas to a galactic center.
Recently, observational studies show evidence of an inner bar in our
galaxy, which is much smaller than the outer bar of semi-major axis
$3.5\hspace{+.2em}\mathrm{kpc}$. \citet{alard:01} studied the surface density of the old
stellar population in the inner bulge using the 2MASS data and showed
evidence of an inner bar. \citet{nishiyama:05,nishiyama:06}
investigated the shift of the peak position of the red clump star
distribution over $|l|<10.5^{\circ}$ using the IRSF 1.4 m telescope
with the near-infrared camera SIRIUS and showed that the gradient of
this shift clearly changes in $|l|<4^{\circ}$. They interpreted that
this structure may be due to the inner bar.
We study the possibility that the inner bar plays an important role in
the mass supply from the CMZ to the Galactic center. Previous
theoretical studies have not reported cases in which inner bars contribute
strongly to the gas supply to a galactic center
(e.g. \citealt{maciejewski:02,RSL:02}), but they studied only limited cases.
We investigate various inner bar models in this paper. We perform two
dimensional hydrodynamical simulations in a gravitational potential
model of our galaxy, assuming several inner bar models. In the
simulations, we systematically change the mass and the axial ratio of
inner bar models, since the parameters of the inner bar are not well constrained by
observations.
In $\S 2$, we give our gravitational models and numerical method.
In $\S 3$, we show the results of our simulations.
In $\S 4$, we discuss gravitational instability
and evolution of nuclear gas disks, which are obtained in our numerical results.
In $\S 5$, we summarize our study.
\section{MODEL} \label{MODEL}
\subsection{Gravitational Potential of the Galaxy} \label{subsec:gravitational_potential}
As the gravitational potential of our galaxy except for an inner bar,
we assume the model of \citet{bissantz:03} for the Galactic bulge, the
stellar disk, the outer bar, the spiral arms, and the dark halo
($R>500\hspace{+.2em}\mathrm{pc}$) and \citet{LZM:02} for the nuclear bulge
($R<500\hspace{+.2em}\mathrm{pc}$) and the SMBH. \citet{bissantz:03} simulated gas
motion in our galaxy potential model, which consists of the Galactic
bulge, the stellar disk, the outer bar, the spiral arms and the dark
halo. They gave pattern speed of the outer bar and the spiral arms
($\Omega _{\mathrm{OB}}\approx 60\hspace{+.2em}\mathrm{Gyr} ^{-1}$ and $\Omega _{\mathrm{SP}}\approx 20\hspace{+.2em}\mathrm{Gyr}
^{-1}$, respectively) to reproduce observational gas kinematics of
molecular clouds. \citet{LZM:02} analyzed IRAS and COBE DIRBE data of
the central $500\hspace{+.2em}\mathrm{pc}$ of our galaxy. They gave mass distribution
of the nuclear bulge, which is distinguished from the Galactic bulge
by its flat disk-like feature, assuming a constant mass-to-light
ratio. They estimated that the nuclear bulge has a mass of $(1.4\pm
0.6)\times 10^{9}\hspace{+.2em}\mathrm{M_{\odot}}$. We assume the rotation curve obtained from
the mass distribution of the nuclear bulge and the SMBH in $R\leq
500\hspace{+.2em}\mathrm{pc}$ for the rotation curve of the total stellar mass. We
connect smoothly the rotation curves obtained from the nuclear bulge
and the SMBH in $R\leq 500\hspace{+.2em}\mathrm{pc}$ and from the stellar component
of \citet{bissantz:03} in $R>500\hspace{+.2em}\mathrm{pc}$. Details on the
gravitational potential of the outer bar, the spiral arms, and the
dark halo are described in \citet{bissantz:03}.
Fig. \ref{fig:rotation_curve} shows the rotation curve of one of our
models, the model S33. In this figure, we use axially averaged mass
distribution of the inner and outer bars. In
Fig. \ref{fig:angular_velocity_curve}, we show the angular velocity
curve of the model S33. In this figure, there is the local maximum of
$\Omega -\kappa /2$ at $150\hspace{+.2em}\mathrm{pc}$, where $\Omega$ is the angular
velocity and $\kappa$ is the epicyclic frequency. We point out that the
curve of $\Omega -\kappa /2$ in $R<500\hspace{+.2em}\mathrm{pc}$ is rather
uncertain, since it is difficult to measure the mass
profile accurately on this scale.
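Given a tabulated rotation curve $v_{c}(R)$, the curves in
Figs.~\ref{fig:rotation_curve} and \ref{fig:angular_velocity_curve} follow
from $\Omega=v_{c}/R$ and $\kappa^{2}=R\,\mathrm{d}\Omega^{2}/\mathrm{d}R+4\Omega^{2}$;
schematically (a sketch assuming tabulated inputs, with the derivative taken
numerically):
\begin{verbatim}
import numpy as np

def resonance_curves(R, vc):
    """Omega, kappa and Omega - kappa/2 from a tabulated rotation curve;
    the output units follow those of the inputs."""
    Om = vc / R                                    # angular velocity
    kap = np.sqrt(4 * Om**2 + np.gradient(Om**2, R) * R)
    return Om, kap, Om - kap / 2
\end{verbatim}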
\subsection{Inner bar potential} \label{subsec:inn_bar_pot}
We assume Ferrers bar models for inner bars, since a density profile
of the inner bar is not observationally confirmed. The Ferrers bar
model has a density distribution as
\begin{equation}
\rho (x,y,z)=\rho _{0}\left(1-\frac{x^{2}}{a^{2}}-\frac{y^{2}}{b^{2}}-\frac{z^{2}}{c^{2}}\right)^{n},
\end{equation}
where $\rho _{0}$ is the density at the origin
(\citealt{ferrers:877}). We assume $n=1$ and $b=c$. $\rho_{0}$ is
related to the mass of the inner bar $M_{\mathrm{IB}}$ through
$\rho_{0}=\frac{15M_{\mathrm{IB}}}{8\pi ab^{2}}$ for $n=1$.
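This normalization follows from integrating the density over the bar volume:
rescaling coordinates to the unit sphere gives
\begin{equation}
M_{\mathrm{IB}}=\int \rho\,\mathrm{d}V
=\rho_{0}\,abc\,4\pi\int_{0}^{1}\left(1-r^{2}\right)r^{2}\,\mathrm{d}r
=\frac{8\pi}{15}\,\rho_{0}\,ab^{2}
\end{equation}
for $n=1$, where $b=c$ was used in the last step.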
The parameters we choose are given in
Tables~\ref{table:small_inner_bar_models} and
\ref{table:large_inner_bar_models}.
We assume two cases for the length of the semi-major axis of the inner
bar models, $a_{\mathrm{IB}}=200\hspace{+.2em}\mathrm{pc}$ and $600\hspace{+.2em}\mathrm{pc}$, based on
the following studies. \citet{wozniak:95} performed the BVRI survey of
36 disk galaxies selected as candidates for having an inner bar or a
triaxial bulge within the outer bar. They showed that outer to inner
bar axis ratios, $a_{\mathrm{OB}}/a_{\mathrm{IB}}$, are in the range of
$3.7$ to $18.0$ with the mean value of $7.2$. \citet{FW:96} observed 13
disk galaxies, which had been classified as likely having
an inner bar or a triaxial bulge within the outer bar in
\citet{wozniak:95}, in the JHK bands. They found a similar result,
$4.0\leq a_{\mathrm{OB}}/a_{\mathrm{IB}}\leq 13.4$ with the mean value
of $7.2$. In our galaxy, the ranges above correspond to
$a_{\mathrm{IB}}=200$-$875\hspace{+.2em}\mathrm{pc}$ for the semi-major axis of the
outer bar, $3.5\hspace{+.2em}\mathrm{kpc}$. If our galaxy is a normal nested barred
galaxy, our assumed values of $a_{\mathrm{IB}}$ are in this range.
In choosing the sizes of the inner bar models, we also consider the fact
that inner bars often coexist with nuclear rings
(\citealt{buta93:_metric_charac_of_nuclear_rings,shaw:93,erwin:02}).
\citet{erwin:02} found that 60\% of their sample galaxies with nuclear
rings have inner bars. In such galaxies, inner bars are often
surrounded by nuclear rings and the size of the inner bars is
comparable with that of the nuclear rings. In our galaxy, if the CMZ
corresponds to such a nuclear ring, the size of the inner bar may be
comparable with the size of the CMZ ($R\approx 200\hspace{+.2em}\mathrm{pc}$). This is
consistent with the projected size of the inner bar of
\citet{alard:01} that is $\sim 1.5^{\circ}-2^{\circ}\approx
200-300\hspace{+.2em}\mathrm{pc}$. However, the size of the inner bar proposed by
\citet{nishiyama:05,nishiyama:06} is $\approx 520\hspace{+.2em}\mathrm{pc}$, and it is
much larger than the size of the CMZ.
Our assumption on sizes of the inner bar models is consistent with
recent numerical simulations. \citet{DS:07} and
\citet{shen07:_obser_proper_of_doubl_barred} investigated formation of
long-lived inner bar from a psuedobulge by performing N-body
simulations. They showed that inner bar ends are much smaller than
their corotation radius $R_{\mathrm{CR}}$. Similar result is also
obtained in \citet{FM:93}. The $R_{\mathrm{CR}}$ of the inner bar is
as large as $600\hspace{+.2em}\mathrm{pc}$ in our models, if the pattern speed of the
inner bar is near the local maximum of $\Omega -\kappa/2$ (our choice
is intended to be consistent with the N-body simulations; see
below). $a_{\mathrm{IB}}$ may be less than $600\hspace{+.2em}\mathrm{pc}$. Since the
curve $\Omega -\kappa/2$ in $R<500\hspace{+.2em}\mathrm{pc}$ is rather uncertain, we
assume two cases of $a_{\mathrm{IB}}$. Hereafter, we call the inner
bar models with $a_{\mathrm{IB}}=200\hspace{+.2em}\mathrm{pc}$ \textit{small} inner bars
and with $a_{\mathrm{IB}}=600\hspace{+.2em}\mathrm{pc}$ \textit{large} inner bars.
We assume that the mass
of the inner bar (see Sec. \ref{subsec:numerical_method})
is a part of the mass distribution of the nuclear bulge of
\citet{LZM:02}. We give the masses of the inner bar models in
Tables~\ref{table:small_inner_bar_models} and
\ref{table:large_inner_bar_models}. As shown in
Fig. \ref{fig:rotation_curve}, the masses of the inner bar models are
considerably smaller than the total mass within the semi-major axis
of the inner bars.
We assume that the pattern speeds of the inner bar models are near the
local maximum of $\Omega -\kappa/2$, which is located at about
$150\hspace{+.2em}\mathrm{pc}$ (see Fig. \ref{fig:angular_velocity_curve}). This is
consistent with N-body simulations of formation of nested barred
galaxies (\citealt{FM:93,RSL:02}). We also assume that the inner bars
are prograde. In some small inner bar models, we change the pattern
speed around the local maximum of $\Omega -\kappa /2$ to investigate
the effect of the pattern speed on mass inflow rate to the galactic
center. The range of the pattern speed in each model is
$175$-$375\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$. We give the pattern speeds in Table
\ref{table:pattern_speeds_of_models}.
We use $Q_{T}\equiv (F_{\phi}/F_{r})_{\mathrm{max}}$ as a measure of
the strength of the inner bar, where $(F_{\phi}/F_{r})_{\mathrm{max}}$
is a maximum value of a ratio of the azimuthal component of the
gravitational force of the inner bar to the radial component of the
total gravitational force within $500\hspace{+.2em}\mathrm{pc}$. The value of $Q_{T}$ for each model
is given in Tables~\ref{table:small_inner_bar_models} and
\ref{table:large_inner_bar_models}.
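Operationally, $Q_{T}$ can be evaluated on a polar grid from the numerically
computed force fields; a schematic sketch (the functions supplying the inner
bar azimuthal force and the total radial force are hypothetical placeholders)
is:
\begin{verbatim}
import numpy as np

def Q_T(Fphi_bar, Fr_total, Rmin=5.0, Rmax=500.0):
    """Maximum over the grid of |F_phi(inner bar)| / |F_r(total)|."""
    R = np.linspace(Rmin, Rmax, 200)
    phi = np.linspace(0.0, 2 * np.pi, 300, endpoint=False)
    RR, PP = np.meshgrid(R, phi, indexing='ij')
    return np.max(np.abs(Fphi_bar(RR, PP)) / np.abs(Fr_total(RR, PP)))
\end{verbatim}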
Hereafter, we specify the inner bar models by their names in
Tables~\ref{table:small_inner_bar_models} and
\ref{table:large_inner_bar_models} together with the value of the pattern
speed, as in S33 ($\omegaib{300}$). In the case of the large inner bar models,
we omit the value of the pattern speed, since we assume the same
pattern speed for them.
\subsection{Numerical method} \label{subsec:numerical_method}
We use the advection upstream splitting method (AUSM) for numerical
hydrodynamics (\citealt{liou93:_new_flux_split_schem}). The AUSM is
one of the flux vector splitting schemes. In the AUSM, advection and propagation
of acoustic waves are recognized as physically distinct
processes. Therefore, the advective terms and the pressure terms in
the flux vector are split separately. This makes the formula of the
flux vector at the cell face very simple and leads to a reduction of
numerical operations without loss of accuracy. The robustness and good
performance of the AUSM in the application to galactic gas simulations
are well tested by many authors
(\citealt{colina00:_nuclear_bar_star_format_and,wada01:_numer_model_of_multip_inter,mori02:_early_metal_enric_by_pregal_outfl}). To
obtain higher order spatial resolution, we use the second order MUSCL
interpolation with the van Albada limiter function
(e.g. \citealt{RK:95}). The AUSM with the MUSCL interpolation is easy
to implement due to its simple form and is well suited to capturing shock
waves even in rarefied media. In our simulations, we do not use a ``gas
recycling
law''(e.g. \citealt{athanassoula:92,englmaier00:_densit_wave_insid_inner_lindb_reson}),
since we do not intend to seek a steady state of the flow and our
simulation time is much shorter than the timescale for exhausting a large
fraction of the gas in the system through star formation.
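To illustrate the basic idea of the flux splitting, a minimal one-dimensional,
first-order sketch for isothermal gas (not our actual two-dimensional MUSCL
implementation) can be written as:
\begin{verbatim}
CS = 1.0e6  # isothermal sound speed, 10 km/s in cm/s

def ausm_flux(rhoL, uL, rhoR, uR):
    """AUSM interface flux (Liou & Steffen 1993) reduced to 1D isothermal
    gas; returns (mass flux, momentum flux)."""
    ML, MR = uL / CS, uR / CS
    # split Mach numbers (subsonic polynomial / supersonic upwind branches)
    Mp = 0.25 * (ML + 1)**2 if abs(ML) <= 1 else max(ML, 0.0)
    Mm = -0.25 * (MR - 1)**2 if abs(MR) <= 1 else min(MR, 0.0)
    m = Mp + Mm                        # interface Mach number
    # split pressures, with p = rho * cs^2
    pL, pR = rhoL * CS**2, rhoR * CS**2
    Pp = 0.25*pL*(ML + 1)**2*(2 - ML) if abs(ML) <= 1 else pL*(ML > 0)
    Pm = 0.25*pR*(MR - 1)**2*(2 + MR) if abs(MR) <= 1 else pR*(MR < 0)
    # advect the upwind state; pressure enters only the momentum flux
    rho, u = (rhoL, uL) if m >= 0 else (rhoR, uR)
    return CS*m*rho, CS*m*rho*u + Pp + Pm
\end{verbatim}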
In order to resolve gas motion in the galactic center region, we use
two dimensional polar grids extending from $5\hspace{+.2em}\mathrm{pc}$ to $10\hspace{+.2em}\mathrm{kpc}$
in the Galactic radius. We divide the radial direction into 370
logarithmically spaced cells and the azimuthal direction into 300 equally
spaced cells, keeping the shape of each cell nearly square. The radial
spacing $\Delta R$ of the grid decreases inwards. Very high spatial resolution is achieved in
the central region, e.g. $\Delta R\approx 0.1\hspace{+.2em}\mathrm{pc}$ at $R=5\hspace{+.2em}\mathrm{pc}$.
We assume isothermal, non-self-gravitating, and non-viscous gas for
simplicity; no viscous term is included in the hydrodynamical equations.
We use the equation of state of an ideal gas with a temperature of $10000$
K, which corresponds to a sound speed of $c_{s}\approx 10\hspace{+.2em}\mathrm{km\sep s^{-1}}$
and implicitly represents the random motion of the interstellar gas. We do not consider
star formation and feedback processes, such as supernovae and stellar
mass loss, in this paper.
We assume a rotationally supported gas disk for the initial state.
This disk is flat and has infinitesimal thickness. Its outer radius
and mass are $10\hspace{+.2em}\mathrm{kpc}$ and $10^{10}\hspace{+.2em}\mathrm{M_{\odot}}$, respectively. The
initial surface density of the disk is uniform in all models.
The radial outer and inner boundary conditions are free and the azimuthal
boundary condition is periodic. We record mass flux passing through
the inner and the outer boundary for checking mass conservation.
In order to avoid spurious phenomena, we introduce the
non-axisymmetric components, such as the inner bar, the outer bar, and
the spiral arms, of the gravitational potential slowly, on a timescale long
compared with the rotation period of each component. We gradually deform the
gravitational potential of the inner bar from a spherical shape,
\begin{equation}
\rho(x,y,z)=\rho_{0}'\left(1-\frac{x^{2}+y^{2}+z^{2}}{r_{0}^{2}}\right),
\end{equation}
where $\rho_{0}'=\frac{15M_{\mathrm{IB}}}{8\pi r_{0}^{2}}$ and
$r_{0}=\frac{a+b}{2}$, to its assumed one from $t=100\hspace{+.2em}\mathrm{Myr}$ to
$250\hspace{+.2em}\mathrm{Myr}$ as in \citet{athanassoula:92}. We also similarly
introduce the Fourier component of the gravitational potential of the
outer bar and the spiral arms given by \citet{bissantz:03} from
$t=0\hspace{+.2em}\mathrm{Myr}$ to $t=100\hspace{+.2em}\mathrm{Myr}$.
We use the supercomputer SR11000/K1 of the Hokkaido University
Information Initiative Center (IIC) for our simulations.
\section{NUMERICAL RESULTS} \label{RESULTS}
We systematically perform hydrodynamical simulations for various masses and
axial ratios of the inner bar. In the small inner bar
models, we also vary their pattern speed.
We find substantial gas concentration toward the galactic center in
both sizes of the inner bar models. In small inner bar models, the
inner bars induce gas inflow to the galactic center for
$0.05<Q_{T}<0.3$ without destroying the 200 pc gas ring, if
$\Omega _{\mathrm{IB}}\approx (\Omega -\kappa /2)_{\mathrm{max}}\approx
300\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$ for $0.12\lesssim Q_{T}\lesssim 0.3$ and if
$\Omega _{\mathrm{IB}}\approx 225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$ for $Q_{T}\gtrsim 0.05$. On the
other hand, in large inner bar models, gas concentration occurs if
$Q_{T}>0.1$ and the inner bar destroys the 200 pc gas ring. In the
following subsections, we describe the results in more detail.
\subsection{The no-inner bar case} \label{subsec:no-inner}
We perform a hydrodynamical simulation in the Galaxy model without the
inner bar, to compare with the models with inner bars. We show the
result of the no-inner bar model (model N) in Fig.
\ref{fig:no_inner_bar}.
Gas ridges are formed in the outer bar region by $t=100\hspace{+.2em}\mathrm{Myr}$. Gas
in the galactic disk flows into the central region ($R<300\hspace{+.2em}\mathrm{pc}$)
along the gas ridges. This result is almost the same as that of
\citet{bissantz:03}. In our numerical results, a gas ring is formed at
the radius of $150$-$250\hspace{+.2em}\mathrm{pc}$. Its mass is almost constant at the
value of $\approx 3\times 10^{8}\hspace{+.2em}\mathrm{M_{\odot}}$ after $t=100\hspace{+.2em}\mathrm{Myr}$.
The mass and size of the ring correspond to those of the CMZ, whose extent
is $-1.5^{\circ}\leq l \leq 2^{\circ}$ and whose mass is $5$-$10\times
10^{7}\hspace{+.2em}\mathrm{M_{\odot}}$. We find similar gas rings in the models with inner
bars. Hereafter we call these rings the 200 pc gas rings.
The radius of the 200 pc gas ring is well inside the position of the
ILR of the outer bar ($\sim 750\hspace{+.2em}\mathrm{pc}$). This result agrees with
\citet{regan:03}. They showed that the size of a nuclear ring is related to
the population of $x_{2}$ orbits, rather than to the positions of the ILRs of an
outer bar, when the gas motion is in the nonlinear regime of hydrodynamics
in the barred potential.
Inside the 200 pc gas ring, there are weak gas spirals. Their
pattern speed agrees with the pattern speed of the outer bar. These
spirals are density waves found by
\citet{englmaier00:_densit_wave_insid_inner_lindb_reson}.
\citet{englmaier00:_densit_wave_insid_inner_lindb_reson} show that
gaseous spirals are formed inside the ILR of a bar in their numerical
simulations of non-self-gravitating gaseous disks and that such
gaseous spirals are supported by pressure force and stationary in the
bar frame. The gaseous spirals in our simulation have similar
properties. Hereafter we call these spirals the nuclear spirals. The
nuclear spirals become more tightly wound approaching the
galactic center; near 20 pc from the center, the nuclear spirals are
very tightly wound. The average mass inflow rate from $100\hspace{+.2em}\mathrm{Myr}$
to $500\hspace{+.2em}\mathrm{Myr}$ is very small, $\approx 3.6\times 10^{-4}\hspace{+.2em}\mathrm{M_{\odot}\sep yr^{-1}}$.
This radial gas inflow may be due to the nuclear spirals, since the
total gravitational torque on the gas in the nuclear spirals region is
consistent with the average mass inflow rate. In order to confirm
this, we calculate the total gravitational torque on the gas
inside $R=60\hspace{+.2em}\mathrm{pc}$ from the outer bar. The time averaged total
gravitational torque within $R=60\hspace{+.2em}\mathrm{pc}$ between $100$-$500\hspace{+.2em}\mathrm{Myr}$ is
$-7.6\times 10^{49}\hspace{+.2em}\mathrm{g\hspace{+.2em} cm^{2}\hspace{+.2em} s^{-2}}$. The
mass inflow rate driven by this torque is as large as $\sim 10^{-3}\hspace{+.2em}\mathrm{M_{\odot}\sep yr^{-1}}$.
This is consistent with the average mass inflow rate from
$100\hspace{+.2em}\mathrm{Myr}$ to $500\hspace{+.2em}\mathrm{Myr}$.
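This estimate amounts to $\dot{M}\sim\tau/(R\,v_{c})$; a quick numerical
check (assuming, for illustration only, a circular speed of order
$100\hspace{+.2em}\mathrm{km\sep s^{-1}}$ at $R=60\hspace{+.2em}\mathrm{pc}$) gives:
\begin{verbatim}
tau = 7.6e49                       # time-averaged torque, g cm^2 s^-2
R = 60 * 3.086e18                  # 60 pc in cm
vc = 1.0e7                         # assumed ~100 km/s, in cm/s
Mdot = tau / (R * vc)              # g/s, from dL/dt = tau with L = M*R*vc
print(Mdot * 3.156e7 / 1.989e33)   # ~ 6.5e-4 Msun/yr, i.e. of order 1e-3
\end{verbatim}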
Such nuclear spirals were not formed in the simulations of
\citet{bissantz:03}. This may be due to the lack of spatial
resolution in the nuclear spiral region in the simulations of
\citet{bissantz:03}.
\citet{englmaier00:_densit_wave_insid_inner_lindb_reson} have shown
that in simulations with insufficient spatial resolution to resolve
nuclear spiral waves, they are quickly damped out due to numerical
viscosity.
\subsection{The small inner bar models} \label{subsec:small_inner_bar}
We find that a large amount of gas concentrates to the galactic center
in the small inner bar models with $Q_{T}\gtrsim 0.05$ in some range
of $\Omega _{\mathrm{IB}}$. We divide our results into two cases, the high gas
mass concentration case and the low gas mass concentration case.
If $Q_{T}\gtrsim 0.12$ (S42, S43, S33, S34), high gas mass
concentration to the galactic center occurs for both $\Omega _{\mathrm{IB}}\approx
(\Omega -\kappa /2)_{\mathrm{max}}\approx 300\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$ and
$\Omega _{\mathrm{IB}}\sim 225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$. If $0.05\lesssim Q_{T}\lesssim 0.12$
(S41, S32, and S23), high gas mass concentration to the galactic
center occurs only for $\Omega _{\mathrm{IB}}\sim 225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$. One exception
is the model S24, in which high gas mass concentration occurs for both
$\Omega _{\mathrm{IB}}\approx (\Omega -\kappa /2)_{\mathrm{max}}$ and $\Omega _{\mathrm{IB}}\sim
225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$ in spite of $0.05\lesssim Q_{T}\lesssim 0.12$.
\subsubsection{The high gas mass concentration cases} \label{subsubsec:high_gas_mass_con_small}
Here, we describe the results of the model S33 ($\omegaib{300}$) in
detail, since the time evolution of the gas distribution in the inner bar
region is similar among the high gas mass concentration cases.
We show the time evolution of the surface density of gas in the
central 1 kpc square in the model S33 ($\omegaib{300}$) in
Fig. \ref{fig:S33a}. One characteristic feature of the gas distribution is
the straight shocks inside the inner bar (see Fig.~\ref{fig:VF_S33}).
These shocks appear after the inner bar potential is introduced and
become stronger with calculation time (see Fig. \ref{fig:S33a}d-f).
These shocks extend from the galactic central disk to the inner edge
of the 200 pc gas ring and are efficient to supply a large amount of
gas to the galactic center. A massive nuclear gas disk forms in
$R\lesssim 15\hspace{+.2em}\mathrm{pc}$. Its mass reaches as large as $10^{7}\hspace{+.2em}\mathrm{M_{\odot}}$
at $t=500\hspace{+.2em}\mathrm{Myr}$. Hereafter we call this disk the nuclear gas disk.
An elliptical gas ring forms around the inner bar and is elongated
along it (see Fig. \ref{fig:S33a}f). A similar elliptical gas ring is
seen in \citet{maciejewski:02}. The shape and surface density of this
ring change as the inner bar rotates. In Fig. \ref{fig:VF_S33}, the
ellipticity of the ring is larger at $\Delta\theta =90^{\circ}$ than
at $\Delta\theta=0^{\circ}$, while the surface density of the ring is
higher at $\Delta\theta =0^{\circ}$ than at $\Delta\theta
=90^{\circ}$, where $\Delta\theta$ is the angle between the major axes
of the inner bar and the outer bar. The velocity field in the
elliptical ring connects smoothly to that of the surrounding gas.
In Fig. \ref{fig:Mass_inflows} we show the time evolution of the gas
mass within 20 pc, $M_{20}(t)$, in the model S33 ($\omegaib{300}$).
As the deformation of the inner bar proceeds ($t=100$-$250\hspace{+.2em}\mathrm{Myr}$),
$M_{20}(t)$ rapidly increases with time. Then $M_{20}(t)$ saturates
($t=250$-$350\hspace{+.2em}\mathrm{Myr}$). A similar phenomenon was reported by
\citet{maciejewski:02}, who showed that an inner bar keeps gas away
from the galactic center and that gas inflow due to the inner bar is
negligible after it reaches its full strength. In the corresponding
stage of our simulations, the velocity field and gas distribution
inside the 200 pc gas ring are perturbed by the inner bar. After the
velocity field and gas distribution become quasi-steady, gas inflow to
the galactic center increases again, beginning at $t=350\hspace{+.2em}\mathrm{Myr}$. This
second inflow continues to the end of the simulations, and $M_{20}(t)$
attains $\sim 10^{7}\hspace{+.2em}\mathrm{M_{\odot}}$ at $t=500\hspace{+.2em}\mathrm{Myr}$. The average second
mass inflow rate is $\sim 10^{7}\hspace{+.2em}\mathrm{M_{\odot}} / 100\hspace{+.2em}\mathrm{Myr} \approx
0.1\hspace{+.2em}\mathrm{M_{\odot}\sep yr^{-1}}$. We discuss the difference between our results and
those of \citet{maciejewski:02} in Sect. 4.
The occurrence of the second mass inflow depends on the pattern speed
of the inner bar. We show the time evolution of $M_{20}(t)$ for
various pattern speeds of the inner bar in the model S33 in the lower
panel of Fig. \ref{fig:Mass_inflows}. The figure shows that the
second mass inflow occurs when the pattern speed lies in
$290$-$325\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$ or in $200$-$225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$ in the model S33.
We summarize $M_{20}(t=500\hspace{+.2em}\mathrm{Myr})$ for the small inner bar models in
Table \ref{table:HGMC_SIB}. In this table, models in which the second
mass inflow occurs are denoted by bold letters. Models in which
$M_{20}(t=500\hspace{+.2em}\mathrm{Myr})$ exceeds the stellar mass within $20\hspace{+.2em}\mathrm{pc}$,
$M_{\star}(<20\hspace{+.2em}\mathrm{pc})\approx 2\times 10^{7}\hspace{+.2em}\mathrm{M_{\odot}}$ (see Fig. 14 in
\citealt{LZM:02}), are marked with daggers; in these cases, the motion
of both gas and stars in the Galactic central region should be solved
self-consistently. Double daggers mark the models in which the second
mass inflow begins just before the end of the simulation; in these
models, more mass would flow into the galactic center if we continued
the simulations.
The mass of the nuclear gas disk increases periodically with time
during the second mass inflow. Similar periodicity has been reported
in \citet{SH:02}. This behavior may be closely related to resonance
phenomena between the outer bar and the inner bar. In our results,
sufficient elongation of a small inner bar and a suitable $\Omega _{\mathrm{IB}}$
are needed for the second mass inflow.
\subsubsection{The low gas mass concentration cases} \label{subsubsec:low_gas_mass_con_small}
In the small inner bar models with $Q_{T}<0.05$, the gas mass
concentration toward the galactic center is small (S31, S21, S22, S13,
and S14). In these models, two loose gas spirals appear in the inner
bar region instead of straight shocks. These nuclear gas spirals
become tightly wound near the center, and a less massive gas disk
appears within $R<20\hspace{+.2em}\mathrm{pc}$ of the center.
The low gas mass concentration is due to the absence of the second
mass inflow. We show the time evolution of $M_{20}(t)$ in the model
S21 ($\omegaib{300}$) by a dashed line in the upper left panel of
Fig. \ref{fig:Mass_inflows}. The figure shows that $M_{20}(t)$
saturates after the first mass increase and that the second mass
inflow does not occur before the end of the simulation. We test the
time evolution of $M_{20}(t)$ of the model S21 for various pattern
speeds, as shown in the upper right panel of
Fig. \ref{fig:Mass_inflows}. There is no second mass inflow in the
range $\Omega _{\mathrm{IB}} =175$-$325\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$. Thus, we conclude that the
second mass inflow requires $Q_{T}\gtrsim 0.05$.
We address a characteristic gas distribution seen in the low gas mass
concentration models, since it is clear evidence of a weak inner bar.
In Fig. \ref{fig:weak_small_inner_bars}, we show snapshots of the
surface density in the model S21 for two pattern speeds, $\Omega _{\mathrm{IB}}
=200$ and $300\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$. In both models, the two loose gas spirals
form in the inner bar region and are surrounded by the gas rings. A
similar structure is observed in the double-barred galaxy NGC 1097,
where loose gas spirals are observed within the starburst ring
(\citealt{PMR:05,fathi:06}). Contrary to \citet{PMR:05}, it is
possible that loose gas spirals are formed by an inner bar without any
peculiar assumption.
\subsection{The large inner bar models} \label{subsec:large_inner_bar}
\subsubsection{The high gas mass concentration cases} \label{subsubsec:high_gas_mass_con_large}
In the large inner bar models with $Q_{T}\gtrsim 0.11$ (L42, L43, L33,
L34, and L35), a large amount of gas concentrates toward the galactic
center for $\omegaib{325}$. In Fig. \ref{fig:L42a} we show the time
evolution of the surface density of gas in the model L42. During
$t=100$-$250\hspace{+.2em}\mathrm{Myr}$, the elongation of the 200 pc gas ring increases.
As can be seen in Fig. \ref{fig:L42a}b-d, the 200 pc gas ring is highly
elongated during $t=150$-$300\hspace{+.2em}\mathrm{Myr}$. At $t=350\hspace{+.2em}\mathrm{Myr}$, the 200 pc
gas ring shrinks to less than $R=150\hspace{+.2em}\mathrm{pc}$ (see
Fig. \ref{fig:L42a}f). Then a large part of the gas of the 200 pc gas
ring rapidly concentrates into the galactic center, and a very massive
gas disk forms at the center. The mass of the disk greatly exceeds
$10^{8}\hspace{+.2em}\mathrm{M_{\odot}}$. The final value of $M_{20}(t)$ is unrealistic for
the reason described in Sect. \ref{subsubsec:high_gas_mass_con_small}.
\subsubsection{The low gas mass concentration cases} \label{subsubsec:low_gas_mass_con_large}
In the case of $Q_{T}<0.11$ (L41, L31, L32, L22, L23, L24, L13, L14,
and L15), a large amount of gas does not concentrate toward the
galactic center. The inner bar makes the 200 pc gas ring more
elliptical, and the orientation of the deformed ring is almost
parallel to the inner bar. In these models, there is no enhancement
of the mass inflow rate to the center; the average mass inflow rate
over the simulation time is as small as in the no-inner-bar case.
\section{DISCUSSION} \label{DISCUSSION}
\subsection{Mass supply due to nested bars} \label{subsec:mass_supply}
Our numerical simulations have shown that the mass supply process due
to nested bars is very efficient. Other scenarios for mass supply to
the Galactic center have been proposed.
\citet{athanassoula:92} showed that gas ridges can reach a galactic
center if a large-scale bar is very strong. However, the axial ratio
of the outer bar of our galaxy is $\approx 3$
(\citealt{stanek:97,rattenbury07:_model_galac_bar_using_ogle}).
Hence, it is unlikely that mass supply to the Galactic center is due
to a `past' strong outer bar.
\citet{fukuda00:_effec_of_self_gravit_of} simulated the
self-gravitational instability of a nuclear gas ring and showed that
part of the gas in the ring falls into the galactic center, since the
gas transfers its angular momentum to a very massive clump, which
forms through the fragmentation of the gas ring and subsequent mass
accretion from the surrounding gas. This process can explain mass
supply to the galactic center if the CMZ corresponds to such a nuclear
gas ring. In that simulation, however, the nuclear ring is disrupted
as a result of the fragmentation, which is not consistent with the CMZ
in our galaxy.
We have shown that a large amount of gas concentrates toward the
Galactic center by performing two-dimensional hydrodynamical
simulations with various inner bar parameters (semi-major axis, mass,
axial ratio, and pattern speed of the inner bar). We have performed
simulations for inner bars with $\Omega _{\mathrm{IB}}\approx (\Omega -\kappa
/2)_{\mathrm{max}}$, since this pattern speed is consistent with
N-body simulation results (\citealt{FM:93,RSL:02}). We have also
performed simulations varying the pattern speed of the inner bar in
the small inner bar models to investigate the effect of the pattern
speed on the mass inflow rate. We have assumed two sizes for the
semi-major axis of the inner bar, $200\hspace{+.2em}\mathrm{pc}$ and $600\hspace{+.2em}\mathrm{pc}$, and
found high gas mass concentration for both sizes.
In the small inner bar models, the high gas mass concentration occurs
for certain ranges of $Q_{T}$ and $\Omega _{\mathrm{IB}}$. In the models with
$Q_{T}\gtrsim 0.12$, the second mass inflow to the galactic center
occurs for $\Omega _{\mathrm{IB}}\approx (\Omega -\kappa /2)_{\mathrm{max}}$,
whereas in models with $Q_{T}\lesssim 0.12$ it does not occur for
$\Omega _{\mathrm{IB}}\approx (\Omega -\kappa /2)_{\mathrm{max}}$. For
$0.05\lesssim Q_{T}\lesssim 0.12$, the second mass inflow occurs for
$\Omega _{\mathrm{IB}}\sim 225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$. Thus, the high gas mass
concentration cases for the small inner bar models are divided into
two cases:
\begin{enumerate}
\item $0.05\lesssim Q_{T}\lesssim 0.12$ and $\Omega _{\mathrm{IB}}\sim 225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$
\item $Q_{T}\gtrsim 0.12$, and $\Omega _{\mathrm{IB}}\approx (\Omega-\kappa /2)_{\mathrm{max}}$
or $\Omega _{\mathrm{IB}}\sim 225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$
\end{enumerate}
One exception is the model S24, in which high gas mass concentration
occurs for both $\Omega _{\mathrm{IB}}\approx (\Omega -\kappa /2)_{\mathrm{max}}$
and $\Omega _{\mathrm{IB}}\sim 225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$ in spite of $0.05\lesssim
Q_{T}\lesssim 0.12$. These results are summarized in
Fig. \ref{fig:Qt-Omega}. In this figure, the results of the model
S25 ($\omegaib{250}$) and the model S41 ($\omegaib{250}$) occupy the
same point at $(Q_{T},\hspace{+.2em}\Omega _{\mathrm{IB}} )=(0.115,250)$; high gas mass
concentration occurs in the model S25 ($\omegaib{250}$) but not in the
model S41 ($\omegaib{250}$).
The second mass inflow rates change periodically in the models
denoted by asterisks in Table \ref{table:HGMC_SIB}
(see also the lower panel of Fig. \ref{fig:Mass_inflows}).
These periodic changes imply that the second mass inflow is
a resonance phenomenon between the outer bar and the inner bar,
since the second mass inflow rate increases at time intervals roughly
equal to the figure rotation period of the inner bar measured in the
rotating frame of the outer bar.
The high gas mass concentration cases in the small inner bar models
are consistent with observations of our galaxy. In these models, a
nuclear gas disk forms, with size and mass $R\lesssim
15\hspace{+.2em}\mathrm{pc}$ and $\sim 10^{7}\hspace{+.2em}\mathrm{M_{\odot}}$, respectively. Interestingly,
the size of the nuclear gas disk is very close to the location of the
Arches and Quintuplet clusters. Moreover, the nuclear gas disk is
massive enough to form these star clusters (we discuss this point in
Sect. \ref{subsec:evol_nuclear_gas_disk}). The gas kinematics induced
by the inner bar are consistent with molecular gas observations (we
discuss this point in Sect. \ref{subsec:lv_diagram}). On the other
hand, in the small inner bar models with $Q_{T}<0.05$, the inner bar
does not strongly enhance mass inflow to the galactic center. Hence,
the inner bar in our galaxy must have $Q_{T}\gtrsim 0.05$ if mass
supply to the Galactic center is due to the inner bar.
There is a difference between the size of the inner bar in our small
inner bar models and that of the inner bar reported by
\citet{nishiyama:05,nishiyama:06}.
\citet{nishiyama:05,nishiyama:06} trace the ridge of the distribution
of red clump stars but do not give the gravitational potential profile
of the inner bar. Our numerical results are consistent with their
report if the non-axisymmetric component of the gravitational
potential of the inner bar is small beyond $R=200\hspace{+.2em}\mathrm{pc}$.
The large inner bars in our models are not consistent with
observations of our galaxy. In the large inner bar models with
$Q_{T}\gtrsim 0.11$, high gas mass concentration occurs and the 200 pc
gas ring is destroyed, which does not correspond to our galaxy. In
the models with $Q_{T}<0.11$, the inner bar does not induce a large
mass inflow to the galactic center. From these results, a large inner
bar is unlikely in the Galaxy if mass supply to the Galactic center is
due to an inner bar.
It is observed that the velocity dispersion of gas clouds in the
central region of the Galaxy is higher than that in the Galactic disk
(\citealt{rohlfs87:_kinem_and_physic_param_of}).
\citet{englmaier97:_two_modes_of_gas_flow} show that the gas flow
can change drastically when the sound speed is changed, since the
existence and strength of shocks depend on $c_{s}$. In order to
assess the effect of the sound speed on the mass inflow, we performed
a test calculation in which the inner bar parameters are the same as
in the model S33 ($\omegaib{300}$) but an artificial radial profile of
$c_{s}$ is assumed, rising from $\approx 10\hspace{+.2em}\mathrm{km\sep s^{-1}}$ at the inner
edge of the 200 pc gas ring to $20\hspace{+.2em}\mathrm{km\sep s^{-1}}$ at the center. The
straight shocks in the inner bar become weaker and the mass inflow
rate becomes smaller. We will study the effect of the sound speed on
the gas flow further in future work considering realistic cooling
and heating processes.
\subsection{Evolution of the nuclear gas disk } \label{subsec:evol_nuclear_gas_disk}
We have shown that small massive gas disks form in the small inner bar
models for $Q_{T}\gtrsim 0.05$, with sizes of $\sim 15\hspace{+.2em}\mathrm{pc}$.
It is interesting to study the self-gravitational instability of the
nuclear gas disk. In an axisymmetric uniform thin gas disk,
the dispersion relation for small radial density perturbations in the
axisymmetric mode is
\begin{equation}
\omega^{2}=c^{2}_{s}k^{2}-2\pi G\Sigma |k|+\kappa^{2},
\end{equation}
where $\omega$ is the frequency of the perturbation, $c_{s}$ is the
sound speed of gas, $k$ is the wave number of the perturbation, $G$ is
the gravitational constant, $\Sigma$ is the surface density of the
thin disk, and $\kappa$ is the epicyclic frequency (\citealt{BT:87}).
From the dispersion relation, the density perturbation can grow if
\begin{equation}
Q\equiv\frac{c_{s}\kappa}{\pi G\Sigma}\lesssim 1,
\end{equation}
where $Q$ is the Toomre $Q$-value.
We define $\Sigma _{\mathrm{crit}}$ as the surface density for $Q=1$,
\begin{equation}
\Sigma _{\mathrm{crit}}(R)=\frac{c_{s}(R)\kappa (R)}{\pi G},
\end{equation}
which may be regarded as the minimum surface density for gravitational instability.
Using $\Sigma _{\mathrm{crit}}$, we define $M_{\mathrm{crit}}$ as
\begin{equation}
M_{\mathrm{crit}}(R)=\int^{R}_{0}2\pi R^{'}\Sigma _{\mathrm{crit}}(R^{'})dR^{'}.
\end{equation}
$M_{\mathrm{crit}}(R)$ may serve as a measure of the gravitational
instability of the disk. In the central several tens of parsecs of
the galaxy, there is evidence for strong magnetic fields
(e.g. \citealt{chuss03:_magnet_field_in_cool_cloud}), and magnetic
fields play an important role in the gravitational stability of the
disk. To include the effect of the magnetic fields in the linear
analysis, we assume a simple field configuration, since the actual
configuration is observationally unclear: the magnetic fields are
parallel to the disk and homogeneous,
$\bm{B}=B_{0}\bm{e_{\phi}}$, where $B_{0}$ is the field strength and
$\bm{e_{\phi}}$ is the azimuthal unit vector. \citet{FL:97} derived
the dispersion relation
\begin{equation}
\omega^{2}=(c^{2}_{s}+c^{2}_{A})k^{2}-2\pi G\Sigma |k|+\kappa^{2}
\end{equation}
for this configuration, where $c_{A}$ is the Alfv{\'e}n velocity,
\begin{equation}
c_{A}=\sqrt{\frac{B^{2}}{4\pi\rho}}.
\end{equation}
We use this dispersion relation for our analysis and assume that gas
clumps form from the perturbations with the largest growth rate.
The wavelength of the density perturbation with the largest growth
rate is given by
\begin{equation}
\lambda _{\mathrm{max}}=\frac{2\pi}{k_{\mathrm{max}}}=\frac{2c^{2}_{\mathrm{eff}}}{G\Sigma _{\mathrm{crit}}}
=\frac{2\pi c_{\mathrm{eff}}(R)}{\kappa (R)},
\end{equation}
where $c_{\mathrm{eff}}\equiv\sqrt{c^{2}_{s}+c^{2}_{A}}$ and, in the magnetized case, $\Sigma _{\mathrm{crit}}$ is evaluated with $c_{\mathrm{eff}}$ in place of $c_{s}$.
The gas clump mass is estimated as
\begin{equation}
M_{\mathrm{clump}}=\pi\left(\frac{\lambda _{\mathrm{max}}}{2}\right)^{2}\Sigma _{\mathrm{crit}}
=\frac{\pi^{2}c^{3}_{\mathrm{eff}}(R)}{G\kappa (R)}.
\end{equation}
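The following short script is our numerical sketch of these
expressions; the values of $c_{s}$, $c_{A}$, and $\kappa$ below are
illustrative assumptions rather than the simulation profiles:
\begin{verbatim}
import math

G, KM, PC, MSUN = 6.674e-8, 1.0e5, 3.086e18, 1.989e33   # cgs units

def clump(c_s_kms, c_A_kms, kappa_kms_per_pc):
    c_eff = math.hypot(c_s_kms, c_A_kms) * KM       # cm/s
    kappa = kappa_kms_per_pc * KM / PC              # s^-1
    lam   = 2 * math.pi * c_eff / kappa             # most unstable wavelength
    m     = math.pi**2 * c_eff**3 / (G * kappa)     # clump mass
    return lam / PC, m / MSUN

print(clump(1.0, 0.0, 10.0))   # T=100 K, B=0: clumps of a few 1e2 Msun
print(clump(1.0, 3.5, 10.0))   # c_A ~ 3.5 km/s (assumed): ~1e4 Msun
\end{verbatim}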
Applying these results to our simulations shows that strong magnetic
fields, comparable to the strongest fields observed in the Galactic
central region, enable massive gas clumps to grow whose masses are
comparable to those of the young massive star clusters in the Galactic
center. Figure \ref{fig:toomre_instability} shows the result of this
application for the model S33 ($\omegaib{300}$), one of the high gas
mass concentration cases in the small inner bar models. In this
figure, we assume that the gas in the nuclear gas disk cools down to
$T=100$ K ($c_{s}\approx 1\hspace{+.2em}\mathrm{km\sep s^{-1}}$). From the figure, the nuclear
gas disk becomes gravitationally unstable after $t=300\hspace{+.2em}\mathrm{Myr}$ if the
effect of the magnetic fields is very weak. The mass of the disk is
$6.7\times 10^{5}\hspace{+.2em}\mathrm{M_{\odot}}$ at that time, and the mass of the gas
clumps is $100$-$300\hspace{+.2em}\mathrm{M_{\odot}}$ from equation (10). If $B_{0}=1$ mG,
the disk becomes unstable after $t=450\hspace{+.2em}\mathrm{Myr}$; the mass of the disk
is then $2.9\times 10^{6}\hspace{+.2em}\mathrm{M_{\odot}}$, and the mass of the gas clumps
is $1.0$-$3.0\times 10^{4}\hspace{+.2em}\mathrm{M_{\odot}}$, comparable to that of the
young massive star clusters in the Galactic center.
Massive gas clumps can form even in the non-magnetic case. To
investigate the non-linear evolution of the nuclear gas disk in the
non-magnetic case, we perform very high resolution hydrodynamical
simulations in paper II, where we show that many massive compact gas
clumps form through the gravitational instability of the cooling gas
disk. Typical clump masses and sizes are several $10^{3}\hspace{+.2em}\mathrm{M_{\odot}}$
to $10^{4}\hspace{+.2em}\mathrm{M_{\odot}}$ and less than a few parsecs, respectively. The
largest gas clumps have masses of $\sim 10^{5}\hspace{+.2em}\mathrm{M_{\odot}}$, much
larger than $100$-$300\hspace{+.2em}\mathrm{M_{\odot}}$, because small gas clumps, which
form rapidly from the growth of density perturbations in the cooling
disk, collide with each other and merge into more massive clumps. The
Arches and Quintuplet clusters have masses of $\sim 10^{4}\hspace{+.2em}\mathrm{M_{\odot}}$
and sizes of $<1\hspace{+.2em}\mathrm{pc}$. If we assume a star formation efficiency
of $\sim 0.1$, these clusters can form from gas clumps of mass
$\sim 10^{5}\hspace{+.2em}\mathrm{M_{\odot}}$, comparable to the largest gas clumps in our
numerical results of paper II.
\subsection{Longitude-velocity diagrams of gas flow in the nested bars} \label{subsec:lv_diagram}
We construct longitude-velocity ($l$-$v$) diagrams from our numerical
results with two aims: to compare our numerical results with
observations of our galaxy, and to show that characteristic features
of gas motion induced by the inner bar can serve as evidence of inner
bars in external galaxies.
Figure \ref{fig:lv} shows the $l$-$v$ diagrams of the model S33
($\omegaib{300}$), one of the high gas mass concentration cases in the
small inner bar models, for $\Delta\theta' =0^{\circ}$ and
$90^{\circ}$, where $\Delta\theta'$ is the angle between the direction
of the inner bar and the Sun-Galactic center line. In the diagrams,
we assume that the outer bar is inclined at an angle of $20^{\circ}$
with respect to the Sun-Galactic center line, that the distance of the
Sun from the Galactic center is $8\hspace{+.2em}\mathrm{kpc}$, and that the circular
velocity of the Sun is $220\hspace{+.2em}\mathrm{km\sep s^{-1}}$. These assumptions are based
on the results of \citet{bissantz:03}. In this figure, we classify
the gas components into 7 groups by color according to the properties
of the gas motion (the details of the classification are described in
the caption of Fig. \ref{fig:lv}). The nuclear gas disk component is
shown by the red points in Fig. \ref{fig:lv}; it depends only weakly
on $\Delta\theta'$, since circular motions dominate in the disk. The
straight shocks component is shown by the green points, and its
appearance depends on $\Delta\theta'$: when the inner bar is
perpendicular to the outer bar, the straight shocks component is
clearly distinguishable from the nuclear gas disk component and the
200 pc gas ring component. The elliptical gas ring component is shown
by the purple points; its appearance depends strongly on
$\Delta\theta'$, since the ring is elongated along the inner bar.
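For reference, the projection from the simulation plane onto the
$l$-$v$ plane can be sketched as follows; this is our simplified
geometry using the stated assumptions, and the sign conventions are
our own choice rather than those of the actual analysis code:
\begin{verbatim}
import numpy as np

R0, V0 = 8.0, 220.0              # kpc, km/s (assumed above)
PHI = np.radians(20.0)           # outer bar angle to the Sun-GC line

def lv_point(x, y, vx, vy):      # positions in kpc, velocities in km/s
    c, s = np.cos(PHI), np.sin(PHI)
    X,  Y  = c * x  - s * y,  s * x  + c * y    # rotate out of bar frame
    VX, VY = c * vx - s * vy, s * vx + c * vy
    dx, dy = X, Y + R0           # Sun placed at (0, -R0)
    d = np.hypot(dx, dy)
    l = np.degrees(np.arctan2(dx, dy))          # Galactic longitude
    v = (VX * dx + VY * dy) / d - V0 * dx / d   # remove solar motion
    return l, v
\end{verbatim}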
There are many observational studies of the gas distribution and
kinematics in the central region of our galaxy.
\citet{stark04:_gas_densit_stabil_and_starb} give the $l$-$v$ diagram
of highly excited rotational emission lines of CO (J=4-3 and J=7-6) in
the central region of our galaxy observed with AST/RO, covering the
range $-1.2^{\circ}<l<2^{\circ}$; their $l$-$v$ diagram traces
high-density components of molecular gas.
\citet{rodriguez-fernandez06:_coupl_dynam_and_molec_chemis} show the
$l$-$v$ diagram of CO (J=2-1) using published data; it covers the same
region as the $l$-$v$ diagram of
\citet{stark04:_gas_densit_stabil_and_starb} but traces diffuse
molecular gas. It is known that there are two compact GMCs, the
$20\hspace{+.2em}\mathrm{km\sep s^{-1}}$ cloud and the $50\hspace{+.2em}\mathrm{km\sep s^{-1}}$ cloud, in the Galactic
center region. The $20\hspace{+.2em}\mathrm{km\sep s^{-1}}$ cloud is located at $R\lesssim
10\hspace{+.2em}\mathrm{pc}$ from the center in projection; it has a total mass of $\sim
3\times 10^{5}\hspace{+.2em}\mathrm{M_{\odot}}$ and its radial velocity lies in the range
$\sim 5$-$25\hspace{+.2em}\mathrm{km\sep s^{-1}}$. The $50\hspace{+.2em}\mathrm{km\sep s^{-1}}$ cloud has a mass of $\sim
10^{5}\hspace{+.2em}\mathrm{M_{\odot}}$ (\citealt{mezger96:_galac_center}). The positions of
these GMCs in the $l$-$v$ diagram are shown in
\citet{nagayama07:_compl_survey_of_centr_molec}.
\citet{oka07:_co_j_survey_of_galac_center} give the $l$-$v$
diagram of a highly excited rotational emission line of CO (J=3-2)
with high resolution from $l=-0.2^{\circ}$ to $0.1^{\circ}$. They show
that there is a pair of high-velocity emission features (named
CND$^{+}$ and CND$^{-}$ in their paper) within $0.05^{\circ}\approx
6.5\hspace{+.2em}\mathrm{pc}$ of Sgr A*. The line-of-sight velocities of CND$^{+}$ and
CND$^{-}$ are $50$-$100\hspace{+.2em}\mathrm{km\sep s^{-1}}$ and $-50$ to $-120\hspace{+.2em}\mathrm{km\sep s^{-1}}$,
respectively.
Our numerical results are consistent with the observations of the
central region of our galaxy. The nuclear gas disk component in
Fig. \ref{fig:lv} for $\Delta\theta' =90^{\circ}$ lies in the
longitude range $-0.2^{\circ}\lesssim l\lesssim 0.2^{\circ}$ and the
velocity range $-100\hspace{+.2em}\mathrm{km\sep s^{-1}}\lesssim v\lesssim 100\hspace{+.2em}\mathrm{km\sep s^{-1}}$. This
is the same range as that of the innermost $x_{2}$ orbit shown in
\citet{stark04:_gas_densit_stabil_and_starb}. The velocity range of
the nuclear gas disk agrees with that of the CND, and similar
agreement between the nuclear gas disk component and the CND is also
found in the $l$-$v$ diagrams of
\citet{rodriguez-fernandez06:_coupl_dynam_and_molec_chemis} and
\citet{oka07:_co_j_survey_of_galac_center}. The $20\hspace{+.2em}\mathrm{km\sep s^{-1}}$ and
$50\hspace{+.2em}\mathrm{km\sep s^{-1}}$ clouds lie in the same region as the nuclear gas disk in
the $l$-$v$ diagram
(\citealt{nagayama07:_compl_survey_of_centr_molec}). Thus, the nuclear
gas disk component corresponds well to the observations. There are no
clear high-velocity components corresponding to the elliptical gas
ring component for $\Delta\theta' =0^{\circ}$ in the $l$-$v$ diagrams
of \citet{stark04:_gas_densit_stabil_and_starb} and
\citet{rodriguez-fernandez06:_coupl_dynam_and_molec_chemis}, while the
other gas components in our $l$-$v$ diagrams occupy the same regions
as in theirs. Thus, our $l$-$v$ diagram for $\Delta\theta'
=90^{\circ}$ corresponds well to our Galaxy.
We compare our numerical results with molecular gas observations of
Maffei 2. Our $l$-$v$ diagram at $\Delta\theta'=90^{\circ}$
corresponds well to the $\mathrm{CO}$ position-velocity ($p$-$v$)
diagrams of the nuclear region of Maffei 2
(\citealt{meier08:_nuclear_bar_catal_star_format}).
\citet{meier08:_nuclear_bar_catal_star_format} observed the nuclear
region of Maffei 2 with high spatial resolution with the OVRO and BIMA
arrays and found a parallelogram feature and two intense features on
either side of it in their $p$-$v$ diagrams. The parallelogram
feature extends over $-5''\lesssim p\lesssim 15''$ and $-125\hspace{+.2em}\mathrm{km\sep s^{-1}} \lesssim v
\lesssim 125\hspace{+.2em}\mathrm{km\sep s^{-1}}$, and the two intense features are located at
$(p,v)\approx (-15'',50\hspace{+.2em}\mathrm{km\sep s^{-1}})$ and $(20'', -50\hspace{+.2em}\mathrm{km\sep s^{-1}})$ in
their diagram. They explain these features by simple linear orbits in
their nuclear bar model. The nuclear gas disk component in our
$l$-$v$ diagram corresponds to the parallelogram feature, and the
straight shocks component and the elliptical gas ring component
correspond well to the two intense features. Thus, our results
strongly support their interpretation that Maffei 2 likely has a
nuclear bar. We propose that the nuclear gas disk component, the
straight shocks component, and the elliptical gas ring component are
indirect evidence for an inner bar. Observations of molecular gas in
the nuclear regions of external barred galaxies with high spatial
resolution, e.g. with ALMA, can give evidence of inner bars even if
they are hidden by a large amount of gas and dust.
\subsection{Important role of central mass concentration} \label{subsec:cmc}
We discuss the difference between our numerical results and those of
\citet{maciejewski:02}. In our simulations, massive nuclear gas disks
form in the galactic center through the straight shocks inside the
inner bars. On the other hand, neither such nuclear gas disks nor
straight shocks form in \citet{maciejewski:02}, although they also
simulated gas flow in a nested barred model.
We consider the central mass concentration to be the main reason for
the difference, since it is the major difference between our models
and theirs. We assumed a high central mass concentration modeled on
the nuclear bulge profile given by \citet{LZM:02}, while the central
mass concentration in the model of \citet{maciejewski:02} is low (see
Fig. 3 in \citealt{MS:00}). It has been shown that a high central
mass concentration in a barred potential strongly affects the orbital
structure of stars and gas
(\citealt{fukuda98:_effec_of_centr_super_black,fukuda00:_effec_of_self_gravit_of,ann:05}).
A central mass concentration tends to make stellar orbits rounder
closer to the central region of the galaxy. When the galaxy has an
inner bar, the orbits are elongated at radii comparable to the
semi-major axis of the inner bar; at smaller radii, the orbits become
nearly circular in the inner bar potential if the central mass
concentration is sufficiently high. Straight shocks may form if the
orbit shapes vary rapidly with decreasing radius, since gas collides
and dissipates in the regions where the orbits crowd. Hence, a high
central mass concentration is important for the formation of straight
shocks and therefore for the formation of nuclear gas disks. We
conclude that the difference between our numerical results and those
of \citet{maciejewski:02} is mainly due to the difference in the
central mass concentration.
It is important to study a self-consistent model of nested barred
galaxies with high central mass concentrations and their stability.
\section{SUMMARY} \label{SUMMARY}
We summarize our study as follows:
\begin{enumerate}
\item We have performed two-dimensional hydrodynamical simulations to
investigate the mass supply process by nested bars. We have assumed a
gravitational potential model of our galaxy based on the Galaxy model
of \citet{bissantz:03} and the nuclear bulge profile given by
\citet{LZM:02}, adding an inner bar. We have assumed two sizes for
the inner bar, $a_{\mathrm{IB}}=200\hspace{+.2em}\mathrm{pc}$ and
$600\hspace{+.2em}\mathrm{pc}$.
\item In the small inner bar models, a large amount of gas
concentrates into the galactic center for 1) $0.05\lesssim
Q_{T}\lesssim 0.12$ and $\Omega _{\mathrm{IB}} \sim 225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$ and 2)
$Q_{T}\gtrsim 0.12$ and $\Omega _{\mathrm{IB}}\approx (\Omega -\kappa
/2)_{\mathrm{max}}$ or $\Omega _{\mathrm{IB}}\sim 225\hspace{+.2em}\mathrm{km\sep s^{-1}\sep\kpc^{-1}}$. The straight
shocks form within the inner bar, partly because $Q_{T}$ in these
models is high and partly because the central mass concentration in
our models is high. The straight shocks sweep up gas in the inner bar
region; the gas trapped by the shocks falls into the galactic center,
and the nuclear gas disk forms there. The size and mass of the
nuclear gas disk are $\lesssim 15\hspace{+.2em}\mathrm{pc}$ and
$\sim 10^{7}\hspace{+.2em}\mathrm{M_{\odot}}$, respectively.
\item In the large inner bar models, a large amount of gas
concentrates into the galactic center for $Q_{T}>0.11$. In the course
of the gas concentration, the inner bar destroys the 200 pc gas ring,
which is not consistent with the CMZ. We conclude that the inner bar
of our galaxy is not both large and strong, if recent mass supply to
the galactic center is due to the inner bar of our galaxy.
\item The high gas mass concentration cases in the small inner bar
models agree well with the observations as follows. The extent and
kinematics of the nuclear gas disk in our results are consistent with
the observations of molecular gas in the central region of our
galaxy. The size of the nuclear gas disk is very close to the
location of the Arches and Quintuplet clusters, and its mass is
sufficient to form these star clusters.
\item We have discussed the self-gravitational instability of the
nuclear gas disk formed in our simulations. Assuming magnetic fields
as strong as those observed in the central tens of parsecs of our
galaxy, the fastest-growing unstable mode corresponds to gas clumps
with masses comparable to those of the Arches and Quintuplet
clusters. In the next paper, we will study the non-linear evolution
of massive nuclear gas disks.
\item We have shown the characteristic features in the $l$-$v$ diagram
induced by the small inner bar. These features can be clues to the
existence of inner bars in external galaxies and will be useful for
future observations of the central regions of galaxies, e.g. with
ALMA.
\end{enumerate}
\acknowledgments We thank Masayuki Fujimoto and Kazuo Sorai for fruitful
discussions. This work has been supported in part by the Hokkaido
University Grant Program for New Fusion of Extensive Research Fields
and in part by grants-in-aid for Scientific Research (14340058, 19540233)
from the Japan Society for the Promotion of Science.
\bibliographystyle{apj}
\section{Introduction and Previous Work}
Datasets play an important role in driving research in supervised machine learning. Some prominent examples are MNIST \cite{lecun-mnisthandwrittendigit-2010} for handwritten digit classification, and CIFAR10 \cite{cifar10} and IMAGENET \cite{imagenet} for image classification and generative models. For solving math word problems, semantic rules and various models have been proposed in the NLP community since 1963, starting from \cite{bobrow}, \cite{briars1984}, \cite{feigenbaum1963computers}. Some word problems in \cite{fletcher1985} are of the form: {\tt Lucy has two dimes. Sarah has six dimes. How many dimes do they have altogether?} or {\tt Dan has six books. Jill has two books. How many books does Dan have more than Jill?} That paper uses Kintsch and Greeno's (1985) theory of comprehension and solution for arithmetic word problems of this kind. These early papers used classical approaches with semantic rules. Recently, machine learning models have been used instead, for which large labelled datasets are essential. Hence, there is a dire need for large question-answer datasets for mathematics and science problems; such datasets can have an impact on online education, intelligent tutoring, and automated grading. For intelligent tutoring, not just the answers but step-by-step hints can be provided; this is explored in \cite{kang2016}. However, tutoring requires some knowledge-graph representation; although this was shown for simple algebraic and geometric mathematics problems, it remains a challenging task for more advanced problems. Tutoring is indeed a complex task, as pointed out in detail in \cite{kenneth2013}. Given that intelligent tutoring is one of the most challenging tasks, datasets and innovative architectures will play a critical role in succeeding in this endeavour.

Recently, a question-answer dataset\footnote{\url{https://github.com/deepmind/mathematics\_dataset}} for mathematics was proposed in \cite{AnalysingMR}, and a sample word-problem dataset was proposed in \cite{SMWPSDL1}, together with a comparison of results for character-to-character encoding with a transformer and with an LSTM. The former dataset contains selected mathematics problems from math exams for British 16 year old school children. Some sample questions are: ${\tt Factorise~ x^2 + 7x}$ or {\tt Three letters picked without replacement from qqqkkklkqkkk. Give prob of sequence qql.} In \cite{clark2018think}, a set of 7787 multiple-choice high school science questions is proposed as ARC (AI2 Reasoning Challenge, 2018). A sample question from this dataset is: {\tt Which property of a mineral can be determined just by looking at it? (A) luster [correct] (B) mass (C) weight (D) hardness}. Moreover, with the ARC challenge a large corpus of 14 million science sentences relevant to the question-answer set is also proposed. A sample sentence from the corpus is: {\tt Random motion of the air molecules and turbulence provide upward forces that may counteract the downward force of gravity.} Such a corpus allows language understanding and questions with linguistic variations. We remark that any other corpus can be used for training the given architecture for linguistic understanding, which is then further trained on the given datasets. For the ARC challenge, several baseline neural models were also proposed.

There are also datasets for logical reasoning and English comprehension. For example, in \cite{weston2015aicomplete}, a logical reasoning question-answer dataset is proposed. The reasoning is of various types, such as problems involving a single supporting fact, two supporting facts, counting, path finding, size reasoning, etc. A sample question for path finding is: {\tt The kitchen is north of the hallway. John is hungry. The bathroom is west of the bedroom. John goes to the kitchen.
The den is east of the hallway. John grabbed the apple there. The office is south of the bedroom. Daniel is hungry. How do you go from den to kitchen? How do you go from office to bathroom?} The last two sentences are questions, with answers {\tt west, north} and {\tt north, west}, respectively. This dataset is part of the bAbI project\footnote{\url{https://github.com/facebookarchive/bAbI-tasks}} of Facebook research. For algebra word problems, a dataset\footnote{\url{http://groups.csail.mit.edu/rbg/code/wordprobs/}} and code are proposed in \cite{kushman-etal-2014-learning}. Most of these word problems correspond to solving systems of linear equations; their method derives the equations and then solves them. A sample question-answer pair in this dataset, taken from \cite{kushman-etal-2014-learning}, is: {\tt An amusement park sells 2 kinds of tickets. Tickets for children cost \$1.50. Adult tickets cost \$4. On a certain day, 278 people entered the park. On that same day the admission fees
collected totaled \$792. How many children were admitted on that day? How many adults
were admitted?} with solutions {\tt x = 128, y = 150} (a worked check is sketched at the end of this section). Continuing along these lines, \cite{wang-etal-2017-deep} propose to translate a math word problem to equations using a recurrent neural network (RNN) without doing any complex feature extraction. To the best of our knowledge, a comprehensive {\bf opensource} dataset for mathematics and science at the pre-college and college level has been missing. To this end, in the following, we announce a new large dataset named SCIMAT, and we show preliminary results and comparisons.
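As a worked check of how the amusement-park problem above reduces to a linear system (our illustration, with $x$ children and $y$ adults):
\begin{verbatim}
from sympy import symbols, Eq, solve

x, y = symbols("x y")             # x = children, y = adults
eqs = [Eq(x + y, 278),            # total admissions
       Eq(3*x/2 + 4*y, 792)]      # total fees: $1.50 x + $4 y
print(solve(eqs, (x, y)))         # {x: 128, y: 150}
\end{verbatim}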
\section{SCIMAT: Large Science and Mathematics dataset}
We announce a large dataset\footnote{\url{https://github.com/misterpawan/scimat2}} of hundreds of millions of question-answer pairs for mathematics and science at the pre-college and college level, which is typically taught to the 15-19 age group around the world. The topics covered are: Acids And Bases, Atomic Structure, Stoichiometry, Thermodynamics, Units And Dimensions, Kinematics, Laws of Motion, Work Power Energy, Rotatory Motion, Gravitation, Electricity, Moving Charges and Magnetism, Electro Magnetic Induction, Alternating Current, Electro Magnetic Waves, Ray Optics and Optical Instruments, Wave Optics, Dual Nature of Matter, Mechanical Properties of Solids and Liquids, Thermal Properties of Matter, Kinetic theory of Gases, Sound, Waves And Oscillations, SemiConductors, Communication Systems, etc.
Each topic contains several subtopics, and each subtopic has hundreds of thousands of question-answer pairs.
\subsection{Sample Questions in Science}
\small
\begin{enumerate}
\item {\bf Question:} 33 mL of a solution of HNO3 is found to be completely neutralised by 45 mL of a given solution of NaOH. If we take 12 mL of the same solution of HNO3, the amount of NaOH solution (the same solution as before) required to neutralise it will be.
{\bf Answer:} 16.36 ml
\item {\bf Question:} If a diatomic gas of 1 moles at 68 atm and volume 68 lit is adiabatically changed to volume 188 lit, then what will be the pressure.
{\bf Answer :} 16.4atm
\item {\bf Question:} A body is dropped from a height of 9578 m with an initial velocity of 42 m/s. With what velocity will it strike the ground ?
{\bf Answer:} 435.3 m/s
\item {\bf Question:} A 9062 N force is applied on a body of mass 980 kg placed on a smooth surface, then what is the resulting acceleration obtained ?
{\bf Answer:} 9.2 m/s2
\item {\bf Question:} The volume of 549 g of a substance is 116 cm3. If the density of liquid in which substance is placed is 4 g/cm3, will the substance float or sink ?
{\bf Answer:} sink
\item {\bf Question:} If a 822 V battery is connected across an unknown resistor, there is 224 A in the circuit, find the value of resistance of the resistor ?
{\bf Answer:} 3.7 ohm
\item{\bf Question:} A square coil of side 3 cm consists of 31 turns and carries a current of 5 A. The coil is suspended vertically and the normal to the plane of the coil makes an angle of 53 degress with the direction of a uniform horizontal magnetic field of magnitude 17 tesla. What is the magnitude of the torque experienced by the coil.
{\bf Answer:} 1.9 newton-m
\item {\bf Question:} A series LCR circuit is connected to a variable frequency 230 V source with L = 193 H, C = 72 muF, R = 176 ohm. Determine the rms potential drop across resistance?
{\bf Answer:} 230 volt
\item {\bf Question:} Suppose that the electric field amplitude of an electromagnetic wave is E0 = 1936 N/C and that its frequency is v = 1512 MHz. Find an expression for B?
{\bf Answer:} 6.45e-06sin[3.17e+01x-9.50e+09t]
\item {\bf Question:} During blood transfusion, the needle is inserted in a vein where the gauge pressure is 1720 Pa. If the blood container is placed at 177 mm above the earth level so that blood may just enter the vein, is it safe for the patient?.
{\bf Answer:} yes, patient is safe
\item {\bf Question:} A sound wave travels at a speed of 29980.8 m/s, if it's wavelength is 32 m, will the sound wave be audible ?
{\bf Answer:} audible
\item {\bf Question:} For an amplitude modulated wave, the maximum amplitude is found to be 18.62 V while the minimum amplitude is found to be 7.91 V. Determine the modulation index.
{\bf Answer:} 0.4
\end{enumerate}
\normalsize
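Questions of this kind are generated from parameterized templates. The sketch below is our illustration for the Ohm's-law subtopic; the wording and sampling ranges are assumptions, and the actual generator codes are provided in the data repository:
\begin{verbatim}
import random

def ohms_law_qa():
    v = random.randint(10, 1000)        # battery voltage (V)
    i = random.randint(1, 500)          # circuit current (A)
    r = round(v / i, 1)                 # resistance from Ohm's law
    q = (f"If a {v} V battery is connected across an unknown "
         f"resistor, there is {i} A in the circuit, find the "
         f"value of resistance of the resistor ?")
    return q, f"{r} ohm"

for _ in range(3):
    print(ohms_law_qa())
\end{verbatim}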
Similarly, for mathematics, we append datasets from calculus (differentiation and integration), linear algebra (rank, row reduced echelon form, determinant, trace, etc), set operations, statistics, number theory, probability, etc. Some sample questions in this dataset are the following:
\subsection{Sample Question in Mathematics}
\small
\begin{enumerate}
\item {\bf Question:} Differentiate 293 * x * (sin(x) + sec(x)) with respect to x
\par
{\bf Answer:} 293 * x * (cos(x) + tan(x) * sec(x)) + 293 * sin(x) + 293 * sec(x)
\par
\item {\bf Question:} Integrate cot(4*x \textsuperscript{$\wedge$}2) + sec(22*x\textsuperscript{$\wedge$}2) with respect to x
\par
{\bf Answer:} 8*x*( -cot(4 * x\textsuperscript{$\wedge$}2 ) \textsuperscript{$\wedge$}2 - 1) + 44*x* tan(22*x\textsuperscript{$\wedge$}2 ) * sec(22*x\textsuperscript{$\wedge$}2 ) \par
\item {\bf Question:} 2 * f ( x ) + 8 * Derivative ( f ( x ) , x ) + Derivative ( f ( x ) , ( x, 2 ) ) = 0
\par
{\bf Answer:} f ( x ) = ( C1 * exp ( x * ( 1 - sqrt ( 6 ) ) ) + C2 * exp ( x * ( 1 + sqrt ( 6 ) ) ) )
\item {\bf Question:} Calculate the Rank of Matrix [ [2, 1, 3, 7] , [1, 0, 4, 2 ] , [ 3, 1, 7, 9 ] ]
{\bf Answer:} 2 \par
\item {\bf Question:} Calculate the Trace of Matrix [ [ 13, 38, 61 ] , [ 29, 1, 39 ] , [ 92, 16, 45 ] ]
{\bf Answer:} 59 \par
\item {\bf Question:} What is the union of \{ 2, 6, 7, 8, 9 \} with \{ 3, 7, 8 \}
{\bf Answer: } \quad \{ 2, 3, 6, 7, 8, 9 \}
\item {\bf Question:} What is the median of the sequence ( 20, 38, 4, 21, 31, 94, 55)
{\bf Answer: } 31
\item {\bf Question:} What is 2 (base 3) in base 8?
{\bf Answer:} 2
\item {\bf Question:} Expand (-s + s + 2*s**5)*(4 - 1 - 2) - 3*s**5 + 4*s**5 + 0*s**5 - 2*s**5 - s**5 + 5*s**5 + (3*s**2 - 4 + 4)*(5*s**3 - 5*s**3 - s**3).
{\bf Answer:} 2*s**5
\item {\bf Question:} Three letters picked without replacement from {a: 3, c: 1, b: 7, d: 3}. Give prob of sequence bdc.
{\bf Answer:} 1/104
\end{enumerate}
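\normalsize
For the symbolic subtopics, a computer algebra system can generate the reference answers. A minimal sketch (our illustration, with an assumed template) for the differentiation subtopic using {\tt sympy}:
\begin{verbatim}
import random
import sympy as sp

x = sp.symbols("x")

def differentiation_qa():
    c = random.randint(1, 500)
    f = c * x * (sp.sin(x) + sp.sec(x))       # assumed template
    question = f"Differentiate {sp.sstr(f)} with respect to x"
    return question, sp.sstr(sp.diff(f, x))

print(differentiation_qa())
\end{verbatim}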
\section{Numerical Experiments}
\small
\begin{table*}
\parbox{.45\linewidth}{
\centering
\begin{tabular}{ll}
\cmidrule(r){1-2}
{Type of problem} &
\begin{tabular}[c]{@{}l@{}}{C2C }\\{Accuracy} \end{tabular} \\
\midrule
Differentiation of sum & 99\% \\
Differentiation of product & 100\% \\
Differentiation of composition & 100\% \\
Integration of sum & 100\% \\
Integration of product & 100\% \\
Integration of composition & 92.5\% \\
Addition of matrices & 49\% \\
Subtraction of matrices & 74\% \\
Transpose of matrix & 100\% \\
Determinant of matrix & 32\% \\
Multiplication of matrices & 32\% \\
Trace of a matrix & 100\% \\
Product of matrix with a Scalar & 100\% \\
Row Reduced echelon & 76\% \\
Rank of a matrix & 92.5\% \\
Mean of a sequence & 95\%\\
Variance of a sequence & 39.5\% \\
Median of a sequence & 99\% \\
Set Union & 100\% \\
Set intersection & 97.5\%\\
Set difference & 100\% \\
Symmetric difference between sets & 100\% \\
\bottomrule
\end{tabular}
\caption{Accuracy of our model with char-to-char (C2C) encoding, trained on the new mathematics datasets.}
\label{tab:new}
}
\hfill
\parbox{.45\linewidth}{
\begin{tabular}{ll}
\toprule
\cmidrule(r){1-2}
{Type of problem} &
\begin{tabular}[c]{@{}l@{}}{C2C }\\{Accuracy} \end{tabular} \\
\midrule
Neutralization & 82.6\% \\
Adiabatic & 76.3\% \\
Refrigerator & 82.6\% \\
Estimated value & 61.8\% \\
Force, mass, acceleration & 45.5\% \\
Momentum conservation & 78.7\% \\
Kinetic energy & 73.5\% \\
Balancing a metre stick & 81.5\% \\
Gravitational field & 94.2\% \\
Float or sink? & 98\% \\
Ohms Law & 89.0\% \\
Torque due to magnetic field & 84.5\%\\
LCR circuit & 91.3\% \\
Mirror formula for concave & 79.6\% \\
Is the sound audible? & 76\% \\
Sound wave propagation & 31.5\% \\
modulation index & 89.6\% \\
Force between wires & 75.2\% \\
Conservation of momentum & 14.5\% \\
Potential energy & 63\% \\
Work, mass, velocity & 10\% \\
\bottomrule
\end{tabular}
\caption{Accuracy on the science datasets with the char-to-char (C2C) transformer; the corresponding dataset folders carry these names.}
\label{tab:science}
}
\end{table*}
\normalsize
The code for training and testing is written in Python using the {\tt PyTorch} framework. The models are trained on dual {\tt Intel Xeon E5-2640 v4} processors, providing 40 virtual cores per node, 128 GB of 2400 MT/s DDR4 ECC RAM, and four {\tt Nvidia GeForce GTX 1080 Ti} GPUs, providing 14336 CUDA cores.
We use the standard transformer described in \cite{vaswani2017} with the following specifications. The encoder is composed of a stack of $N = 4$ identical layers, with embedding size (dmodel) = 128 and $h = 8$ attention heads. The inner layer size of the feed-forward network used in each layer of the encoder stack is dff = 512. We minimize the sum of the negative log probabilities of the correct tokens via the Adam optimizer with an adaptive learning rate. The model was trained for 100 epochs. For floating-point answers, accuracy was measured by matching two digits after the decimal place. In Tables \ref{tab:new} and \ref{tab:science}, we find that there are datasets on which it is challenging to obtain high accuracy, and a more robust architecture or encoding is required. Since many other transformer variants have been proposed recently, in Table \ref{compare} we compare various transformers with word-to-word and char-to-char encoding. In general, we found that char-to-char gives the best accuracy.
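A minimal sketch of a character-level sequence-to-sequence transformer with the hyperparameters above (dmodel = 128, $h$ = 8, $N$ = 4, dff = 512) is given below; the vocabulary size, tokenization, and training-loop details are illustrative assumptions, not our exact code:
\begin{verbatim}
import math, torch
import torch.nn as nn

class CharTransformer(nn.Module):
    def __init__(self, vocab, d_model=128, nhead=8, n_layers=4,
                 dff=512, max_len=512):
        super().__init__()
        self.emb = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(max_len, d_model)   # learned positions
        self.tr = nn.Transformer(d_model, nhead, n_layers, n_layers,
                                 dff, batch_first=True)
        self.out = nn.Linear(d_model, vocab)

    def enc(self, t):
        idx = torch.arange(t.size(1), device=t.device)
        return self.emb(t) * math.sqrt(self.emb.embedding_dim) + self.pos(idx)

    def forward(self, src, tgt):
        # causal mask: the decoder cannot attend to future characters
        mask = self.tr.generate_square_subsequent_mask(tgt.size(1))
        h = self.tr(self.enc(src), self.enc(tgt), tgt_mask=mask)
        return self.out(h)

model = CharTransformer(vocab=96)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
src = torch.randint(0, 96, (32, 64))    # encoded questions (toy batch)
tgt = torch.randint(0, 96, (32, 32))    # encoded answers
logits = model(src, tgt[:, :-1])        # teacher forcing
loss = loss_fn(logits.reshape(-1, 96), tgt[:, 1:].reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}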
\small
\begin{table}[t]
\centering
\setlength{\tabcolsep}{1.249em}
\begin{tabular}{@{}lccc@{}}
\toprule
\multicolumn{1}{c}{Type of problem} & {\begin{tabular}[c]{@{}c@{}}C2C Trans.\end{tabular}} & {\begin{tabular}[c]{@{}c@{}}W2W Trans.\end{tabular}} & {\begin{tabular}[c]{@{}c@{}}W2W Perfor.\end{tabular}} \\
\midrule
pH & \textbf{99.8\%} & 97.3\% & 84.7\% \\
Compare number of atoms & \textbf{97.5\%} & 94.1\% & 94.4\% \\
Operations with significant digits & \textbf{80.4\%} & 72.9\% & 67.4\% \\
Equation of motion & 8.5\% & \textbf{12.8\%} & 12.2\% \\
Kinetic energy & \textbf{73.5\%} & 72.4\% & 71.8\% \\
Float or sink? & 98\% & 98.7\% & \textbf{98.8\%} \\
Series/Parallel combination of resistance & \textbf{88.9\%} & 32.5\% & 45.7\% \\
\bottomrule
\end{tabular}
\caption{Comparison of various transformers. Here C2C is char-to-char encoding, W2W is word-to-word encoding, Perfor. stands for Performer \cite{choromanski2021rethinking}, and Trans. stands for Transformer \cite{vaswani2017}.}
\label{compare}
\end{table}
\normalsize
\begin{figure}[h]
\centering
\includegraphics[scale=0.18]{time.pdf}
\caption{\label{fig:time}Time for generating some datasets. The generator codes are provided in the data repository.}
\end{figure}
\clearpage
\begin{ack}
This work was done at IIIT, Hyderabad. The authors acknowledge all the support of the institute.
\end{ack}
\section{Introduction}\label{sec:intro}
Word clouds and tag clouds are popular tools for visualizing text. The
practical tool, Wordle~\cite{wordle09}, took word clouds to the next
level with high quality design, graphics, style and
functionality. Such word cloud visualizations provide an appealing way
to summarize the content of a webpage, a research paper, or a
political speech. Often such visualizations are used to contrast two
documents; for example, word cloud visualizations of the speeches
given by the candidates in the 2008 US Presidential elections were
used to draw sharp contrast between them in the popular media.
While some of the more recent word cloud visualization tools aim to
incorporate semantics in the layout, none provides any guarantees
about the quality of the layout in terms of semantics. We propose a
mathematical model of the problem, via a simple
edge-weighted graph. The vertices in the graph are the words in the
document.
The edges in the graph correspond to semantic relatedness,
with weights corresponding to the strength of the relation. Each
vertex must be drawn as an axis-aligned rectangle (\emph{box}, for
short) with fixed dimensions. Usually, the dimensions will be
determined by the size of the word in a certain font, and the
font size will be related to the importance of the word.
The goal is to ``realize'' as many edges as possible, by
contacts between their corresponding rectangles; see
Fig.~\ref{fig:complexity-classes}.
\subsection{Related Work.}
Hierarchically clustered document collections are visualized with
self-organizing maps~\cite{HKK96} and Voronoi
treemaps~\cite{brandes12}. The early word-cloud approaches did not
explicitly use semantic information, such as word relatedness, in
placing the words in the cloud. More recent approaches attempt to do
so, as in ManiWordle~\cite{maniwordle} and in parallel tag
clouds~\cite{collins-09}. The most relevant approaches rely on
force-directed graph visualization methods~\cite{Cui_2010_wordcloud}
and on a seam-carving image processing method combined with a
force-directed heuristic~\cite{wu2011semantic}.
The semantics-preserving word cloud problem is related to classic
graph layout problems, where the goal is to draw graphs so that vertex
labels are readable and Euclidean distances between pairs of vertices
are proportional to the underlying graph distance between
them. Typically, however, vertices are treated as points and label
overlap removal is a post-processing step~\cite{dwyer05,gh10}.
In {\em rectangle representations} of graphs, vertices are
axis-aligned rectangles with non-intersecting interiors and edges
correspond to rectangles with non-zero length common boundary. Every
graph that can be represented this way is planar and every triangle in
such a graph is a facial triangle; these two conditions are also
sufficient to guarantee a rectangle
representation~\cite{thomassen1986interval,rosenstiehl1986rectilinear,buchsbaum08,fusy2009transversal}.
In a recent survey, Felsner~\cite{felsner2013rectangle} reviews many
rectangulation variants, including squarings.
Algorithms for area-preserving rectangular cartograms are also
related~\cite{r-rsc-34}. Area-universal rectangular representations,
where vertex weights are represented by area, have been
characterized~\cite{eppstein2012area}, and edge-universal
representations, where edge weights are represented by the length of
contacts, have been studied~\cite{nollenburg2013edge}. Unlike
cartograms, in our setting there is no inherent geography, and hence,
words can be positioned anywhere. Moreover, each word has fixed
dimensions enforced by its frequency in the input text, rather than
just fixed area.
\begin{figure}[tb]
\centering
\includegraphics[width=.5\textwidth]{complexity-classes-coloured-new}
\caption{A hierarchical word cloud for complexity classes. A class
is above another class when the former contains the latter. The
font size is the square root of millions of Google hits for the
corresponding word. This is an instance of the problem variant
\textsc{Hier}-\fbcr.}
\label{fig:complexity-classes}
\end{figure}
\subsection{Our Contribution.}
The input to the problem variants that we consider is a sequence
$B_1,\ldots,B_n$ of axis-aligned boxes with fixed positive dimensions.
Box $B_i$ is encoded by $(w_i,h_i)$, where $w_i$ and $h_i$ are its
width and height. \rotate{For some of our results, some boxes may be
rotated by $90^\circ$, which means exchanging $w_i$ and $h_i$.}
A \emph{representation} of the boxes $B_1,\ldots,B_n$ is a map that
associates with each box a position in the plane so that no two
boxes overlap. A \emph{contact} between two boxes is a line segment
(possibly a point) in the boundary of both. If two boxes are in
contact, we say that they \emph{touch}. If two boxes touch and one
lies above the other, we call this a \emph{vertical contact}. We
define \emph{horizontal contact} symmetrically. For $1 \le i\neq j
\le n$, a non-negative \emph{profit}~$p_{ij}$ represents the gain for
making boxes $B_i$ and $B_j$ touch. The \emph{supporting graph} has a
vertex for each box and an edge for each non-zero profit. Finally, we
define the \emph{total profit} of a representation to be the sum of
profits over all pairs of touching boxes.
Our problems and results are as follows.
{\bf Contact Representation of Word Networks\xspace (\textsc{CROWN}\xspace):} In this decision problem,
we assume 0--1 profits. The task is to decide whether there
exists a representation of the boxes with total profit $\sum_{i\neq
j}p_{ij}$. This is equivalent to finding a representation whose
contact graph contains the supporting graph as a subgraph. If such a
representation exists, we say that it \emph{realizes the supporting
graph} and that the instance of the \textsc{CROWN}\xspace{} problem is
\emph{realizable}. We show that \textsc{CROWN}\xspace is strongly {\cal NP}\xspace-hard
even if restricted to trees and weakly {\cal NP}\xspace-hard if restricted to stars;
see Theorem~\ref{thm:trees:hardness}.
We also consider two variants of the problem that can be solved
efficiently. First we present a linear-time algorithm for \textsc{CROWN}\xspace on
so-called irreducible triangulations; see
Section~\ref{sec:triangulation}. Then we turn to the problem variant
\textsc{Hier}-\fbcr, where the supporting graph is a single-source directed
acyclic graph with fixed plane embedding, and the task is to find a
representation in which each edge corresponds to a vertical contact
directed upwards; see Fig.~\ref{fig:complexity-classes}. We solve
this variant efficiently; see Section~\ref{sec:hierarchy}.
{\bf\textsc{Max}-\fbcr:} In this
optimization problem, the task is to find a representation of the
given boxes maximizing the total profit. We present constant-factor
approximation algorithms for stars, trees, and planar graphs, and a
$2/(\Delta+1)$-approximation for graphs of maximum
degree~$\Delta$; see Section~\ref{sec:optimize}.
We have implemented two approximation algorithms and
evaluated them experimentally in comparison to three existing
algorithms (two of which are semantics-aware). Based on a dataset of 120
Wikipedia documents, our best method outperforms the best previous
methods by more than 45\%; see Section~\ref{sec:experimental}.
\extremal{We also consider an extremal
version of the \textsc{Max}-\fbcr{} problem and show that if the supporting
graph is $K_n$ ($n \geq 5$) and each profit is $1$, then there
always exists a representation with total profit $2n-2$ and that
this is sometimes the best possible. Such a representation can be
found in linear time.
}
{\bf \textsc{Area}-\fbcr} is
as follows: Given a realizable instance of \textsc{CROWN}\xspace, find a
representation that realizes the supporting graph and minimizes the
area of a box containing all input boxes. We show that this
problem is {\cal NP}\xspace-hard even if restricted to paths; see
Section~\ref{sec:area}.
\section{The \textsc{CROWN}\xspace{} problem}\label{sec:realize}
In this section, we investigate the complexity of \textsc{CROWN}\xspace for several
graph classes.
\begin{theorem}\label{thm:trees:hardness}
\textsc{CROWN}\xspace is (strongly) {\cal NP}\xspace-hard. The problem remains strongly {\cal NP}\xspace-hard
even if restricted to trees and weakly {\cal NP}\xspace-hard if restricted to
stars.
\end{theorem}
\begin{proof}
To show that \textsc{CROWN}\xspace on stars is weakly {\cal NP}\xspace-hard,
we reduce from the weakly {\cal NP}\xspace-hard problem \prob{Partition}, which
asks whether a given multiset of $n$ positive integers
$a_1, \ldots, a_n$ that sum to~$B$ can be partitioned
into two subsets, each of sum $B/2$. We construct a star graph whose
central vertex corresponds to a $(B/2,\delta)$-box (for some $0 <
\delta < \min_i a_i$). We add four leaves corresponding to
$(B,B)$-squares and, for $i=1,\dots,n$, a leaf corresponding to an
$(a_i,a_i)$-square. It is easy to verify that there is a
realization for this instance of \textsc{CROWN}\xspace if and only if the set can be
partitioned.
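For concreteness, the following Python sketch builds this star
instance (a hypothetical helper for illustration; the concrete choice
$\delta=\min_i a_i/2$ is ours, and any $0<\delta<\min_i a_i$ works):
\begin{verbatim}
# Build the CROWN star instance from a Partition instance a_1,...,a_n.
def crown_star_instance(a):
    B = sum(a)
    delta = min(a) / 2.0                # any 0 < delta < min(a) works
    boxes = [(B / 2.0, delta)]          # box 0: the central box
    boxes += [(B, B)] * 4               # four large blocking squares
    boxes += [(ai, ai) for ai in a]     # one square per integer a_i
    profits = {(0, i): 1 for i in range(1, len(boxes))}  # star edges
    return boxes, profits
\end{verbatim}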
To show that \textsc{CROWN}\xspace is (strongly)
{\cal NP}\xspace-hard, we reduce from \prob{3-Partition}: Given a
multiset~$S$ of $n = 3m$ integers with $\sum S = mB$, is there a
partition of~$S$ into $m$ subsets $S_1,\ldots,S_m$ such that $\sum
S_1 = \dots = \sum S_m=B$? It is known that \prob{3-Partition} is
{\cal NP}\xspace-hard even if, for every $s \in S$, we have $B/4 < s < B/2$,
which implies that each of the subsets $S_1,\dots,S_m$ must contain
exactly three elements~\cite{npproofs}.
Given an instance $S = \{s_1,s_2,\ldots,s_n\}$ of \prob{3-Partition}
as described above, we define a tree $T_S$ on $n + 4(m+1) + 8$
vertices as in Fig.~\ref{fig:tree:hardness} (for $n=9$ and $m=3$).
Let $K = (m+1)B + m+1$. We make a vertex~$c$ of size $(K,1/2)$.
For each $i=1,\dots,n$, we make a vertex~$v_i$ of size $(s_i, B)$.
For each $j=0,\dots,m$, we make vertices~$u_j$ and~$b_j$ of size
$(1,B)$ and vertices $\ell_j$ and $r_j$ of size $(B/2,B)$. Finally,
we make vertices $a_1,\dots,a_5$ of size $(K,K)$, and vertices $d_1$
and $d_2$ of size $(B/2,B)$.
%
The tree $T_S$ is as shown by the thick lines in
Fig.~\ref{fig:tree:hardness}: vertex~$c$ is adjacent to all the
$v_i$'s, $u_j$'s, $a$'s, and $d$'s; and each vertex $u_j$ is
adjacent to $b_j$, $\ell_j$, and $r_j$.
\begin{figure}[tb]
\centering
\includegraphics[width=4in]{hardness-trees-alt}
\caption{Given an instance~$S$ of \prob{3-Partition}, we construct a
tree~$T_S$ (thick red line segments) and define boxes such that
$T_S$ has a realization if and only if $S$ is feasible.}
\label{fig:tree:hardness}
\end{figure}
We claim that an instance $S$ of \prob{3-Partition} is feasible if
and only if $T_S$ can be realized with the given box sizes. It is
easy to see that~$T_S$ can be realized if~$S$ is feasible: we simply
partition vertices $v_1,\dots,v_n$ into groups of three (by vertices
$u_0,\dots,u_m$) in the same way as their widths $s_1,\dots,s_n$ are
partitioned in groups of three; see Fig.~\ref{fig:tree:hardness}.
For the other direction, consider any realization of~$T_S$. By abusing
notation, we refer to the box of some vertex~$v$ also as~$v$.
Since~$c$ touches the five large squares
$a_1,\dots,a_5$,
at least three sides of~$c$ are partially covered by some~$a_k$ and
at least one horizontal side of~$c$ is completely covered by
some~$a_k$. Since~$c$ has height~1/2 only, but touches all the
$v_i$'s and $u_j$'s and $d_1$ and $d_2$ (each of height $B>1$), all
these boxes must
touch~$c$ on its free horizontal side, say, the bottom side.
Furthermore, the sum of the widths of the boxes exactly matches the
width of $c$; so they must pack side by side in some order.
This means that the only free boundary of $u_j$ is at the bottom,
and $u_j$ must make contact there with $b_j$, $\ell_j$, and $r_j$.
This is only possible if $b_j$ is placed directly beneath $u_j$, and
$\ell_j$ and $r_j$ make contact with the bottom corners of $u_j$.
(They need not appear to the left and right as shown in
Fig.~\ref{fig:tree:hardness}.) Because the sum of the widths of the
$b_j$'s, $\ell_j$'s, and $r_j$'s exactly matches the width of $c$,
they must pack side by side, and therefore the $u_j$'s are spaced
distance $B$ apart. There is a gap of width $B/2$ before the
first $u_j$ and after the last $u_j$. These gaps are too wide for
one box in $v_1,\ldots,v_n$ and too small for two of them since
their widths are contained in the \emph{open} interval $(B/4,B/2)$.
Therefore, the boxes $d_1$ and $d_2$ must occupy these gaps, and the
boxes $v_1,\ldots,v_n$ are packed into $m$ groups each of width~$B$,
as required.
\begin{comment}
Now observe that $u_0$ touches $b_0$ whose width is the same as
that of~$c$. Thus, $b_0$ must be attached to the bottom side
of~$u_0$, between the two large squares that are attached to the
left and right sides of~$c$. This, in turn, forces vertices
$\ell_1,\ldots,\ell_m,r_0,\ldots,r_{m-1}$ to use the space
between~$b_0$ and $v_1,\ldots,v_n$, to the left and right of the
vertices in $u_0,\ldots,u_m$ with the corresponding indices. Note
that one of~$u_0$ and~$u_m$ must touch the large square on the
left and that the other must touch the large square on the right.
Otherwise one of these squares, say~$a_1$, would touch a
size-$(B/2,B-1)$ box, say~$\ell_1$. But then the space of size
$(B/2,1)$---see the bold rectangle in
Fig.~\ref{fig:tree:hardness}(b)---delimited by~$\ell_1$, by its
parent~$u_1$, by~$a_1$, and by~$c$ would be too large for one
vertex in $v_1,\ldots,v_n$ and too small for two of them since
their sizes are contained in the \emph{open} interval $(B/4,B/2)$.
Leaving a gap inside this space would mean, however, that we
cannot make all of $v_1,\ldots,v_n,u_0,\ldots,u_m$ adjacent
to~$c$. Hence, the only way to realize~$T_S$ is to pack the boxes
without any gaps such that vertices $u_0,\ldots,u_m$ partition
$v_1,\ldots,v_n$ into groups of three and each group has
width~$B$, i.e., the width of two size-$(B/2,B-1)$ boxes.
\end{comment}
\end{proof}
\rotate{Note that the proof of the weak {\cal NP}\xspace-hardness for stars still works in
case rectangles may be rotated, because all boxes but one are squares.
The same holds for the strong {\cal NP}\xspace-hardness for trees;
for details see Appendix~\ref{sec:fbcr}.}
Although \textsc{CROWN}\xspace is {\cal NP}\xspace-hard in general, there are graph classes for
which the problem can be solved efficiently. In the remainder of this
section, we investigate one such class, irreducible triangulations,
and we consider a restricted variant of \textsc{CROWN}\xspace:
\textsc{Hier}-\fbcr.
\subsection{The \textsc{CROWN}\xspace{} problem on irreducible triangulations}
\label{sec:triangulation}
A box representation is called a \emph{rectangular dual} if the
union of all rectangles is again a rectangle whose boundary is formed
by exactly four rectangles. A graph $G$ admits a rectangular dual if
and only if $G$ is planar, internally triangulated, has a quadrangular
outer face and does not contain separating
triangles~\cite{buchsbaum08}. Such graphs are known as
\emph{irreducible triangulations}. The four outer vertices of an
irreducible triangulation are denoted by $v_N$, $v_E$, $v_S$, $v_W$ in
clockwise order around the outer quadrangle. An irreducible
triangulation $G$ may have exponentially many rectangular
duals. Any rectangular dual of~$G$, however, can be built up by
placing one rectangle at a time, always keeping the union of the placed
rectangles in staircase shape.
\wormhole{quasi-triangulated}
\begin{theorem}
\label{thm:quasi-triangulated}
\textsc{CROWN}\xspace on irreducible triangulations can be solved in linear time.
\end{theorem}
\begin{proof}[sketch]
The algorithm greedily builds up the supporting graph $G$, similarly
to an algorithm for edge-proportional rectangular
duals~\cite{nollenburg2013edge}.
We define a \emph{concavity} as a point on the boundary of the
representation constructed so far that is a bottom-right or top-left
corner of some rectangle. Start with a vertical and a
horizontal ray emerging from the same point $p$, as placeholders for
the right side of $v_W$ and the top side of $v_S$,
respectively. Then, at each step, consider a concavity, with
$p$ as the initial one. Since each concavity $p$ is contained
in exactly two rectangles, there exists a unique rectangle $R_p$
that is yet to be placed and has to touch both these rectangles. If
adding $R_p$ preserves the staircase shape of the representation, we
add it. If no such rectangle can be added, we conclude that $G$ is
not realizable; see Fig.~\ref{fig:quasi-triangulated}. The complete
proof is in the appendix.
\end{proof}
\begin{figure}[tb]
\subfloat{\includegraphics{quasi-triangulated-start}}
\hfill
\subfloat{\includegraphics{quasi-triangulated-intermediate}}
\hfill
\subfloat{\includegraphics{quasi-triangulated-infeasible}}
\caption{Left: starting configuration with rays $v_S$ and $v_W$.
Center: representation at an intermediate step: vertex $w$ fits
into concavity $p$ and results in a staircase, vertex $v$ fits
into concavity $s$ but does not result in a staircase. Adding
box~$w$ to the representation introduces a new concavity~$q$ and
allows wider boxes to be placed at~$r$. Right: no box can be
placed, so the algorithm terminates.}
\label{fig:quasi-triangulated}
\end{figure}
\subsection{The \textsc{Hier}-\fbcr{} problem}
\label{sec:hierarchy}
The \textsc{Hier}-\fbcr{} problem is a restricted variant of the \textsc{CROWN}\xspace{} problem
that can be used to create word clouds with a hierarchical structure;
see Fig.~\ref{fig:complexity-classes}.
The input is a directed acyclic graph $G$ with only one sink and with
a plane embedding. The task is to find a representation that
\emph{hierarchically realizes $G$}, meaning that for each directed
edge $(v,u)$ in $G$ the top of the box for $v$ is in contact with the
bottom of the box for $u$.
If the embedding of $G$ is not fixed, the problem is {\cal NP}\xspace-hard
even for a tree, by an easy adaptation of the proof of
Theorem~\ref{thm:trees:hardness}.
(Remove the vertices $a_2, a_3, a_4$, and orient the remaining edges
of $T_S$ upward according to the representation shown in
Fig.~\ref{fig:tree:hardness}.)
However, if we fix the embedding of the supporting graph $G$,
then \textsc{Hier}-\fbcr{} can be solved efficiently.
\begin{theorem}\label{thm:hierarchical-planar}
\textsc{Hier}-\fbcr{} can be solved in polynomial time.
\end{theorem}
\begin{proof}
Let $G$ be the given supporting graph,
with vertices corresponding to boxes
$B_1,\ldots,B_n$ where $B_i$ has height $h_i$ and width $w_i$, and
$B_1$ is the unique sink.
We first check that the orientation and embedding of $G$ are
compatible, that is, that incoming edges and outgoing edges are
consecutive in the cyclic order around each vertex.
The main idea is to set up a system of linear equations for the $x$-
and $y$-coordinates of the sides of the boxes. Let variables $t_i$
and $b_i$ represent the $y$-coordinates of the top and bottom of
$B_i$ respectively, and variables $\ell_i$ and $r_i$ represent the
$x$-coordinates of the left and right of $B_i$ respectively. For
each $i=1,\dots,n$, impose the linear constraints
$t_i = b_i + h_i$ and $r_i = \ell_i + w_i$. For each directed edge
$(B_i, B_j)$, impose the constraints
$t_i=b_j, r_i > \ell_j$, and $r_j > \ell_i$. The last two
constraints force $B_i$ and $B_j$ to share some $x$-range in which
they can make vertical contact. Initialize $t_1=0$.
With these equations, variables $t_i$ and $b_i$ are completely
determined since every box $B_i$ has a directed path to $B_1$.
Furthermore, the values for $t_i$ and $b_i$ can be found using a
depth-first-search of $G$ starting from $B_1$.
The $x$-coordinates are not yet determined and depend on the
horizontal order of the boxes, which can be established as follows.
We scan the boxes from top to bottom, keeping track of the
left-to-right order of boxes intersected by a horizontal line that
sweeps from $y=0$ downwards. Initially the line is at $y=0$ and
intersects only~$B_1$. When the line reaches the bottom of a box
$B$, we replace $B$ in the left-to-right order by all its
predecessors in $G$, using the order given by the plane embedding.
In case multiple boxes end at the same $y$-coordinate, we make the
update for all of them. Whenever boxes $B_a$ and $B_b$ appear
consecutively in the left-to-right order, we impose the constraint
$r_a \le \ell_b.$
The scan can be performed in $O(n \log n)$ time using a priority
queue to determine which boxes in the current left-to-right order
have maximum $b_i$ value. The resulting system of equations has
size $O(n)$ (because the constraints correspond to edges of a planar
graph). It is straightforward to verify that the system of
equations has a solution if and only if there is a representation of
the boxes that hierarchically realizes $G$. The constraints define
a linear program (LP) and can be solved efficiently. (A feasible
solution can be found faster than with an LP, but we omit the details
in this paper.)
\end{proof}
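The following sketch illustrates the $x$-coordinate feasibility test
(a hypothetical helper, not our actual implementation): it assumes
that the $y$-coordinates have already been propagated and that the
left-of pairs come from the sweep just described; the strict
inequalities are modeled with a small $\varepsilon$, and SciPy's LP
solver stands in for the faster method we omit.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def hier_crown_x(widths, edges, order_pairs, eps=1e-9):
    # widths[i]: width of box i; edges: (i, j) = box i vertically below j;
    # order_pairs: (a, b) = box a must lie left of box b (from the sweep).
    n = len(widths)
    A, rhs = [], []
    def leq(i, j, bound):                  # encode l_i - l_j <= bound
        row = np.zeros(n); row[i] += 1.0; row[j] -= 1.0
        A.append(row); rhs.append(bound)
    for i, j in edges:
        leq(j, i, widths[i] - eps)         # r_i > l_j: shared x-range
        leq(i, j, widths[j] - eps)         # r_j > l_i: shared x-range
    for a, b in order_pairs:
        leq(a, b, -widths[a])              # r_a <= l_b: a left of b
    if not A:                              # no constraints: anything works
        return np.zeros(n)
    res = linprog(np.zeros(n), A_ub=np.array(A), b_ub=np.array(rhs),
                  bounds=[(None, None)] * n, method="highs")
    return res.x if res.success else None  # left x-coordinates, or None
\end{verbatim}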
\begin{comment}
\begin{proof}
Let $G$ be the given supporting graph, that is, a directed acyclic
embedded planar graph with vertex set of boxes $\mathcal{B} =
\{B_1,\ldots,B_n\}$. Let $h_i$ and $w_i$ be the height and width
of box $B_i$, $i=1,\ldots,n$, and $B_1$ be the unique sink. Our
algorithm consists of three phases.
{\bf Phase 1:} We first check if $G$ is \emph{bimodal}, i.e., if
for every vertex the cyclic sequence of its incident edges
consists of a contiguous sequence of incoming edges and a
contiguous sequence of outgoing edges (both possibly empty). It is
obvious that bimodality is a necessary condition for hierarchical
realizability, so we stop if $G$ is not bimodal. \todo{MN: used
the established term bimodal here, saves one line.}
{\bf Phase 2:} Here we check whether the given heights of boxes
are compatible with the orientation of $G$. More precisely, we set
for each box $B_i$ two numbers $\mathrm{low}_i$ and
$\mathrm{high}_i$, which correspond to the $y$-coordinate of the
bottom and top side of $B_i$, respectively. In particular, we set
$\mathrm{low}_1 = 0$, for every $i = 1,\ldots,n$ we set
$\mathrm{high}_i = \mathrm{low}_i + h_i$, and for every edge $B_i
\to B_j$ we set $\mathrm{high}_i = \mathrm{low}_j$. This can be
done with one iteration of breadth-first search of $G$. If one
number would have to be set to two different values, then $G$ can
not be hierarchically realized and the algorithm stops.
{\bf Phase 3:} Here we check whether the given widths of boxes are
compatible with the orientation and embedding of $G$ and compute a
representation hierarchically realizing $G$, if it exists. Since
we already know the $y$-coordinates for each box it suffices to
compute a valid assignment of $x$-coordinates. To avoid overlaps,
any two boxes whose $y$-coordinates intersect interiorly must have
interiorly disjoint $x$-coordinates. \todo{MN: the y-coordinates
are two values for each box; what can be (non-)disjoint are the
y-ranges $[low_i,high_i]$ and $[low_j,high_j]$ of two boxes. Am
I wrong?} Since $G$ has a unique sink we can determine which of
the two boxes lies to the left and which to the right: consider
for every box $B_i$ the leftmost and rightmost directed path from
$B_i$ to $B_1$ and say that $B_i$ \emph{lies to the left of} $B_j$
if the leftmost path of $B_i$ joins the leftmost path of $B_j$
from the left. Similarly, $B_i$ \emph{lies to the right of} $B_j$
if the rightmost path of $B_i$ joins the rightmost path of $B_j$
from the right. Note that if $B_i$ lies to the left of $B_j$ then
$B_j$ does not lie to the left of $B_i$, but $B_i$ may also lie to
the right of $B_j$.
More precisely, we introduce for each box $B_i$ two variables
$\mathrm{left}_i$ and $\mathrm{right}_i$, which correspond to the
$x$-coordinate of the left and right side of $B_i$,
respectively. We consider the equations
\begin{equation}\label{eq:hierarchical-planar-eq}
\mathrm{right}_i = \mathrm{left}_i + w_i \hspace{5em} \text{for }i = 1,\ldots,n
\end{equation}
which ensure that each box $B_i$ has width $w_i$. If the
$y$-coordinates of $B_i$ and $B_j$ intersect interiorly, that is,
if $\max\{\mathrm{low}_i,\mathrm{low}_j\} < \min\{\mathrm{high}_i,
\mathrm{high}_j\}$, we have inequalities
\begin{eqnarray}
\mathrm{right}_i &\leq& \mathrm{left}_j \hspace{5.6em} \text{for $B_i$ to the left of $B_j$, and}\label{eq:hierarchical-planar-ineq1}\\
\mathrm{left}_i &\geq& \mathrm{right}_j \hspace{5em} \text{for $B_i$ to the right of $B_j$}\label{eq:hierarchical-planar-ineq2}
\end{eqnarray}
which ensure that $B_i$ and $B_j$ do not intersect
interiorly. Finally, for every directed edge $B_i \to B_j$ we
consider the inequalities
\begin{eqnarray}
\mathrm{right}_i &\geq& \mathrm{left}_j \hspace{5em} \text{and} \label{eq:hierarchical-planar-edge1}\\
\mathrm{left}_i &\leq& \mathrm{right}_j. \label{eq:hierarchical-planar-edge2}
\end{eqnarray}
which ensure that boxes $B_i$ and $B_j$ touch. It is easy to
verify that the solutions of the system of linear
equations~\eqref{eq:hierarchical-planar-eq} and
inequalities~\eqref{eq:hierarchical-planar-ineq1}--\eqref{eq:hierarchical-planar-edge2}
on variables $\mathrm{left}_i$ and $\mathrm{right}_i$
correspond
to representations hierarchically realizing $G$. Thus if a
solution is found, the algorithm defines a representation by
placing box $B_i$ with its bottom-left corner onto the point
$(\mathrm{left}_i,\mathrm{low}_i)$, $i =1,\ldots,n$. If no
solution exists, then $G$ can not be hierarchically realized and
the algorithm stops.
The first two phases can be easily carried out in linear time. In
the third phase, finding all leftmost and rightmost paths and
deciding for every pair $B_i$, $B_j$ whether $B_i$ lies left or
right of $B_j$, can also be done in linear time. Setting up the
equations and inequalities takes at most quadratic time since
there are $\mathcal{O}(n^2)$ inequalities. These constraints
define a linear program and can be solved in polynomial time.
(A feasible solution can be found faster than with LP, but we omit
the details in this paper.)
\end{proof}
\end{comment}
\rotate{We can show that \textsc{Hier}-\fbcr becomes weakly
NP-complete if rectangles may be rotated, by a simple reduction from
\prob{Subset Sum} (details in Appendix~\ref{sub:hier}). }
\begin{comment}
We remark that many of the
inequalities~\eqref{eq:hierarchical-planar-ineq1}--\eqref{eq:hierarchical-planar-edge2}
are redundant because they imply each other. In fact one can bring
everything down to a linear number of inequalities. Moreover, the
use of an LP solver can be avoided by defining an appropriate
directed graph on the variables $\mathrm{left}_i$ and
$\mathrm{right}_i$ whose edges correspond to the leftover
inequalities, and finding a feasible solution by a vertex-weighted
topological ordering. This way one can lower the runtime to be
linear in $n$. However, for simplicity we decided not to present
more details on this faster algorithm here.
\end{comment}
\section{The \textsc{Max}-\fbcr{} problem}\label{sec:optimize}
In this section, we study approximation algorithms for
\textsc{Max}-\fbcr\extremal{ and consider an extremal variant of the problem}.
\subsection{Approximation Algorithms.}
We present approximation algorithms for \textsc{Max}-\fbcr
restricted to certain graph classes. Our basic building
blocks are an approximation algorithm for stars and an exact
algorithm for cycles. Our general technique is to find a collection
of disjoint stars or cycles in a graph. We begin with stars, using a
reduction to
the \prob{Maximum Generalized Assignment Problem} (GAP) defined as
follows:
Given a set of bins with capacity constraints and a set of items that
may have different sizes and values in each bin, pack a
maximum-value subset of items into the bins. It is known that the
problem is {\cal NP}\xspace-hard (\prob{Knapsack} and \prob{Bin
Packing} are special cases of \prob{GAP}), and there exists a
$(1-1/e)$-approximation
algorithm~\cite{Fleischer2011}. In the remainder, we assume that there
is an $\alpha$-approximation algorithm for \prob{GAP},
setting $\alpha = 1-1/e > 0.632$.
\begin{theorem}\label{thm:approx-star}
There exists an
$\alpha$-approximation algorithm for \textsc{Max}-\fbcr{} on stars.
\end{theorem}
\begin{proof}
Let $B_0$ denote the box corresponding to the center of the star. In
any optimal solution for the \textsc{Max}-\fbcr{} problem there are four boxes
$B_1,B_2,B_3,B_4$ whose sides contain one corner of $B_0$
each. Given $B_1,B_2,B_3,B_4$, the problem reduces to assigning each
remaining box $B_i$ to
one of the four sides of $B_0$, where it makes contact for its whole
length;
see Fig.~\ref{fig:approx-star}.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{star-approx}
\caption{An optimal representation for the \textsc{Max}-\fbcr{} problem whose
supporting graph is a star with center $B_0$. The striped boxes
did not fit into the solution.}
\label{fig:approx-star}
\end{figure}
This is a special case of \prob{GAP}: The bins are the
four sides of $B_0$,
the size of an item is its width for the horizontal bins and its
height for the vertical bins,
and the value of an item is the profit of its adjacency to the
central box.
We can now apply the algorithm for the \prob{GAP} problem, which
gives an $\alpha$-approximation for the set of boxes. To get an
approximation for the \textsc{Max}-\fbcr{} problem, we consider all possible
ways of choosing boxes $B_1,B_2,B_3,B_4$, which increases the runtime
only by a polynomial factor.
\end{proof}
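For illustration, the following sketch sets up this \prob{GAP}
instance (omitting the corner boxes $B_1,\dots,B_4$); a naive greedy
first-fit stands in for the \prob{GAP} solver as our simplification
only, since it does not attain the $\alpha$ guarantee:
\begin{verbatim}
# The bins are the four sides of B0; an item's size is its width on the
# horizontal bins and its height on the vertical bins. Greedy first-fit
# by profit is a stand-in for the GAP solver (no alpha guarantee).
def star_side_packing(B0, items):
    w0, h0 = B0                            # dimensions of the central box
    caps = {"top": w0, "bottom": w0, "left": h0, "right": h0}
    placed = {side: [] for side in caps}
    total = 0
    for w, h, profit in sorted(items, key=lambda it: -it[2]):
        for side in caps:
            size = w if side in ("top", "bottom") else h
            if size <= caps[side]:         # item fits on this side
                caps[side] -= size
                placed[side].append((w, h, profit))
                total += profit
                break
    return placed, total
\end{verbatim}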
\rotate{In the case where rectangles may be rotated by $90^\circ$,
the \textsc{Max}-\fbcr{} problem on a star reduces to an easier problem, the
\prob{Multiple Knapsack Problem}, where every item has the same size
and value no matter which bin it is placed in. This is because we
will always attach a rectangle $B$ to the central rectangle of the
star using the smaller dimension of $B$. There is a PTAS for
\prob{Multiple Knapsack}~\cite{Chekuri}. Therefore, there is a PTAS
for \textsc{Max}-\fbcr on stars if we may rotate rectangles.}
A \emph{star forest} is a disjoint union of stars.
Theorem~\ref{thm:approx-star} applies to a star forest since we can
combine the solutions for the disjoint stars.
\begin{theorem}
\label{thm:approx-from-stars}
\textsc{Max}-\fbcr on the class of graphs that can be partitioned in
polynomial time into $k$ star forests admits an
$\alpha/k$-approximation algorithm.
\end{theorem}
\begin{proof}
The algorithm is to partition the edges of the supporting graph into
$k$ star forests, apply the approximation algorithm of
Theorem~\ref{thm:approx-star} to each star forest, and take the best
of the $k$ solutions. This takes polynomial time. We claim this
gives the desired approximation factor. Consider an optimum
solution, and let \ensuremath{W_\mathrm{\!opt}}\xspace be the total profit of edges that are
realized as contacts. By the pigeonhole principle, there is a star
forest $F$ in the partition with realized profit at least
$\ensuremath{W_\mathrm{\!opt}}\xspace/k$ in the optimum solution. Therefore our approximation
achieves at least $\alpha \ensuremath{W_\mathrm{\!opt}}\xspace/k$ profit for~$F$.
\end{proof}
\begin{corollary}
\label{cor:approx}
\textsc{Max}-\fbcr admits
\begin{itemize}
\item an $\alpha/2$-approximation algorithm on trees,
\item an $\alpha/5$-approximation algorithm on planar graphs.
\end{itemize}
\end{corollary}
\begin{proof}
It is easy to partition any tree into two star forests in linear
time. Moreover, it is known that every planar graph has star
arboricity at most $5$, that is, it can be partitioned into at most
$5$ star forests, and such a partition can be found in polynomial
time~\cite{Hakimi199693}. The results now follow directly from
Theorem~\ref{thm:approx-from-stars}.
\end{proof}
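One simple linear-time construction for trees (ours, for
illustration; the corollary only needs that some such partition
exists) groups each tree edge by the parity of the depth of its upper
endpoint; the two groups are star forests because adjacent vertices
have depths of different parity:
\begin{verbatim}
# Split a tree into two star forests: put each edge into forest 0 or 1
# according to the parity of the depth of its upper endpoint. Stars in
# the same forest have same-parity centers and never share a vertex.
from collections import deque

def tree_to_star_forests(n, adj, root=0):
    depth = [-1] * n
    depth[root] = 0
    forests = ([], [])
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if depth[v] == -1:           # v is a child of u
                depth[v] = depth[u] + 1
                forests[depth[u] % 2].append((u, v))
                queue.append(v)
    return forests                       # two edge lists, each a star forest
\end{verbatim}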
Our star forest partition method is possibly not optimal. Nguyen
\textit{et al.}~\cite{nguyen2008approximating} show how to find, in
polynomial time, a star forest of an arbitrary weighted graph carrying
at least half of the profit of an optimal star forest. We cannot,
however, guarantee that this approximation of the optimal star forest
carries a positive fraction of the total profit of an optimal solution
to \textsc{Max}-\fbcr. Hence, approximating \textsc{Max}-\fbcr
for general graphs remains an open problem. As a first step in this
direction, we present a constant-factor approximation for supporting
graphs of bounded maximum degree. First we need the following lemma.
\begin{lemma}\label{lem:build-cycle}
Given a sequence of $n \geq 3$ boxes, we can find a representation
realizing the $n$-cycle in linear time.
\end{lemma}
\begin{proof}
Let $C = (v_1, v_2, \ldots, v_n)$ be a cycle. Let $W$ be the sum of
all the widths, $W=\sum_i w_i$, and let $t$ be the maximum index such
that $\sum_{i \le t} w_i < W/2$. We place $v_1, v_2, \ldots, v_t$
side by side in order from left to right with their bottoms on a
horizontal line $h$. We call this the ``top channel''. Starting
from the same point on $h$ we place $v_n, v_{n-1}, \ldots, v_{t+2}$
side by side in order from left to right with their tops on $h$. We
call this the ``bottom channel''. Note that $v_1$ and $v_n$ are in
contact. It remains to place $v_{t+1}$ in contact with $v_t$ and
$v_{t+2}$. It is easy to show that the following works: add
$v_{t+1}$ to the channel of minimum width or, in case of a tie,
place $v_{t+1}$ straddling the line $h$.
\end{proof}
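The construction can be summarized in a few lines (a hypothetical
sketch returning the bottom-left corner of each box; it assumes
$1\le t\le n-2$ and simplifies the tie-breaking detail from the
proof):
\begin{verbatim}
# Realize the cycle v_1,...,v_n (boxes[i-1] = (w_i, h_i)) as in the lemma.
def realize_cycle(boxes):
    W = sum(w for w, _ in boxes)
    acc, t = 0.0, 0
    for i, (w, _) in enumerate(boxes, start=1):
        if acc + w < W / 2.0:            # t = max index with prefix < W/2
            acc, t = acc + w, i
        else:
            break
    pos, x = {}, 0.0
    for i in range(1, t + 1):            # top channel: bottoms on y = 0
        w, h = boxes[i - 1]; pos[i] = (x, 0.0); x += w
    top_w, x = x, 0.0
    for i in range(len(boxes), t + 1, -1):  # bottom channel: tops on y = 0
        w, h = boxes[i - 1]; pos[i] = (x, -h); x += w
    bot_w = x
    w, h = boxes[t]                      # finally place v_{t+1}
    pos[t + 1] = (top_w, 0.0) if top_w <= bot_w else (bot_w, -h)
    return pos
\end{verbatim}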
\begin{figure}[t]
\centering
\includegraphics{cycle-layout-new}\hspace{1cm}
\includegraphics[width=5cm]{extremal-LB-continue}
\caption{Left: Realizing cycle $(v_1,\ldots,v_{10})$. Right: $8$
adjacencies with $5$ boxes in Theorem~\ref{thm:extremal}.}
\label{fig:combined}
\end{figure}
Following the idea of Theorem~\ref{thm:approx-from-stars}, we can
approximate \textsc{Max}-\fbcr{} by applying Lemma~\ref{lem:build-cycle} to a
partition of the supporting graph into sets of disjoint cycles.
\begin{theorem}\label{thm:approx-from-cycles}
\textsc{Max}-\fbcr on the class of graphs that can be partitioned into $k$
sets of disjoint cycles (in polynomial time) admits a
(polynomial-time) algorithm that achieves total profit at least
$\frac{1}{k} \sum_{i\neq j} p_{ij}$. In particular, there is
a
$1/k$-approximation algorithm for \textsc{Max}-\fbcr on this graph class.
\end{theorem}
\begin{corollary}
\label{cor:delta-approx}
\textsc{Max}-\fbcr{} on graphs of maximum degree~$\Delta$ admits a
$2/(\Delta+1)$-approximation.
\end{corollary}
\begin{proof}
As Petersen~\cite{peterson} shows, the edges of any graph of maximum
degree $\Delta$ can be covered by $\lceil\Delta/2\rceil$
sets of cycles, and such sets can be found in polynomial time. The
result now follows from Theorem~\ref{thm:approx-from-cycles}.
\end{proof}
\extremal{
\subsection{An Extremal \textsc{Max}-\fbcr{} Problem.}
In the following, we bound the maximum number of contacts that can
be made when placing $n$ boxes.
It is easy to see that for $n=2,3$ any set of boxes allows $2n-3$
contacts. In case $n=4$ the boxes can be arranged so that their
corners meet at a point, thus realizing $2n-2$ contacts. For larger
$n$ we have:
\begin{theorem}\label{thm:extremal}
For $n\ge 5$ and any set of $n$ boxes, the boxes can be placed in
the plane to realize $2n-2$ contacts. For some sets of boxes this
is the best possible.
\end{theorem}
\begin{proof}
Let $B_1, \ldots, B_n$ be any set of boxes. We place the first 5
boxes to make 8 contacts, and place the remaining boxes to make 2
contacts each for a total of $8+2(n-5) = 2n-2$ contacts. Among the
first 5 boxes, let $B_1$ and $B_2$ be the boxes with largest height,
and $B_3$ and $B_4$ be the boxes with largest width. Place the five
boxes as in Fig.~\ref{fig:combined}. Place the remaining boxes one
by one as in the proof of Lemma~\ref{lem:build-cycle} along the
horizontal line between $B_2$ and $B_3$. Then each remaining box
makes two new contacts.
Next we describe a set of $n$ boxes for which the maximum number of
contacts is $2n-2$. Let $B_i$ be a square box of side length
$2^i$. Consider any placement of the boxes and partition the
contacts into horizontal and vertical contacts. Here we assume that
a point contact of two boxes is horizontal if the point is the
south-west corner of the first box and the north-east corner of the
second; otherwise, a point contact is vertical. From the side
lengths of boxes, it follows that neither set of contacts contains a
cycle. Thus each set of contacts has size at most $n-1$ for a total
of $2n-2$.
\end{proof}
}
\begin{comment}
Consider a set $\mathcal{B} = \{B_1,\ldots,B_n\}$ of $n$ boxes with
fixed dimensions, the complete graph, $G=K_n$, as support graph, and
all profits worth 1 unit.
Denote by $f(\mathcal{B})$ the maximum number of adjacencies that
can be realized among the $n$ boxes in $\mathcal{B}$. Further we
define $f(n) = \min\{f(\mathcal{B}) \;:\; |\mathcal{B}| = n\}.$
\begin{theorem}
\label{thm:extremal}
For $n = 2,3,4$ we have $f(n) = 2n-3$ and for every $n \geq 5$ we
have $f(n) = 2n-2$.
\end{theorem}
\begin{proof}
It is easy to verify the lower bound $f(n) \geq 2n-3$ for the base
cases $n=2,3,4$. So let $n \geq 5$ and fix $\mathcal{B} =
\{B_1,\ldots,B_n\}$ to be any set of $n$ boxes. We have to show that
$f(\mathcal{B}) \geq 2n-2$, that is, that we can position the boxes
so that $2n-2$ pairs of boxes touch. We start by selecting five
arbitrary boxes $B_1,B_2,B_3,B_4,B_5$. Without loss of generality,
let $B_1$ and $B_2$ be the boxes with largest height, and $B_3$ and
$B_4$ be the boxes with largest width among $\{B_3,B_4,B_5\}$. We
place the five boxes as in Fig.~\ref{fig:combined}. The remaining
$n-5$ boxes are added to the picture in any order in such a way that
every box realizes two adjacencies at the time it is placed. To this
end it is enough to apply the procedure described in
Lemma~\ref{lem:build-cycle} taking $B_2,B_3$ as the first two boxes.
Next consider the upper bounds. We have $f(n) \leq 2n-3$ for $n=2,3$
simply because a pair of boxes can touch only once. We have $f(4)
\leq 5$ because contact graphs of boxes are planar graphs in which
every triangle is an inner face, which rules out $K_4$. So let $n
\geq 5$. We show that $f(n) \leq 2n-2$, by constructing a set of $n$
boxes for which, in any arrangement of the boxes, at most $2n-2$
pairs of boxes touch. For $i=1,\ldots,n$ we define $B_i$ to be a
square box of side length $2^i$. Consider any placement of the boxes
$B_1,\ldots,B_n$. We partition the contacts into horizontal contacts
and vertical contacts. From the side length of boxes, it follows
that neither set of contacts contains a cycle, i.e., consists of at
most $n-1$ contacts. This gives at most $2n-2$ contacts in total.
\end{proof}
\end{comment}
\section{The \textsc{Area}-\fbcr{} problem}\label{sec:area}
The same supporting graph can often be realized by different contact
representations, not all of which are equally useful or visually
appealing when viewed as word clouds. In this section we consider the
\textsc{Area}-\fbcr{} problem and show that finding a ``compact'' representation
that fits into a small bounding box is another {\cal NP}\xspace{}-hard problem.
The reduction is from the (strongly) {\cal NP}\xspace{}-hard $2$D \prob{Strip
Packing} problem~\cite{LMM02}: The input is a set $R$
of $n$ rectangles with height and weight functions $w: R \rightarrow
\ensuremath{\mathbb{N}}\xspace$ and $h: R \rightarrow \ensuremath{\mathbb{N}}\xspace$, and a strip of width $W$ and height
$H$. All the input numbers are bounded by some polynomial in $n$.
The task is to pack the given rectangles into the
strip.
The \prob{Strip Packing} problem is actually equivalent to \textsc{Area}-\fbcr{}
when the supporting graph is an independent set. However, edges in
the supporting graph impose additional constraints on the
representation, which might make \textsc{Area}-\fbcr{} easier. The following
theorem (proved in the appendix) shows that this is not the case.
\wormhole{path-packing-hard}
\begin{theorem}
\label{thm:path-packing-hard}
\textsc{Area}-\fbcr is {\cal NP}\xspace-hard even on paths.
\end{theorem}
\section{Experimental Results}\label{sec:experimental}
We implemented our new methods for constructing word clouds: the
\textsc{Star Forest} algorithm based on extracting star forests
(Corollary~\ref{cor:approx}), and the \textsc{Cycle Cover} algorithm
based on decomposing edges of a graph into cycle covers
(Theorem~\ref{thm:approx-from-cycles}). We compared the algorithms
with the existing method from~\cite{wordle09} (referred to as
\textsc{Random}), the algorithm from~\cite{Cui_2010_wordcloud}
(referred to as \textsc{CPDWCV}), and the algorithm
from~\cite{wu2011semantic} (referred to as \textsc{Seam Carving}).
Our dataset consists of 120 Wikipedia documents with 400 or more
words each. For the word clouds, we removed stop-words (e.g., ``and'',
``the'', ``of'') and constructed supporting graphs $G_{50}$ and
$G_{100}$ for the $50$ and $100$ most frequent words, respectively.
Implementation details are provided in the appendix.
We compare the percentage of realized profit in the representation of
the supporting graphs. Since \textsc{Star Forest} requires a planar
supporting graph, we first extract a maximal planar subgraph of $G$
and then apply the algorithm to the subgraph. The percentage of
realized profit is presented in the table below. Our results indicate that,
in terms of the realized profit, \textsc{Cycle Cover} and \textsc{Star
Forest} outperform existing approaches; see Fig.~\ref{fig:wordle}.
In practice, \textsc{Cycle Cover} realizes more than $17 \%$ of the
total profit of graphs with $50$ vertices. On the other hand, existing
algorithms may perform better in terms of compactness, aspect ratio,
and other aesthetic criteria; we leave a deeper comparison of word
cloud algorithms as a future research direction.
\smallskip
\begin{center}
\begin{tabular}{@{}lr<{\qquad}r<{\quad\qquad}@{}}
Algorithm & \multicolumn{1}{c}{Realized Profit of $G_{50}$} &
\multicolumn{1}{c}{Realized Profit of $G_{100}$} \\
\toprule
\textsc{Random}~\cite{wordle09} & $3.4 \%$ & $2.2 \%$ \\
\textsc{CPDWCV}~\cite{Cui_2010_wordcloud} & $12.2 \%$ & $8.9 \%$ \\
\textsc{Seam Carving}~\cite{wu2011semantic} & $7.4 \%$ & $5.2 \%$ \\
\textsc{Star Forest} & $11.4 \%$ & $8.2 \%$ \\
\textsc{Cycle Cover} & $17.8 \%$ & $13.8 \%$ \\
\end{tabular}
\end{center}
\section{Conclusions and Future Work}
\label{sec:conclusions}
We formulated the Contact Representation of Word Networks (\textsc{CROWN}\xspace{}) problem,
motivated by the desire to provide theoretical guarantees for
semantics-preserving word cloud visualization. We described efficient
algorithms for variants of \textsc{CROWN}\xspace, showed that some
variants are {\cal NP}\xspace-hard, and presented several approximation
algorithms. A natural open problem is to find an approximation
algorithm for general graphs with arbitrary profits.
\medskip\noindent{\bf Acknowledgments.} Work on this problem began at
Dagstuhl Seminar 12261. We thank the organizers, participants, Therese
Biedl, Steve Chaplick, and G\"unter Rote.
\bibliographystyle{alpha}
\section{Introduction}
Compressive sensing (CS) initially emerged around the year 2006 \citep{Donoho2006,Candes2006}. The aim of CS is to recover an $s$-sparse signal $\mathbf{x}\in \mathbb{C}^N$ from $m$ noisy observations $\mathbf{y} \in \mathbb{C}^m$:
\begin{align*}
\mathbf{y}=A\mathbf{x}+\mathbf{e} \tag{1}
\end{align*}
In (1), $\mathbf{x}$ has $s$ non-zero elements, implying that $s=\lVert\mathbf{x}\rVert_0$; $A\in \mathbb{C}^{m\times N}$ is a measurement matrix with $m\ll N$ satisfying the restricted isometry property \citep[chapter 6]{Foucart2013}; and $\mathbf{e}\in\mathbb{C}^m$ is additive noise such that $\lVert\mathbf{e}\rVert_2\le \eta$ for some $\eta\ge0$.
To recover $\mathbf{x}$ in $(1)$, the problem can be translated into a quadratically constrained $\ell_1$-minimization problem:
\begin{align*}
\underset{\mathbf{z}\in\mathbb{C}^N}{\text{minimize}}\quad \lVert\mathbf{z}\rVert_1 \quad \quad \text{subject to} \quad \lVert A\mathbf{z}-\mathbf{y}\rVert_2\le \eta.
\end{align*}
There are several specific algorithms to solve this optimization problem, e.g. orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP) \citep[chapter 3]{Foucart2013}. All the algorithms mentioned above need the sparsity $s$ as an input. However, $s$ is typically unknown in practice, so the difficulty in recovering $\mathbf{x}$ lies in estimating the sparsity $s$.
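To illustrate this dependence on $s$, here is a compact OMP sketch (a
standard textbook variant written by us, not tied to any particular
library; note that $s$ is a required input):
\begin{verbatim}
# Orthogonal matching pursuit; assumes s >= 1.
import numpy as np

def omp(A, y, s):
    m, N = A.shape
    support, residual = [], y.astype(complex)
    for _ in range(s):
        scores = np.abs(A.conj().T @ residual)   # correlation with columns
        scores[support] = 0                      # ignore chosen columns
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef      # project out the support
    x = np.zeros(N, dtype=complex)
    x[support] = coef
    return x
\end{verbatim}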
In this study, we propose a novel estimator for the sparsity using a Bayesian hierarchical model (BHM). By assuming the sparsity $s$ to be random, our target parameter becomes the mathematical expectation of $s$, $E(s)$. The assumption that $s$ is random is reasonable in practice, since noise is always present in the signal acquisition process, making it impossible to obtain two sparse signals that are exactly the same even under the same conditions \citep{Henkelman1985}.
We estimate $E(s)$ using an observed 2D sparse image $\mathbf{x}$. The unbiasedness of the estimator is derived analytically and confirmed through a simulation study. Another interesting finding is that the estimator is asymptotically normally distributed under regularity conditions, which can be used to construct a confidence interval for $E(s)$. This property is also confirmed through the simulation study. In the simulation study, a 2D magnetic resonance (MR) image is considered as the input signal. MR images are not sparse in general, but they are compressible \citep{Haldar2010}. For instance, they can be transformed into a sparse image, $\mathbf{x}$, through wavelet thresholding techniques \citep{Prinosil2010}. In the MR imaging setting, the measurement matrix $A$ is a partial Fourier matrix and $\mathbf{y}$ contains the measurements in the frequency domain, called $k$-space. In practice, once the estimate from the model is obtained, it can be used directly for recovering future MR images under the framework of CS whenever they are believed to have the same sparsity level $E(s)$ after sparsification, for example (but not limited to) two scans of one person's brain.
This paper is organized as follows. The statistical theory is introduced in Section 2. In Section 3, the methods used for the simulation study and real data analysis are specified. Results are presented in Section 4, followed by conclusion and discussion in Section 5. The paper is closed with proofs of the statistical properties in Appendix.
\section{Theory}
Since $s=\lVert\mathbf{x}\rVert_0$, the concrete non-zero values of the elements of $\mathbf{x}$ are not of interest. Assume that $s$ is random, in the sense that each element of $\mathbf{x}$ is randomly assigned either a zero or a non-zero value. Thus, instead of $s$, the mathematical expectation of $s$, $E(s)$, is the parameter of interest. In this section, we introduce a BHM \citep{Cressie2011} to construct a new estimator for $E(s)$ of an observed sparse image $\mathbf{x}$.
A BHM is a statistical model consisting of multiple layers; it is common to have three. The first layer is the data model, used to model the observed data; the second layer is the process model, used to model the unknown parameters of interest in the data model; and the last one is the hyperparameter model, used to model the unknown hyperparameters. Let $o_i=\mathbbm{1}_{(x_i\ne 0)}$, where $\mathbbm{1}$ is the indicator function. A Bernoulli distribution can be used to describe the assumption of randomness of $s$:
\begin{align*}
\text{Layer 1}\quad o_i|p_i\sim Ber(p_i),\quad 0<p_i<1
\end{align*}
Let $\mathbf{o}=\{o_1,o_2,...,o_N\}$ and $\mathbf{p}=\{p_1,p_2,...,p_N\}$, where $N$ is the number of elements in $\mathbf{o}$ as well as in $\mathbf{x}$.
Then a second layer is needed to describe the distribution of $p_i$ given by
\begin{align*}
\text{Layer 2}\quad \text{logit}(p_i)= \mu+m_i+\epsilon_i
\end{align*}
where $\mu$ is the intercept term, $m_i$ represents a structured spatial effect and $\epsilon_i$ represents an unstructured spatial effect. It is very common to include these two types of spatial effects, since there is typically a correlated random structure that induces correlation based on the neighborhood structure, as well as uncorrelated random noise that varies from pixel to pixel \citep{Carroll2015}. Let $\mathbf{m}=\{m_1,m_2,...,m_N\}$ be normally distributed with mean $\mathbf{0}$ and precision matrix $Q(\boldsymbol{\theta})$, i.e. $\mathbf{m}\sim N(\mathbf{0}, Q(\boldsymbol{\theta}))$, where the precision matrix is defined as the inverse of the covariance matrix of the random field. Let $\boldsymbol{\epsilon}=\{\epsilon_1,\epsilon_2,...,\epsilon_N\}$ be independent and identically distributed normal with mean $\mathbf{0}$ and precision matrix $\tau_{iid}\cdot \mathrm{I}_N$, i.e. $\boldsymbol{\epsilon}\sim N(\mathbf{0},\tau_{iid}\cdot \mathrm{I}_N)$, where $\tau_{iid}$ is the marginal precision and $\mathrm{I}_N$ is the identity matrix of dimension $N\times N$. Several spatial models use the Markov property to construct the precision matrix $Q$, in which case the random field $\mathbf{m}$ is called a Gaussian Markov random field (GMRF); examples include the Besag model \citep{Besag1991} and the Leroux model \citep{Leroux2000}, which use neighborhood information. The advantage of using a GMRF is the great computational benefit it produces \citep{Rue2005}. To use such models, the conditional properties of the GMRF, i.e. the order of the neighborhood structure, should be specified in advance.
It is often more intuitive, especially in geostatistics, to specify a GMRF through a Mat\'{e}rn covariance between two distinct sites $i$ and $j$ than to specify the conditional properties.
Mat\'{e}rn covariance is defined as \citep{Matern1960}:
\begin{align*}
Cov(m_i,m_j)=\frac{\sigma^2}{\Gamma(\nu)\times 2^{\nu-1}}(\kappa\lVert i-j\rVert_2)^\nu K_\nu(\kappa\lVert i-j\rVert_2),
\end{align*}
where $K_\nu$ is the modified Bessel function of the second kind, $\Gamma$ is the Gamma-function, $\nu$ is a smoothness parameter, $\kappa$ is a range parameter and $\sigma^2$ is marginal variance. Two important properties of a random field with Mat\'{e}rn covariance are that the random field is stationary and isotropic (when the distance is Euclidean) and the
covariance decreases as the distance between two sites $i$ and $j$ increases \citep{Matern1960,Stein1999}.
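For reference, this covariance can be transcribed directly into code
(a small helper of our own, valid for distances $\lVert i-j\rVert_2>0$;
at distance $0$ the covariance equals $\sigma^2$):
\begin{verbatim}
# Matern covariance as defined above, for distances dist > 0.
import numpy as np
from scipy.special import gamma, kv

def matern_cov(dist, sigma2=1.0, kappa=1.0, nu=1.0):
    d = np.asarray(dist, dtype=float)
    scale = sigma2 / (gamma(nu) * 2.0 ** (nu - 1.0))
    return scale * (kappa * d) ** nu * kv(nu, kappa * d)
\end{verbatim}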
In this study, a GMRF with Mat\'{e}rn covariance is used. In fact, a continuous Gaussian random field with Mat\'{e}rn covariance function is a solution to the stochastic partial
differential equation \citep{WHITTLE1954,WHITTLE1963},
\begin{align*}
(\kappa^2-\Delta)^{\alpha/2}z(\mathbf{s})&=\mathcal{W}(\mathbf{s}),\quad \alpha=\nu+d/2,
\end{align*}
where $d$ is the dimension of the Gaussian random field $z(\mathbf{s})$ with location index $\mathbf{s}$, $\Delta$ is the Laplacian operator and $\mathcal{W}(\mathbf{s})$ is a
random field with Gaussian white noises.
\citet{Lindgren2011} have proposed that a GMRF defined on a regular unit-distance lattice with a sparse precision matrix can be used
to represent a continuous Gaussian random field with Mat\'{e}rn covariance for $\nu\in \mathbb{Z}^+$. The sparseness of the precision matrix $Q$ is controlled by $\nu$: the smaller $\nu$
is, the sparser $Q$ is. For example, if $\nu=1$, $Q$ can be regarded as a precision matrix defined through the third-order neighborhood structure, and the non-zero values of $Q$ are controlled by $\kappa$.
\begin{align*}
\text{Layer 3}\quad &\text{It provides the prior distributions of the unknown hyperparameters such as}\\ &\mu, \boldsymbol{\theta}=\{\sigma, \kappa, \nu \} \text{ and } \tau_{iid}.
\end{align*}
Bayesian inference is applied to obtain the posterior distribution of $\mathbf{p}$, i.e. the distribution of $\mathbf{p}|\mathbf{o}$.
By using the posterior mean of $\mathbf{p}$, we construct an estimator for the mean sparsity $E(s)$:
\begin{align*}
\widehat{E(s)}=\sum_{i=1}^N E\left(p_i|\mathbf{O}\right),
\end{align*}
where $\mathbf{O}$ is the random field from which $\mathbf{o}$ is sampled. The statistical properties of the estimator are presented in the following propositions, whose proofs are given in the Appendix.
\begin{restatable}{proposition}{propone}
\label{prop1}
$\widehat{E(s)}$ is an unbiased estimator and its variance equals the sum of all the elements of the covariance matrix of $E(\mathbf{p}|\mathbf{O})$.
\end{restatable}
Before presenting the next proposition, we introduce a definition and some notation that help in understanding the proposition and its proof.
\begin{defini}
A set of random variables, of which the dimension could be any positive integer, is said to be $\rho$-radius dependent if any two random variables in the set are independent as long as the distance between the two random variables is greater than $\rho$, where $\rho\ge0$.
\end{defini}
\begin{remark}
$\rho=0$ implies that the random variables are independent.
\end{remark}
\begin{remark}
Under the model setting with Mat\'{e}rn covariance, $\{\text{logit}(p_i), 1\le i\le N\}$ are $\rho$-radius dependent random variables $(\rho>0)$, where the smoothing parameter $\nu$ is related to the spatial dependence $\rho$. For example, if $\nu=1$, $\rho=2$ and if $\nu=2$, $\rho=3$.
\end{remark}
Let $\rho^*=\left \lceil{\rho}\right \rceil$, the smallest integer greater than or equal to $\rho$.
Let $\phi$ be a positive integer greater than $\rho^*$. Let $n_1, n_2$ be the dimensions of a sparse image, so that $n_1 \times n_2 = N$. The sparse image can be divided into a set of independent squares and borders that separate the squares. Each square has dimension $(2\phi+1)\times (2\phi+1)$ and consists of $(2\phi+1)^2$ random variables (pixels). The width of each border is $\rho^*$, and the border regions surrounding each square consist of $2(2\phi+1)\rho^*+(\rho^*)^2$ random variables. Let $n_{sq}$ be the number of squares. Let $S_k$ be the sum of the random variables in the $k$th square and $S_k^B$ be the sum of the random variables in the borders surrounding the $k$th square, where $k=1,2,...,n_{sq}$; also let $\sigma^2_k$ be the variance of $S_k$ and $r^3_k=E|S_k|^3.$
\begin{restatable}{proposition}{proptwo}
\label{prop2}
$\widehat{E(s)}$ is asymptotically normally distributed as $n_1, n_2 \to \infty$, if
\begin{enumerate}[label=\alph*)]
\item $E\left(p_i|\mathbf{O}\right)$'s are absolutely continuous random variables for $1\le i \le N$, and
\item $n_{sq} \to \infty$, and $\phi \to\infty$ at a rate slower than $n_{sq}^{1/6}$.
\end{enumerate}
\end{restatable}
\begin{remark}
Condition $b)$ says that one can choose a smoothness parameter $\nu$ that is relatively small compared to the size of the image, implying that the spatial dependence is not strong; otherwise this assumption may not hold.
\end{remark}
\begin{remark}
The asymptotics in the proposition refers to increasing-domain asymptotics. There is another type of asymptotics, namely infill asymptotics (fixed-domain asymptotics). Infill asymptotics can cause the spatial dependence $\rho$, and hence $\phi$, to increase rapidly, which might violate condition $b)$. This is in line with what \citet{Cressie1993} noted: infill asymptotics is preferable for geostatistical data, whereas increasing-domain asymptotics is often more appropriate for lattice data.
\end{remark}
\section{Methods}
In this section, we briefly introduce the MR images to be used and specify how an image is sparsified so that it can be analysed using the BHM described in Section 2. Furthermore, we specify how the prior distributions for the BHM are set and how the performance of the estimator is evaluated.
\subsection{Input data}
Two kinds of MR images are analysed in this study. One kind consists of simulated brain images with resolution $128\times 128$; the other is a real brain image with resolution $256\times 256$. The simulated images were produced using simulated gradient-echo-based dynamic contrast-enhanced MRI scans at 3T with a noise level equivalent to that of a 12-channel head coil \citep[see][for details]{Brynolfsson2014}. The real brain image was acquired with a 2D spin-echo sequence on a 3T GE Signa Positron emission tomography (PET)/MR scanner.
To be able to fit the BHM to a sparsified MR image, the following question should be answered: is the sparsity of a sparsified MR image random? Assume $\mathbf{x}_1,\mathbf{x}_2$ are two sparse images transformed from two sequential MR scans, $\mathbf{x}_{MRI_1}\text{ and } \mathbf{x}_{MRI_2}$, of the same brain under the same conditions. Any slight difference between the two images may result in different positions and numbers of non-zero values in $\mathbf{x}_1 \text{ and } \mathbf{x}_2$, i.e. $s_1 \ne s_2$, which implies that $s$ varies and can be considered random.
\subsection{Sparsification}
Since MR images are not sparse in general, a discrete wavelet transform (DWT) followed by a thresholding method can be used to transform an MR image into a sparse image in the wavelet domain. Many DWT and thresholding methods are available \citep{Vanithamani2011}. Since this study does not focus on evaluating the performance of these methods, the DWT of Daubechies 1 with level 3 is used, followed by hard thresholding to eliminate the noise. The reason for using a wavelet thresholding method is that the wavelet transform is good at energy compaction: the smaller coefficients represent noise, while the larger coefficients represent image features. The DWT decomposes the image into four sub-bands in general (i.e. A, DH, DV and DD), as shown in Figure 1.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth,keepaspectratio]{fig2}
\caption{\small {Discrete Wavelet Transform with three levels}}
\label{fig:2}
\end{figure}
The numbers $1,2 \text{ and } 3$ in Figure 1 indicate the three levels of the DWT; DH, DV and DD are the detail sub-bands in the horizontal, vertical and diagonal directions, respectively, and A is the approximation sub-band.
The hard thresholding method eliminates coefficients whose absolute values are smaller than a certain threshold value $T$. The hard thresholding function is given by
\[
f(c)=
\begin{cases}
c,& \text{if } |c|\ge T\\
0, & \text{otherwise}
\end{cases}
\]
where $c$ is a detail wavelet coefficient. Note that when the threshold value $T$ is too small, the noise reduction is insufficient; conversely, when $T$ is too large, image features are removed along with the noise. In this study, one of the most commonly used methods is adopted to estimate the value of $T$ \citep{Braunisch,Prinosil2010,Vanithamani2011}; the estimator is defined as
\begin{align*}
\hat{T}=\sigma_{image}\sqrt{2 \log(N)}
\end{align*}
where $N$ is the number of image pixels, $\sigma_{image}$ is the standard deviation of noise of the image which can be estimated by \citep{Donoho1995,Prinosil2010,Vanithamani2011}
\begin{align*}
\hat{\sigma}_{image}=\frac{\text{median}|c_D^1|}{0.6745}
\end{align*}
where $c_D^1$ denotes the detail wavelet coefficients from level 1.
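A minimal sketch of this sparsification pipeline, using PyWavelets as
an assumed implementation and taking $c_D^1$ to be the diagonal detail
sub-band at the finest level:
\begin{verbatim}
# Sparsify an image: DWT (db1, level 3) + universal hard threshold.
import numpy as np
import pywt

def sparsify(image):
    coeffs = pywt.wavedec2(image, 'db1', level=3)  # [cA3, (cH,cV,cD) x 3]
    cD1 = coeffs[-1][2]                # diagonal detail sub-band, level 1
    sigma = np.median(np.abs(cD1)) / 0.6745        # noise level estimate
    T = sigma * np.sqrt(2.0 * np.log(image.size))  # universal threshold
    sparse = [coeffs[0]]               # approximation sub-band kept as-is
    for level in coeffs[1:]:           # hard-threshold all detail sub-bands
        sparse.append(tuple(pywt.threshold(c, T, mode='hard')
                            for c in level))
    return sparse, T
\end{verbatim}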
\subsection{Priors of the BHM}
After thresholding, a sparse image in the wavelet domain is obtained. Before fitting the BHM to the sparse image, the prior distributions of the unknown random variables in the third layer should be specified. A flat prior is assigned to $\mu$, which is equivalent to a Gaussian distribution with $0$ precision. The priors for $\log(\tau_{iid})$ and $\log(1/\sigma^2)$ are set to be Log-Gamma$(1,5\times 10^{-5})$. The Log-Gamma distribution is defined by $X\sim$ Log-Gamma$(a, b)$ if $\exp(X)\sim$ Gamma$(a, b)$ \citep{SaraMartino2010}. Thus, assigning a Log-Gamma$(a, b)$ distribution to the logarithm of a precision, e.g. $\log(\tau_{iid})$, is equivalent to assigning a Gamma$(a, b)$ distribution to $\tau_{iid}$, and the prior knowledge about $\tau_{iid}$ is reflected through $a$ and $b$. The prior for $\log(\sqrt{8\nu}/\kappa)$ is set to be Log-Gamma$(1,10^{-2})$; $\sqrt{8\nu}/\kappa$ represents the distance at which the spatial correlation is about $0.1$.
$\nu$ is treated as a fixed number. In this study, $\nu=1$ is used, which implies that, for a pixel in the sparse image, the conditional mean of the pixel given the remaining pixels is affected only by its neighbors up to the third order.
R-INLA \citep{Rue2009} is used to implement the BHM.
\subsection{Evaluation}
$E(s)$ is the parameter of interest, and it can also be estimated by another unbiased estimator, the sample mean $\widehat{E_{sim}(s)}=\frac{1}{n}\sum_{i=1}^ns_i$, where $n$ is the number of simulated images and $s_i$ is the sparsity of the $i$th sparse image. We use $\widehat{E_{sim}(s)}$ as a reference for the true mean value by choosing a large value of $n$, in accordance with the law of large numbers (LLN). A series of estimates $\widehat{E(s)}_I$ can be obtained from the simulated images and compared with $\widehat{E_{sim}(s)}$ in order to confirm the theoretical property of unbiasedness. The unbiasedness is measured by the mean of the absolute difference in percentage, $|\widehat{E_{sim}(s)}-\widehat{E(s)}_I|*100/N$, over the series of $\widehat{E(s)}_I$. The range of $I$ should not be too small in view of the LLN, but it should not be too large in view of the computational time. For the estimator to be considered unbiased, this measure should be close to zero. Besides indicating unbiasedness, the measure also expresses the difference relative to the image size. Furthermore, the asymptotic normality of the BHM estimator can also be examined in the simulation study, since in this case $n_1=n_2=128$ (large dimensional size) and $\rho^*=2$ (weak spatial dependence), which indicates that condition $b)$ in Proposition 2 can plausibly be met.
The evaluation methods mentioned above cannot be extended to the real-image analysis, since it is not practical to scan one brain several times. We therefore only compare $\widehat{E(s)}$ with the true sparsity and measure the absolute difference in percentage, i.e. $|\widehat{E(s)}-s|*100/N$.
\section{Results}
\subsection{Simulated MR images}
Figure 2 shows one slice of a simulated MR image of a human brain with resolution $128\times 128$.
\begin{figure}
\centering
\includegraphics[width=0.3\linewidth,keepaspectratio]{fig1}
\caption{\small {A slice of simulated human brain with resolution $128\times 128$}}
\label{fig:1}
\end{figure}
The DWT of Daubechies 1 with level 3 is shown in the left sub-figure of Figure 3.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth,keepaspectratio]{fig3}
\caption{\small {Wavelet transformed image}}
\caption*{Left: discrete wavelet transformed image. Right: the sparsified image. }
\label{fig:3}
\end{figure}
The upper left corner of the sub-figure is the approximation sub-band, while the remaining parts are the detail sub-bands. The estimated threshold value is $\hat{T} \approx 0.01$, implying that the detail coefficients whose absolute values are less than 0.01 are set to $0$. The sparsified image is shown in the right sub-figure of Figure 3. It is difficult to see any difference between these two sub-figures, except that the right sub-figure is in general darker than the left one, which is a consequence of thresholding. The right sub-figure of Figure 3 also shows that the non-zero coefficients are clustered: if a pixel has a high probability of holding a non-zero coefficient, then its neighboring pixels also have high probabilities of holding non-zero coefficients, and this relationship weakens as the distance between two pixels grows. This phenomenon can be described either by a Mat\'{e}rn covariance or by its corresponding sparse precision matrix, which supports the use of a BHM with Mat\'{e}rn covariance.
After fitting the BHM to the sparsified image, the posterior mean of $p_i$ for each pixel can be estimated and is shown in Figure 4.
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth,keepaspectratio]{fig4}
\caption{\small {Posterior mean of $p_i$ for every pixel}}
\caption*{Left: scatter plot of the posterior mean of $p_i$. Right: image form of the posterior mean of $p_i$. }
\label{fig:4}
\end{figure}
The left sub-figure of Figure 4 is a scatter plot of the posterior mean of $p_i$, while the right sub-figure shows the posterior mean of $p_i$ in image form, which makes it easier to relate the posterior mean of $p_i$ to the sparse image in Figure 3. It shows that some pixels, located in the upper left corner of the right sub-figure of Figure 4, have higher probabilities of having non-zero coefficients, while most of the remaining pixels have lower probabilities. The summation of the posterior mean of $p_i$ over all pixels, i.e. the estimator of $E(s)$, is $4241.768$. The true sparsity of the sparsified MR image is $4213$. The absolute difference between the estimate and the true sparsity in percentage is $|\widehat{E(s)}-s|*100/N\approx 0.2$. Afterwards, 1000 simulated MR images were generated under the same settings as the one in Figure 2, giving $\widehat{E_{sim}(s)}=4219.289$ and an absolute difference between $\widehat{E_{sim}(s)}$ and the estimate in percentage of $|\widehat{E_{sim}(s)}-\widehat{E(s)}|*100/N\approx 0.1$.
So far, we have only illustrated the performance of the BHM estimator on a single image. To further evaluate the unbiasedness and stability of the estimator, $\widehat{E(s)}$ is calculated for 90 of the 1000 simulations. The scatter plot of the absolute difference in percentage, $|\widehat{E_{sim}(s)}-\widehat{E(s)}_I|*100/N$ for $I=1,2,...,90$, is shown in Figure 5.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth,keepaspectratio]{fig5}
\caption{\small {$|\widehat{E_{sim}(s)}-\widehat{E(s)}|*100/N$ for 90 simulations}}
\label{fig:5}
\end{figure}
The black line is the mean of $|\widehat{E_{sim}(s)}-\widehat{E(s)}_I|*100/N$ over the 90 simulations; its value is about $0.21$, a relatively small number that supports the theoretical property of unbiasedness. From Figure 5, $|\widehat{E_{sim}(s)}-\widehat{E(s)}|*100/N\in [0.01,0.53]$. The variance of $\widehat{E(s)}$ based on the 90 simulations is $480.069$. All of this shows that the BHM estimator from a single image does not deviate much from $\widehat{E_{sim}(s)}$. Furthermore, a normal quantile-quantile plot of the standardized $\widehat{E(s)}$ from the 90 simulations is shown in Figure 6.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth,keepaspectratio]{fig9}
\caption{\small {Normal quantile-quantile plot of standardized $\widehat{E(s)}$ from 90 simulations}}
\label{fig:9}
\end{figure}
The Shapiro-Wilk test of normality gives a p-value of $0.7715$, implying that the estimator is normally distributed. This confirms the theoretical result on the asymptotic normality of the estimator.
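A minimal sketch of this normality check, using SciPy and placeholder values in place of the 90 simulated estimates (the mean and variance below are taken from the numbers quoted above, purely for illustration):
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
E_hat = rng.normal(4219.0, np.sqrt(480.0), 90)   # placeholder estimates

z = (E_hat - E_hat.mean()) / E_hat.std(ddof=1)   # standardize
stat, pvalue = stats.shapiro(z)
print(f"Shapiro-Wilk p-value: {pvalue:.4f}")     # a large p-value gives no
                                                 # evidence against normality
\end{verbatim}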
\subsection{Real MR image}
The same sparsification procedure as for the simulated images is applied here. One slice of a real MR image of a human brain with resolution $256\times256$ is shown in Figure 7.
\begin{figure}
\centering
\includegraphics[width=0.3\linewidth,keepaspectratio]{fig6}
\caption{\small {One slice of real human brain with resolution $256\times 256$}}
\label{fig:6}
\end{figure}
The DWT of Daubechies 1 with level 3 is shown in the left sub-figure of Figure 8. Then, the hard thresholding method is used to eliminate the noise, and the result is shown in the right sub-figure of Figure 8.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth,keepaspectratio]{fig7}
\caption{\small {Wavelet transformed image}}
\caption*{Left: discrete wavelet transformed image. Right: the sparsified image.}
\label{fig:7}
\end{figure}
By analysing the sparsified image using the BHM, the posterior mean of $p_i$ for each pixel is estimated and shown in Figure 9. The left sub-figure of Figure 9 is a scatter plot of the posterior mean of $p_i$, and the right sub-figure shows the posterior mean of $p_i$ in image form. The summation of the posterior mean of $p_i$ over all pixels is $8254.679$. The true sparsity of the sparsified MR image is $8120$. The absolute difference in percentage is $|\widehat{E(s)}-s|*100/N\approx 0.2$.
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth,keepaspectratio]{fig8}
\caption{\small {Posterior mean of $p_i$ for every pixel}}
\caption*{Left: scatter plot of the posterior mean of $p_i$. Right: image form of the posterior mean of $p_i$.}
\label{fig:8}
\end{figure}
\section{Conclusion and Discussion}
In this study, we propose a novel estimator for the mathematical expectation of the sparsity, $E(s)$, and prove that it is unbiased and asymptotically normally distributed. Its variance can also be derived analytically. A simulation study is used to confirm the theoretical results. The absolute difference in percentage, i.e. $|\widehat{E_{sim}(s)}-\widehat{E(s)}|*100/N$, is about $0.21$ on average, which indicates unbiasedness. The asymptotic normality is also examined through the simulation study. The real data analysis illustrates the applicability of the new method, in the sense that $\widehat{E(s)}$ could be used directly in the recovery algorithms of CS for future MR images that are believed to have the same sparsity level after sparsification as the one used in this study.
There are some issues that are not considered in this study. Relatively conservative priors are used; more informative priors could be chosen according to the properties of MR images. Different models could also be used for the GMRF, e.g. the Besag and Leroux models, and compared against each other. Besides these, it is possible to fit the model to a 3D image and prove that the statistical properties still hold in that setting. Based on the image patterns of the posterior means shown in Figures 4 and 9, the dimension of the measurement matrix, $A\in \mathbb{C}^{m\times N}$, could possibly be reduced further: if some pixels in $\mathbf{x}\in \mathbb{C}^N$ are believed to have zero coefficients, i.e. probabilities close to $0$, then these pixels do not need to be measured through CS. Similarly, if some pixels are believed to contain image features, i.e. probabilities close to $1$, then their values can be measured by a direct method rather than by CS. CS then only needs to be applied to the remaining pixels, which leads to dimension reduction. A sketch of this partition is given below.
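The proposed partition could be implemented along the following lines. This is a hypothetical sketch: the cutoff \texttt{eps} and the placeholder posterior means are our own illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
p = rng.beta(0.5, 4.0, 128 * 128)        # placeholder posterior means p_i

eps = 0.01                               # illustrative cutoff
skip   = np.flatnonzero(p < eps)         # believed zero: not measured
direct = np.flatnonzero(p > 1 - eps)     # believed features: measured directly
via_cs = np.flatnonzero((p >= eps) & (p <= 1 - eps))   # measured through CS
print(len(skip), len(direct), len(via_cs))
\end{verbatim}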
\section{Acknowledgements}
This work was supported by the Swedish Research Council grant [Reg.No.: 340-2013-5342].
\section*{Appendix}
\renewcommand{\thesubsection}{\Alph{subsection}}
\subsection{Proof of Proposition~\ref{prop1}}
\propone*
\begin{proof}[Proof:]
$ $\newline
The unbiasedness follows from the fact that
\begin{align*}
E\left(\widehat{E(s)}\right)&=E\left(\sum_{i=1}^N E\left(p_i|\mathbf{O}\right)\right)=\sum_{i=1}^N E(p_i)=E\left(\sum_{i=1}^N p_i\right)=E\left(\sum_{i=1}^N E\left(\mathbbm{1}_{(x_i\ne 0)}|p_i\right)\right)\\
&=\sum_{i=1}^N E\left(\mathbbm{1}_{(x_i\ne 0)}\right)= E\left( \sum_{i=1}^N \mathbbm{1}_{(x_i\ne 0)}\right)=E(s).
\end{align*}
The calculation of the variance is straightforward:
\begin{align*}
Var\left(\widehat{E(s)}\right)&=Var\left(\sum_{i=1}^N E\left(p_i|\mathbf{O}\right)\right)
=\sum_{i=1}^N Var\left(E\left(p_i|\mathbf{O}\right)\right)+2\sum_{1\le i<j\le N}Cov\left(E\left(p_i|\mathbf{O}\right), E\left(p_j|\mathbf{O}\right)\right).
\end{align*}
\end{proof}
\subsection{Proof of Proposition~\ref{prop2}}
\proptwo*
\begin{proof}[Proof:]
$ $\newline
We prove this proposition by verifying that all conditions in the theorem of \citet{Harvey2010} are met; \citet{Harvey2010} proved that the sum of $\rho$-radius dependent, three-dimensionally indexed random variables is asymptotically normally distributed under certain conditions. The proof for the two-dimensionally indexed random variables in our case follows similarly to that given in \citet{Harvey2010} if
\begin{enumerate}[label=\roman*)]
\item $\{E\left(p_i|\mathbf{O}\right): 1\le i \le N\}$ are $\rho$-radius dependent random variables, and
\item $\sigma_k^2$ and $r_k^3$ are finite for a given square size, and
\item $n_{sq} \to \infty$ and $\phi \to\infty$ at a rate slower than $n_1$, $n_2$, and $n_{sq}$, such that
\begin{align*}
\frac{\sum_k Var(S_k^B)}{n_{sq}\sum_k\sigma^2_k}&\to 0\tag{B1}\\
\frac{(\sum_k r_k^3)^{1/3}}{(\sum_k\sigma_k^2)^{1/2}}&\to 0 \tag{B2}.
\end{align*}
\end{enumerate}
We verify the three conditions one by one. It has been shown that the dependence structure of a transformed GMRF is the same as that of the original GMRF if the transformed GMRF consists of absolutely continuous variables \citep{Prates2015,Cardin2009}. Since the Mat\'{e}rn covariance is used to model $\text{logit}(\mathbf{p})$ under the model setting, in which the random variables are $\rho$-radius dependent, the same dependence structure is inherited by $E\left(\mathbf{p}|\mathbf{O}\right)$ if condition $a)$ holds; i.e., two distinct sites of $E\left(\mathbf{p}|\mathbf{O}\right)$ are pairwise non-negatively correlated and $\{E\left(p_i|\mathbf{O}\right): 1\le i \le N\}$ are $\rho$-radius dependent random variables. Thus condition i$)$ is met.
By using the fact $0< E\left(p_i|\mathbf{O}\right)< 1 $ almost surely, it follows that
\begin{align*}
r_k^3&=E|S_k|^3=E(S_k)^3= E\left(\sum_jE\left(p_{kj}|\mathbf{O}\right)\right)^3< (2\phi+1)^6,
\end{align*}
where the subscript $kj$ denotes the $j$th random variable in the $k$th square, $j=1,2,\ldots,(2\phi+1)^2$. Thus $r_k^3$ is finite for a given $\phi$, and so is $\sigma^2_k$; condition $ii)$ is met.
Next we verify $(B1)$ and $(B2)$, which are met under condition $b)$. To verify them, we need to use $Var(S_k^B)$ and $\sigma_k^2$ in addition to $r_k^3$, which has been bounded above.
First, we calculate $Var(S_k^B)$. Since
\begin{align*}
Var\left(E\left(p_{kl}|\mathbf{O}\right)\right)<E\left(E\left(p_{kl}|\mathbf{O}\right)\right)^2<1,
\end{align*}
\begin{align*}
Cov\left(E\left(p_{kl}|\mathbf{O}\right),E\left(p_{kq}|\mathbf{O}\right)\right)&=E\left(E\left(p_{kl}|\mathbf{O}\right)E\left(p_{kq}|\mathbf{O}\right)\right)-E(p_{kl})E(p_{kq})\\
&<E\left(E\left(p_{kl}|\mathbf{O}\right)E\left(p_{kq}|\mathbf{O}\right)\right)\\
&<1,
\end{align*}
and $\phi>\rho^*\ge1$, it follows that
\begin{align*}
Var(S_k^B)&=Var\left(\sum_lE\left(p_{kl}|\mathbf{O}\right)\right)=\sum_lVar\left(E\left(p_{kl}|\mathbf{O}\right)\right)+\sum_{l\ne q}Cov\left(E\left(p_{kl}|\mathbf{O}\right),E\left(p_{kq}|\mathbf{O}\right)\right)\\
&<2(2\phi+1)\rho^*+(\rho^*)^2+[2(2\phi+1)\rho^*+(\rho^*)^2][2(2\phi+1)\rho^*+(\rho^*)^2-1] \\
&=4(\rho^*)^2(2\phi+1)^2+4(\rho^*)^3(2\phi+1)+(\rho^*)^4\\
&<C_1(2\phi+1)^4,
\end{align*}
where $l,q=1,2,\ldots,2(2\phi+1)\rho^*+(\rho^*)^2$ and $C_1$ is a positive constant that does not depend on $n_1,n_2$.
Then we calculate $\sigma_k^2$. Since $\{E\left(p_i|\mathbf{O}\right): 1\le i \le N\}$ are $\rho$-radius dependent random variables and any two random variables in the set are non-negatively correlated, it follows that
\begin{align*}
\sigma_k^2&=Var\left(\sum_jE\left(p_{kj}|\mathbf{O}\right)\right)\ge\sum_jVar\left(E\left(p_{kj}|\mathbf{O}\right)\right)\ge C_2(2\phi+1)^2>0,
\end{align*}
where $C_2>0$ is the smallest variance of $E\left(p_{kj}|\mathbf{O}\right)$ and also does not depend on $n_1,n_2$.
Therefore,
\begin{align*}
\frac{\sum_k Var(S_k^B)}{n_{sq}\sum_k\sigma^2_k}&\le\frac{n_{sq}C_1(2\phi+1)^4}{n_{sq}^2C_2(2\phi+1)^2}\propto \frac{(2\phi+1)^2}{n_{sq}}\to 0, \text{ and}\\
\frac{(\sum_k r_k^3)^{1/3}}{(\sum_k\sigma_k^2)^{1/2}}&\le\frac{n_{sq}^{1/3}(2\phi+1)^2}{n_{sq}^{1/2}C_2^{1/2}(2\phi+1)}\propto \frac{2\phi+1}{n_{sq}^{1/6}}\to 0 ,
\end{align*}
where the two limits hold if condition $b)$ holds.
\end{proof}
\bibliographystyle{apalike}
\section{Introduction} \label{sec:intro}
According to the standard \lcdm model of structure formation, small overdensities seeded by quantum fluctuations in the homogeneous matter fields of the early Universe grew through gravitational collapse into structures.
Prior to Recombination ($z\sim1100$), overdensities of baryonic matter were prevented from growing by the strong coupling between the baryonic and photonic fields.
Dark matter (DM) overdensities, however, were free to collapse.
By the time of Recombination, when baryons decoupled from radiation, DM overdensities had grown to five orders of magnitude larger than the baryonic overdensities \citep[e.g.,][]{NB05}.
Once decoupled, baryons collapsed into the significantly larger DM potential wells, resulting in the formation of structures with a central baryon component inside a larger DM halo \citep[e.g.,][]{Wechsler+18}.
This \lcdm picture of structure formation is very successful on large scales \citep[e.g.,][]{Springel+05,Vogelsberger+14a,Vogelsberger+14b,Vogelsberger+20,Schaye+15}.
Uncertainties and tensions remain, however, especially on the scales of faint dwarf galaxies \citep[e.g,][]{BullockBK+17,Simon+19,Perivo+22}.
From uncertainties such as the core-cusp challenge \citep[e.g.,][]{Flores+94,Moore+94} to serious tensions such as the observed diversity of rotation curves compared to simulations \citep[e.g.,][]{Oman+15,Oman+19}, challenges to \lcdm at low masses include not only tensions with observations \citep[e.g.,][]{Webb+22} but also discrepancies between different state-of-the-art cosmological simulations \citep[see][for a review]{Sales+22}.
The ultra-faint dwarf regime is thus expected to be one of the most sensitive probes of models and simulations of structure formation that succeed at the scales of Milky Way-like galaxies and larger mass dwarf galaxies.
A precise description of the morphologies, dynamical histories, and star-formation histories of ultra-faint galaxies under \lcdm (and other models) will be central to resolving these tensions.
In an effort to refine the physical understanding of \lcdmc \citet{Tes+10a} pointed out that previous work neglected the
highly supersonic relative velocity ($v_{\rm bc}$) between DM and baryonic overdensities stemming from their five orders of magnitude difference in density.
At Recombination, the root-mean-square (rms) value of the relative velocity ($\sigma_{v_{bc}}$) was 30 km s$^{-1}$, five times the speed of sound of the baryons at the time.
This velocity has important consequences for structure formation at small scales in the early Universe.
It is coherent over a few Mpc \citep{Tes+10a}, and on those scales it can be modeled as a stream velocity of a single value.
Subsequent works further explored the early-Universe implications of structure formation in the presence of the stream velocity.
For example, the stream velocity was shown to delay the formation of Pop III stars \citep[e.g.,][]{Stacy+10,Greif+11,Schauer+17a} with impacts on reionization and the 21-cm signal \citep[e.g.,][]{Visbal+12, McOL12,Munoz+19,Park+21}.
It also suppresses halo abundance and generates ``empty" halos with low gas content \citep[e.g.,][]{Naoz+11a,Asaba+16}, generating large scale inhomogeneities of galaxies \citep[e.g.,][]{Fialkov+11} and affecting the minimum halo mass that holds most of its baryons \citep[e.g.,][]{Naoz+12}.
Furthermore, in regions with a large relative velocity, gas accretion onto star-forming dwarf halos is affected -- the gas falls downwind of halos, and has very low densities \citep[e.g.,][]{OLMc12}.
The stream velocity was shown to be responsible for reducing the number of low mass, luminous satellite galaxies expected in \lcdmc somewhat resolving an existing tension with observation at the time \citep[e.g.,][]{BD}.
Low mass galaxies in the stream velocity also have colder, more compact radial profiles \citep[e.g.,][]{Richardson+13}.
Beyond galaxies, the stream velocity was suggested to enhance massive black hole formation \citep[e.g.,][]{TanakaLi+13,Tanaka+14,Latif+14,Hirano+17,Schauer+17}.
In addition, the stream velocity produces supersonic turbulence, which can assist with the generation of early magnetic fields in the Universe \citep{Naoz+13}.
Intriguingly, the stream velocity effect is also expected to induce the formation of objects with anomalous properties in patches of the Universe with non-zero values of $v_{bc}$.
\citet{Naoz+14} showed that the stream velocity introduces a phase shift between DM and baryon overdensities, which translates to a physical separation between the two components.
Two interesting classes of objects arise from this effect that differ from classical \lcdm objects at the same scales.
First, for objects at a range of low masses ($\lsim {\rm few} \times 10^6$~M$_\odot$), the spatial offset is so large that the baryonic component collapses outside the virial radius of its parent DM halo entirely, potentially surviving as a DM-deficient bound object.
\citet{Naoz+14} proposed that these Supersonically Induced Gas Objects (SIGOs) may be the progenitors of globular clusters \citep[e.g.,][]{Naoz+14,Popa+15,Chiou+18,Chiou+19,Chiou+21,Lake+21,Nakazato+22,Lake+22}.
Second, for a range of slightly higher mass objects ($\lsim 10^8$~M$_\odot$), the spatial offset is such that the centers of mass of the baryonic component and the parent DM halo are offset, but the majority of the gas remains inside the DM virial radius \citep{Naoz+14}.
We term these objects Dark Matter + Gas Halos Offset by Streaming (DM GHOSts).
These structures consist of both a DM and gas component, unlike SIGOs, which are almost entirely gas.
Compared to their classical \lcdm analogues, DM GHOSts are enriched in DM and highly diffuse, because the stream velocity advects a portion of their gas component out of the halo.
\citet{Naoz+14} suggested that these objects may be the progenitors of ultra faint or dark-satellite galaxies.
The Supersonic Project was introduced to investigate the supersonic stream velocity-induced objects and their ties to observed structures.
Previous studies focused on the formation and evolution of SIGOs \citep[e.g.,][]{Popa+15,Chiou+18,Chiou+19,Chiou+21,Lake+21,Nakazato+22,Lake+22}.
These simulations attempted to confirm the existence of SIGOs and investigate their connection to globular clusters using only adiabatic and sometimes atomic cooling.
All except \citet{Schauer+21,Nakazato+22} and \citet{Lake+22} neglected the effects of molecular hydrogen cooling.
\citet{Popa+15} and \citet{Chiou+19} placed early constraints on the rotational properties of SIGOs, showing that they are highly elongated structures with seemingly greater rotational support than both DM GHOSts and ``classical" analogs--objects of the same mass in regions without streaming.
\citet{Chiou+18,Chiou+21} and \citet{Lake+22} focused on the potential for SIGOs to be sites of star formation.
In a semi-analytic study, \citet{Chiou+19} found that SIGOs occupy a similar part of magnitude-radius space today as the population of observed globular clusters \citep[e.g.,][]{McConnachie12}.
\citet{Lake+21} extrapolated the large-scale variation of SIGO abundances across the sky, predicting anisotropies in the distribution of gas-rich objects at low masses that could be observed by the \textit{James Webb Space Telescope} (JWST) and binary black hole gravitational-wave sources detectable by gravitational-wave detectors.
Several recent studies indicate that molecular cooling may play an important role in the evolution of SIGOs and other objects in the stream velocity.
\citet{Glover13} and \citet{Schauer+21} indicate that molecular cooling affects the abundance of gas objects in the early Universe, and \citet{Nakazato+22} found that SIGOs became more filamentary in their molecular cooling simulations.
\citet{Lake+22} studied the collapse of SIGOs in the context of molecular cooling, drawing an analogy to giant molecular clouds, and found that SIGOs should form stars outside DM halos.
Studies have neither investigated DM GHOSts in detail nor constrained the rotational and morphological properties of the supersonically-induced objects with molecular cooling.
Here, we present an updated analysis of the morphology, rotation, rotational curves, and mass distribution of both SIGOs and DM-GHOSts using molecular hydrogen cooling numerical simulations supplemented by an analytical perspective.
We characterize the population-level properties of these elongated objects in the context of ellipsoid potentials, and quantify their total angular momentum and rotational support.
We find that the DM component deviates from a spherical configuration in the presence of the stream velocity.
We also present the first rotation curves for these objects, finding a bifurcation in rotation curve shape according to mass.
This may serve as an early universe analog to the rotational curve diversity observed in dwarf galaxies \citep[e.g.,][]{Sales+22}.
The paper is organized as follows: \S~\ref{sec:numerical} describes the numerical simulations used in the study and the classification criteria for SIGOs and DM GHOSts.
\S~\ref{sec:ellipse} is devoted to the analytical and numerical results of the study.
In \S~\ref{sec:analytical}, we present the analytical ellipsoid potentials used to understand supersonically-induced objects, and we show the population level morphological properties of SIGOs and DM GHOSts from our simulations in \S~\ref{sec:nummorphology}.
In \S~\ref{sec:spinparam}, we discuss the rotational support and angular momentum of these objects.
In \S~\ref{sec:rotation}, we present density profiles and rotation curves of DM GHOSts.
A summary and discussion of the results is given in \S~\ref{sec:discussion}. The appendices explain the choice of cutoff gas fraction used to define a SIGO (App.~\ref{ap.gasfraction}), a full derivation of the potential and total mass from \S~\ref{sec:analytical} (App.~\ref{ap:potential}), and supplemental morphological data, including comparisons to NFW profiles (App.~\ref{ap:morphology}) .
In this study we assume a \lcdm cosmology, with $\Omega_{\rm \Lambda} = 0.73$, $\Omega_{\rm m} = 0.27$, $\Omega_{\rm b} = 0.044$, $\sigma_8 = 1.7$, and $h = 0.71$.
\section{Numerical Set Up} \label{sec:numerical}
In a similar manner to previous studies by the Supersonic Project \citep[e.g.,][]{Chiou+18,Chiou+19,Chiou+21}, we perform hydrodynamical simulations using the {\tt AREPO} code \citep[][]{Springel2010a}.
{\tt AREPO} is a moving-mesh code that allows for high resolution studies of structure formation with an accurate picture of the stream velocity up to $z\sim20$.
\subsection{Simulation and Initial Conditions}
We use a modified CMBFAST code \citep[][]{1996ApJ...469..437S}, as presented in \citet{Popa+15}, to include the first-order correction of scale-dependent temperature fluctuations on the initial conditions and their transfer functions, following \citet{NB05}.
This is necessary as these corrections determine the gas fraction in halos at higher redshift \citep[e.g.,][]{NBM,Naoz+10,Naoz+12}.
\citet{Tes+10a} showed that the supersonic relative velocity is coherent on scales of $\sim$few Mpc, so following \citet{Popa+15}, we choose a box size of 2 comoving Mpc, such that the relative velocity can be modeled as a stream velocity.
Evolution of the stream velocity, a second order correction \citep[][]{Tes+10a}, is also included in the transfer functions.
The simulations begin at $z=200$, when a $2\sigma$ fluctuation in the stream velocity corresponds to 11.8 km s$^{-1}$.
The stream velocity is thus implemented as a boost of 11.8 km s$^{-1}$ to all baryon particles in the $+x$-direction.
The box of {2 comoving} Mpc contains 512$^3$ DM particles with a mass resolution of $m_{\rm DM}=1.9\times 10^3\text{ M}_\odot$ and 512$^3$ Voronoi mesh cells representing the gas component, with a mass resolution of $m_{\rm gas}=360\text{ M}_\odot$.
Our results are presented at the end of the simulations, $z=20$.
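As a quick consistency check on the quoted mass resolutions, the following sketch evaluates the mean comoving matter densities for the cosmology listed in \S~\ref{sec:intro}, assuming the 2 Mpc box length is comoving and $\rho_{\rm crit}=2.775\times10^{11}\,h^2$ M$_\odot$ Mpc$^{-3}$:
\begin{verbatim}
h, Om, Ob = 0.71, 0.27, 0.044
rho_crit = 2.775e11 * h**2          # comoving critical density [Msun / Mpc^3]
V = 2.0**3                          # box volume [comoving Mpc^3]
n = 512**3                          # resolution elements per species

m_dm = rho_crit * (Om - Ob) * V / n
m_gas = rho_crit * Ob * V / n
print(f"m_DM  ~ {m_dm:.2e} Msun")   # ~ 1.9e3 Msun, as quoted
print(f"m_gas ~ {m_gas:.2e} Msun")  # ~ 3.6e2 Msun, as quoted
\end{verbatim}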
To investigate the effect of the stream velocity, we perform two runs without the stream velocity (i.e., runs in a region of space with a $0\sigma_{v_{bc}}$ fluctuation in the velocity field) and two runs with a value of $v_{bc}=2\sigma_{v_{bc}}$.
{ For each set of two runs (with and without the stream velocity), we include molecular (H$_2$) cooling in one and only adiabatic cooling in the other. The inclusion of molecular cooling is described in \S~\ref{sec:molecular} below.}
\begin{table}
\centering
\begin{tabular}{lll}
\hline
\multicolumn{1}{|l|}{Run} & \multicolumn{1}{l|}{$v_{bc}$} & \multicolumn{1}{l|}{H$_2$ Cooling}
\\ \hline
\multicolumn{1}{|l|}{$0$v} & \multicolumn{1}{l|}{$0$} & \multicolumn{1}{l|}{No} \\ \hline
\multicolumn{1}{|l|}{$2$v} & \multicolumn{1}{l|}{$2\sigma_{v_{bc}}$} & \multicolumn{1}{l|}{No} \\ \hline
\multicolumn{1}{|l|}{$0$vH$2$} & \multicolumn{1}{l|}{$0$} & \multicolumn{1}{l|}{Yes} \\ \hline
\multicolumn{1}{|l|}{$2$vH$2$} & \multicolumn{1}{l|}{$2\sigma_{v_{bc}}$} & \multicolumn{1}{l|}{Yes} \\ \hline
\end{tabular}
\caption{Simulation Parameters}
\label{Table:Sims}
\end{table}
\subsection{Molecular Cooling} \label{sec:molecular}
To understand the effect of molecular cooling, we perform two runs for each value of the stream velocity ($0\sigma_{v_{bc}}$ and $2\sigma_{v_{bc}}$), one with adiabatic cooling only and one with molecular cooling included.
We denote the H$_{2}$ cooling runs with ``H2".
The 0vH2 and 2vH2 runs were also used in \citet{Lake+22}.
A summary of the runs in this work is given in Tab.~\ref{Table:Sims}.
As in \citet{Nakazato+22} and \citet[][]{Lake+22}, we explicitly account for nonequilibrium chemical reactions and radiative cooling in the gas, using GRACKLE, a chemistry and cooling library
\citep[][]{Smith+17,Chiaki+19}.
The 0vH2 and 2vH2 runs include H$_2$ and HD molecular cooling.
The radiative cooling rate of the former includes both rotational and vibrational transitions \citep[][]{Chiaki+19}.
Chemistry for the following 15 primordial species is included in H2 runs:
e$^-$, H, H$^+$, He, He$^+$, He$^{++}$, H$^-$, H$_2$, H$_2^+$, D, D$^+$, HD, HeH$^+$, D$^-$, and HD$^+$.
{ We do not include star formation.}
\subsection{Object Classification} \label{sec:objects}
We are interested in gas-rich structures, including SIGOs, which have somewhat low statistical power in these small box simulations.
Thus, following \citet{Popa+15,Chiou+18,Chiaki+19,Chiou+21,Lake+22, Lake+21,Nakazato+22} we choose $\sigma_8 = 1.7$, which will increase the statistical power.
This represents a region of the Universe where structure forms early, such as in the Virgo cluster \citep[e.g.,][]{NB07}.
These results can then be scaled to other regions accordingly \citep[e.g.,][]{Park+20}.
To identify structures, we search for two object classes using a friends-of-friends (FOF) algorithm \citep[see e.g.,][]{Popa+15,Chiou+18}.
\begin{enumerate}
\item DM-primary/Gas-secondary (DM/G) objects are found using the FOF algorithm on DM particles first.
Gas cells in the same object are associated with the DM groups at a secondary stage.
We require DM/G objects to have at least 300 DM particles, to avoid numerical artifacts.
\item Gas-primary (GP) objects are found using the FOF algorithm only on gas cells.
This allows us to find objects such as SIGOs in the simulation that have little or no DM component.
We require GP objects to have at least 100 gas cells, again in order to avoid non-physical numerical effects.
\end{enumerate}
\begin{table}
\centering
\begin{tabular}{llll}
\hline
\multicolumn{1}{|l|}{Run} & \multicolumn{1}{l|}{\# GP} & \multicolumn{1}{l|}{\# SIGOs} & \multicolumn{1}{l|}{\# DM GHOSts}
\\ \hline
\multicolumn{1}{|l|}{$0$v} & \multicolumn{1}{l|}{$2557$} & \multicolumn{1}{l|}{-} & \multicolumn{1}{l|}{-} \\ \hline
\multicolumn{1}{|l|}{$2$v} & \multicolumn{1}{l|}{$759$} & \multicolumn{1}{l|}{$25$} & \multicolumn{1}{l|}{$734$} \\ \hline
\multicolumn{1}{|l|}{$0$vH$2$}& \multicolumn{1}{l|}{$5823$} & \multicolumn{1}{l|}{-} & \multicolumn{1}{l|}{-} \\ \hline
\multicolumn{1}{|l|}{$2$vH$2$} & \multicolumn{1}{l|}{$1406$} & \multicolumn{1}{l|}{$69$} & \multicolumn{1}{l|}{$1337$} \\ \hline
\end{tabular}
\caption{Summary of the number of gas primary (GP) objects and subclasses found in the four runs used in this study. Only objects containing over 100 gas particles are included. SIGOs and DM GHOSts do not exist in regions with 0 stream velocity, so they are not tabulated for the 0v and 0vH2 runs, but see App.~\ref{ap.gasfraction} for an investigation of false identification of SIGOs in molecular cooling runs.}
\label{Table:objects}
\end{table}
The choice to cut off DM/G and GP objects at 300 particles and 100 cells respectively gives us a minimum structure mass resolution of $5.7\times10^5$ M$_\odot$ for DM/G and $3.6\times10^4$ M$_\odot$ for GP objects.
\citet{Popa+15} and \citet{Chiou+18} found that GP objects are often filamentary in nature, and thus a spherical fitting algorithm is {not an optimal choice, as it does not reflect the actual morphology of these structures.}
{We therefore employ the same fitting algorithm of these works, which is based on a triaxial ellipsoid fit.}
We keep the axis ratio of a triaxial ellipsoid with $N_{0}$ gas particles and maximum radius $R_{\rm max,0}$ around the GP object constant and shrink it in increments of 0.5 percent until the condition $R_{\rm max, n}/R_{\rm max, 0}>N_{\rm n}/N_{0}$ is met, or until $N_{\rm n}/N_{0}<0.8$, where $R_{\rm max, n}$ and $N_{\rm n}$ are the maximum ellipsoid radius and number of gas particles of the $n$th iteration.
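A minimal sketch of this shrink-fit is given below. This is our own illustrative implementation, not the production code; the principal directions and the fixed axis ratios are assumed to come from, e.g., the particles' moment-of-inertia tensor.
\begin{verbatim}
import numpy as np

def shrink_fit(pos, axes, ratios, step=0.005, floor=0.8):
    """Shrink a fixed-axis-ratio ellipsoid around `pos` (N x 3, relative
    to the object's center of mass) until R_max,n/R_max,0 > N_n/N_0 or
    N_n/N_0 < `floor`. Rows of `axes` are the unit principal directions;
    `ratios` are the semi-axis lengths relative to R_max."""
    proj = pos @ axes.T                     # principal-axis coordinates
    r_eff = np.linalg.norm(proj / ratios, axis=1)   # radius in units where
                                                    # the surface is r = R_max
    r_max0, n0 = r_eff.max(), len(pos)
    r_max = r_max0
    while True:
        frac_n = np.count_nonzero(r_eff <= r_max) / n0   # N_n / N_0
        if r_max / r_max0 > frac_n or frac_n < floor:
            return r_max                    # converged ellipsoid radius
        r_max *= 1.0 - step                 # shrink by 0.5 per cent

# Toy usage with a random prolate cloud (placeholder for FOF members):
rng = np.random.default_rng(4)
pos = rng.normal(size=(1000, 3)) * np.array([0.3, 0.3, 1.0])
r_fit = shrink_fit(pos, np.eye(3), np.array([0.3, 0.3, 1.0]))
\end{verbatim}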
\begin{figure*}
\center
\includegraphics[width=\textwidth]{figures/densityprojections.pdf}
\caption{Projected gas (left) and dark matter (DM) (right) density around several DM GHOSts and a SIGO in a region $5$ physical kpc on a side.
{ The SIGO is bounded by the white ellipse, located in a region relatively devoid of dark matter, and contains no DM component.
It is embedded in a stream of gas.
The DM GHOSts (A, B, C, and D) each contain a gaseous and a DM component.
The gas components of the DM GHOSts are shown in orange, whereas the DM components of the DM GHOSts are shown in green.
Note that the DM components are not entirely spherical.
The centers of mass of the gas components of A, C, and D are offset from the centers of mass of the DM component, whereas B has had time for the gas component to fall back into the center of the DM potential.
Several DM halos with no associated gas components also lie in this region--depicted in pink.
One of these may be the ``parent" halo of the SIGO. }
}
\label{fig:SIGOdm}
\end{figure*}
The GP FOF algorithm is performed to identify SIGOs, gas-rich objects that form outside the virial radius of the parent DM halos.
However, many of the GP objects are located inside DM halos, being the gas component of the DM/G structures.
These structures are also of interest to this study.
In order to clarify the difference between structures formed via classical \lcdm and these dark matter and gas structures formed in regions with the stream velocity, we term the DM/G objects in regions of streaming as Dark Matter + Gas Halos Offset by Streaming (DM GHOSts).
In previous papers, these were referred to as ``DM/G''.
Having formed offset from the center of mass of their parent DM halo, these structures display different morphological and dynamical properties than those {that formed in regions of the Universe with no relative velocity (i.e., a patch with a $0\sigma_{v_{bc}}$ fluctuation)}, even though many are no longer offset by the redshifts considered here due to dynamical processes {(such as the DM GHOSt labeled ``B'' in Fig.~\ref{fig:SIGOdm}.)}
We follow the convention in \citet{Nakazato+22}, where SIGOs are defined as GP objects which meet the following two conditions:
\begin{enumerate}
\item {They are }located outside the virial radius of their parent DM halo.
\item {They contain } a gas fraction,
\begin{equation}
f_g=\frac{M_{\rm g}}{M_{\rm DM}+M_{\rm g}} > 0.6 \ ,
\label{eq.gasfraction}
\end{equation}
where $M_g$ is the total mass of gas in the object and $M_{DM}$ is the total mass of DM in the object.
\end{enumerate}
Similar criteria were used in \citet{Popa+15,Chiou+18,Chiou+19,Chiou+21,Lake+21}.
The gas fraction cutoff in those works was chosen rather arbitrarily to be $0.4$.
This value was implemented because those studies were interested specifically in the gas rich structures in connection with observed DM-deficient objects such as globular clusters.
Our cutoff gas fraction of $0.6$ is higher.
\citet{Nakazato+22} found that choosing a smaller cutoff gas fraction in runs with molecular cooling leads to the identification of filamentary structure as SIGOs, such that without the stream velocity many SIGOs are misidentified.
We find similar behavior in our molecular cooling run, and this choice of gas fraction is discussed further in App.~\ref{ap.gasfraction}.
GP objects in runs with stream velocity that do not meet the SIGO criteria above are classified as the baryonic component of DM GHOSts.
A DM GHOSt therefore contains two components, a DM component, identified by the DM/G FOF algorithm, and a gaseous component, identified by the GP FOF.
For DM GHOSts, the GP FOF often identifies the gas structure within the DM-primary object.
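The classification rule of Eq.~(\ref{eq.gasfraction}) amounts to the following check (a sketch with illustrative masses):
\begin{verbatim}
def classify_gp(f_g, outside_parent_rvir, f_cut=0.6):
    """SIGO if outside the parent halo's virial radius and f_g > f_cut;
    otherwise the gas component of a DM GHOSt."""
    if outside_parent_rvir and f_g > f_cut:
        return "SIGO"
    return "DM GHOSt gas component"

M_g, M_DM = 4.0e4, 1.0e4                  # illustrative masses [Msun]
f_g = M_g / (M_DM + M_g)                  # gas fraction: f_g = 0.8
print(classify_gp(f_g, outside_parent_rvir=True))   # -> SIGO
\end{verbatim}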
Figure~\ref{fig:SIGOdm} shows the projected density of DM (left) and gas (right) in a region of the simulation box with two DM GHOSts and a SIGO.
The SIGO contains only a component identified by the GP FOF, which can be clearly seen in the plot of the gas density.
The DM GHOSts are found in both particle FOF types and have two overlapping (but offset) components.
In \citet{Chiou+19} and subsequent papers, a spherical overdensity calculation was used to obtain the virial radius of the DM halos.
However, in this study we sought to explore the morphology of the diffuse DM GHOSts and their dark matter component.
Thus, we also perform an ellipsoid fit as described above to the DM/G objects to explore whether they deviate from a spherical morphology.
So, as before, a triaxial ellipsoid with fixed axis ratios is fit to the DM/G object, shrinking in $0.5\%$ increments until the ratio of the shrunken to the original maximum radius exceeds the fraction of particles retained, or until $20\%$ of the particles are removed.
A table of the GP objects found using the FOF algorithm described here is presented in Tab.~\ref{Table:objects}.
The probability density distributions in \S~\ref{sec:results} are calculated from this set of objects, with a Gaussian kernel density function using a Scott bandwidth \citep[][]{Scott+2010}.
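A minimal sketch of this density estimate using SciPy's Gaussian KDE with the Scott rule (the sample is a random placeholder for, e.g., $\log(1-e)$ values of the GP objects):
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
sample = rng.normal(-1.0, 0.4, 700)       # placeholder log(1 - e) values

kde = gaussian_kde(sample, bw_method='scott')   # Scott bandwidth rule
grid = np.linspace(sample.min(), sample.max(), 200)
density = kde(grid)                       # evaluated probability density
\end{verbatim}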
\section{Physical Properties and Analytical Description}
\label{sec:results}
In combination with an analytical understanding, this section describes the morphological and rotational properties of the population of numerically simulated structures from the four simulation runs described above.
\subsection{Morphology} \label{sec:ellipse}
Historically, spherical overdensity calculations have been used to understand the gravitational potentials of the Universe's first structures \citep[see][for a review]{BarkanaLoeb+01}.
However, both the stream velocity and molecular cooling were shown to induce gaseous filaments and elongate structures.
For this reason, we introduce the eccentricity as a measure of an object's deviation from an idealized spherical configuration, and present an analytical potential for SIGOs and DM GHOSts in terms of their eccentricity.
A full derivation of the potential and other relevant equations is given in App.~\ref{ap:potential}.
\subsubsection{Analytical ellipsoid potential of SIGOs and DM GHOSts}
\label{sec:analytical}
\begin{figure}
\center
\includegraphics[width=0.4\textwidth]{figures/ellipsoid1.pdf}
\caption{Choice of coordinates adopted in this work.
The ellipsoids are arranged such that the primary axes of the ellipsoid are aligned along the cartesian coordinate directions, with $R_{\rm max}$, the polar radius of a prolate spheroid, along the $z$-axis.
In the prolate approximation, $R_{\rm min}\sim R_{\rm max}$.
Cylindrical coordinates are used in \S~\ref{sec:analytical} as the natural choice for prolate ellipsoid potentials. }
\label{fig:elliosoid}
\end{figure}
In order to analytically explore the role of the eccentricity in the gravitational potential, we approximate SIGOs and DM GHOSts as prolate ellipsoids, with $R_{\rm max}>R_{\rm mid}\sim R_{\rm min}$. We show in Sec.~\ref{sec:nummorphology} that this approximation is consistent with the structures found in the simulation.
\begin{figure*}[!t]
\center
\includegraphics[width=.95\textwidth]{figures/newgasecc.pdf}
\caption{ \textit{Left:} Probability density distribution of $\log(1-e)$, where $e$ is the eccentricity (Eq.~(\ref{eq:ecc})), for gas primary (GP) objects.
Distributions are separated into the object classes listed in Tab.~\ref{Table:objects} and calculated using a Gaussian kernel density.
The orange distributions show the gas component of DM GHOSts, the grey distributions show the gas component of classical halos (without $v_{\rm bc}$), and the blue distributions show SIGOs.
The darker lines denote runs with H$_2$ cooling, while the lighter dashed lines denote no cooling.
\textit{Right:} Scatter plot of $R_{\rm mid}/R_{\rm max}$ versus $R_{\rm min}/R_{\rm mid}$ for gas-primary (GP) objects in H$_2$ cooling runs.
The color bar is $R_{\rm min}/R_{\rm max}$.
The left column has no stream velocity and the right column is from the $v_{bc}=2\sigma_{v_{bc}}$ runs (See Tab.~\ref{Table:objects}).
Stars represent SIGOs, as defined in Sec.~\ref{sec:objects}. }
\label{fig:eccentricity}
\end{figure*}
In cylindrical coordinates ($R,z$), the gravitational potential ($\Phi$) of a prolate ellipsoid can be written as:
\begin{multline}
\Phi (\textbf{x}) =\\
-2^{3/2} \frac{4\pi GR_{\rm max}^4 \rho(R_{\rm max}^2)\cos^{-1}(\sqrt{1-e^2})}{e\sqrt{1+(1-e^2)\left(\frac{R^2}{R_{\rm min}^2}+\frac{z^2}{R_{\rm max}^2}\right)}} \ ,
\label{eq:cylindricalpotential}
\end{multline}
where $G$ is the gravitational constant, $R_{\rm max}$ is half the length of the maximum axis of the ellipsoid, $R_{\rm min}$ is half the length of the minimum axis of the ellipsoid (see Fig.~\ref{fig:elliosoid}), $\rho(R_{\rm max}^2)$ is the density at $R_{\rm max}$, and $e$ is the eccentricity.
See App.~\ref{ap:potential} for a derivation of Eq.~(\ref{eq:cylindricalpotential}).
The eccentricity is a measure of the ellipsoid elongation, defined (following the convention used in \citealp{BT08}) as:
\begin{equation}
e\equiv \sqrt{1-\left(\frac{R_{\rm min}}{R_{\rm max}}\right)^2} \ .
\label{eq:ecc}
\end{equation}
This parameter resembles the 2D ellipse eccentricity, and varies from 0 (spherical) to 1 (radial).
Previous works by the Supersonic Project \citep{Chiou+18} used the prolateness factor ($\xi$) to characterize the shape of GP objects:
\begin{equation}
\xi = \frac{R_{\rm max}}{R_{\rm min}},
\end{equation}
The relation between the eccentricity and the prolateness factor is:
\begin{equation}
e = \sqrt{1-\xi^{-2}} \ .
\end{equation}
In deriving the potential in Eq.~(\ref{eq:cylindricalpotential}), we have assumed a prolate spheroidal density profile given by:
\begin{equation}
\rho(m^2) = \rho_0 \left[1+\left(\frac{m}{R_{\rm max}}\right)^2\right]^{-\frac{3}{2}} \ ,
\label{eq:density}
\end{equation}
where $m$ is defined in cylindrical coordinates as:
\begin{equation}
m^2 \equiv \frac{R^2}{1-e^2}+z^2 \ ,
\end{equation}
\citep{BT08}.
Note that $0\leq m\leq R_{\rm max}$.
In the above formalism, we calibrate the density such that $\rho_0 = 2^{3/2} \rho(R_{\rm max})$.
From here, we find the dependence of the total mass on eccentricity.
Once again, a complete derivation can be found in App.~\ref{ap:potential}.
The total mass of the ellipsoid is found by integrating over a set of similar ellipsoids from the center to the outer edge of the object (i.e., $m=0$ to $m=R_{\rm max}$).
Thus, for an object with density given by Eq.~(\ref{eq:density}), we find the total mass of the object $M$:
\begin{eqnarray}
M&=& 4\pi (2^{3/2}\rho_{\rm max})(1-e^2) R_{\rm max}^3 \left(\sinh^{-1}{(1)}-\frac{1}{\sqrt{2}}\right) \nonumber \\
&\approx& 6.19 \rho_{\rm max} (1-e^2)R_{\rm max}^3.
\label{eq:massecc}
\end{eqnarray}
Below we use the eccentricity parameter to estimate the prolateness of the SIGOs and DM-GHOSts.
{We also compare the eccentricity predicted by the analytical ellipsoid potential (Eq.~(\ref{eq:massecc})) for an average object with the eccentricity of our simulated objects as a function of the mass enclosed within the ellipsoid bounding each object, finding agreement between the analytic and numerical results for eccentric objects}.
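The analytic curve itself is straightforward to evaluate. The following sketch reproduces the prefactor in Eq.~(\ref{eq:massecc}) and evaluates $M(e)$ for the average density and maximum radius quoted in the caption of Fig.~\ref{fig:eccmass}, with units assumed to be M$_\odot$ kpc$^{-3}$ and kpc:
\begin{verbatim}
import numpy as np

rho_rmax = 1.8e8     # mean density at R_max  [Msun / kpc^3]
r_max = 0.134        # mean maximum radius    [kpc]

coef = 4 * np.pi * 2**1.5 * (np.arcsinh(1.0) - 1 / np.sqrt(2))
print(f"prefactor = {coef:.3f}")             # ~ 6.19, as in Eq. (7)

for e in (0.0, 0.9, 0.977):                  # sphere ... mean SIGO eccentricity
    M = coef * rho_rmax * (1 - e**2) * r_max**3
    print(f"e = {e:.3f}:  M = {M:.3e} Msun")
\end{verbatim}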
\begin{figure*}[!t]
\center
\includegraphics[width=.95\textwidth]{figures/newdmeccs2.pdf}
\caption{\textit{Left:} Probability density distribution of $\log(1-e)$ of dark matter (DM)-primary objects, where $e$ is the eccentricity (see Eq.~(\ref{eq:ecc})).
The orange distributions include the DM component of DM GHOSts and the grey distributions show the DM component of classical halos without $v_{bc}$.
SIGOs, which have little to no DM component, are not plotted.
The darker lines denote runs with H$_2$ cooling, while the lighter dashed lines denote no cooling.
\textit{Right:} Scatter plot of $R_{\rm mid}/R_{\rm max}$ versus $R_{\rm min}/R_{\rm mid}$ for DM-primary (DM/G) objects.
The color bar is $R_{\rm min}/R_{\rm max}$.
The left column has no stream velocity and the right column is from the $v_{bc}=2\sigma_{v_{bc}}$ runs (See Tab.~\ref{Table:objects}).
SIGOs are not included because they have no DM component.
These results imply that the DM component for the majority of objects is non-spherical, and the stream velocity induces further elongation.}
\label{fig:eccentricityDM}
\end{figure*}
\subsubsection{Morphology of numerically simulated objects}
\label{sec:nummorphology}
\begin{figure}
\center
\includegraphics[width=0.5\textwidth]{figures/mlog1e.pdf}
\includegraphics[width=0.485\textwidth]{figures/dmmassecc.pdf}
\caption{\textbf{Top:} Scatter plot of $\log(1-e^2)$ versus $M_{\rm tot}$ for gas-primary (GP) objects. SIGOs are denoted by stars.
\textbf{Bottom:} Scatter plot of $\log(1-e^2)$ versus $M_{\rm DM}$ for DM-primary (DM/G) objects {(we include both DM components of DM GHOSts and other DM halos in the box that have no associated GP component)}.
In both plots, the top two panels show the H$_2$ cooling runs, and the bottom two panels show runs without cooling.
The left column has no stream velocity and the right column is from the $v_{bc}=2\sigma_{v_{bc}}$ runs.
Stars represent SIGOs, as defined in Sec.~\ref{sec:objects}.
The color bar is the gas fraction (Eq.~(\ref{eq.gasfraction})).
The red overplotted line is the expected relationship from Eq.~(\ref{eq:massecc}) for an example object with the average density and maximum radius of objects in the H$_2$ cooling runs ($\bar{\rho}_{Rmax}=1.8\times 10^8 $M$_\odot$ kpc$^{-3}$, $\bar{R}_{max}=0.134$ kpc).}
\label{fig:eccmass}
\end{figure}
In the previous section we assumed a prolate relation between the ellipsoid axes ($R_{\rm max}>R_{\rm mid}\sim R_{\rm min}$).
Interestingly, we find that both the gas component and the DM component of structures become prolate in the presence of the stream velocity as depicted in Figure~\ref{fig:eccentricity}.
The left panel of Fig.~\ref{fig:eccentricity} shows probability density distributions for the eccentricity of GP objects in all four runs.
The gas components of classical halos (i.e., no stream velocity), SIGOs, and DM GHOSts comprise three distinct populations in eccentricity.
The stream velocity induces elongation of objects, with objects in 0v and 0vH2 runs being the most spherical.
Among the supersonically-induced objects, SIGOs are more elongated and prolate than DM GHOSts.
The average SIGO eccentricity of $0.977$ corresponds to an object whose $R_{\rm min }$ is only around 20\% of its $R_{\rm max}$, whereas for a classical object with molecular cooling the average ratio is around 60\%
(See Tab. \ref{tab:means} of App. \ref{ap:morphology} for the means of morphological parameters).
The stream velocity effect dominates runs with and without molecular cooling, but in the no stream velocity case, molecular cooling also slightly elongates the gas component.
The stream velocity also affects the shape of the DM component of DM GHOSts, resulting in elongated DM structures.
The left panel of Fig.~\ref{fig:eccentricityDM} shows the distribution of eccentricities of the DM-primary objects in the 0v and 0vH2 runs and DM GHOSts with and without cooling.
Including H$_2$ cooling, the DM component of DM GHOSts tends to be less spherical than the classical DM halos.
The eccentricity is only a measure of the difference between the minimum and maximum radii of the ellipsoid.
Therefore, to justify the prolate approximation, we show the ratios of all three axes of the gas (Fig.~\ref{fig:eccentricity}, right panel) and DM (Fig.~\ref{fig:eccentricityDM}, right panel) component of the ellipsoids.
The parameter space is divided into spherical, triaxial, oblate, and prolate objects.
Even without a stream velocity, there is a range of morphologies among both the DM and gas components of structures.
The probability density distributions of all the ratios are given in App.~\ref{ap:morphology}.
As seen in the Figures, the majority of the DM components are spherical in nature, and those that deviate from sphericity tend to be prolate.
For the gas components (top right panel in Figure~\ref{fig:eccentricity}), as expected, the majority of the gas components of classical objects are spherical, with a preference toward prolate configurations.
The stream velocity elongates objects into more extreme axis ratios.
In fact, truly spherical objects are scarce in the runs with the stream velocity.
SIGOs, shown as stars in the figure, have not only the most extreme eccentricities overall, but also tend towards the triaxial region of the figure.
In Fig.~\ref{fig:eccmass}, we plot the eccentricity of the objects' gas component as a function of mass.
We overlay the expected relation from Eq.~(\ref{eq:massecc}) for an object of average density and scale (solid line).
Recall that this equation represents the relationship between the mass and the elongation for an ellipsoid potential.
Thus, for the no stream velocity case, where the majority of the structures are spherical (i.e., $\log(1-e^2)=0$), most of the objects are concentrated at low eccentricity.
However, more elongated objects in runs with the stream velocity follow the trend outlined by Eq.~(\ref{eq:massecc}).
In particular, for the no cooling case, the plot shows a more elongated structure for smaller mass systems.
However, cooling, even in the presence of stream velocity, tends to assist with collapse, thus resulting in a deviation compared to the analytical prediction.
Note that in general, cooling still tends to create more elongated structures \citep[this was also highlighted in][]{Nakazato+22,Lake+21}.
In the bottom plot of Fig.~\ref{fig:eccmass}, we plot the same relation for the DM/G objects.
Again, those that are more eccentric in nature follow the trend derived from the ellipsoid potential, while many halos, especially classical halos, are concentrated towards circularity.
\begin{figure*}[!t]
\center
\includegraphics[width=0.9\textwidth]{figures/spins.pdf}
\caption{Probability density distribution of $\lambda_{\rm g}$ (left) and $\lambda_{\rm DM}$ (right).
These are calculated via Eqns.~(\ref{eq:gasspin}) and~(\ref{eq:spin}), respectively.
Distributions are separated into the object classes listed in Tab.~\ref{Table:objects} and calculated using a Gaussian kernel density.
The orange distributions include the gas component of DM GHOSts, the grey distributions show the gas component of classical halos without $v_{bc}$, and the blue distributions show SIGOs.
We do not include $\lambda_{\rm DM}$ for SIGOs since they are dominated by gas and the DM does not contribute significantly to the angular momentum of the system.
The darker lines denote runs with H$_2$ cooling, while the lighter dashed lines denote no cooling. }
\label{fig:spins}
\end{figure*}
\subsection{The Spin Parameter} \label{sec:spinparam}
The angular momentum of galaxies has long been understood to be closely tied to their formation and evolution \citep[e.g.,][]{Peebles1969,FallEf80}.
In particular, the relationship between the angular momentum of the DM halo and the gas seems critical in shaping the final galactic morphology and spin parameter \citep[e.g.,][]{Bullock+01,Maller+02,Danovich+15,RodGom+17,Wechsler+18,Kurapati+21,Yang+21,Rohr+22,Cadiou+22,Ebrahimian+22E,RodGom+22,Hegde+22}.
DM halo spin in simulations follows a lognormal distribution \citep[e.g.,][]{Bullock+01,Zjupa+17}, and the spin of the cold gas of galaxies seems to follow a similar distribution in observations and simulations \citep[e.g.,][]{Danovich+15,Burkert+16}.
Models that conserve angular momentum suggest that the structure, size, and morphology of galaxies follow the mass and angular momentum of their host halos \citep[e.g.,][]{Somerville+08,Guo+11,Benson+12,Somerville+15}.
Initially, simulations struggled to replicate observed properties of galaxies such as the large spin of the baryonic component compared to the DM and the shape of angular momentum of galaxies, but it was recognized that baryonic processes, including feedback, can explain the evolution of the angular momentum of the baryonic component \citep[e.g.,][]{Maller+02,Teklu+15,Zjupa+17,ElBadry+18,Rohr+22}.
Furthermore, some recent work \citep[e.g.,][]{Sales+12,Danovich+15,Jiang+19} suggests that galaxies' spins are not correlated with the spins of their host halos at all, and observed scaling relations must be explained via other mechanisms.
Persistent uncertainties in the relationship between angular momentum, morphology, and galaxy structure remain, particularly at low masses \citep[e.g.,][]{Nguyen+22,Ebrahimian+22E}.
Including $v_{bc}$, which affects both the velocity and the configuration of the baryonic component, has already been shown to affect the spin at low masses \citep[e.g.,][]{Chiou+18}, and thus we continue with an investigation of the angular momentum of our sample of structures.
To quantify the rotation and angular momentum of objects, we follow the analytical formulation from \citet{Chiou+18}.
The total angular momentum, denoted by the spin vector ($\mathbf{J}_{\rm sp}$) of a set of $N$ particles each of mass $m_i$ is
\begin{equation}
\mathbf{J}_{\rm sp}=\sum_{i=1}^N m_i\mathbf{r}_i\times \mathbf{v}_i,
\label{eq:spin}
\end{equation}
where $\mathbf{r}_i$ and $\mathbf{v}_i$ are the particles' position and velocity vectors from the center of mass.
For DM primary objects, we estimate the angular momentum of the entire halo using the spin parameter \citep[e.g.,][]{Peebles1969} as defined in \citet{Bullock}:
\begin{equation}
\lambda_{\rm DM} = \frac{J_{\rm sp}}{\sqrt{2} M_{200}v_{200}R_{200}} \ .
\label{eq:spinparameter}
\end{equation}
Here $M_{200}$ is the virial mass of the object, $v_{200}= \sqrt{GM_{200}/R_{200}}$ \citep[][]{BarkanaLoeb+01}, and $J_{\rm sp}=|\mathbf{J_{\rm sp}}|$.
\citet{Chiou+18} showed that the DM/G spin follows a lognormal distribution consistent with \citet{Bullock+01}.
Following \citet{Chiou+18}, in order to account for the more ellipsoidal nature of the gas component, we calculate the spin parameter for gas primary objects using:
\begin{equation}
\lambda_{\rm g} =\frac{J_{\rm g }}{6\sqrt{2}M_{\rm g}v_{\rm GP}R_{\rm max}},
\label{eq:gasspin}
\end{equation}
where $M_{\rm g}$ is the total gas mass, $v_{\rm GP}$ is the circular velocity of the gas primary object at a distance $R_{\rm max}$, and $J_{\rm g}$ is calculated from Eq.~(\ref{eq:spin}) for gas particles only.
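A minimal sketch of these spin diagnostics, Eqs.~(\ref{eq:spin}), (\ref{eq:spinparameter}), and (\ref{eq:gasspin}), follows. We assume $v_{200}=\sqrt{GM_{200}/R_{200}}$ and approximate $v_{\rm GP}$ by the circular velocity $\sqrt{GM_{\rm g}/R_{\rm max}}$, which is our reading of the definitions above; the toy particle cloud is a placeholder.
\begin{verbatim}
import numpy as np

G = 4.30091e-6   # gravitational constant [kpc (km/s)^2 / Msun]

def spin_vector(m, r, v):
    """J_sp = sum_i m_i r_i x v_i, with r_i, v_i about the center of mass."""
    return np.sum(m[:, None] * np.cross(r, v), axis=0)

def lambda_dm(J, M200, R200):
    """Bullock spin parameter for DM-primary objects."""
    v200 = np.sqrt(G * M200 / R200)          # assumed form of v_200
    return np.linalg.norm(J) / (np.sqrt(2) * M200 * v200 * R200)

def lambda_gas(J_g, M_g, R_max):
    """Gas spin parameter; v_GP approximated by the circular velocity."""
    v_gp = np.sqrt(G * M_g / R_max)          # assumption, see text
    return np.linalg.norm(J_g) / (6 * np.sqrt(2) * M_g * v_gp * R_max)

# Toy usage with a random particle cloud (placeholder for a GP object):
rng = np.random.default_rng(5)
m = np.full(500, 360.0)                      # gas-cell masses [Msun]
r = rng.normal(size=(500, 3)) * 0.1          # positions about the CoM [kpc]
v = rng.normal(size=(500, 3)) * 5.0          # velocities about the CoM [km/s]
print(lambda_gas(spin_vector(m, r, v), m.sum(), 0.134))
\end{verbatim}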
Figure~\ref{fig:spins} shows the probability density distributions of the spin parameter of gas primary objects and DM primary objects.
The stream velocity induces higher total spin for the gas component of all runs.
However, for classical gas objects, molecular cooling serves to lower the total angular momentum by condensing gas inward, thus allowing for a smaller spin parameter.
For classical objects, the DM components have a larger spin parameter magnitude overall: because DM constitutes most of the mass, it contributes most of the total angular momentum.
On the other hand, the stream velocity boosts the gas spin for both SIGOs and the gas component of DM GHOSts, thus increasing the total angular momentum of the system despite H$_2$ cooling.
SIGOs and DM GHOSts thus have gas spin parameters an order of magnitude larger than the no stream velocity gas (see Tab.~\ref{tab:means}).
In Fig.~\ref{fig:spinscombined}, we plot the spin parameter against the total mass (left) and eccentricity (right) of objects.
More eccentric objects tend to have higher spins in all runs.
The vector sum in the definition of $\mathbf{J}_{\rm sp}$ (Eq.~(\ref{eq:spin})) means that this parameter encodes not only the magnitude of the total angular momentum but also the alignment of particles' rotation.
Thus, the trend on the right of Fig.~\ref{fig:spinscombined} is consistent with spherical configurations corresponding to an isotropic distribution of the particles' orbits. Further, it is consistent with prolate systems having a preferred directionality to the angular momentum, i.e. an ordered distribution of particle orbits.
\begin{figure}
\center
\includegraphics[width=0.3\textwidth]{figures/ellipsoid2.pdf}
\caption{
An ellipsoid in spherical coordinates for
an arbitrary
spin vector direction.
Coordinates are chosen such that the primary axes of the ellipsoid are aligned along the cartesian coordinate directions, with $R_{\rm max}$, the polar radius of the prolate spheroid, along the $z$-axis.
The spin vector (Eq.~(\ref{eq:spin})) can be aligned in any direction with respect to the ellipsoid axes, and its relative alignment with respect to these axes is described by the usual spherical angular coordinates $\theta$ and $\phi$. }
\label{fig:elliosoid2}
\end{figure}
Furthermore, the spin of the objects in the no stream velocity case roughly follows a $\lambda\sim M^{-2/3}$ slope, see bottom left panel in Fig.~\ref{fig:spinscombined}.
This relation is expected from Eq.~(\ref{eq:spinparameter}) for mostly circular orbits.
However, the trend dissipates in the presence of the stream velocity and cooling, where the objects deviate from spherical symmetry and the combined effects introduce a preferred directionality for the angular momentum, almost regardless of the mass.
We attribute this to the turbulent and filamentary nature of these structures in the presence of the stream velocity and molecular cooling \citep[e.g.,][]{Nakazato+22,Lake+22}.
Note that the cut-off in the low-mass regime is due to our resolution limit of $300$ particles minimum (corresponding to a mass of $5.7\times 10^5$~M$_\odot$), while at the high mass regime we are limited by Poisson fluctuations of small number statistics at this high redshift and small box size.
The question of whether these larger spin parameters imply greater overall angular momentum or more ordered rotation leads us to an investigation of the connection between the morphology of the objects and their rotational support.
{An investigation of the ellipsoids' alignment with respect to the stream velocity direction revealed that the $R_{\rm max}$ axis of SIGOs is not always aligned with the stream velocity.
More frequently, they are embedded in a stream of gas that is infalling towards a larger DM halo (see Fig.~\ref{fig:SIGOdm}, for example), and their longest axis aligns with this stream of gas.
Thus, to check how ordered these objects' rotation is, we test whether the rotation axis is aligned with any of the three primary ellipsoid axes.}
\begin{figure*}[!t]
\center
\includegraphics[width=\textwidth]{figures/spins-combined.pdf}
\caption{Scatter plot of $\lambda_{\rm g}$ versus the total mass ($M_{\rm tot}$) (left) and $\lambda_{\rm g}$ versus $\log(1-e)$ (right) of GP objects.
The top row includes H$_2$ cooling.
The left column denotes the runs without the inclusion of the stream velocity, and the right column contains runs with the stream velocity.
A line corresponding to $\lambda_g \sim M^{-2/3}$ is overplotted in red on the mass panels.
This is the expected relation from Eq.~(\ref{eq:gasspin}).
The vertical cut-off at low masses is due to our resolution limit of $300$ particles minimum (corresponding to a mass of $5.7\times 10^5$~M$_\odot$), while at high mass we are limited by Poisson fluctuations of small number statistics at this high redshift and small box size.}
\label{fig:spinscombined}
\end{figure*}
To describe the directionality of the angular momentum we utilize spherical coordinate notation.
With the maximum radius aligned with the $z$-axis and the minimum radius aligned with the $x$-axis, we calculate the spherical $\theta$ and $\phi$ components of the spin vector of both the DM and gas of objects.
Fig.~\ref{fig:elliosoid2} shows an illustration of the orientation of the ellipsoid with respect to the spin vector.
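In code, the two angles reduce to projections of the normalized spin vector onto the principal axes. The following sketch follows the convention of Fig.~\ref{fig:elliosoid2}, with $R_{\rm min}$ along $x$ and $R_{\rm max}$ along $z$; the input spin vector is a placeholder:
\begin{verbatim}
import numpy as np

def spin_alignment(J, axes):
    """Spherical angles of the spin vector in the ellipsoid frame: rows of
    `axes` are the unit vectors along R_min (x), R_mid (y), R_max (z).
    Returns |cos(theta)| (alignment with R_max) and cos(phi) (azimuth
    measured from R_min)."""
    jx, jy, jz = axes @ (J / np.linalg.norm(J))
    cos_theta = jz
    cos_phi = jx / np.hypot(jx, jy)       # cosine of the azimuthal angle
    return np.abs(cos_theta), cos_phi

# Toy usage with an arbitrary spin vector:
J = np.array([0.2, 0.1, 0.9])
print(spin_alignment(J, np.eye(3)))
\end{verbatim}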
\begin{figure*}
\center
\includegraphics[width=0.85\textwidth]{figures/spinangles-vertical.pdf}
\caption{\textbf{Top panels:} \textit{Left:} Probability density distribution of $\cos{(\phi_{\rm g})}$, where $\phi_{\rm g}$ is the angle between $R_{\rm min}$ and the spin vector of the gas component of GP objects (see Fig.~\ref{fig:elliosoid}). \textit{Right:} Probability density distribution of $|\cos{(\theta_g)}|$, where $\theta_g$ is the angle between $R_{\rm max}$ and the spin vector.
The darker lines denote runs with H$_2$ cooling, while the lighter dashed lines denote no cooling.
Distributions are separated into the object classes listed in Tab.~\ref{Table:objects} and calculated using a Gaussian kernel density.
The orange distributions include the gas component of DM GHOSts, the grey distributions show the gas component of classical halos without $v_{bc}$, and the blue distributions show SIGOs.
\textbf{Middle panels:} \textit{Left:} Probability density distribution of $\cos{(\phi_{\rm dm})}$, where $\phi_{\rm dm}$ is the angle between $R_{\rm min}$ and the spin vector of the DM component of objects (see Fig.~\ref{fig:elliosoid}). \textit{Right:} Probability density distribution of $|\cos{(\theta_{\rm dm})}|$, where $\theta_{\rm dm}$ is the angle between $R_{\rm max}$ and the spin vector.
The darker lines denote runs with H$_2$ cooling, while the lighter dashed lines denote no cooling.
Distributions are separated into the object classes listed in Tab.~\ref{Table:objects} and calculated using a Gaussian kernel density.
The orange distributions include the DM component of DM GHOSts, and the grey distributions show the DM component of classical halos without $v_{bc}$.
SIGOs, which have little to no DM component, are not shown.
\textbf{Bottom panels:} Cartoon depiction of the orientation of the spin vector with respect to the various object classes, classical halos (left), DM GHOSts (middle) and SIGOs (right).
The large, dark arrow indicates the most common orientation, while the other faint arrows indicate a spread of other common alignments for each object type according to the distributions in the top and middle panels.}
\label{fig:spinsradii}
\end{figure*}
Figure~\ref{fig:spinsradii} shows distributions of the misalignment between the primary ellipsoid axes and the spin vector for both the gas component (top row) and the DM component (middle row).
As depicted, the classical halos (both for DM and gas components) are preferentially spinning in alignment with their minimum axis, and do not show a preference with respect to their maximum axis.
These classical halos were the most oblate group overall; thus, the lack of preference for alignment with the maximum axis could be due to the fact that for an oblate spheroid, $R_{\rm max}\sim R_{\rm mid}$.
In other words, they are consistent with puffy disks.
The bottom row of Fig.~\ref{fig:spinsradii} shows a cartoon depiction of the range of preferred rotation of classical objects here.
DM GHOSts, to a lesser degree, are rotating in alignment with their minimum axis, but they also show a preference towards the maximum axis.
This ``spinning top'' type of behaviour appears consistent with their formation history \citep[see Fig. 3 in][]{Lake+22}.
The gas component of DM GHOSts (similar to classical objects) is accreted in a stream onto the halo, but the stream velocity induces a velocity gradient in a preferential direction perpendicular to the infall stream.
This results in spinning-top rotator behavior, depicted in the cartoon at the bottom of Fig.~\ref{fig:spinsradii}.
SIGOs, however, exhibit a weak bimodal distribution of alignment with $R_{\rm max}$.
The majority are preferentially {\it misaligned} with the maximum axis, while some demonstrate alignment as in the case of the DM GHOSts described above.
Considering an idealized growth scenario, SIGOs are embedded in the gas stream, which is normally in the process of accreting onto a DM halo.
This configuration often yields an $R_{\rm max}$ in alignment with the accretion stream (as is the case in the example in Figure~\ref{fig:SIGOdm}).
{As described above with DM GHOSts, the stream velocity induces a velocity gradient (in our case towards the $x-$direction) which may be perpendicular to the infall stream moving in the $y-$ or $z-$directions. \citep[For example, see Fig. 3 in][where a SIGO is embedded in a stream of gas infalling towards a larger halo. All the gas in the region is moving towards this stream, however greater velocities are found on the $+x$ side, a gradient induced by the original stream velocity in the $+x$-direction.]{Lake+22}}
This perpendicular accretion mode yields objects with alignment between $\mathbf{J}_{\rm sp}$ and $\mathbf{R}_{\rm max}$.
However, this picture is idealized, and in practice the SIGOs represent a density perturbation within the stream that results in gas accretion that is not necessarily aligned with the object's $R_{\rm max}$.
Those SIGOs that are preferentially misaligned with respect to $R_{\rm max}$ show a variety of alignments with respect to $R_{\rm min}$.
This is potentially due to the fact that (as opposed to the oblate case described above) for prolate spheroids the symmetry of the system gives $R_{\rm min}\sim R_{\rm mid}$.
\begin{figure}
\center
\includegraphics[width=0.45\textwidth]{figures/gasdmmisalignment.pdf}
\caption{Probability density distribution of the cosine of the misalignment angle between the spins of the gas and dark matter components of individual GP objects
(see Eq.~(\ref{eq:thetamisalign})).
The grey distributions are classical halos, the orange distributions are DM GHOSts, and the blue dashed distributions show SIGOs.
The darker lines denote no cooling, while the lighter dashed lines show the inclusion of molecular cooling.
}
\label{fig:misalignment}
\end{figure}
The similarity between the DM and gas components in Fig.~\ref{fig:spinsradii} is further investigated below. Specifically, we calculate the misalignment angle between the angular momentum of the DM component and the gas component:
\begin{equation}
\cos{(\theta_{\rm g,DM})}=\frac{\mathbf{J}_{\rm DM}\cdot\mathbf{J}_{\rm g}}{|\mathbf{J}_{\rm DM}||\mathbf{J}_{\rm g}|} \ .
\label{eq:thetamisalign}
\end{equation}
Note that for SIGOs the DM component is negligible, thus we only consider the classical objects and DM GHOSts in this analysis.
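A direct implementation of Eq.~(\ref{eq:thetamisalign}) is a one-liner; we include it only to fix conventions (a sketch):
\begin{verbatim}
import numpy as np

def cos_misalignment(j_dm, j_gas):
    """Cosine of the gas--DM spin misalignment angle."""
    return np.dot(j_dm, j_gas) / (np.linalg.norm(j_dm)
                                  * np.linalg.norm(j_gas))
\end{verbatim}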
Figure~\ref{fig:misalignment} shows the probability distributions of $\cos{(\theta_{\rm g,DM})}$. Consistent with previous studies \citep[e.g.,][]{Chiou+18}, the classical halos have a strong alignment between the gas and DM components.
On the other hand, the alignment between gas and DM spin is weaker for the DM GHOSts, with a long tail of nearly isotropic configurations.
This result is consistent between molecular and atomic cooling.
In future work it may also be relevant to examine the effects of feedback on this distribution.
This may especially be relevant for star-forming SIGOs.
\subsection{Rotation Curves and Mass Distribution of DM GHOSts}
\label{sec:rotation}
Because the stream velocity affects the angular momentum and morphological configuration of structures, we expect a possible effect on the density distribution and rotation curves.
In particular, since rotation curves contain signatures of the DM component, we expect that both SIGOs and DM GHOSts will deviate from the classical profiles.
\begin{figure}
\center
\includegraphics[width=0.5\textwidth]{figures/newdensity.pdf}
\caption{Density of gas in DM GHOSts as a function of radius, normalized to the $R_{\rm max}$ of the ellipsoid, for the molecular cooling run with $v_{bc}=2\sigma_{v_{bc}}$ (top panel) compared to classical halos with $v_{bc}=0\sigma_{v_{bc}}$ and molecular cooling (bottom panel).
The density is calculated in 50 ellipsoidal shells moving out from the center of the object.
We split objects by mass--the average density profile for objects above $10^{5.5}$ M$_\odot$ is shown in solid blue, while those below this cutoff are plotted in solid pink.
Low mass objects display a deviation from the classical density profile, with less of a cusp.
The blue shaded region shows $1\sigma$ away from the curve for objects above $10^{5.5}$ M$_\odot$, while the purple hatched region shows $1\sigma$ away from the curve for objects below $10^{5.5}$ M$_\odot$.
An NFW profile for a $10^5$ M$_\odot$ halo is shown for comparison as the dashed line.
Eq.~(\ref{eq:density}) is plotted for a gas object with average density in solid blue. }
\label{fig:densitymol}
\end{figure}
In particular, in this section we focus our analysis on a comparison between runs with and without the stream velocity (0vH2 and 2vH2).
Figure~\ref{fig:densitymol} shows the density of gas of DM GHOSts (top panel) and classical halos (bottom panel) as a function of radius from the center of mass, with an NFW \citep[e.g.,][]{NFW-96b,NFW-96a,NFW-97} halo profile overplotted (dashed line).
The stream velocity serves to reduce densities across the structure, as expected.
Physically, this is due to the advection of gas from the halo and the spatial separation of the two components.
In addition, at low masses ($\lsim 10^{5.5}$ M$_{\odot}$), we observe a deviation from the NFW shape, with a flat core profile rather than a cusp. Using the prolate density profile, Eq.~(\ref{eq:density}), we see that a core-like structure is expected for these ellipsoids (solid line).
In \S~\ref{sec:nummorphology}, we demonstrated that low mass objects had high eccentricities.
Since the classical NFW formulation assumes spherical overdensities \citep[e.g.,][]{NFW-96b,NFW-96a,NFW-97}, another reason for the deviation may be the extreme eccentricities of very low mass objects in stream velocity simulations.
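The ellipsoidal-shell profiles discussed here (and shown in Fig.~\ref{fig:densitymol}) can be sketched as follows, assuming particle positions are expressed in the principal-axis frame of the fitted ellipsoid; function and variable names are illustrative.
\begin{verbatim}
import numpy as np

def shell_density_profile(pos, mass, r_min, r_mid, r_max, n_shells=50):
    """Gas density in concentric similar-ellipsoid shells.

    pos: (N, 3) positions in the principal-axis frame
    (x -> R_min, y -> R_mid, z -> R_max), relative to the center."""
    beta = np.sqrt((pos[:, 0] / r_min) ** 2
                   + (pos[:, 1] / r_mid) ** 2
                   + (pos[:, 2] / r_max) ** 2)
    edges = np.linspace(0.0, 1.0, n_shells + 1)
    m_shell, _ = np.histogram(beta, bins=edges, weights=mass)
    # volume between similar ellipsoids beta_i and beta_{i+1}
    v_shell = (4.0 / 3.0) * np.pi * r_min * r_mid * r_max \
              * np.diff(edges ** 3)
    return 0.5 * (edges[1:] + edges[:-1]), m_shell / v_shell
\end{verbatim}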
\begin{figure}
\center
\includegraphics[width=.5\textwidth]{figures/newrotation.pdf}
\caption{Rotation curves of DM GHOSts as a function of radius, calculated in ellipsoidal shells going outward, for the molecular cooling run with $v_{bc}=2\sigma_{v_{bc}}$ (top panel) and $v_{bc}=0\sigma_{v_{bc}}$ (bottom panel). The average velocity of the gas in an ellipsoidal shell at each radius is normalized by $v_{circ}$, the circular velocity at $R_{\rm max}$ of the ellipsoid. The objects are colored by the mass of their gas component. We split objects by mass--the average rotation curve for objects above $10^{5.5}$ M$_\odot$ is shown in solid blue, while those below this cutoff are plotted in solid pink.
Low mass objects display a deviation from the classical rotation curves, rising less steeply at small radii.
The blue shaded region shows $1\sigma$ away from the curve for objects above $10^{5.5}$ M$_\odot$, while the purple hatched region shows $1\sigma$ away from the curve for objects below $10^{5.5}$ M$_\odot$.
The dashed lines show the minimum and maximum rotation curves over all the objects.
}
\label{fig:rotationmol}
\end{figure}
Figure~\ref{fig:rotationmol} shows the rotation curves of DM GHOSts (top panel) and classical halos (bottom panel). The rotation curves as a function of radius for DM GHOSts have two behaviours separated by mass.
In particular, the core-type density distribution of low-mass objects in the stream also means that their rotational velocity does not climb as fast.
\begin{figure}
\center
\includegraphics[width=0.45\textwidth]{figures/vrmaxmtot.pdf}
\caption{ Scatter plot of the velocity at $R_{\rm max}$ ($v_{Rmax}$) of objects as a function of total mass ($M_{tot}$).
The top two panels show the molecular cooling runs, and the bottom two panels show no cooling.
The left column has no stream velocity and the right column is from the $v_{bc}=2\sigma_{v_{bc}}$ runs.
Stars represent SIGOs, as defined in Sec.~\ref{sec:objects}.
The color bar is the eccentricity (Eq.~(\ref{eq:ecc})).
The line shows the expected value for an NFW profile.}
\label{fig:vrmaxmass}
\end{figure}
\begin{figure}
\center
\includegraphics[width=0.45\textwidth]{figures/rmaxmtot.pdf}
\caption{
Scatter plot of $R_{\rm max}$ of objects as a function of total mass ($M_{tot}$).
The top two panels show the molecular cooling runs, and the bottom two panels show no cooling.
The left column has no stream velocity and the right column is from the $v_{bc}=2\sigma_{v_{bc}}$ runs.
Stars represent SIGOs, as defined in Sec.~\ref{sec:objects}.
There are a few objects misclassified as SIGOs in the runs with no stream velocity--see App.~\ref{ap.gasfraction} for a discussion.
The color bar is the eccentricity (Eq.~(\ref{eq:ecc}))
The line shows the expected value for an NFW profile.
For the 0vH2 run, 22\% of objects fall above the expected NFW line.
For the 2vH2 run, this fraction rises to 58\% of objects located above the line.
}
\label{fig:rmaxmass}
\end{figure}
DM GHOSts are more diffuse and rotationally supported than classical halos, having received a boost from the stream velocity.
In Figure~\ref{fig:vrmaxmass}, we show the velocity at $R_{\rm max}$ as a function of the total mass.
We compare this to the nominal NFW expectation, following \citet{NFW-97}, taking the maximum of the NFW circular velocity:
\begin{equation}
\left(\frac{v_{\rm circ}}{v_{200}}\right)^2 = \frac{1}{x} \frac{\ln{(1+cx)}-(cx)/(1+cx)}{\ln(1+c)-c/(1+c)},
\end{equation}
where $x=r/r_{200}$, $v_{200}$ is the circular velocity at $r_{200}$, and $c$ is the halo concentration.
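As a quick numerical sketch of this curve (the concentration $c$ below is an assumed example value, not a fit):
\begin{verbatim}
import numpy as np

def nfw_vcirc_ratio_sq(x, c):
    """(v_circ / v_200)^2 for an NFW halo, with x = r / r_200."""
    mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)
    return mu(c * x) / (x * mu(c))

x = np.linspace(0.01, 1.0, 200)
v_ratio = np.sqrt(nfw_vcirc_ratio_sq(x, c=5.0))
x_peak = x[np.argmax(v_ratio)]  # maximum occurs near x = 2.163 / c
\end{verbatim}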
In regions of streaming, the velocity at $R_{\rm max}$ reaches or exceeds the expected values.
This behaviour is expected given their larger overall spin parameters compared to the classical case, as seen in Fig.~\ref{fig:spins}.
We note that the radii of DM GHOSts and SIGOs are larger than expected from classical considerations. Specifically, in Fig.~\ref{fig:rmaxmass}, the maximum ellipsoid radius is plotted against the total mass of the object, with the expected NFW relationship overplotted.
H$_2$ cooling runs show objects which have condensed to smaller maximum radii (top left panel).
We calculate the fraction of objects in Fig.~\ref{fig:rmaxmass} above and below the NFW line, and find that the majority ($\sim80\%$) of classical halos condense to smaller radii than the NFW $R_{\rm max}$ with H$_2$ cooling in regions of no stream velocity.
In the presence of the stream velocity, however, the velocity boost overall yields larger radii (top right panel).
In the 2vH2 run, 60\% of all objects lie \emph{above} the line, having a larger than expected maximum radius.
Interestingly, SIGOs tend to have higher $R_{\rm max}$ than NFW in all cases.
Again, we suggest that eccentricity plays a central role in giving objects much greater $R_{\rm max}$ than would be possible in the spherical case.
These results illustrate the combined effects of the stream velocity and molecular cooling that cause DM GHOSts to be more diffuse and rotationally supported than their classical counterparts of similar masses.
\section{ Discussion} \label{sec:discussion}
In this work, we investigate the spin, rotational and morphological properties of structures in the presence of stream velocity at $z=20$ using high resolution numerical simulations in {\tt AREPO}.
For the first time, molecular cooling is included in a detailed study of these dynamical properties.
We focus on a class of objects that we term DM GHOSts, structures where the baryonic component is offset from the dark matter halo, but does not fully escape the virial radius (as with SIGOs, which were previously the focus of studies by the Supersonic Project).
As in Figure \ref{fig:SIGOdm}, we emphasize that as time goes by, the gas sinks to the center of the DM halo, but carries the signature of its unique formation channel.
Using molecular cooling simulations, we are able to more precisely constrain the properties of SIGOs and DM GHOSts in comparison to classical low mass objects than was possible in previous studies \citep[e.g.,][]{Chiou+18}.
{We considered the following physical properties of DM GHOSts, comparing them to classical objects and SIGOs. }
\begin{itemize}
\item {\it Morphology:}
We show that SIGOs are the most elongated class of objects, followed by DM GHOSts, {for both gas and DM components} (as depicted in Fig.~\ref{fig:eccentricity} and Fig.~\ref{fig:eccentricityDM}). {We note that the DM component of DM GHOSts is significantly elongated compared to the classical objects.}
SIGOs and DM GHOSts are frequently prolate ellipsoids, and we present an analytical expression for their gravitational potential.
While the gas morphology deviates from spherical symmetry, star formation takes place at density peaks, which end up as less elongated ellipsoids (Lake et al. in prep.).
Interestingly, we find that the DM component of DM GHOSts is elongated as well, unlike the classical (no stream velocity) counterparts.
This prediction may be observable with gravitational lensing models that allow for deviation from spherical symmetry \citep[e.g.,][]{Kneib+11}.
{Note that while there is no direct correlation between the large scale distribution of the stream velocity and the density field, the stream velocity divergence relates to the density field via the continuity equation \citep[e.g.,][]{Tes+10a,Tes+10b}. Thus, high density $\sigma$ peaks are weakly correlated with large stream velocity patches \citep[e.g.,][]{Fialkov14}. }
The box considered here has an increased $\sigma_8$ compared to the average.
{It therefore roughly corresponds to a high redshift progenitor of a patch of the Universe within a density peak, such as the Virgo cluster \citep[e.g.,][]{NB07}. Because of the above weak correlation, we expect that galaxy clusters are likely to host elongated DM substructures. }
Given the right alignments, these may be detected using strong lensing \citep[e.g.,][]{Mahler+22}. {We emphasize that about $40\%$ of the Universe has a stream velocity larger than $1\sigma_{\rm vbc}$, and therefore DM GHOSts with elongated gas and DM components should be common regardless of large scale density fluctuations. }
\item {\it Spin Parameter:} The stream velocity serves to increase the total angular momentum and thus rotational support of SIGOs and DM GHOSts.
As shown in Fig.~\ref{fig:spins}, the DM GHOSts have higher gas spin parameter compared to classical objects.
Less spherical objects (more eccentric objects) have greater angular momentum, see Fig.~\ref{fig:spinscombined}.
As expected, the spin vectors of classical gas objects are aligned with those objects' minimum radius, forming a puffy-disk-like configuration at high redshift, consistent with lower redshift analysis for larger objects \citep[e.g.,][]{Jesseit+04,Kautsch+06,Wheeler+17,ElBadry+18}.
DM GHOSts, on the other hand, demonstrate spin vectors that are often aligned with their maximum axis (similar to a ``spinning top," see Fig.~\ref{fig:spinsradii}).
Lastly, SIGOs' total gas angular momenta exhibit a weak bifurcation.
Most are misaligned with the maximum radius without a preference for alignment with the minimum radius, while another group is aligned with the maximum radius (similarly to DM GHOSts, as shown in Fig.~\ref{fig:spinsradii}).
Additionally, the DM and gas components' spins in classical halos are almost always aligned.
However, DM GHOSts, as shown in Fig.~\ref{fig:misalignment}, have only a weak preference for alignment between the DM and gas spins, with a long tail of nearly isotropic configurations.
\item {\it Mass distribution:}
Classical objects are expected to have a cusp-like mass distribution \citep[e.g.,][]{NFW-96b}, which is often reproduced in simulations \citep[e.g.,][]{Delos+22}.
The stream velocity reduces the density of objects and increases their size, causing them to be puffier and more diffuse than classical objects and the theoretical NFW profile.
The ellipsoid-like configuration of low mass DM GHOSts yields a core-like profile (see Fig.~\ref{fig:densitymol}).
As expected, SIGOs that follow an ellipsoid profile have a core-like mass density, with a nearly constant density (see Fig~\ref{fig:densitysigos} in App.\ref{ap:morphology}).
This behaviour for SIGOs is consistent with the suggestion that SIGOs are giant molecular cloud analogs \citep{Lake+22}.
\item {\it Rotation curves:} The stream velocity affects not only the spin parameter, but also the rotational velocity curves of structures.
Objects formed by streaming have a higher maximum rotational velocity than those formed without for a given mass (See Figs.~\ref{fig:rotationmol} and~\ref{fig:vrmaxmass}).
Furthermore, the bifurcation between high and low mass objects seen in the radial mass distributions for DM GHOSts is also reflected in their velocity profiles.
Low mass ($\lsim 10^{5.5}$ M$_{\odot}$) objects, which have cores, do not reach high rotational velocities at their inner radii.
The inclusion of molecular cooling increases the velocity at the maximum radius and decreases the maximum radius by condensing rotationally supported material inward.
We note that rotational curve anomalies have been observed for slightly larger objects in the local Universe \citep[e.g.,][]{Sales+22}.
We speculate that anomalous rotation curves produced by the stream velocity at high redshift may persist to low redshift structures.
This may be related to the observed ``diversity of rotation curves'' problem for ultra faint dwarf galaxies.
\end{itemize}
The combined effects of molecular cooling and the stream velocity give the most accurate picture to date of the morphological and rotational properties of DM GHOSts.
We characterize these objects as highly diffuse, rotationally supported dwarf structures with large radii and high eccentricities.
Based on these anomalous properties, we speculate that at low redshift, DM GHOSts may evolve to form some ultra faint dwarf galaxies or anomalous dwarf galaxies. {In particular, some dwarf galaxies exhibit similar properties, including a diffuse structure and atypical rotation curves \citep[e.g.,][]{BullockBK+17,Sales+22}.
Thus, while observed ultra faint dwarf galaxies and dwarf galaxies may be more massive than DM GHOSts, we find that they share similar characteristics at these high redshifts.
We expect DM GHOSts to grow over time according to the natural hierarchical growth of structure, and may be the progenitors of some faint dwarf galaxies in regions of the Universe with a highly supersonic stream velocity at early times.}
\section*{Acknowledgements}
The authors would like to thank Sahil Hegde, Bao-Minh Hoang, and Keren Sharon for constructive conversations.
C.E.W., W.L., S.N., Y.S.C., B.B., F.M., and M.V. thank the support of NASA grant No. 80NSSC20K0500 and the XSEDE AST180056 allocation, as well as the
Simons Foundation Center for Computational Astrophysics and the UCLA cluster \textit{Hoffman2} for computational resources. C.E.W. also thanks the UCLA Competitive Edge program. S.N. thanks Howard and Astrid Preston for their generous support. Y.S.C. thanks the partial support from the UCLA dissertation year fellowship. B.B. also thanks the Alfred P. Sloan Foundation and the Packard Foundation for support. M.V. acknowledges support through NASA ATP grants 16-ATP16-0167, 19-ATP19-0019, 19-ATP19-0020, 19-ATP19-0167, and NSF grants AST-1814053, AST-1814259, AST-1909831 and AST-2007355. N.Y. acknowledges financial support from JST AIP Acceleration Research JP20317829.
\begin{appendix}
\section{Choice of Cutoff Gas Fraction}
\label{ap.gasfraction}
In the first papers by the Supersonic Project that included only adiabatic or atomic cooling \citep[e.g.,][]{Popa+15,Chiou+18,Chiou+19,Chiou+21,Lake+21}, a cutoff gas fraction of $f_g = 0.4$ was chosen for the definition of SIGOs.
Those studies' statistics for SIGO abundances and properties were thus calculated for objects that were located outside of the virial radius of their parent DM halo and had $f_g>0.4$ within the bounds of the ellipsoid fit described in \S~\ref{sec:objects}.
This choice of gas fraction was somewhat arbitrary, motivated by the fact that it was above the cosmic baryon fraction and close to the stellar fraction of globular clusters \citep[][]{Chiou+18}.
\citet{Nakazato+22} found that in molecular cooling simulations, this choice was too lenient, and resulted in the identification of SIGOs in runs \textit{without} the stream velocity.
\begin{figure*}
\center
\includegraphics[width=\textwidth]{figures/gasfraccomparisons3.pdf}
\caption{The same as Figure~\ref{fig:eccmass} (scatter plot of $\log(1-e^2)$ versus $M_{\rm tot}$ for gas-primary (GP) objects), with the definition of SIGOs calculated using gas fractions of $0.4$ (left), $0.5$ (center) and $0.6$ (right).
Significantly more SIGOs are found in the molecular cooling runs without stream velocity (top left of each panel) for $f_g=0.4$ and $f_g =0.5$, as was also shown in \citet{Nakazato+22}.
As in Fig.~\ref{fig:eccmass}, the top two panels show the H$_2$ cooling runs, and the bottom two panels show runs without cooling.
The left column has no stream velocity and the right column is from the $v_{bc}=2\sigma_{v_{bc}}$ runs.
Stars represent SIGOs, as defined in Sec.~\ref{sec:objects}.
The color bar is the gas fraction (Eq.~(\ref{eq.gasfraction})).
The red overplotted line is the expected relationship from Eq.~(\ref{eq:massecc}) for an example object with the average density and maximum radius of objects in the H$_2$ cooling runs ($\bar{\rho}_{Rmax}=1.8\times 10^8 $M$_\odot$ kpc$^{-3}$, $\bar{R}_{max}=0.134$ kpc). }
\label{fig:gasfraccmasses}
\end{figure*}
We also find that a choice of $f_g=0.4$ results in an unacceptable number of objects being identified as SIGOs in molecular cooling simulations.
For example, Fig.~\ref{fig:gasfraccmasses} shows the eccentricity versus mass of objects as in Fig.~\ref{fig:eccmass}, with a gas fraction of 0.4 (left), 0.5 (center), and 0.6 (right).
The top left panel shows the molecular cooling run with no stream velocity, and stars represent SIGOs.
With $f_g=0.4$ and $f_g=0.5$, there are many objects identified as SIGOs by the algorithm.
While these gas rich structures may be interesting, they are obviously not the result of a large stream velocity.
In order to exclude as many of these false SIGOs as possible, while still retaining enough objects in the $2$v runs to study, we follow \citet{Nakazato+22} and choose $f_g=0.6$ for this work.
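The selection itself reduces to a simple cut; a sketch of the classification applied here (names illustrative):
\begin{verbatim}
def is_sigo(gas_fraction, dist_to_parent_halo, r_vir, f_cut=0.6):
    """SIGO selection: gas-rich and outside the parent halo's
    virial radius, with the cutoff gas fraction adopted above."""
    return gas_fraction > f_cut and dist_to_parent_halo > r_vir
\end{verbatim}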
\begin{figure}
\center
\includegraphics[width=0.5\textwidth]{figures/gasfraccomparisons2.pdf}
\caption{Probability density distributions as shown in Fig.~\ref{fig:eccentricity} ($\log(1-e)$, where $e$ is the eccentricity (Eq.~(\ref{eq:ecc})), for gas primary (GP) objects) and Fig.~\ref{fig:spins} ($\lambda_{gas}$ for GP objects), with the definition of SIGOs calculated using gas fractions of $0.4$ (top row), $0.5$ (center row) and $0.6$ (bottom row).
As seen in Fig.~\ref{fig:gasfraccmasses}, more SIGOs are found in the molecular cooling runs without stream velocity for $f_g=0.4$ and $f_g =0.5$, as was also shown in \citet{Nakazato+22}, which motivates us to choose a gas fraction of $0.6$ in this work.
However, the results are broadly consistent despite variation in gas fraction, and the effects are only seen in the distribution of SIGOs.
As in Figs.~\ref{fig:eccmass} and~\ref{fig:spins}, distributions are separated into the object classes listed in Tab.~\ref{Table:objects} and calculated using a Gaussian kernel density.
The orange distributions include the gas component of DM GHOSts, the grey distributions show the gas component of classical halos without $v_{bc}$, and the blue distributions show SIGOs. }
\label{fig:gasfracsparams}
\end{figure}
For completeness, Figs.~\ref{fig:gasfracsparams} and~\ref{fig:gasfracsangles} show the probability density distributions for GP objects from this work (as in Figs.~\ref{fig:eccentricity},~\ref{fig:spins},~\ref{fig:spinsradii}, and~\ref{fig:misalignment}) with varying gas fraction from the previous value of $0.4$ to the value of $0.6$ adopted in this work.
The results are generally consistent despite changing the gas fraction cutoff.
\begin{figure*}
\center
\includegraphics[width=0.75\textwidth]{figures/gasfraccomparisons.pdf}
\caption{Probability density distributions as shown in Fig.~\ref{fig:spinsradii} (Left: $\cos{\phi_g}$, the angle between $R_{\rm min}$ and the spin vector of the gas component of GP objects (see Fig.~\ref{fig:elliosoid}); Center: $|\cos{(\theta_g)}|$, the angle between $R_{\rm max}$ and the spin vector) and Fig.~\ref{fig:misalignment} (Right: $|\cos{\theta_{\rm g,dm}}|$, the misalignment between the spins of the gas and dark matter components of individual objects), with the definition of SIGOs calculated using gas fractions of $0.4$ (top row), $0.5$ (center row) and $0.6$ (bottom row).
As seen in Fig.~\ref{fig:gasfraccmasses}, more SIGOs are found in the molecular cooling runs without stream velocity for $f_g=0.4$ and $f_g =0.5$, as was also shown in \citet{Nakazato+22}, which motivates us to choose a gas fraction of $0.6$ in this work.
However, the results are broadly consistent despite variation in gas fraction, and the effects are only seen in the distribution of SIGOs.
As before, the darker lines denote runs with H$_2$ cooling, while the lighter dashed lines denote no cooling.
Distributions are separated into the object classes listed in Tab.~\ref{Table:objects} and calculated using a Gaussian kernel density.
The orange distributions include the gas component of DM GHOSts, the grey distributions show the gas component of classical halos without $v_{bc}$, and the blue distributions show SIGOs. }
\label{fig:gasfracsangles}
\end{figure*}
\section{Derivation of Prolate Ellipsoidal Potential}
\label{ap:potential}
\begin{table}[]
\centering
\begin{tabular}{|l|c|ccc|c|ccc|}
\hline
\textbf{Run} & \textbf{0v} & \multicolumn{3}{c|}{\textbf{2v}} & \textbf{0vH2} & \multicolumn{3}{c|}{\textbf{2vH2}} \\ \hline
\textbf{Objects} & \multicolumn{1}{l|}{\textbf{All}} & \multicolumn{1}{l|}{\textbf{SIGOs}} & \multicolumn{1}{l|}{\textbf{DM GHOSts}} & \multicolumn{1}{l|}{\textbf{All}} & \multicolumn{1}{l|}{\textbf{All}} & \multicolumn{1}{l|}{\textbf{SIGOs}} & \multicolumn{1}{l|}{\textbf{DM GHOSts}} & \multicolumn{1}{l|}{\textbf{All}} \\ \hline
Gas Eccentricity & 7.75E-01 & \multicolumn{1}{c|}{9.82E-01} & \multicolumn{1}{c|}{8.88E-01} & 8.92E-01 & 8.06E-01 & \multicolumn{1}{c|}{9.77E-01} & \multicolumn{1}{c|}{9.11E-01} & 9.15E-01 \\
DM Eccentricity & 8.22E-01 & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{8.19E-01} & X & 7.69E-01 & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{8.12E-01} & X \\
Gas Spin & 9.12E-02 & \multicolumn{1}{c|}{1.21E-01} & \multicolumn{1}{c|}{1.53E-01} & 1.52E-01 & 6.44E-02 & \multicolumn{1}{c|}{1.80E-01} & \multicolumn{1}{c|}{1.26E-01} & 1.31E-01 \\
DM spin & 3.25E-01 & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{2.33E+00} & X & 4.25E-01 & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{1.29E+00} & X \\
Total mass (M$_\odot$) & 9.91E+05 & \multicolumn{1}{c|}{3.61E+04} & \multicolumn{1}{c|}{1.75E+06} & 1.69E+06 & 5.39E+05 & \multicolumn{1}{c|}{1.98E+05} & \multicolumn{1}{c|}{1.35E+06} & 1.29E+06 \\
Gas Fraction & 1.20E-01 & \multicolumn{1}{c|}{8.26E-01} & \multicolumn{1}{c|}{1.47E-01} & 1.72E-01 & 2.70E-01 & \multicolumn{1}{c|}{7.72E-01} & \multicolumn{1}{c|}{2.30E-01} & 2.60E-01 \\ \hline
\end{tabular}
\caption{Mean value of selected parameters presented in this work for the four runs. For the $2$v and $2$vH$2$ runs, means are given also for the populations of SIGOs and DM GHOSts separately. }
\label{tab:means}
\end{table}
In this Appendix, we present a derivation of the gravitational potential and total mass of prolate spheroids.
\citet{BT08} give the general formulae for potentials of various ellipsoidal bodies in their Table 2.1.
The following equations apply to any inhomogeneous ellipsoid with axes $a_1$, $a_2$ and $a_3$.
The potential is
\begin{equation}
\Phi (\mathbf{x}) = -\pi G \frac{a_2 a_3}{a_1}
\int_0^\infty \frac{\text{d} \tau }{\Delta} \{\varphi(\infty)-\varphi[m(\tau,\mathbf{x})]\},
\label{eq:potential}
\end{equation}
where
\begin{equation}
\Delta^2(\tau) \equiv \prod_{i=1}^3 (a_i^2+\tau),
\label{eq:deltadef}
\end{equation}
\begin{equation}
m^2(\tau,\mathbf{x})\equiv a_1^2\sum_{i=1}^3\frac{x_i^2}{a_i^2+\tau}
\label{eq:mdef}
\end{equation}
and
\begin{equation}
\varphi(m) \equiv \int_0^{m^2}\rho(m^2)\,\text{d}m^2.
\label{eq:varphidef}
\end{equation}
For the prolate spheroidal case, we have
\begin{equation}
a_1=a_2=R_{\rm min}
\end{equation}
and
\begin{equation}
a_3 = R_{\rm max}.
\end{equation}
Thus, in cylindrical coordinates $(R,z)$,
\begin{equation}
m^2(\tau,\mathbf{x}) = R_{\rm min}^2\left[\frac{R^2}{R_{\rm min}^2+\tau}+\frac{z^2}{R_{\rm max}^2+\tau}\right],
\label{eq:mproldef}
\end{equation}
and from Eq.~(\ref{eq:deltadef}),
\begin{equation}
\Delta^2 (\tau) = (R_{\rm min}^2+\tau)^2(R_{\rm max}^2+\tau).
\label{eq:deltaproldef}
\end{equation}
Eq.~(\ref{eq:varphidef}) requires a density distribution, and here we will use the following prolate spheroidal density distribution:
\begin{equation}
\rho(m^2) = \rho_0 \left(1+\left(\frac{m}{a_0}\right)^2\right)^{-\frac{3}{2}},
\label{eq:appdensity}
\end{equation}
where $\rho_0$ and $a_0$ are constants.
Plugging the density from Eq.~(\ref{eq:appdensity}) into Eq.~(\ref{eq:varphidef}) gives:
\begin{equation}
\varphi (m) = \int_{0}^{m^2}\rho_0 \left[1+\left(\frac{m}{a_0}\right)^2\right]^{-3/2}\text{d}m^2
\end{equation}
\begin{equation}
=-2a_0^2\rho_0\left[1+\left(\frac{m(0,\mathbf{x})}{a_0}\right)^2\right]^{-1/2}+2a_0^2\rho_0.
\label{eq:varphim}
\end{equation}
Additionally,
\begin{equation}
\varphi(\infty) = 2a_0^2\rho_0
\label{eq:varphiinfty}
\end{equation}
From Eqs.~(\ref{eq:potential}), (\ref{eq:mproldef}), (\ref{eq:deltaproldef}), (\ref{eq:varphim}), and (\ref{eq:varphiinfty}), and approximating $m(\tau,\mathbf{x})\approx m(0,\mathbf{x})$ so that the $\tau$ dependence factors out of the bracket, the prolate potential is
\begin{equation}
\Phi (\textbf{x}) = -2\pi GR_{\rm max}a_0^2\rho_0\int_0^{\infty}\frac{\text{d}\tau}{(R_{\rm min}^2+\tau)\sqrt{R_{\rm max}^2+\tau}}\left[1+\left(\frac{m(0,\mathbf{x})}{a_0}\right)^2\right]^{-1/2}.
\end{equation}
Substituting Eq.~(\ref{eq:mproldef}) at $\tau=0$, this becomes
\begin{equation}
\Phi (\textbf{x}) = -2\pi GR_{\rm max}a_0^2\rho_0 \int_0^{\infty}\frac{\text{d}\tau}{(R_{\rm min}^2+\tau)\sqrt{R_{\rm max}^2+\tau}}
\left[1+\left(\frac{R_{\rm min}^2}{a_0^2}\left[\frac{R^2}{R_{\rm min}^2}+\frac{z^2}{R_{\rm max}^2}\right]\right)\right]^{-1/2}
\end{equation}
Evaluating the $\tau$ integral, which gives $2\tanh^{-1}(e)/(eR_{\rm max})$ with $eR_{\rm max}=\sqrt{R_{\rm max}^2-R_{\rm min}^2}$, we get:
\begin{equation}
\Phi (\textbf{x}) =
-4\pi Ga_0^2\rho_0
\left(\frac{\tanh^{-1}(e)}{e}\right)
\frac{1}{\sqrt{1+\left(\frac{R_{\rm min}^2}{a_0^2}\left[\frac{R^2}{R_{\rm min}^2}+\frac{z^2}{R_{\rm max}^2}\right]\right)}}
\label{eq:finalpotential}
\end{equation}
Taking $a_0=R_{\rm max}$:
\begin{equation}
\Phi (\textbf{x}) =
-\frac{4\pi GR_{\rm max}^2 \rho_0\tanh^{-1}(e)}{e\sqrt{1+(1-e^2)\left(\frac{R^2}{R_{\rm min}^2}+\frac{z^2}{R_{\rm max}^2}\right)}}.
\label{eq:finalpotentiala0}
\end{equation}
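The $\tau$ integral used above can be verified numerically; a minimal sketch (using scipy) comparing the quadrature to the closed form $2\tanh^{-1}(e)/(eR_{\rm max})$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

r_max, e = 2.0, 0.8
r_min = r_max * np.sqrt(1.0 - e ** 2)

integrand = lambda tau: 1.0 / ((r_min ** 2 + tau)
                               * np.sqrt(r_max ** 2 + tau))
numeric, _ = quad(integrand, 0.0, np.inf)
closed = 2.0 * np.arctanh(e) / (e * r_max)
assert np.isclose(numeric, closed)  # agree to quadrature tolerance
\end{verbatim}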
Now, we find the dependence of the total mass on eccentricity, starting with a similar argument to that presented in \citet{BT08} for the potential of oblate spheroids.
In cylindrical coordinates a prolate spheroidal shell with axes $\beta R_{\rm max}$ and $\beta R_{\rm min}$ is given by:
\begin{equation}
\frac{R^2}{R_{\rm min}^2}+\frac{z^2}{R_{\rm max}^2}=\beta^2.
\end{equation}
Here, the $z$-axis is aligned with the polar radius ($R_{\rm max}$) and the $R$ coordinate points in the direction of the equatorial radius ($R_{\rm min}$).
The volume enclosed inside this shell is given by
\begin{equation}
V = \frac{4}{3}\pi R_{\rm max}R_{\rm min}^2 \beta^3
\end{equation}
\begin{equation}
= \frac{4}{3}\pi R_{\rm max}^3 \beta^3 (1-e^2)
\end{equation}
Thus, assuming a constant density within the thin shell, the mass enclosed between two shells $\beta$ and $\beta+\delta\beta$ is:
\begin{equation}
\delta M = 4\pi \rho R_{\rm max}^3 (1-e^2)\beta^2 \delta \beta
\label{eq:massdifferential}
\end{equation}
The full mass of the ellipsoid is found by integrating over a set of similar spheroids from the center to the outer edge of the object.
Using the notation of \citet{BT08}, this set is given by all the spheroids for which: \begin{equation}
\text{constant} = m^2 \equiv \frac{R^2}{1-e^2}+z^2.
\end{equation}
This constant is $m=\beta R_{\rm max}$.
Thus, for some density function $\rho(m^2)$, according to Eq.~(\ref{eq:massdifferential}),
\begin{equation}
\delta M = 4\pi \rho(m^2)(1-e^2)m^2 \delta m.
\end{equation}
Integrating this equation over the ellipsoid gives the total mass:
\begin{equation}
M = 4 \pi (1-e^2) \int_0^{R_{\rm max}} \rho(m^2)m^2 dm.
\label{eq:generalintegral}
\end{equation}
Once again, we assume the density distribution of Eq.~(\ref{eq:appdensity}); plugging into Eq.~(\ref{eq:generalintegral}), we solve
\begin{equation}
M = 4 \pi \rho_0 (1-e^2) \int_0^{R_{\rm max}} \left(1+\left(\frac{m}{a_0}\right)^2\right)^{-\frac{3}{2}} m^2 dm
\end{equation}
to obtain
\begin{equation}
M= 4 \pi \rho_0 (1-e^2) a_0^3
\left[\sinh^{-1}{\left(\frac{R_{\rm max}}{a_0}\right)}-\frac{R_{\rm max}}{\sqrt{a_0^2+R_{\rm max}^2}}\right].
\end{equation}
Taking $a_0 = R_{\rm max}$ as above gives:
\begin{equation}
M=4\pi \rho_0(1-e^2) R_{\rm max}^3 \left(\sinh^{-1}{(1)}-\frac{1}{\sqrt{2}}\right)
\label{eq:masseccap}
\end{equation}
\begin{equation*}
\approx 2.19 \rho_0 (1-e^2)R_{\rm max}^3.
\end{equation*}
\section{Morphological Investigation}
\label{ap:morphology}
In this Appendix, we include several supporting Figures relating to our morphological and rotational investigation above.
Table~\ref{tab:means} lists the means of selected distributions from this work.
\begin{figure}
\center
\includegraphics[width=0.8\textwidth]{figures/ratios.pdf}
\caption{Probability density distribution of the axis ratios of GP objects.
Following the same convention as in Fig.~\ref{fig:eccentricity}, the various distributions demonstrate runs with and without the stream velocity and cooling.
The orange distributions show DM GHOSts (objects with a gas fraction $f$ of less than 0.6) in the runs with a stream velocity ($v_{bc}=2\sigma_{v_{bc}}$), the grey distributions show the classical equivalent of $f<0.6$ objects in the $v_{bc}=0\sigma_{v_{bc}}$ run, and the blue dashed distributions show SIGOs, which are only found in the $v_{bc}=2\sigma_{v_{bc}}$ run.
The darker lines denote no cooling, while the lighter dashed lines show the inclusion of molecular cooling. }
\label{fig:ratios}
\end{figure}
\begin{figure}
\center
\includegraphics[width=0.5\textwidth]{figures/rmaxr200.png}
\caption{Scatter plot of $R_{\rm max}$ versus $R_{200}$ of GP objects.
The top two panels show the molecular cooling runs, and the bottom two panels show no cooling.
The left column has no stream velocity and the right column is from the $v_{bc}=2\sigma_{v_{bc}}$ runs.
Stars represent SIGOs, as defined in Sec.~\ref{sec:objects}.
The color bar is the gas fraction (Eq.~(\ref{eq.gasfraction}))
The orange line shows $R_{\rm max}=R_{200}$. Colored points are GP objects, and grey points are DM/G objects. }
\label{fig:rmaxr200}
\end{figure}
In Fig~\ref{fig:ratios}, we plot the probability density distributions of the three axis ratios plotted on the axes of Fig.~\ref{fig:eccentricity}.
The classical halos' tendency towards sphericity
($R_{\rm min}/R_{\rm mid}\sim R_{\rm min}/R_{\rm max}\sim R_{\rm mid}/R_{\rm max}\sim 1$) is clearly seen here, as well as the distinct deviation of SIGOs and DM GHOSts away from sphericity.
For SIGOs especially, the distributions show evidence of triaxiality ($R_{\rm min}/R_{\rm mid}\neq R_{\rm min}/R_{\rm max}\neq R_{\rm mid}/R_{\rm max}$).
Figure~\ref{fig:ratios} also shows the axes ratios for DM primary objects (bottom row).
Here, we see that the DM components of DM GHOSts are not only more eccentric, having a tail of small axes ratios as in Fig.~\ref{fig:eccentricity}, but also show prolate shapes when $R_{\rm mid}$ is taken into account.
In the center bottom panel, the ratio $R_{\rm min}/R_{\rm mid}$ for the DM component is close to one, whereas the ratio $R_{\rm mid}/R_{\rm max}$ has a tail of small values.
This is an indication of prolateness ($R_{\rm min}\sim R_{\rm mid}<R_{\rm max}$).
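The shape classification implied by these ratios can be made explicit with a simple rule; the thresholds below are illustrative, not the values used in our pipeline:
\begin{verbatim}
def classify_shape(r_min, r_mid, r_max, tol=0.2):
    """Rough shape label from the sorted semi-axes of an ellipsoid."""
    near = lambda a, b: a / b > 1.0 - tol  # axes comparable within tol
    if near(r_min, r_mid) and near(r_mid, r_max):
        return "spherical"   # all three axes comparable
    if near(r_mid, r_max):
        return "oblate"      # R_mid ~ R_max > R_min
    if near(r_min, r_mid):
        return "prolate"     # R_min ~ R_mid < R_max
    return "triaxial"
\end{verbatim}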
\begin{figure}
\center
\includegraphics[width=0.5\textwidth]{figures/sigos_dens_rotation.pdf}
\caption{\textbf{Top panel}: Density of gas in SIGOs as a function of radius, normalized to the $R_{\rm max}$ of the ellipsoid, for the molecular cooling run with $v_{bc}=2\sigma_{v_{bc}}$.
The density is calculated in 50 ellipsoidal shells moving out from the center of the object. The objects are colored by the mass of their gas component.
An NFW profile for a $10^5$ M$_\odot$ halo is shown for comparison as the dashed line.
Eq.~(\ref{eq:density}) is plotted for a SIGO with average density in solid blue.
\textbf{Bottom panel:} Rotation curves of SIGOs as a function of radius, calculated in ellipsoidal shells going outward, for the molecular cooling run with $v_{bc}=2\sigma_{v_{bc}}$.
The average velocity of the gas in an ellipsoidal shell at each radius is normalized by $v_{circ}$, the circular velocity at $R_{\rm max}$ of the ellipsoid.
The objects are colored by the mass of their gas component.
The average SIGO has $\bar{\rho}_{\rm Rmax}=4.14\times 10^{7}$ M$_\odot$ kpc$^{-3}$ and $\bar{R}_{\rm max}=0.240$ kpc.}
\label{fig:densitysigos}
\end{figure}
In Fig.~\ref{fig:rmaxr200}, we plot the maximum gas ellipsoid radius against the $R_{200}$ of its parent halo.
The DM halos, in general, are much larger than the gas--this is expected.
The stream velocity (as mentioned in \S~\ref{sec:nummorphology}) drives more extreme eccentricity, leading to large $R_{\rm max}$.
The DM maximum ellipsoid radius is also shown as dark points in Fig.~\ref{fig:rmaxr200} as a function of the $R_{200}$ found from a spherical overdensity calculation.
The orange line corresponds to $R_{\rm max,DM}=R_{200}$.
In Fig.~\ref{fig:eccmass}, it was shown that low mass objects have higher eccentricity.
This is reflected in the fact that the DM distribution deviates from the $1:1$ line at low masses, whereas most DM objects fall on the line at higher masses.
Finally, in Fig.~\ref{fig:densitysigos}, we show the radial gas density and rotation curve of all the SIGOs from the 2vH2 run, with an NFW profile and Eq.~(\ref{eq:density}) overplotted.
As with the low mass DM GHOSts, the NFW is not a good fit.
SIGOs seem to have a core, rather than a cusp.
\end{appendix}
\newpage
\end{document}
\section{Introduction}
\setcounter{equation}{0}
Among other physical phenomena,
the integrable dilute O($n$) model on the square lattice
\cite{N:90a} is relevant to self-avoiding polymer chains in the
bulk \cite{N:90b,N:90c}.
The partition function of the dilute O($n$) model is defined
by \cite{N:90a,WN:93}
\begin{equation}
Z = \sum_{{\mathcal G}} \rho_1^{m_1} \cdots \rho_9^{m_9} \; n^{P},
\label{Z}
\eeq
where the sum is over all configurations
${\mathcal G}$ of non-intersecting closed
loops which cover some (or none) of the lattice bonds.
The possible loop configurations at a vertex are
shown in Fig. 1, with a vertex of type
$i$ carrying a Boltzmann weight $\rho_i$.
In configuration ${\mathcal G}$,
$m_i$ is the number of occurrences of the
vertex of type $i$, while $P$ is the
total number of closed loops of fugacity $n$.
The loop weights in (\ref{Z}) are \cite{N:90a,WN:93}
\begin{eqnarray}
\rho_1 (u)&=&1 + {\sin u \sin (3\lambda-u)
\over \sin 2\lambda\sin 3\lambda} \nonumber\\
\rho_2 (u)&=&\rho_3 (u) = {\sin (3\lambda-u)\over \sin 3\lambda}\nonumber \\
\rho_4 (u)&=&\rho_5 (u) = {\sin u \over \sin 3\lambda} \nonumber \\
\rho_6 (u)&=&\rho_7 (u) = {\sin u \sin (3\lambda-u) \over \sin
2\lambda\sin 3\lambda} \\
\rho_8 (u)&=&{\sin (2\lambda-u) \sin (3\lambda-u)\over\sin
2\lambda\sin 3\lambda} \nonumber \\
\rho_9 (u)&=&-{\sin u \sin(\lambda-u) \over \sin 2\lambda\sin 3\lambda}. \nonumber
\ee
Here $n=-2\cos 4\lambda$.
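For orientation, the weights are easily evaluated numerically; the following sketch (names illustrative) computes $\rho_1,\ldots,\rho_9$ for given $u$ and $\lambda$, with the loop fugacity fixed by $n=-2\cos 4\lambda$.
\begin{verbatim}
import numpy as np

def dilute_on_weights(u, lam):
    """The nine vertex weights rho_1..rho_9 of the dilute O(n) model."""
    s = np.sin
    rho1 = 1.0 + s(u) * s(3*lam - u) / (s(2*lam) * s(3*lam))
    rho2 = rho3 = s(3*lam - u) / s(3*lam)
    rho4 = rho5 = s(u) / s(3*lam)
    rho6 = rho7 = s(u) * s(3*lam - u) / (s(2*lam) * s(3*lam))
    rho8 = s(2*lam - u) * s(3*lam - u) / (s(2*lam) * s(3*lam))
    rho9 = -s(u) * s(lam - u) / (s(2*lam) * s(3*lam))
    return np.array([rho1, rho2, rho3, rho4,
                     rho5, rho6, rho7, rho8, rho9])

lam = np.pi / 5
n = -2.0 * np.cos(4.0 * lam)        # loop fugacity
rho = dilute_on_weights(u=0.3, lam=lam)
\end{verbatim}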
These weights were originally constructed via a mapping involving the
Potts model \cite{N:90a} and later seen to satisfy the
Yang-Baxter equation for loop
models \cite{WN:93,NWB:93}.
On the other hand, when mapped to a 3-state vertex model \cite{N:90a},
the dilute O($n$) model is seen to be related to the integrable
19-vertex model of Izergin and Korepin \cite{IK:81}.
The Nienhuis O($n$) model on the honeycomb lattice
\cite{N:82,N:84,Bax:86}
follows from either of the special values
$u=\lambda$ and $u=2\lambda$ of the spectral parameter \cite{N:90a,R:91}.
In the appropriate region the model thus contains the essential physics
of the self-avoiding polymer problem at
$n=0$ \cite{N:90b,N:82,N:84,N:87,D:90a}.
\begin{figure}[t]
\begin{center}
\setlength{\unitlength}{0.01250000in}
\begin{picture}(34,34)(88,673)
\multiput( 90,690)(3,0){11}{.}
\multiput(105,705)(0,-3){11}{.}
\put(103,665){\small$1$}
\end{picture}
\setlength{\unitlength}{0.01250000in}
\begin{picture}(34,34)(88,673)
\put(106.5,706){\thicklines\line(0,-1){13}}
\put(90,690.5){\thicklines\line(1,0){12.5}}
\put(102.5,694.6){\thicklines\oval(8,8)[br]}
\multiput(119.5,690)(-3,0){6}{.}
\multiput(105,675)(0,3){6}{.}
\put(103,665){\small$2$}\end{picture}
\setlength{\unitlength}{0.01250000in}
\begin{picture}(34,34)(88,673)
\put(110.5,686.7){\thicklines\oval(8,8)[tl]}
\put(122.5,690.8){\thicklines\line(-1,0){12}}
\put(106.5,675){\thicklines\line(0,1){12.5}}
\multiput(90,690)(3,0){6}{.}
\multiput(104.6,705)(0,-3){6}{.}
\put(103,665){\small$3$}\end{picture}
\setlength{\unitlength}{0.01250000in}
\begin{picture}(34,34)(88,673)
\put(106.5,675){\thicklines\line(0,1){13}}
\put(90,690.5){\thicklines\line(1,0){12.5}}
\put(102.5,686.5){\thicklines\oval(8,8)[tr]}
\multiput(119.5,690)(-3,0){6}{.}
\multiput(104.6,705)(0,-3){6}{.}
\put(103,665){\small$4$}\end{picture}
\setlength{\unitlength}{0.01250000in}
\begin{picture}(34,34)(88,673)
\put(110.5,694.7){\thicklines\oval(8,8)[bl]}
\put(122.5,690.8){\thicklines\line(-1,0){12}}
\put(106.5,706){\thicklines\line(0,-1){12.5}}
\multiput(90,690)(3,0){6}{.}
\multiput(105,675)(0,3){6}{.}
\put(103,665){\small$5$}\end{picture}
\setlength{\unitlength}{0.01250000in}
\begin{picture}(34,34)(88,673)
\put( 90,690.5){\thicklines\line(1,0){33}}
\multiput(105,705)(0,-3){11}{.}
\put(103,665){\small$6$}
\end{picture}
\setlength{\unitlength}{0.01250000in}
\begin{picture}(34,34)(88,673)
\put(106.6,705){\thicklines\line(0,-1){30}}
\multiput( 90,690)(3,0){11}{.}
\put(103,665){\small$7$}
\end{picture}
\setlength{\unitlength}{0.01250000in}
\begin{picture}(34,34)(88,673)
\put(106.5,706){\thicklines\line(0,-1){13}}
\put(90,690.5){\thicklines\line(1,0){12.5}}
\put(102.5,694.6){\thicklines\oval(8,8)[br]}
\put(110.5,686.7){\thicklines\oval(8,8)[tl]}
\put(122.5,690.8){\thicklines\line(-1,0){12}}
\put(106.5,675){\thicklines\line(0,1){12.5}}
\put(103,665){\small$8$}
\end{picture}
\setlength{\unitlength}{0.01250000in}
\begin{picture}(34,34)(88,673)
\put(106.5,675){\thicklines\line(0,1){13}}
\put(90,690.5){\thicklines\line(1,0){12.5}}
\put(102.5,686.5){\thicklines\oval(8,8)[tr]}
\put(110.5,694.7){\thicklines\oval(8,8)[bl]}
\put(122.5,690.8){\thicklines\line(-1,0){12}}
\put(106.5,706){\thicklines\line(0,-1){12.5}}
\put(103,665){\small$9$}
\end{picture}
\vskip 5mm
\caption{The 9 vertices of the dilute O($n$) model.}
\vskip 5mm
\end{center}
\end{figure}
The dilute O($n$) model has recently been used to
construct a family of dilute $A$--$D$--$E$\space lattice models
\cite{WNS:92,Roche:92,WNS:93}. These models
are restricted solid--on--solid
models with a finite number of heights
built on the $A$--$D$--$E$\space Dynkin diagram. At criticality,
the face weights are \cite{WNS:92,Roche:92,WNS:93}
\begin{eqnarray}
& & \wt Wabcdu
= \;\rho_1(u)
\delta_{a,b,c,d} +\rho_2(u) \delta_{a,b,c} A_{a,d}+\rho_3(u)
\delta_{a,c,d} A_{a,b} \nonumber\\*
& &\quad \mbox{} +\sqrt{S_a \over S_b}\rho_4 (u) \delta_{b,c,d} A_{a,b}
+\sqrt{S_c \over S_a}\rho_5(u) \delta_{a,b,d} A_{a,c}
+\rho_6(u) \delta_{a,b} \delta_{c,d} A_{a,c} \label{adeface-d}\\
& &\quad \mbox{} +\rho_7(u) \delta_{a,d} \delta_{c,b} A_{a,b} +\rho_8(u)
\delta_{a,c} A_{a,b} A_{a,d} +
\sqrt{{S_a S_c \over S_b S_d}} \rho_9(u) \delta_{b,d} A_{a,b}
A_{b,c} \nonumber
\ee
where the $\rho_i$ are as given above.
The generalized Kronecker
delta is unity if all its arguments take the same
value and is zero otherwise.
The Perron--Frobenius weights $S_{a}$ in the face weights are the components of
the eigenvector corresponding to the largest eigenvalue of the
adjacency matrix $A$ of the $A$--$D$--$E$\space graphs,
\begin{equation}
\sum_{b}A_{a,b}S_b=2\cos{\pi\over L+1} S_a\;,
\eeq
where for the dilute $A_L$ models, $L$
is the number of graph states, with $a,b,c,d=1,2,\cdots,L$.
The dilute O($n$) model exhibits various branches of critical
behaviour \cite{BN:89,BNW:89,WBN:92,Warnaar}.
These are reflected in the properties
of the dilute $A$--$D$--$E$\space models, for which there are four physical
branches \cite{WNS:92}
\begin{eqnarray}
&\mbox{branch {\it 1}}
\hs{1.8} 0<u< 3\lambda &\hs{0.3}\lambda={\pi\over 4}{L\over L+1} \hs{0.7}
L=2,3,\cdots\nonumber\\
&\mbox{branch {\it 2}}
\hs{1.8} 0<u< 3\lambda &\hs{0.3}\lambda={\pi\over 4}{L+2\over L+1}\hs{0.7}
L=3,4,\cdots\nonumber\\
&\mbox{branch {\it 3}}
\hspace*{0.5cm} -\pi+3\lambda<u<0 &\hs{0.3}\lambda={\pi\over 4}{L+2\over L+1}\hs{0.7}
L=3,4,\cdots \label{branches}\\
&\mbox{branch {\it 4}}
\hspace*{0.5cm} -\pi+3\lambda<u<0 &\hs{0.3}\lambda={\pi\over 4}{L\over L+1}\hs{0.7}
L=2,3,\cdots\nonumber
\ee
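Since $n=-2\cos 4\lambda$, the $\lambda$ values in (\ref{branches}) all correspond to
\begin{equation}
n=2\cos{\pi\over L+1},
\eeq
so that, for example, the dilute $A_2$ model carries $n=1$ and the
dilute $A_3$ model carries $n=\sqrt{2}$.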
Recent studies have highlighted the prominence of the dilute $A_L$
face models, which admit an off-critical extension \cite{WNS:92,WNS:93}.
In regime 2
the $A_3$ model lies in the same universality class as the Ising model
in a magnetic field and gives the magnetic exponent
$\delta=15$ \cite{WNS:92,WNS:93,WPSN:94}. This $A_3$ model also exhibits
the $E_8$ scattering theory for the massive excitations above the
groundstate \cite{BNW:94,WP:94,GN:95}. Both $su(2)$ and $su(3)$ fusion
hierarchies of the dilute $A_L$ face models have
been constructed in \cite{Zf:96,ZPG:95}.
In this paper we both generalise and extend earlier calculations
of the critical properties, such as the central charges and bulk
scaling dimensions (the conformal spectra), of the
dilute O($n$) model and the related dilute $A_L$ and Izergin-Korepin models.
After outlining the necessary
preliminaries in Section 2, our calculations are presented in
Section 3 for branches 1 and 2 and in Section 4 for branches 3 and 4.
The method employed involves the extension of the nonlinear integral
equation approach \cite{KB:90,KBP:91,WBN:92}
to obtain the complete conformal spectra, as has been done for
the six-vertex model \cite{KWZ:93,F:95}
and most recently \cite{Zhou:95} for the
Andrews-Baxter-Forrester (ABF) model \cite{ABF:84}.
Having read Section 2, those readers not specifically interested in the
technical details may prefer to skip to Section 5 where a
discussion of our results for the various models concludes the paper.
\section{Bethe equations and known results}
\setcounter{equation}{0}
As we are interested in bulk critical behaviour, we
consider periodic boundary conditions across a finite lattice of width $N$,
where for convenience we take $N$ even.
The eigenvalues $T(u)$ for the row-transfer matrix $\mbox{\boldmath {$T$}}(u)$ of the
dilute O($n$) model are given by \cite{BNW:89,WBN:92,Warnaar}
\begin{eqnarray}
T(u) & = & e^{-\mbox{\small i} \phi}\,\frac{s(2\lambda-u)s(3\lambda-u)}{s(2\lambda)s(3\lambda)}\,
\frac{Q(u+\lambda)}{Q(u-\lambda)} \nonumber \\
&& \qquad + \frac{s(u)s(3\lambda -u)}{s(2\lambda)s(3\lambda)}\,
\frac{Q(u)Q(u-3\lambda)}{Q(u-\lambda)Q(u-2\lambda)} \nonumber\\*
&& \qquad\qquad + e^{\mbox{\small i} \phi}\,
\frac{s(u)s(\lambda-u)}{s(2\lambda)s(3\lambda)}\,
\frac{Q(u-4\lambda)}{Q(u-2\lambda)}, \label{BAE}
\ee
where
\begin{equation}
s(u)=\sin^{N}(u), \qquad
Q(u)=\prod_{j=1}^{m} \cosh(\mbox{\small i} u-u_j)
\eeq
and the $m$ zeros $\{u_j\}$ satisfy the Bethe equations
\begin{equation}
e^{\mbox{\small i} \phi} \left[\frac{\cosh(u_j+\mbox{\small i}\lambda)}{\cosh(u_j-\mbox{\small i}\lambda)}\right]^N =
- \prod_{k=1}^{m} \frac{\sinh(u_j-u_k+2\mbox{\small i}\lambda)\sinh(u_j-u_k-\mbox{\small i}\lambda)}
{\sinh(u_j-u_k-2\mbox{\small i}\lambda)\sinh(u_j-u_k+\mbox{\small i}\lambda)}
\label{bethe}
\eeq
for $j=1,\ldots,m$. It is convenient to label the sectors of $\mbox{\boldmath {$T$}}(u)$ by
$\ell = N - m$, where $\ell=0$ for the largest (groundstate) sector,
$\ell=1$ for the next largest, etc.
The Bethe equations ensure that the eigenvalues $T(u)$ are analytic
functions of $u$.
Apart from the phase factors $\phi$ these equations are
the Bethe equations of the Izergin-Korepin
model \cite{R:83,VR:83,T:88}.
In general $\phi$ is a continuous variable associated
with a ``seam'' to ensure that loops which wrap round the
cylinder carry the correct weight $n$. Thus
\begin{equation}
\phi = \pi - 4\lambda \label{on}
\eeq
for the dilute O($n$) model in the largest ($\ell=0$) sector of $\mbox{\boldmath {$T$}}(u)$
with $\phi=0$ in all other sectors. For the Izergin-Korepin model,
$\phi=0$ in all sectors.
On the other hand, for the dilute $A_L$ face models there is a fixed number
of Bethe roots ($\ell=0$) and
\begin{equation}
\phi = \pi s/(L+1) \label{ph}
\eeq
with $s=1,\ldots,L$.
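Note that for the branch 1 and branch 2 values of $\lambda$ in (\ref{branches}),
the O($n$) seam (\ref{on}) reduces to $\phi=\pm\pi/(L+1)$, which coincides,
up to a sign that is irrelevant in what follows, with the face-model seam (\ref{ph}) at $s=1$.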
In this case the transfer matrix $\mbox{\boldmath {$T$}}(u)$ has elements
\begin{equation}
\langle \sigma | \mbox{\boldmath {$T$}}(u) | \sigma' \rangle =
\prod_{j=1}^{N} \wt W{\sigma_j}{\sigma_{j+1}}{\sigma_{j+1}'}{\sigma_j'}u,
\eeq
where the paths $\sigma=\{\sigma_1,\sigma_2,\ldots,\sigma_N\}$ and
$\sigma'=\{\sigma_1',\sigma_2',\ldots,\sigma_N'\}$
are allowed configurations of heights along a row with periodic
boundary conditions $\sigma_{N+1}=\sigma_1$ and $\sigma_{N+1}'=\sigma_1'$.
The face weights are those defined in (\ref{adeface-d}).
The expression for the eigenvalues $T(u)$ was given
in \cite{BNW:94} in terms of more general elliptic functions.
As we are also interested in the dilute O($n$) model, we
do not restrict the crossing parameter $\lambda$ to the values
given in (\ref{branches}). This may lead to unphysical regimes in the
dilute $A_L$ face model for which, however, the finite-size
corrections to the transfer matrix eigenspectra are still of interest
from the viewpoint of
statistical mechanics and conformal field theory.
\subsection{Central charge}
Some exact results are known for the dilute O($n$)
model \cite{BNW:89,WBN:92,Warnaar}. In particular, the
central charge is found to be
\begin{eqnarray}
c &=& 1 - \frac{3 \phi^2}{\pi (\pi - 2 \lambda)}
\qquad \mbox{branches 1 \& 2}, \label{c1&2}\\
c &=& \frac{3}{2} - \frac{3 \phi^2}{2 \pi \lambda}
\qquad\qquad\,\; \mbox{branches 3 \& 4}.
\label{c3&4}
\ee
These results follow from the finite-size behaviour
of the largest eigenvalue. The result (\ref{c1&2}) had already been obtained
from the Bethe equations in the honeycomb limit \cite{BB:88,Su:88,BB:89}.
However, the result (\ref{c3&4}) \cite{WBN:92} had to await the
development of the more sophisticated nonlinear integral equation approach
\cite{KB:90,KBP:91,WBN:92} (see also \cite{PK:91,KP:91}).
The reason for this is that the distribution of Bethe roots for the largest
eigenvalue differs significantly
in each case. In the limit of infinite size $N$
the Bethe roots are distributed on the lines \cite{BNW:89,WBN:92,Warnaar}
\begin{eqnarray}
&\mbox{branches 1 and 2}& \quad \mbox{${\Im}m$ } (u_j)=\mbox{$\textstyle {1 \over 2}$}\pi, \label{good} \\
&\mbox{branches 3 and 4}& \quad \mbox{${\Im}m$ } (u_j)=\pm\mbox{$\textstyle {1 \over 2}$}\pi \lambda. \label{bad}
\ee
Whereas there are no finite-size deviations from the line (\ref{good}),
the finite-size deviations from (\ref{bad}) are severe
enough to render the more standard root density approach\footnote{See, for
example, \cite{SNW:92} and references therein.} invalid.
Here we extend the analytic, nonlinear integral equation
approach in the dilute O($n$) model
\cite{WBN:92,Warnaar} to the calculation of the
conformal weights in all four branches. Our treatment follows that given
in the recent study of the ABF model \cite{Zhou:95}.
The above results for the central charge have
already been used to obtain the central charges of the dilute $A$--$D$--$E$\space
models \cite{WNS:92,WNS:93,Warnaar}.
In particular, for the dilute $A_L$ face models, either (\ref{ph}) or
the O($n$) value (\ref{on}) with
the appropriate value of $\lambda$ in (\ref{branches}) gives
\begin{eqnarray}
c=\cases{1-\displaystyle{6\over h(h-1)}, &branches 1 \& 2, \cr
\mbox{$\textstyle {3 \over 2}$}-\displaystyle{6\over h(h-1)}, &branches 3 \& 4, }\label{c}
\ee
where
\begin{equation}
h=\cases{L+1, &branches 2 \& 4, \cr
L+2, &branches 1 \& 3. }
\eeq
The first two branches give realisations of the unitary
minimal series, while the other two branches involve a product of the
unitary minimal series and an Ising model.
The O($n$) model had earlier
been identified \cite{DF:84,SS:85} in the conformal classification
scheme \cite{BPZ:84,FQS:84}
with the aid of the Nienhuis Coulomb gas results \cite{N:82,N:84}.
In particular,
$
c = 1 - 6(g-1)^2/g,
$
where $g \in [1,2]$, with $g=h/(h-1)$, in the high temperature phase
(branch 1) and $g \in [0,1]$, with $g=(h-1)/h$, in the low temperature
phase (branch 2). Here $g = 2 (1-2 \lambda/\pi)$.
The Ising value $c=\mbox{$\textstyle {1 \over 2}$}$ thus occurs both for the
dilute $A_2$ model ($n=1$ in the high temperature O($n$)
phase) and the dilute $A_3$ model ($n=\sqrt 2$ in the low temperature O($n$)
phase). The central charges of the dilute $A_L$ face models have recently
been estimated numerically from the
finite-size diagonalisation of the dilute $A_L$ model transfer matrix for
various $L$ on all four branches \cite{OP:95}.
The central charge has also been derived by solving the
transfer matrix functional relations of the dilute $A_L$ model on branches
2 and 4 \cite{ZP:95p}.
The calculation confirms the result (\ref{c}) obtained
via the dilute O($n$) model \cite{WNS:92,WNS:93,Warnaar}.
\subsection{Scaling dimensions}
Various scaling dimensions have been calculated
via the Bethe equations for the
dilute O($n$) model.
Again in the honeycomb limit for branches 1 and 2,
the `magnetic' set of scaling
dimensions is found to be \cite{BB:88,Su:88,BB:89}
\begin{equation}
X_\ell^\sigma = \frac{\ell^2 (\pi-2\lambda)^2 -(\pi-4\lambda)^2}{4\pi(\pi-2\lambda)}
= {\mbox{$\textstyle {1 \over 8}$}} g\, \ell^2 - \frac{(g-1)^2}{2 g}.
\label{mag12}
\eeq
Alternatively, this result is written as
\begin{equation}
X_\ell^\sigma =\cases{2 \Delta_{\ell/2,0}\,, &branch 1 \cr
2 \Delta_{0,\ell/2}\,, &branch 2 }
\eeq
where
\begin{equation}
\Delta_{r,s}^{(h)}={[h r-(h-1)s]^2-1\over 4h(h-1)}
\label{Delta}
\eeq
is the Kac formula. This result had earlier been obtained via
Coulomb gas calculations \cite{Sal:86,D:87}.
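For example, at $h=4$ the Kac formula (\ref{Delta}) gives
$\Delta_{2,1}^{(4)}=\mbox{$\textstyle {1 \over 2}$}$ and $\Delta_{1,2}^{(4)}={1\over 16}$,
the familiar Ising conformal weights associated with the central charge
$c=\mbox{$\textstyle {1 \over 2}$}$ noted above.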
On the other hand, both numerical evidence \cite{BB:89} and
root-density calculations \cite{SNW:92} revealed the set of `thermal'
dimensions to be
\begin{equation}
X_j^\epsilon = \frac{j^2 \pi - j (\pi-4\lambda)}{\pi-2\lambda}
= \frac{2}{g} j (j+1) - 2 j.
\label{therm12}
\eeq
Both the results (\ref{mag12}) and (\ref{therm12})
generalized earlier results via the
Coulomb gas \cite{N:82,N:84,N:87}.\footnote{The leading thermal
dimension had been conjectured earlier for the O($n$) model by
Cardy and Hamber \cite{CH:80}.}
The thermal dimensions follow from
\begin{equation}
X_j^\epsilon =\cases{2 \Delta_{1,2j+1}\,, &branch 1 \cr
2 \Delta_{2j+1,1}\,, &branch 2 }
\eeq
in the Kac formula \cite{BB:89}.
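As a simple check, setting $j=1$ on branch 1 gives
$X_1^\epsilon=4/g-2=2-4/h$, which indeed equals $2\Delta_{1,2j+1}=2\Delta_{1,3}$
since $[h-3(h-1)]^2-1=4(h-1)(h-2)$.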
On the other hand, the situation is not so clear for branches 3 and 4
of the dilute square lattice model. Numerical evidence \cite{WBN:92}
indicates that the magnetic dimensions are given by
\begin{equation}
X_\ell^{\sigma} = \frac{\lambda \ell^2}{2 \pi} - \frac{(\pi-4\lambda)^2}
{8 \pi \lambda}.
\label{mag34}
\eeq
The only known thermal result is $X_1^\epsilon = 1$ \cite{WBN:92}.
There are no Coulomb gas results for these branches.
The conformal weights of the dilute $A_L$ face models have been
estimated numerically from the finite-size diagonalisation
of the transfer matrix (for $L=3$ and $L=4$ at $u=3\lambda/2$ on
all four branches) \cite{OP:95} and from numerical
solutions to the Bethe equations for $L=3$ \cite{GN:95}.
For branches 1 and 2,
the results fulfil the expectation that the scaling dimensions
reflect the conformal weights of the unitary minimal series. For
branches 3 and 4, they reflect a product of the Ising and unitary minimal
series. The related modular invariant partition function has been
discussed at length in \cite{OP:95}.
As mentioned above, the finite-size corrections
to the transfer matrix eigenspectra have been obtained for
the dilute $A_L$ face models in branches 2 and 4 in \cite{ZP:95p}
via the functional relation method \cite{PK:91,KP:91,KlPe:92}.
The analytic calculation confirms the conformal weights obtained via
the calculation of the local height
probabilities for $L$ odd \cite{WPSN:94}. Here we
consider the dilute models in all four branches with more general crossing
parameter $\lambda$ and calculate the conformal
spectra for each branch.
\section{Branches 1 and 2}\setcounter{equation}{0}
We consider branches 1 and 2 defined by
\begin{equation}
0<u<3\lambda \hspace*{0.5cm}\h {\pi/6}\le\lambda<{\pi/ 3}.
\eeq
This regime covers the $\lambda$ values (\ref{branches}) for
the dilute $A_L$ models.
However, the derivation below is also valid for the dilute O($n$)
and Izergin-Korepin models in the
larger interval $0<\lambda<{\pi/3}$.
Let us introduce the new variable $v=\mbox{\small i} u$
with a shift $v_j=u_j-\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\pi$. The Bethe equations
(\ref{bethe}) are then of the form
\begin{equation}
p(v_j)=-1,
\eeq
where
\begin{eqnarray}
p(v)&=&e^{-\mbox{\small i}\phi} {\Phi(v- \mbox{\small i} \lambda)q(v-\mbox{\small i}\lambda)q(v+2\mbox{\small i}\lambda)\over
\Phi(v+\mbox{\small i} \lambda)q(v+\mbox{\small i}\lambda)q(v-2\mbox{\small i}\lambda)}, \\
\Phi(v)&=&\sinh^N v\;, \hspace*{0.5cm}\h q(v)
\;=\;\prod_{j=1}^{m}\sinh(v-v_j)\; . \label{phiq}
\ee
After the shift, the Bethe roots $v_j$ are
distributed along the real axis, with
\begin{equation}
\overline{q}(v)=q(\overline{v}) \;\;{\rm and}\;\; \overline{p}(v)=1/p({v}) \label{qp}
\eeq
where the overbar denotes complex conjugation.
\subsection{Nonlinear integral equation}
Define two functions that
are Analytic and Non-Zero ({\sc anz}) in the strips around the real axis:
\begin{eqnarray}
\alpha(v)&=&e^{\mbox{\small i}\omega}
\;g(v)\;p(v+\mbox{\small i}\xi), \nonumber \\
A(v)&=& 1+\alpha(v)/g(v).
\ee
The phase factor $\omega$ has been introduced to select
different branches of the log function involved in the
subsequent Fourier transforms. We take
\begin{equation}
\omega=\cases{
{\rm sgn}(v)(\ell-r)\pi &dilute O($n$) model, $\ell\ne 0$ \cr
r\pi &dilute O($n$) model, $\ell= 0$ \cr
\pi(r-s) & dilute $A_L$ face model \cr}
\label{cases}
\eeq
where the function ${\rm sgn}(v)=-1$ for ${\rm Re}(v)>0$
and $+1$ otherwise. For the O($n$) model
the integers $r,s$ are restricted (as discussed further in
Section~5). For the moment we leave them as arbitrary integers.
The function $g$ is introduced to compensate
the anticipated bulk behaviour of $p(v+\mbox{\small i}\xi)$ and is given by
\begin{equation}
g(v)=\left({{\rm th}\rho(v+\mbox{\small i}\lambda+\mbox{\small i}\xi)\over{\rm th}\rho(v-\mbox{\small i}\lambda+\mbox{\small i}\xi)}
\right)^{N} ,\hspace*{0.5cm}\h \rho=\pi/6\lambda,
\eeq
where $0<\xi\le\mbox{$\textstyle {1 \over 2}$}\pi$. The $\mbox{\small i}\pi$-periodic function $\alpha$
can be rewritten as
\begin{equation}
\alpha(v)= e^{\mbox{\small i}\omega-\mbox{\small i}\phi} g(v){\Phi(v- \mbox{\small i} \lambda+\mbox{\small i}\xi)
q(v-\mbox{\small i}\lambda+\mbox{\small i}\pi+\mbox{\small i}\xi)q(v+2\mbox{\small i}\lambda+\mbox{\small i}\xi)\over
\Phi(v+\mbox{\small i} \lambda+\mbox{\small i}\xi)q(v+\mbox{\small i}\lambda+\mbox{\small i}\xi)q(v-2\mbox{\small i}\lambda+\mbox{\small i}\pi+\mbox{\small i}\xi)}.
\label{al}\eeq
With this choice the function $\alpha$ encodes the
finite-size corrections. To see this we
consider the Fourier transform pair
\begin{eqnarray}
\alpha(k)&=&{1\over 2\pi}\int_{-\infty}^{\infty}
\(\ln\alpha(v)\)^{\prime\prime}e^{-\mbox{\small i} kv}\;dv \nonumber \\
\(\ln\alpha(v)\)^{\prime\prime}
&=&\int_{-\infty}^{\infty}\alpha(k)e^{\mbox{\small i} kv}\;dk
\label{Fou}
\ee
for $\alpha$ and similarly for $A$. The Fourier transform of $q(v)$ is
defined to be
\begin{eqnarray}
&q(k)={1\over 2\pi}\displaystyle{\int_{-\infty+{\rm i} r}^{\infty+\mbox{\small i} r}}
\(\ln q(v)\)^{\prime\prime}e^{-{\rm i} kv}\;dv \hspace*{0.5cm} &0<r<\pi,\nonumber \\
&\(\ln q(v)\)^{\prime\prime}=\displaystyle{\int_{-\infty}^{\infty}}
q(k)e^{\mbox{\small i} kv}\;dk\hspace*{0.5cm}\h &0<\mbox{${\Im}m$ }(v)<\pi.
\ee
To represent $\alpha(k)$ by $A(k)$ and $\overline{A}(k)$ we also need
another relation, which can be given by applying Cauchy's theorem
to the auxiliary function
\begin{equation}
h(v)={1+p(v)\over p(v)q(v)}, \label{h}
\eeq
which satisfies the non-trivial analyticity property
\begin{equation}
\displaystyle{\int_{-\infty+{\rm i}\xi}^{\infty+{\rm i}\xi}}
\(\ln h(v)\)^{\prime\prime}e^{-{\rm i} kv}\;dv=
\displaystyle{\int_{-\infty-{\rm i}\xi}^{\infty-{\rm i}\xi}}
\(\ln h(v)\)^{\prime\prime}e^{-{\rm i} kv}\;dv\;.
\label{h1}\eeq
From the equations obtained by Fourier transforming (\ref{al})
and by inserting (\ref{h}) into (\ref{h1}), we obtain
\begin{eqnarray}
q(k)&=&{Nke^{(\mbox{$\textstyle {1 \over 2}$}\pi k)}\cosh(\mbox{$\textstyle {1 \over 2}$}\lambda k)\over
2\sinh(\mbox{$\textstyle {1 \over 2}$}\pi k)\cosh(\mbox{$\textstyle {3 \over 2}$}\lambda k)} \nonumber \\
&&+{e^{(\mbox{$\textstyle {1 \over 2}$}\pi k)}\cosh(\mbox{$\textstyle {1 \over 2}$}\lambda k)\over
2\sinh(\mbox{$\textstyle {1 \over 2}$}\pi k-\lambda k)\cosh(\mbox{$\textstyle {3 \over 2}$}\lambda k)}
\(e^{\xi k}A(k)-e^{-\xi k}\overline{A}(k)\), \\
\alpha(k)&=&F(k)\;A(k)-F_\xi(k)\;\overline{A}(k),
\ee
where
\begin{equation}
F_\xi(k)=-e^{-2\xi k}{\sinh(\lambda k)\cosh(\mbox{$\textstyle {1 \over 2}$}\pi k-\mbox{$\textstyle {3 \over 2}$}\lambda k)\over
\cosh(\mbox{$\textstyle {3 \over 2}$}\lambda k)\sinh(\mbox{$\textstyle {1 \over 2}$}\pi k-\lambda k)}
\eeq
and $F(k)=F_0(k)$.
Transforming back and integrating twice we obtain
the nonlinear integral equation
\begin{equation}
\ln\alpha(v)=F*\ln A-F_\xi *\ln \overline{A}+C +C' v,
\eeq
where the convolution is defined by
\begin{equation}
(f*g)(v)=\int_{-\infty}^\infty f(w)g(v-w)\ dw.
\eeq
The constant $C'$ is chosen to be $C'=0$ for all terms to remain
finite. The other constant $C$ is scaling-dependent and is fixed
after taking the scaling limit defined by
\begin{eqnarray}
a_\pm(x)&=& \lim_{N\to\infty}\alpha(\pm v)/g(\pm v), \nonumber \\
A_\pm(x)&=& \lim_{N\to\infty}A(\pm v)\;=\;1+a_\pm(x)\;.
\label{scaling12}
\ee
The nonlinear integral equation then becomes
\begin{equation}
\smat{\ln a_{\pm}\cr\ln \overline{a}_{\pm}}=2\mbox{\small i}\sqrt{3}e^{-x}
\smat{-e^{\mp 2\rho\mbox{\small i}\xi}\cr e^{\pm 2\rho\mbox{\small i}\xi}}
+K*\smat{\ln A_\pm\cr\ln\overline{A}_\pm}+C_{\pm}\;\smat{1\cr -1}
\; ,\label{a12}
\end{equation}
in which the kernel $K$ is given by
\begin{equation}
K=\smat{F_1&-F_2\cr -\overline{F}_2&\overline{F}_1},
\eeq
where
\begin{eqnarray}
F_1(x)&=&{F}_1(-x)={1\over 2\rho}F\(\pm{1\over 2\rho}x\), \nonumber \\
F_2(x)&=&\overline{F}_2(-x)={1\over 2\rho}F\(\pm{1\over 2\rho}x+2\mbox{\small i}\xi\).
\end{eqnarray}
We can see that $K^T(x)=K(-x)$, a key property to be used in
the derivation of the finite-size corrections. Taking $x\to\infty$ we obtain
\begin{equation}
C_\pm=\mbox{\small i}\pi(\omega_\pm-\phi)/(\pi-2\lambda),
\end{equation}
where
\begin{equation}
\omega_\pm=\cases{\omega\mp2\ell\lambda & dilute O($n$) model,\cr
\omega & dilute $A_L$ face model\cr}
\label{casespm}\eeq
The nonlinear integral equation (\ref{a12}) is equivalent to the Bethe
equations for the largest eigenvalue, as given in
\cite{WBN:92,Warnaar}. The key difference is the change in the
integration constants $C_\pm$ for the low-lying excited states.
This is similar to the nonlinear integral equation
approach in the six-vertex \cite{KWZ:93} and ABF
\cite{Zhou:95} models. In each case the constants contain
the necessary information to extract the conformal weights.
\subsection{Conformal spectra}
The eigenvalues of the transfer matrix are given
by (\ref{BAE}). For small positive values of $u$ the first term in the
eigenvalue expression dominates exponentially. For small positive
imaginary $v$ we therefore have
\begin{equation}
T(v)\sim e^{-\mbox{\small i}\phi} \Phi(v-2\mbox{\small i}\lambda)
\Phi(v-3\mbox{\small i}\lambda){q(v+\mbox{\small i}\lambda)\over q(v-\mbox{\small i}\lambda)}\;
\eeq
for the finite-size corrections. Taking Fourier transforms and integrating
twice yields
\begin{eqnarray}
\ln T(v)&=&-N f_\infty(v)+ {2\sqrt{3}\rho\over \pi}\int_{-\infty}^{\infty}
\left({\sinh 4\rho(v-w-\mbox{\small i}\xi)\over
\sinh 6\rho(v-w-\mbox{\small i}\xi)}\ln A(w)\right. \nonumber \\
&&\hspace*{0.5cm}\h \left.- {\sinh 4\rho(v-w+\mbox{\small i}\xi)\over\sinh 6\rho(v-w+\mbox{\small i}\xi)}\ln
\overline{A}(w)\right)dw, \label{t}
\ee
where the free energy is given by
\begin{equation}
f_\infty(v)=2\int_{-\infty}^{\infty} dk\; {\sinh( k\mbox{\small i} v)\sinh(3k\lambda+\mbox{\small i} vk)
\cosh(5k\lambda-k\pi)\cosh(k\lambda)\over k\cosh(3k\lambda)\sinh(k\pi)} .
\eeq
The integration constants have been fixed again by the limit $v\to\infty$.
Taking the thermodynamic limit $N\to\infty$ in (\ref{t})
and using the definitions
(\ref{scaling12}) gives
\begin{eqnarray}
\ln T(v)&=&-N f_\infty(v)+{2\sqrt{3}\mbox{\small i}\rho\over N\pi}e^{2\rho v}
\mbox{${\Im}m$ }\left(e^{-2\rho\mbox{\small i}\xi} \int_{-\infty}^{\infty}
\ln A_+(x)e^{-x}dx\right) \nonumber \\
&&\hspace*{0.5cm}\h -{2\sqrt{3}\mbox{\small i}\rho\over N\pi}e^{-2\rho v}
\mbox{${\Im}m$ }\left(e^{2\rho\mbox{\small i}\xi} \int_{-\infty}^{\infty}\ln
{A}_-(x)e^{-x}dx\right) \label{T1}
\ee
up to order $1/N$.
To proceed further we consider the expression
\begin{equation}
\int_{-\infty}^{\infty}\left[\smat{\ln a_\pm\cr\ln\overline{a}_\pm }^\prime
(\ln A_\pm\;,\ln\overline{A}_\pm)-\smat{\ln a_\pm\cr\ln\overline{a}_\pm }
(\ln A_\pm\;,\ln\overline{A}_\pm)^\prime\right]dx
\label{expr}
\eeq
which can be written exactly as
\begin{equation}
L(z)+L(1/z)={\pi^2\over 3} \label{L+L}
\eeq
in terms of the Rogers dilogarithmic function
\begin{equation}
L(x)=\int^x_0\left({\ln(1+y)\over y}-{\ln y\over 1+y}\right)dy\label{L(x)}.
\eeq
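As a check of the normalization, at $z=1$ one has
$L(1)=\int_0^1\ln(1+y)/y\;dy-\int_0^1\ln y/(1+y)\;dy
={\pi^2\over 12}+{\pi^2\over 12}={\pi^2\over 6}$,
so that (\ref{L+L}) is indeed satisfied at this point.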
On the other hand, making use of (\ref{a12}) in (\ref{expr}) and using
$a_\pm(-\infty)=\overline{a}_\pm(-\infty)=0$
and $a_\pm(\infty)=1/\overline{a}_\pm(\infty)=e^{i(\omega_\pm-\phi)}$,
we arrive at the result
\begin{equation}
\mp 8\sqrt{3}\mbox{${\Im}m$ }\left(e^{\mp 2\rho\mbox{\small i}\xi} \int_{-\infty}^{\infty}
\ln A_\pm(x)e^{-x}\right) +{\pi(\omega_\pm-\phi)^2\over\pi-2\lambda}
.\label{327}
\eeq
Equating (\ref{327}) and (\ref{L+L}) gives the integral in (\ref{T1}). Thus
inserting this integral into the expression (\ref{T1})
we obtain
\begin{equation}
\ln T(v)=-Nf_\infty(v)-{\pi\sin(2\mbox{\small i}\rho v)\over 6N}(c-24\Delta)
\eeq
to leading order in $1/N$. This is our final result, from which
the central charge and conformal weights can be read off
\cite{BCN:86,Affleck:86,cx} as
\begin{eqnarray}
c&=&1-{3 \phi^2\over \pi(\pi-2\lambda)}, \label{c12} \\
\Delta&=&\cases{\displaystyle{(\omega-\phi\mp 2\ell\lambda)^2
-(\pi-4\lambda)^2\over 8\pi(\pi-2\lambda)} & dilute O($n$) model,\cr
\displaystyle{(\omega-\phi)^2
-(\pi-4\lambda)^2\over 8\pi(\pi-2\lambda)}
& dilute $A_L$ face model.\cr}\label{res12}
\ee
\section{Branches 3 and 4}
\setcounter{equation}{0}
On branches 3 and 4 the spectral and crossing parameters are
specialized in the regime
\begin{equation}
-\pi+3\lambda<u<0\hspace*{0.5cm}\h {1\over 6}\pi\le\lambda<{1\over 3}\pi\;.
\eeq
The following computation of the finite-size corrections to the
transfer matrix eigenspectra for each of the models is valid for the
larger interval $0<\lambda<{1\over 3}\pi$.
We proceed in a similar manner as for branches 1 and 2,
introducing the new variable $v=\mbox{\small i} u$ and setting $v_j=u_j$.
The function $p(v)$ is defined by
\begin{equation}
p(v)=e^{\mbox{\small i}(\omega-\phi)}
{\Phi(v- \mbox{\small i} \lambda+\mbox{$\textstyle {1 \over 2}$}\pi\mbox{\small i})q(v-\mbox{\small i}\lambda)q(v+2\mbox{\small i}\lambda)
\over\Phi(v+\mbox{\small i} \lambda+\mbox{$\textstyle {1 \over 2}$}\pi\mbox{\small i})q(v+\mbox{\small i}\lambda)q(v-2\mbox{\small i}\lambda)},
\eeq
with $\Phi$ and $q$ as given in (\ref{phiq}).
In \cite{WBN:92,Warnaar} it has been checked
that the Bethe ansatz roots are (almost) located on the lines
$\mbox{${\Im}m$ }(v)=\pm\mbox{$\textstyle {1 \over 2}$}\lambda$ in the complex $v$-plane.
As a consequence we still have the
symmetries of equation (\ref{qp}).
\subsection{Nonlinear integral equation}
We proceed by defining functions that
are {\sc anz} in the strips around the real axis:
\begin{eqnarray}
A(v)=1+\alpha(v)/g(v) &\hspace*{0.5cm}& \alpha(v)=g(v)p(v-\mbox{\small i}\lambda)[1+p(v)] \nonumber\\
B(v)=1+\beta(v)/g(v) &\hspace*{0.5cm}& \beta(v)=g(v)\displaystyle{p(v)p(v-\mbox{\small i}\lambda)\over
1+p(v-\mbox{\small i}\lambda)} \nonumber\\
C(v)=1+\gamma(v)/g(v) &\hspace*{0.5cm}& \gamma(v)=g(v)p(v-\mbox{\small i}\lambda) \\
&\hspace*{0.5cm}& \delta(v)=p(v). \nonumber
\ee
The function $g(v)=\mbox{th}^N\rho(v+\mbox{\small i}\lambda-\mbox{$\textstyle {1 \over 2}$}\pi\mbox{\small i})$, with
$\rho=\pi/(2\pi-6\lambda)$, is introduced to compensate the anticipated bulk
behaviour of the functions $\alpha,\beta,\gamma$.
We define the Fourier transform of the functions $\alpha,\beta,\gamma$ as
in (\ref{Fou}). For $q$ we have
\begin{eqnarray}
q(k)={1\over 2\pi}\displaystyle{\int_{-\infty+{\rm i} r}^{\infty+\mbox{\small i} r}}
\(\ln q(v)\)^{\prime\prime}e^{-{\rm i} kv}\;dv
&\hs{0.5}&-\pi+\mbox{$\textstyle {1 \over 2}$}\pi<r<-\mbox{$\textstyle {1 \over 2}$}\pi\nonumber \\
\(\ln q(v)\)^{\prime\prime}=\displaystyle{\int_{-\infty}^{\infty}}q(k)
e^{\mbox{\small i} kv}\;dk&\hs{0.5} &-\pi+\mbox{$\textstyle {1 \over 2}$}\pi<\mbox{${\Im}m$ }(v)<-\mbox{$\textstyle {1 \over 2}$}\pi \\
q_1(k)={1\over 2\pi}\displaystyle{\int_{-\infty+{\rm i} r}^{\infty+\mbox{\small i} r}}
\(\ln q(v)\)^{\prime\prime}e^{-{\rm i} kv}\;dv
& \hs{0.5}&-\mbox{$\textstyle {1 \over 2}$}\pi<r<\mbox{$\textstyle {1 \over 2}$}\pi\nonumber \\
\(\ln q(v)\)^{\prime\prime}=\displaystyle{\int_{-\infty}^{\infty}}
q_1(k)e^{\mbox{\small i} kv}\;dk&\hs{0.5} &-\mbox{$\textstyle {1 \over 2}$}\pi<\mbox{${\Im}m$ }(v)<\mbox{$\textstyle {1 \over 2}$}\pi.
\ee
To solve the functional relations we need the relations of the Fourier
transforms of $\alpha,\beta,\gamma$.
First we can see that not all functions
are independent and thus we have
\begin{eqnarray}
\beta(k)-\gamma(k)-\delta(k)+C(k)&=&0 \nonumber \\
\alpha(k)-\overline{\alpha}(k)-\gamma(k)
+\overline{\gamma}(k)-\delta(k)&=&0 \label{rel-1} \\
A(k)-B(k)-C(k)&=&0. \nonumber
\ee
Applying the Fourier transform to the $\delta,\gamma$ gives
\begin{eqnarray}
\gamma(k)&=&Nk\sinh\lambda k/\sinh{\pi k/2}
-\mbox{$\textstyle {1 \over 2}$} Nke^{k\lambda\over 2}/\cosh {k\over 2}(3\lambda-\pi) \nonumber \\
&&\hspace*{0.5cm} +(e^{-\lambda k+\pi k}+
e^{2\lambda k}-e^{3\lambda k})q(k)-q_1(k) \label{gamma-q}\\
\delta(k)&=& Nk\sinh\lambda k/\sinh{\pi k/2}-
4e^{k\pi\over 2}\sinh{\lambda k/2}
\cosh {k\over 2} (3\lambda-\pi)\; q(k). \label{delta-q}
\ee
Other relations follow by applying Cauchy's
theorem to the auxiliary functions
\begin{eqnarray}
h_1(v)&=&p(v-\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda)[1+p(v+\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda)], \label{h2} \\
h_2(v)&=&{1+p(v-\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda)[1+p(v+\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda)]
\over p(v+\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda)}, \label{h3} \\
h_3(v)&=&[1+p(v-\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda)]
[1+p(v+\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda)]/q(v-\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda),\label{h4}
\ee
which all satisfy the non-trivial analyticity property
\begin{equation}
\displaystyle{\int_{-\infty+\mbox{$\textstyle {1 \over 2}$}{\rm i}\lambda}^{\infty+\mbox{$\textstyle {1 \over 2}$}{\rm i}\lambda}}
\(\ln h_i(v)\)^{\prime\prime}e^{-{\rm i} kv}\;dv=
\displaystyle{\int_{-\infty-\mbox{$\textstyle {1 \over 2}$}{\rm i}\lambda}^{\infty-\mbox{$\textstyle {1 \over 2}$}{\rm i}\lambda}}
\(\ln h_i(v)\)^{\prime\prime}e^{-{\rm i} kv}\;dv\;, \qquad i=1,2,3.
\eeq
It follows, respectively, that
\begin{eqnarray}
\alpha(k)&=&-e^{\lambda k}\overline{\beta}(k) ,\label{alphabeta} \\
A(k)-\delta(k)&=&e^{\lambda k}[\overline{A}(k)+\delta(k)],
\ee
and
\begin{eqnarray}
&&e^{-\mbox{$\textstyle {1 \over 2}$}\lambda k}\left(A(k)-e^{\lambda k}q(k)\right) \nonumber \\
&&\;\;=\;e^{\mbox{$\textstyle {1 \over 2}$}\lambda k}\left(\overline{B}(k)-\overline{\beta}(k)-q_1(k)+{\mbox{$\textstyle {1 \over 2}$} Nk
e^{-\mbox{$\textstyle {1 \over 2}$}\lambda k}\over\cosh
(\mbox{$\textstyle {3 \over 2}$}\lambda k-\mbox{$\textstyle {1 \over 2}$}\pi k) }\right)\;.\label{rel-4}
\ee
Now solving (\ref{rel-1})--(\ref{rel-4}) and their complex conjugates
in terms of the functions $A$ and $B$, we find
\begin{eqnarray}
\alpha(k)+\gamma(k)&=&F(k)A(k)+G(k)\overline{A}(k)+H(k)B(k)
+\overline{H}(k)\overline{B}(k) \nonumber\\
\beta(k)-\gamma(k)&=&\overline{H}(k)A(k)+\overline{H}(k)\overline{A}(k)+B(k) \nonumber\\
C(k)&=&A(k)-B(k) \label{q34}\\
&&\hs{-2.5}q(k)={Nke^{-\mbox{$\textstyle {1 \over 2}$}\pi k}
\cosh{\lambda k\over 2}\over 2\sinh{\mbox{$\textstyle {1 \over 2}$}\pi k}
\cosh{k\over 2}(3\lambda-\pi)}-{e^{-\mbox{$\textstyle {1 \over 2}$}(\pi+\lambda) k}A(k)
-e^{-\mbox{$\textstyle {1 \over 2}$}(\pi-\lambda) k}\overline{A}(k)\over 4\sinh{\lambda k}
\cosh{k\over 2}(3\lambda-\pi) } \nonumber
\ee
with
\begin{eqnarray}
F(k)&=&{\sinh {k\over 2}(\pi-3\lambda) -2 \sinh {k\over 2}(\pi-5\lambda)\over
2\sinh\lambda k \cosh {k\over 2}(3\lambda-\pi)} \nonumber \\
G(k)&=& {3e^{-\mbox{$\textstyle {1 \over 2}$}(\pi-5\lambda) k}-2e^{-\mbox{$\textstyle {1 \over 2}$}(\pi-7\lambda) k}
-2e^{-\mbox{$\textstyle {1 \over 2}$}(\pi-3\lambda) k}+e^{\mbox{$\textstyle {1 \over 2}$}(\pi-\lambda) k} \over
4\sinh\lambda k \cosh{k\over 2}(3\lambda-\pi)} \\
H(k)&=& -{e^{-\mbox{$\textstyle {1 \over 2}$}\lambda k}\over 2 \cosh \mbox{$\textstyle {1 \over 2}$}\lambda k}.
\ee
Transforming back and integrating twice, we obtain
a coupled set of nonlinear integral equations,
\begin{eqnarray}
\ln \alpha(v)+\ln \gamma(v)&=&F*\ln A+G*\ln\overline{A}
+H*\ln B+\overline{H}*\ln \overline{B}+C,\nonumber\\
\ln \beta(v)-\ln \gamma(v)&=&\overline{H}*\ln A
+\overline{H}*\ln\overline{A}+\ln B, \label{nonli}
\ee
where we have introduced
\begin{eqnarray}
F(v)&=&{1\over 2\pi}\int_{-\infty}^\infty F(k) e^{\mbox{\small i} k v}\;dk, \nonumber \\
G(v)&=&{1\over 2\pi}\int_{-\infty}^\infty G(k) e^{\mbox{\small i} k v}\;dk, \\
H(v)&=&{1\over 2\pi}\int_{-\infty}^\infty H(k) e^{\mbox{\small i} k v}\;dk. \nonumber
\ee
Taking the scaling limit as in (\ref{scaling12}), the nonlinear integral
equations become
\begin{eqnarray}
\smat{\ln a_{\pm}+\ln c_{\pm}\cr\ln b_{\pm}-\ln c_{\pm}\cr
\ln \overline{a}_{\pm}+\ln \overline{c}_{\pm}\cr
\ln \overline{b}_{\pm}-\ln \overline{c}_{\pm}}=\pm 4\mbox{\small i} e^{-x}
\smat{-e^{\pm \rho\mbox{\small i}\lambda}\cr 0\cr -e^{\mp\rho\mbox{\small i}\lambda}\cr 0}
+K*\smat{\ln A_\pm\cr\ln B_\pm\cr\ln\overline{A}_\pm\cr\ln\overline{B}_\pm}
+C_\pm\smat{1\cr0\cr -1\cr 0}
\; ,\label{a34}
\ee
where the kernel $K$ again satisfies the very useful symmetry
$K^T(x)=K(-x)$.
The integration constant in (\ref{nonli}) follows from the
limit $x\to\infty$. We have
\begin{equation}
C_\pm= {\pi\mbox{\small i}(\omega_\pm-\phi)/( 2\lambda)}.
\eeq
Again the integration constants $C_\pm$ contain the essential information
to obtain the conformal spectra.
\subsection{Conformal spectra}
For branches 3 and 4 the transfer matrix eigenvalues in
(\ref{BAE}) are dominated by
\begin{eqnarray}
T(v+\mbox{\small i}\lambda-\mbox{$\textstyle {1 \over 2}$}\pi\mbox{\small i})\sim e^{\mbox{\small i}\phi}
\Phi(v+\mbox{\small i}\lambda+\mbox{$\textstyle {1 \over 2}$}\pi\mbox{\small i})\Phi(v+\mbox{$\textstyle {1 \over 2}$}\pi\mbox{\small i})
{q(v-3\mbox{\small i}\lambda)\over q(v-\mbox{\small i}\lambda)}\;A(v).
\ee
Taking Fourier transforms and
using the solution (\ref{q34}) for $q(k)$, we obtain
\begin{eqnarray}
&&\ln T(v+\mbox{\small i}\lambda-\mbox{$\textstyle {1 \over 2}$}\pi\mbox{\small i})\;=\;-N f_\infty(v)\nonumber \\
&&\hspace*{0.5cm}+\; {\rho\over \pi}\int_{-\infty}^{\infty}
\left({\ln A(w)\over\cosh 2\rho(v-w+\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda)}
- {\ln \overline{A}(w)\over\cosh 2\rho(v-w-\mbox{$\textstyle {1 \over 2}$}\mbox{\small i}\lambda)}
\right)dw,
\ee
where the bulk free energy is given by
\begin{eqnarray}
\hs{-1} f_\infty(v)=2\!\int_{-\infty}^{\infty}\! dk {\sinh( k\mbox{\small i} v)
\sinh(3k\lambda+\mbox{\small i} vk-\pi k)
\cosh(5k\lambda-k\pi)\cosh(k\lambda)\over k\cosh(3k\lambda-\pi k)\sinh(k\pi)} .
\ee
Taking the thermodynamic limit $N\to\infty$ and using the definition as in
(\ref{scaling12}) gives
\begin{eqnarray}
\ln T(v)&=&-N f_\infty(v)+{2\mbox{\small i}\over N\pi}e^{2\rho v}
\mbox{${\Im}m$ }\left(e^{\rho\mbox{\small i}\lambda} \int_{-\infty}^{\infty}
\ln A_+(x)e^{-x}dx\right) \nonumber \\
&&\hspace*{0.5cm}\h +{2\mbox{\small i}\over N\pi}e^{-2\rho v}
\mbox{${\Im}m$ }\left(e^{-\rho\mbox{\small i}\lambda} \int_{-\infty}^{\infty}\ln
{A}_-(x)e^{-x}dx\right).
\ee
To calculate the integral, we consider the expression
\begin{eqnarray}
\int_{-\infty}^{\infty}\left[
\smat{\ln a_{\pm}+\ln c_{\pm}\cr\ln b_{\pm}-\ln c_{\pm}\cr
\ln \overline{a}_{\pm}+\ln \overline{c}_{\pm}\cr
\ln \overline{b}_{\pm}-\ln \overline{c}_{\pm}}^{\prime}
\smat{\ln A_\pm\cr\ln B_\pm\cr\ln\overline{A}_\pm\cr\ln\overline{B}_\pm}^T
-\smat{\ln a_{\pm}+\ln c_{\pm}\cr\ln b_{\pm}-\ln c_{\pm}\cr
\ln \overline{a}_{\pm}+\ln \overline{c}_{\pm}\cr
\ln \overline{b}_{\pm}-\ln \overline{c}_{\pm}}
\smat{\ln A_\pm\cr\ln
B_\pm\cr\ln\overline{A}_\pm\cr\ln\overline{B}_\pm}^{\prime\; T}\right]
dx, \label{express}
\ee
which can be evaluated exactly using the Rogers dilogarithmic
function relation (\ref{L+L}). We thus arrive at
\begin{eqnarray}
&&L\(a_\pm(\infty)\)+L\(1/a_\pm(\infty)\)
+L\(1/b_\pm(\infty)\) \nonumber \\
&&\hspace*{0.5cm} +L\(b_\pm(\infty)\)
+L\(c_\pm(\infty)\)+L\(1/c_\pm(\infty)\)=
\pi^2-8k\pi^2, \label{hand-1}
\ee
where $k=0,1$ \cite{KWZ:93}.
We have also used the asymptotics of the functions
$a_\pm(\infty)=e^{i(\omega_\pm-\phi)}(e^{i(\omega_\pm-\phi)}+1)$,
$b_\pm(\infty)={ e^{2i(\omega_\pm-\phi)}/(e^{i(\omega_\pm-\phi)}+1)}$,
$c_\pm(\infty)=e^{i(\omega_\pm-\phi)}$ and $a_\pm(-\infty)=b_\pm(-\infty)
=c_\pm(-\infty)=0$.
On the other hand, substituting (\ref{a34})
into (\ref{express}) we arrive at
\begin{eqnarray}
\pm 16\mbox{${\Im}m$ }\left(e^{\pm\rho\mbox{\small i}\lambda} \int_{-\infty}^{\infty}
\ln A_\pm(x)e^{-x}\right)
+{\pi(\omega_\pm-\phi)^2\over\lambda}.\label{hand-2}
\ee
Combining the results (\ref{hand-1}) and (\ref{hand-2}) we are left with
\begin{eqnarray}
\mbox{${\Im}m$ }\left(e^{\pm\rho\mbox{\small i}\lambda} \int_{-\infty}^{\infty}\ln A_\pm(x)e^{-x}\right)
=\pm{\pi^2\over 24}\left({3\over 2}-{3(\omega_\pm-\phi)^2\over
2\pi\lambda}- 12 k\right).
\ee
Inserting this in the expression $\ln T(v)$ we obtain
the final result
\begin{eqnarray}
\ln T(v+\mbox{\small i}\lambda-\mbox{$\textstyle {1 \over 2}$}\pi\mbox{\small i})=-Nf_\infty(v)+{\pi\sin(2\mbox{\small i}\rho v)\over 6N}
(c-24\Delta).
\ee
The central charges and conformal weights are given by
\begin{eqnarray}
c&=&{3\over 2}-{3(\pi-4\lambda)^2\over 2\pi\lambda} ,\label{c34} \\
\Delta&=&\cases{\displaystyle{(\omega-\phi\mp 2\ell
\lambda)^2-(\pi-4\lambda)^2\over 16\pi\lambda}+\Delta_{\rm Ising}
& dilute O($n$) model \cr
\displaystyle{(\omega-\phi)^2-(\pi-4\lambda)^2\over 16\pi\lambda}
+\Delta_{\rm Ising} & dilute $A_L$ face model\cr} \label{res34}
\ee
with $\Delta_{\rm Ising}=0,{1\over 2}$.
\section{Summary and discussion}
We have calculated the finite-size corrections to the transfer matrix
eigenspectra of the intimately related dilute O($n$),
dilute $A_L$ and Izergin-Korepin models at criticality
via the nonlinear integral equation approach.
The resulting conformal weights defining the critical
exponents are seen to follow from appropriate branches
of the log functions appearing in the Bethe equations.
For the dilute O($n$) model the integration constants appearing
in the key nonlinear integral equations (\ref{a12}) and (\ref{a34})
differ in the distinct limits $v\to\pm\infty$,
leading to two constants, $C_{\pm}$, in the scaling limit.
However, they satisfy $|C_+|=|C_-|$, allowing the calculation
to go through in a similar manner as for the ABF model \cite{Zhou:95}.
\subsection{Dilute O($n$) model}
Consider first the dilute O($n$) model in branches 1 and 2 in
the $u$ positive regime.
Our final results are the central charge (\ref{c12}) and
the conformal weights (\ref{res12}).
These are in agreement with the previous results outlined in
Sections 2.1 and 2.2 with the O($n$) $\phi$ value (\ref{on}).
In particular, the magnetic dimensions (\ref{mag12}) follow
with the parameters $\ell\ne 0$ and $r=0$ in (\ref{casespm}).
Similarly, the thermal dimensions (\ref{therm12}) follow with
$\ell= 0$ and $r=2j$.
As expected, in this
regime the conformal dimensions are seen to be in agreement
with the results obtained in the honeycomb limit.
On the other hand, in branches 3 and 4 of the $u$ negative regime
our final results are (\ref{c34}) and (\ref{res34}).
Here the conformal dimensions are new. The conjectured
magnetic dimensions (\ref{mag34}) \cite{WBN:92} are associated with
$\ell\ne 0$ and $r=\ell$.
The similarly conjectural thermal dimension
$X_1^\epsilon =2\Delta_{\rm Ising}=1$
follows from the choice $\ell= 0$ and $r=0$.
More generally we see that this dimension belongs to the
thermal set
\begin{eqnarray}
X_{j+1}^\epsilon={j^2\pi-j(\pi-4\lambda)\over 2\pi\lambda}
+\Delta_{\rm Ising},
\ee
which is given by setting $\ell=0$ and $r=2j$ in (\ref{res34}).
\subsection{Dilute $A_L$ face model}
Recall that the Bethe equations of the dilute $A_L$ face model
follow from the choice of crossing parameter given in (\ref{branches})
with seam $\phi$ as given in (2.5). In this case the branches 1 and 2
results (\ref{c12}) and (\ref{res12}) give the central charge (2.11)
and conformal weights (2.15) of the unitary minimal series.
In a similar manner, the branches 3 and 4 results are indicative
of the product of the unitary minimal series with the Ising model.
The central charge is given by (2.11) and the conformal weights
again by (2.15), however with the additional
$\Delta_{\rm Ising}$ component. Here $\Delta_{\rm Ising} =
0,\frac{1}{2},\frac{1}{16}$ is expected, in accordance with the
results of \cite{WPSN:94}. However, $\frac{1}{16}$ does not appear
in our results. Our method needs further refinement to reveal this
conformal dimension.
For the more general crossing parameter
$$ \lambda={\pi\over 4}\Big(1+{k\over L+1}\Big) $$
with $|k|<\lfloor (L+1)/3\rfloor$, where $\lfloor\,\cdot\,\rfloor$ denotes
the integer part, our results (\ref{c12}) and (\ref{c34}),
and (\ref{res12}) and (\ref{res34}) imply
\begin{eqnarray}
c&=&1-{6k^2\over (L+1)(L+1-k)} \hspace*{0.5cm}\mbox{branches 1 and 2}\\
c&=&{3\over 2}-{6k^2\over (L+1)(L+1-k)} \hspace*{0.5cm}\mbox{branches 3 and 4}
\ee
for the central charges and
\begin{eqnarray}
\Delta&=&{[(L+1)t-(L+1-k)s]^2-k^2\over 4(L+1)(L+1-k)}
\hs{2.0}\mbox{branches 1 and 2}\\
\Delta&=&{[(L+1)t-(L+1-k)s]^2-k^2\over 4(L+1)(L+1-k)}+\Delta_{\rm Ising}
\hspace*{0.5cm}\mbox{branches 3 and 4}
\ee
for the conformal weights, where
$s=1,2,\cdots,L$ and $t=1,2,\cdots,L-k$. These results indicate
non-unitary minimal models for the dilute $A_L$ models in branches 1 and 2
and non-unitary minimal models plus an Ising model
in branches 3 and 4. For the
case $k=1$ our results confirm the conformal weights presented
in \cite{WPSN:94}. For $k>1$ our results show that
the dilute $A_L$ models in branches 1 and 2 can be classified
by the same universality
classes as for the ABF models \cite{FoBa:85,Zhou:95}.
\subsection{Izergin-Korepin model}
Our results for the O($n$) model reduce to those of the
Izergin-Korepin model when the seam $\phi=0$.
In this way the central charge and conformal weights are given by
\begin{equation}
c = 1, \quad X_{\ell,r} =
\displaystyle{\ell^2(\pi- 2\lambda)\over 4\pi}+
\displaystyle{r^2\pi\over \pi-2\lambda} \label{iz12}
\eeq
for $0<u<3\lambda$ with $0 <\lambda < \pi/3$. On the other hand,
\begin{equation}
c = {3\over 2}, \quad X_{\ell,m} =
\displaystyle{\lambda\ell^2\over2\pi}
+{m^2\over 2\pi\lambda} +2 \Delta_{\rm Ising} \label{iz34}
\eeq
for $-\pi+3\lambda<u<0$ with $0< \lambda < \pi/3$.
The result (\ref{iz12}) is in agreement with the previous results in
the honeycomb limit, while the result (\ref{iz34}), reflecting also the
additional Ising content, is new.
\vskip 0.5cm
It is a pleasure to thank Ole Warnaar for helpful discussions
over the years.
This work has been supported by the Australian Research Council.
YKZ also thanks the Natural Science Foundation of China
for partial support.
\section{Introduction}
\label{sect_intro}
In this paper, we aim to study high-order time discretization methods
to preserve the maximum bound principle (MBP) of a class of semilinear parabolic equations taking the following form
\begin{equation}
\label{model_pde}
u_t = \mathcal{L} u + \mathcal{N}[u],
\end{equation}
where $\mathcal{L}$ and $\mathcal{N}$ are linear and nonlinear operators, respectively,
and $u=u(t,\bm{x})$ is the unknown function subject to appropriate initial and boundary conditions.
The MBP implies the existence of special upper and lower solutions in the sense that
the solution $u(t,\bm{x})$ preserves, for all time, a uniform pointwise bound imposed by the initial and boundary data.
Mathematical models for many practical problems have the form \eqref{model_pde}
and their solutions satisfy the MBP.
A typical example is the classic Allen--Cahn equation,
where the linear operator $\mathcal{L}$ is the standard Laplace operator multiplied by a diffusion coefficient $\varepsilon^2$
and the nonlinear part is given by a cubic polynomial $\mathcal{N}[u]=u-u^3$.
It is well known \cite{EvSoSo92} that
the solution of the Allen--Cahn equation is pointwisely bounded by $1$ for any time
if the initial and boundary conditions are bounded by $1$,
which implies the MBP.
The Allen--Cahn equation was originally developed in \cite{AlCa79}
to model the motion of anti-phase boundaries in crystalline solids,
and nowadays, has been widely used as a fundamental model for phase-field (or diffuse-interface) methods
to study the interfacial motions and phase transitions in various application fields.
The MBP is an important physical feature
and is essential for numerical simulations to yield physically relevant solutions of many mathematical models.
Besides the solution of the Allen--Cahn equation, the density, concentration, or pressure in fluid flows must be nonnegative,
and the probability distribution should also be in the range $[0,1]$.
Hence, in numerical simulations,
it is highly expected that the numerical solutions preserve the MBP in the discrete sense.
The violation of the MBP may cause the ill-posedness of the problem and blow-ups of the numerical algorithms.
This is the case, for instance, when there is a logarithmic term in the equation, e.g.,
the reaction-diffusion equation with the Flory--Huggins potential function
and the Peng--Robinson equation of state \cite{PeRo76,QiSu14},
a widely used realistic equation of state for hydrocarbon fluids in the petroleum industry.
On the other hand, from the viewpoint of numerical analysis,
the MBP suggests a type of strong stability in the supremum-norm sense
and guarantees the spatially uniform pointwise boundedness of the numerical solution.
Such a property facilitates the further numerical analysis of the schemes, e.g., energy stability for phase-field models, since the locally Lipschitz continuous nonlinear term usually becomes globally Lipschitz continuous thanks to the uniform pointwise bound. For instance,
the Allen--Cahn equation can be viewed as the $L^2$ gradient flow associated with the energy
\begin{equation}
E(u) = \int \Big( \frac{\varepsilon^2}{2} |\nabla u(\bm{x})|^2 + \frac{1}{4}(u^2(\bm{x})-1)^2 \Big) \, \d \bm{x},
\end{equation}
and the solution satisfies the energy stability
in the sense that the energy is non-increasing in time, namely,
$E(u(t_2))\le E(u(t_1))$ for any $t_2\ge t_1\ge 0$.
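Indeed, the variational derivative of this energy is
$\frac{\delta E}{\delta u}=-\varepsilon^2\Delta u+u^3-u$,
so that the Allen--Cahn equation is recovered as the flow
\begin{equation}
u_t = -\frac{\delta E}{\delta u} = \varepsilon^2 \Delta u + u - u^3,
\end{equation}
along which $\frac{\d}{\d t}E(u(t))=-\|u_t\|_{L^2}^2\le0$ for smooth solutions.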
There have been a large variety of numerical schemes preserving such energy stability
successfully applied to different types of phase-field models, e.g., \cite{DuJuLiQi18,FeTaYa13,GuWaWi14,JuLiQiZh18,QiZhTa11,ShWaWaWi12,ShXuYa19,ShYa10b,WiWaLo09,XuTa06,Yang16}
and the references therein.
It was found that the MBP plays a key role in showing such nonlinear energy stability \cite{DuJuLiQi19,HoTaYa17,ShTaYa16,TaYa16}.
Recently, MBP-preserving numerical schemes
have attracted increasing attention for semilinear parabolic equations of the form \eqref{model_pde}.
In \cite{StVo15}, it was shown for the Allen--Cahn equation in the one-dimensional case that
the MBP is preserved by the central difference semi-discrete scheme
and its fully discrete approximations with forward and backward Euler time-stepping methods.
Later, the discrete MBPs, as well as the energy stability,
of the first- and second-order stabilized semi-implicit (SSI) schemes with central difference method
were proved for the Allen--Cahn equation in \cite{HoLe20,ShTaYa16,TaYa16},
and those of the SSI schemes with finite element discretization in space
were obtained for the surface Allen--Cahn equation in \cite{XiFeYu17}.
The discrete MBPs were also obtained for the Allen--Cahn equation
in \cite{DuZh05} by the first-order exponential time differencing (ETD) scheme in the space-continuous setting,
in \cite{HoXiJi20} by second-order nonlinear implicit-explicit schemes with spatial central difference method,
and in \cite{YaDuZh18} by showing the uniform $L^p$ boundedness and passing the limit as $p$ goes to infinity.
In addition, MBP-preserving numerical schemes have been studied
for the nonlocal Allen--Cahn equation
by using first- and second-order ETD schemes \cite{DuJuLiQi19}
combined with a quadrature-based difference method \cite{DuTaTiYa19},
for the space-fractional Allen--Cahn equation
by using the Crank--Nicolson scheme \cite{HoTaYa17}
with central difference approximation \cite{TiZhDe15},
for the time-fractional Allen--Cahn equation
by using the convex splitting methods \cite{DuYaZh19},
and for the complex-valued Ginzburg--Landau model of superconductivity \cite{DuGuPe92}
by considering the finite volume method \cite{Du98,GaJuXi19}
and finite element method with the mass-lumping technique \cite{Du05} in space
with backward Euler time integration.
In \cite{tangyang19},
Tang and Yang proposed the so-called one-step monotone schemes to check
whether a scheme is MBP-preserving or not,
in which the standard third-order explicit strong stability-preserving (SSP) method \cite{GoShTa01}
is shown to be MBP-preserving for the Allen--Cahn equation under a CFL condition ${\tau}=\mathcal{O}(h^2)$.
An abstract framework on the MBP for equations like \eqref{model_pde}
was established in the recent work \cite{DuJuLiQi20review},
where sufficient conditions for linear and nonlinear operators
were given such that the equation satisfies the MBP
and the corresponding first- and second-order ETD schemes preserve the MBP.
Some details on the framework will be given in Section \ref{sect_mbp}.
In this paper, we investigate high-order MBP-preserving time integration schemes.
The integrating factor Runge--Kutta (IFRK) method is used for the time integration,
which has been widely studied for stiff ordinary differential equations (ODEs) recently \cite{AhLi19,JuLiLe14,TaWaNi15}.
The IFRK method can be viewed as an extension of the conventional Runge--Kutta (RK) method,
particularly designed for the case that the problem contains a linear part with strong stiffness.
The key idea is to use the exponential integrating factor to eliminate the stiff linear term
and apply the conventional RK method to the resulting system.
To study the stability of the IFRK method,
the concept of strong stability-preserving was proposed in \cite{ShOs88}
to construct efficient time discretization for hyperbolic conservation laws
and then further explored in \cite{GoSh98} for high-order schemes.
The SSP property means that the norm of the numerical solution does not increase.
More precisely, for an ODE system taking the form
\begin{equation}
\label{intro_ode}
\daoshu{u}{t} = Lu + N(u)
\end{equation}
with the matrix $L$ and the mapping $N$ satisfying
\begin{equation}
\label{intro_ode_L}
\|\mathrm{e}^{\omega L}\| \le 1, \quad \forall \, \omega > 0
\end{equation}
and, for some $\omega_0>0$,
\begin{equation}
\label{intro_ode_N}
\|u + \omega N(u)\| \le \|u\|, \quad \forall \, \omega\in(0,\omega_0],
\end{equation}
the IFRK method for \eqref{intro_ode} is called SSP
if its solution satisfies $\|u^{n+1}\|\le\|u^n\|$.
A review of the SSP-RK time discretization method was presented in \cite{GoShTa01}
for ODE systems like \eqref{intro_ode} with $L=0$,
which often arise from the spatial discretization of hyperbolic conservation laws,
and a recent work \cite{IsGrGo18} generalized these results
to the general case of the SSP-IFRK method for \eqref{intro_ode}.
The general form of the SSP-RK method for \eqref{intro_ode} with $L=0$ is given by
\begin{align}
& u^{(i)} = \sum_{j=0}^{i-1} \left[\alpha_{ij} u^{(j)} + {\tau}\beta_{ij} N(u^{(j)})\right],
\quad 1\le i\le s, \label{intro_ode_rk} \\
& \text{with } u^{(0)} = u^n, \ u^{n+1} = u^{(s)}, \nonumber
\end{align}
where $\alpha_{ij}$ are nonnegative.
Here, \eqref{intro_ode_rk} is actually a convex combination
of forward Euler sub-steps with step sizes ${\tau}\frac{\beta_{ij}}{\alpha_{ij}}$,
and the nonnegativity of $\beta_{ij}$ is crucial in order to apply \eqref{intro_ode_N}.
It is concluded in \cite{GoSh98,GoShTa01} that,
under the constraint of the nonnegativity of $\beta_{ij}$,
there are no SSP-RK methods of order higher than four
and the number of stages of a fourth-order SSP-RK method cannot be lower than five.
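For example, the classical three-stage, third-order SSP-RK scheme of Shu and Osher \cite{ShOs88}
takes the form \eqref{intro_ode_rk} with all coefficients $\alpha_{ij}$ and $\beta_{ij}$ nonnegative:
\begin{align}
u^{(1)} & = u^{(0)} + {\tau} N(u^{(0)}), \nonumber \\
u^{(2)} & = \frac{3}{4}\, u^{(0)} + \frac{1}{4} \left[ u^{(1)} + {\tau} N(u^{(1)}) \right], \\
u^{(3)} & = \frac{1}{3}\, u^{(0)} + \frac{2}{3} \left[ u^{(2)} + {\tau} N(u^{(2)}) \right]. \nonumber
\end{align}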
The main contribution of this work includes three aspects.
First, we formulate the general results for the IFRK method preserving the MBP for equations like \eqref{model_pde}
under the abstract framework established in the recent work \cite{DuJuLiQi20review}.
For simplicity, we restrict our discussion in the space-discrete version
to avoid the abstract and tedious definitions of continuous function spaces and domains of operators.
Second, we give the error estimates of the numerical solution of the MBP-preserving IFRK method
by utilizing the uniform $L^\infty$ boundedness guaranteed by the MBP.
Third, we present three-stage, third-order and four-stage, fourth-order MBP-preserving IFRK schemes.
To the best of our knowledge,
this fills the gap left by the lack of MBP-preserving numerical schemes of order higher than three.
The requirement on the time step size for preserving the MBP is of the same magnitude as for the first-order IF scheme,
and comes only from the nonlinear term, without any CFL-type restriction.
Numerical experiments also reflect the high efficiency of the four-stage, fourth-order IFRK scheme.
The rest of this paper is organized as follows.
In Section \ref{sect_mbp},
we briefly restate the sufficient conditions determined in \cite{DuJuLiQi20review}
for the linear and nonlinear operators
such that equation \eqref{model_pde} satisfies the MBP.
We also give the space-discrete equation of \eqref{model_pde}
and the corresponding conditions of the linear and nonlinear parts in the discrete setting.
Then, in Section \ref{sect_ifrk},
we present the IFRK method in its general form
and prove the preservation of the MBP and the error estimates of the method under a certain requirement on the time step size.
In particular, we present a four-stage, fourth-order IFRK scheme which is MBP-preserving and give some simple examples of the space-discrete system.
In Section \ref{sect_numexp},
some numerical experiments are carried out for the Allen--Cahn equation with a logarithmic nonlinear term,
including the tests of convergence rate, MBP, and efficiency for long-time simulations.
Finally, some concluding remarks are given in Section \ref{sect_con}.
\section{Maximum bound principle for semilinear parabolic equations}
\label{sect_mbp}
An abstract framework on the maximum bound principle
for a class of semilinear parabolic equations \eqref{model_pde}
was established in \cite{DuJuLiQi20review},
where sufficient conditions for the linear and nonlinear operators
were given such that equation \eqref{model_pde} satisfies the MBP.
For the completeness of the current paper,
we present some of the main results from \cite{DuJuLiQi20review}.
Denote by $\Omega$ the spatial domain as usual.
A crucial condition on the linear operator $\mathcal{L}$ is the dissipativity in the sense that
if a function $w$ reaches its maximum on $\overline{\Omega}$ at a point $\bm{x}_0\in\Omega$,
then it must hold that $\mathcal{L} w(\bm{x}_0)\le0$.
By defining the function space $X$ appropriately,
this condition implies that $\mathcal{L}$ is the generator of a contraction semigroup on $X$.
Such $\mathcal{L}$ could be the standard Laplace operator, nonlocal diffusion operator \cite{DuGuLeZh12},
fractional Laplace operator \cite{NePaVa12} and so on.
The nonlinear operator $\mathcal{N}$ is assumed to act as a composite function,
i.e., $\mathcal{N}[w](\bm{x})=f(w(\bm{x}))$ for any function $w$ and $\bm{x}\in\overline{\Omega}$,
where $f$ is a one-variable continuously differentiable function satisfying
\begin{equation}
\label{assump_f}
f(\rho)\le 0\le f(-\rho), \quad \text{for some constant $\rho>0$.}
\end{equation}
The essential requirement is the change of the sign of $f$ on the two sides of zero.
Under these assumptions,
equation \eqref{model_pde} satisfies the MBP, that is,
if the absolute values of initial and boundary conditions are bounded by $\rho$,
then the absolute value of the entire solution is also bounded by $\rho$ pointwise for all time.
Applying some type of spatial discretization to \eqref{model_pde},
one can obtain the space-discrete problem given by the ordinary differential equation (ODE) system
\begin{equation}
\label{model_eq}
\daoshu{u}{t} = Lu + f(u).
\end{equation}
Here, $u(t)=(u_1(t),u_2(t),\dots,u_m(t))^T\in\mathbb{R}^m$ denotes the space-discrete solution,
$L$ is an $m$-by-$m$ symmetric matrix derived from the spatial discretization of the linear operator $\mathcal{L}$,
and the vector $f(u)$ with the $j$-th component $f(u_j)$ corresponds to the nonlinear term $\mathcal{N}[u]$.
We denote by $\|\cdot\|_\infty$ the vector or matrix $\infty$-norm as usual.
The framework developed in \cite{DuJuLiQi20review} also covers the space-discrete problem \eqref{model_eq}.
Therefore, we require that the matrix $L$ is the generator of a contraction semigroup on $\mathbb{R}^m$,
or equivalently,
\begin{equation}
\label{cond_L}
\|\mathrm{e}^{\omega L}\|_\infty \le 1, \quad \forall \, \omega > 0,
\end{equation}
which is identical to \eqref{intro_ode_L}.
For the nonlinear function $f$,
due to the assumption \eqref{assump_f}, i.e., the change of the sign of $f$ on both sides of zero,
the following condition holds:
\begin{equation}
\label{cond_f}
\text{$\exists\,\omega_0^+>0$ such that
$|\xi+\omega f(\xi)| \le \rho$, $\forall\,\xi \in [-\rho,\rho]$, $\forall\,\omega\in(0,\omega_0^+]$,}
\end{equation}
which is weaker than \eqref{intro_ode_N}.
Sometimes, we also need to further assume:
\begin{equation}
\label{cond_f2}
\text{$\exists\,\omega_0^->0$ such that
$|\xi-\omega f(\xi)| \le \rho$, $\forall\,\xi \in [-\rho,\rho]$, $\forall\,\omega\in(0,\omega_0^-]$.}
\end{equation}
\begin{remark}
The above two conditions \eqref{cond_f}--\eqref{cond_f2} are crucial for removing the nonnegativity requirement on the coefficients $\beta_{ij}$
mentioned for the standard SSP-RK schemes \eqref{intro_ode_rk}. This allows us to show that the three-stage, third-order and four-stage, fourth-order IFRK schemes are actually MBP-preserving for the model equation \eqref{model_eq} satisfying \eqref{assump_f}, which will be demonstrated in the following section.
\end{remark}
\section{Integrating factor Runge--Kutta method}
\label{sect_ifrk}
In this section,
we will present a family of IFRK schemes for time-stepping of the space-discrete system \eqref{model_eq}.
The method is based on the Runge--Kutta time discretizations
combined with the exponential integrating factor.
We will show the MBP-preserving property under the conditions \eqref{cond_L}--\eqref{cond_f2}
and the error estimates of the numerical solutions.
\subsection{MBP-preserving IFRK method in general form}
Recall that $L$ in \eqref{model_eq} is an $m$-by-$m$ matrix
corresponding to the spatial discretization of the linear operator $\mathcal{L}$.
Premultiplying the system \eqref{model_eq} by $\mathrm{e}^{-tL}$, we have
\[
\daoshu{(\mathrm{e}^{-tL}u)}{t} = \mathrm{e}^{-tL} f(u).
\]
Defining a transformation of variables by $w = \mathrm{e}^{-tL} u$,
we obtain a new ODE system
\begin{equation}
\label{model_eq_w}
\daoshu{w}{t} = \mathrm{e}^{-tL} f(\mathrm{e}^{tL}w) =: G(t,w).
\end{equation}
The general explicit $s$-stage Runge--Kutta method for \eqref{model_eq_w} is given by \cite{ShOs88}
\begin{subequations}
\label{wODE_RK_general}
\begin{align}
w^{(0)} & = w^n,\\
w^{(i)} & = w^{(0)}+{\tau}\sum_{j=0}^{i-1}d_{ij}G(t_n+c_j{\tau},w^{(j)}),\quad 1\le i\le s, \label{wODE_RK_general2} \\
w^{n+1} & = w^{(s)},
\end{align}
\end{subequations}
where $c_0=0$, $c_i=\sum\limits_{j=0}^{i-1}d_{ij}$ for $1\le i\le s$,
and $c_s=1$ for consistency.
For $\alpha_{ij}\ge0$ to be determined such that $\sum\limits_{j=0}^{i-1}\alpha_{ij}=1$,
we rewrite \eqref{wODE_RK_general2} as
\begin{equation}
\label{wODE_RK_euler}
w^{(i)} = \sum_{j=0}^{i-1}[\alpha_{ij}w^{(j)}+{\tau}\beta_{ij}G(t_n^{(j)},w^{(j)})],\quad 1\le i\le s,
\end{equation}
where $\beta_{ij}=d_{ij}-\sum\limits_{k=j+1}^{i-1}\alpha_{ik}d_{kj}$ and $t_n^{(j)}=t_n+c_j{\tau}$.
If we require that $\beta_{ij}=0$ whenever the corresponding $\alpha_{ij}=0$, then
\eqref{wODE_RK_euler} is a convex combination of a group of forward Euler substeps
$$ w^{(j)}+{\tau}\frac{\beta_{ij}}{\alpha_{ij}}G(t_n^{(j)},w^{(j)}).$$
Define $u^n=\mathrm{e}^{t_nL}w^n$ and $u^{(i)}=\mathrm{e}^{t_n^{(i)}L}w^{(i)}$, $0\le i\le s$,
then \eqref{wODE_RK_euler} becomes
\begin{equation*}
u^{(i)} = \sum_{j=0}^{i-1} \mathrm{e}^{(c_i-c_j){\tau} L} [\alpha_{ij} u^{(j)} + {\tau}\beta_{ij} f(u^{(j)})],
\quad 1\le i\le s,
\end{equation*}
which can be viewed as
a convex combination of the exponential forward Euler substeps
\[
\mathrm{e}^{(c_i-c_j){\tau} L} \Big[u^{(j)}+{\tau}\frac{\beta_{ij}}{\alpha_{ij}}f(u^{(j)})\Big].
\]
Now, we obtain the following $s$-stage IFRK method for \eqref{model_eq}:
\begin{subequations}
\label{model_ODE_IFRK}
\begin{align}
u^{(0)} & = u^n,\\
u^{(i)} & = \sum_{j=0}^{i-1} \mathrm{e}^{(c_i-c_j){\tau} L} [\alpha_{ij} u^{(j)} + {\tau}\beta_{ij} f(u^{(j)})],
\quad 1\le i\le s,\\
u^{n+1} & = u^{(s)}.
\end{align}
\end{subequations}
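For readers who prefer a computational view, the following minimal Python sketch evaluates one step of \eqref{model_ODE_IFRK} from given coefficient arrays. It is only an illustration under our own assumptions: the function name and the coefficient layout are hypothetical, and a dense matrix exponential is used purely for clarity (for circulant $L$, the action of $\mathrm{e}^{\omega L}$ would be applied via the fast Fourier transform instead).
\begin{verbatim}
import numpy as np
from scipy.linalg import expm  # dense matrix exponential (small m only)

def ifrk_step(u, tau, L, f, alpha, beta, c):
    # One step of the s-stage IFRK scheme (model_ODE_IFRK).
    # alpha[i][j], beta[i][j] for 0 <= j < i <= s are the scheme
    # coefficients; c are the abscissas with c[0] = 0 and c[s] = 1.
    s = len(c) - 1
    U = [u]                                   # U[j] stores u^{(j)}
    for i in range(1, s + 1):
        ui = np.zeros_like(u)
        for j in range(i):
            E = expm((c[i] - c[j]) * tau * L)
            ui += E @ (alpha[i][j] * U[j] + tau * beta[i][j] * f(U[j]))
        U.append(ui)
    return U[s]                               # u^{n+1}
\end{verbatim}
Each summand is exactly one exponential forward Euler substep scaled by $\alpha_{ij}$, which is the convex-combination structure exploited in the proof of the MBP preservation below.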
The main result on the MBP-preserving property of the method \eqref{model_ODE_IFRK} is as follows.
\begin{theorem}
\label{thm_MBP}
Given a linear operator $L$ satisfying \eqref{cond_L},
a function $f$ satisfying \eqref{cond_f} and \eqref{cond_f2},
and the abscissas $\{c_j\}$ satisfying
\begin{equation}
\label{cond_cc}
c_0 \le c_1 \le \cdots \le c_s,
\end{equation}
if $\|u^n\|_\infty\le\rho$,
then $u^{n+1}$ obtained from \eqref{model_ODE_IFRK} satisfies $\|u^{n+1}\|_\infty \le \rho$,
provided that the time step size satisfies
\begin{equation}
\label{cond timestep}
{\tau} \le \mathcal{C}\omega_0^+, \quad \text{with } \mathcal{C}=\min_{i,j}\frac{\alpha_{ij}}{\beta_{ij}}
\end{equation}
when $\beta_{ij}$ are all nonnegative,
or satisfies both \eqref{cond timestep} and
\begin{equation}
\label{cond timestep2}
{\tau} \le \mathcal{C}\omega_0^-, \quad \text{with } \mathcal{C}=\min_{i,j}\frac{\alpha_{ij}}{|\beta_{ij}|}
\end{equation}
whenever there is a negative $\beta_{ij}$.
\end{theorem}
\begin{proof}
For each stage of \eqref{model_ODE_IFRK}, suppose $\|u^{(j)}\|_\infty\le\rho$ for all $j\le i-1$.
Then, we have
\begin{align*}
\|u^{(i)}\|_\infty
& \le \sum_{j=0}^{i-1} \|\mathrm{e}^{(c_i-c_j){\tau} L} [\alpha_{ij} u^{(j)} + {\tau}\beta_{ij} f(u^{(j)})]\|_\infty\\
& \le \sum_{j=0}^{i-1}\alpha_{ij}\|\mathrm{e}^{(c_i-c_j){\tau} L}\|_\infty
\Big\|u^{(j)} + {\tau}\frac{\beta_{ij}}{\alpha_{ij}} f(u^{(j)})\Big\|_\infty\\
& \le \sum_{j=0}^{i-1}\alpha_{ij} \cdot \rho\\
& = \rho,
\end{align*}
since $c_i-c_j\ge0$,
and ${\tau} \max\limits_{i,j}\frac{\beta_{ij}}{\alpha_{ij}}\le\omega_0^+$
or ${\tau} \max\limits_{i,j}\frac{|\beta_{ij}|}{\alpha_{ij}}\le\min\{\omega_0^+,\omega_0^-\}$.
By induction, we obtain $\|u^{(i)}\|_\infty\le\rho$ for each $i$, and thus $\|u^{n+1}\|_\infty\le\rho$.
\end{proof}
\begin{remark}
Condition \eqref{cond_cc} implies the property of \emph{non-decreasing abscissas},
which is crucial for the preservation of the MBP for the IFRK method \eqref{model_ODE_IFRK}.
\end{remark}
\subsection{Error estimate}
To carry out convergence analysis for the IFRK method \eqref{model_ODE_IFRK},
we transform the variable $w$ in \eqref{wODE_RK_general} back to $u$ to get
\begin{subequations}
\label{IFRK_Butcher}
\begin{align}
u^{(0)} & = u^n,\\
u^{(i)} & = \mathrm{e}^{c_i{\tau} L} u^n + {\tau} \sum_{j=0}^{i-1} d_{ij} \mathrm{e}^{(c_i-c_j){\tau} L} f(u^{(j)}),
\quad 1\le i\le s-1, \label{IFRK_Butcher2} \\
u^{n+1} & = \mathrm{e}^{{\tau} L} u^n + {\tau} \sum_{i=0}^{s-1} d_{si} \mathrm{e}^{(1-c_i){\tau} L} f(u^{(i)}). \label{IFRK_Butcher3}
\end{align}
\end{subequations}
For simplicity, we do not introduce the general order conditions for arbitrary-order RK methods (see, e.g., \cite{HairerNoWa93}).
Instead, we suppose directly that the RK method \eqref{wODE_RK_general} is $p$-th order, where $1\le p\le s$.
Then, we have the following error estimate for \eqref{IFRK_Butcher}.
\begin{theorem}
Given $T>0$,
suppose that the exact solution, denoted by $u_e(t)$, of \eqref{model_eq} is sufficiently smooth on $[0,T]$
and $f$ is $p$-times continuously differentiable on $[-\rho,\rho]$.
Under the conditions of Theorem \ref{thm_MBP},
if the time step size ${\tau}$ satisfies \eqref{cond timestep} and \eqref{cond timestep2}, then
the numerical solution $u^n$ generated by the IFRK method \eqref{IFRK_Butcher}
with $u^0=u_e(0)$ and $\|u_e(0)\|_\infty\le\rho$
satisfies the error estimate:
\[
\|u_e(t_n)-u^n\|_\infty \le C(\mathrm{e}^{F_1st_n}-1){\tau}^p, \quad t_n\le T,
\]
where $F_1=\max\limits_{|\xi|\le\rho}|f'(\xi)|$ and
the constant $C>0$
is independent of ${\tau}$.
\end{theorem}
\begin{proof}
Following \cite{DuJuLu19,Ying00},
let us introduce the reference functions $v^{(i)}$ satisfying
\begin{subequations}
\begin{align}
v^{(0)} & = u_e(t_n), \label{IFRK_Butcher_v1} \\
v^{(i)} & = \mathrm{e}^{c_i{\tau} L} u_e(t_n) + {\tau} \sum_{j=0}^{i-1} d_{ij} \mathrm{e}^{(c_i-c_j){\tau} L} f(v^{(j)}),
\quad 1\le i\le s-1, \label{IFRK_Butcher_v2} \\
u_e(t_{n+1}) & = \mathrm{e}^{{\tau} L} u_e(t_n) + {\tau} \sum_{i=0}^{s-1} d_{si} \mathrm{e}^{(1-c_i){\tau} L} f(v^{(i)}) + {\tau} R^n, \label{IFRK_Butcher_v3}
\end{align}
\end{subequations}
where the truncation error $R^n$ satisfies
\begin{equation}
\label{thm_error_pf0}
\max_{0\le n\le [T/{\tau}]} \|R^n\|_\infty \le \widetilde{C} {\tau}^p,
\end{equation}
where the constant $\widetilde{C}>0$ depends on the $C^{p+1}[0,T]$-norm of $u_e$,
the $C^p[-\rho,\rho]$-norm of $f$, $\|L\|_\infty$, $T$, $p$, and $s$,
but is independent of ${\tau}$.
Note that $\|u_e(t)\|_\infty\le\rho$ for any $t\in[0,T]$ due to the MBP of \eqref{model_eq}.
According to \eqref{IFRK_Butcher_v1} and \eqref{IFRK_Butcher_v2},
we know from the proof of Theorem \ref{thm_MBP} that $\|v^{(i)}\|_\infty\le\rho$ for each $i=1,2,\dots,s-1$.
Let $e^n=u_e(t_n)-u^n$ and $e^{(i)}=v^{(i)}-u^{(i)}$ for $i=0,1,\dots,s-1$.
For each $i=1,2,\dots,s-1$, the difference between \eqref{IFRK_Butcher_v2} and \eqref{IFRK_Butcher2} gives
\[
e^{(i)} = \mathrm{e}^{c_i{\tau} L} e^n + {\tau} \sum_{j=0}^{i-1} d_{ij} \mathrm{e}^{(c_i-c_j){\tau} L} [f(v^{(j)}) - f(u^{(j)})].
\]
Using \eqref{cond_L} and noting $d_{ij}\le c_i\le 1$, we obtain
\[
\|e^{(i)}\|_\infty
\le \|e^n\|_\infty + {\tau} \sum_{j=0}^{i-1} \|f(v^{(j)}) - f(u^{(j)})\|_\infty
\le \|e^n\|_\infty + F_1 {\tau} \sum_{j=0}^{i-1} \|e^{(j)}\|_\infty.
\]
By induction, assuming $\|e^{(j)}\|_\infty\le(1+F_1{\tau})^j\|e^n\|_\infty$ for $j=0,1,\dots,i-1$,
we obtain
\begin{equation}
\label{thm_error_pf1}
\|e^{(i)}\|_\infty
\le \|e^n\|_\infty + F_1 {\tau} \sum_{j=0}^{i-1} (1+F_1{\tau})^j \|e^n\|_\infty
= (1+F_1{\tau})^i \|e^n\|_\infty.
\end{equation}
Therefore, the inequality \eqref{thm_error_pf1} holds for any $i=0,1,2,\dots,s-1$.
Similarly, the difference between \eqref{IFRK_Butcher_v3} and \eqref{IFRK_Butcher3} leads to
\[
e^{n+1} = \mathrm{e}^{{\tau} L} e^n + {\tau} \sum_{i=0}^{s-1} d_{si} \mathrm{e}^{(1-c_i){\tau} L} [f(v^{(i)}) - f(u^{(i)})] + {\tau} R^n,
\]
and then, using \eqref{thm_error_pf1} and \eqref{thm_error_pf0}, it yields
\begin{align*}
\|e^{n+1}\|_\infty
& \le \|e^n\|_\infty + F_1 {\tau} \sum_{i=0}^{s-1} \|e^{(i)}\|_\infty + {\tau} \|R^n\|_\infty \\
& \le \|e^n\|_\infty + F_1 {\tau} \sum_{i=0}^{s-1} (1+F_1{\tau})^i \|e^n\|_\infty + \widetilde{C} {\tau}^{p+1} \\
& = (1+F_1{\tau})^s \|e^n\|_\infty + \widetilde{C} {\tau}^{p+1}.
\end{align*}
By recursion, we obtain
\[
\|e^n\|_\infty \le (1+F_1{\tau})^{ns} \|e^0\|_\infty + \widetilde{C} {\tau}^{p+1} \sum_{k=0}^{n-1} (1+F_1{\tau})^{ks}.
\]
Noting that $e^0=0$ and denoting $C=\widetilde{C}(F_1s)^{-1}$, we have
\[
\|e^n\|_\infty \le C {\tau}^{p} [(1+F_1{\tau})^{ns}-1] \le C (\mathrm{e}^{F_1st_n}-1) {\tau}^{p},
\]
which completes the proof.
\end{proof}
\subsection{Various MBP-preserving IFRK schemes}
\label{sect_IFRKschemes}
We have shown the MBP-preserving property for the IFRK method in the general form.
Now, we present some concrete and practical MBP-preserving IFRK schemes
under the general results established above.
\begin{scheme}[IF1]
The first-order integrating factor (IF1) scheme
for solving \eqref{model_eq} reads \cite{IsGrGo18}
\begin{equation}
\label{IFFE}
u^{n+1} = \mathrm{e}^{{\tau} L} [u^n + {\tau} f(u^n)].
\end{equation}
Here, $\beta_{ij}>0$ and $\mathcal{C}=1$.
Thus, the condition \eqref{cond timestep} becomes
\begin{equation}
\label{cond_timestep_IF1}
{\tau} \le \omega_0^+.
\end{equation}
\end{scheme}
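In code, one IF1 step is a single exponential-times-vector application. The following one-line Python sketch assumes a helper \texttt{expL} that applies $\mathrm{e}^{{\tau} L}$ to a vector (e.g., the FFT-based applier sketched later for circulant $L$); the names are illustrative only.
\begin{verbatim}
def if1_step(u, tau, expL, f):
    # One IF1 step (IFFE): u^{n+1} = e^{tau L} (u^n + tau f(u^n)).
    # expL is an assumed helper applying e^{tau L} to a vector.
    return expL(u + tau * f(u))
\end{verbatim}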
\begin{scheme}[IFRK2]
A second-order integrating factor Runge--Kutta (IFRK2) scheme
for solving \eqref{model_eq} reads \cite{IsGrGo18}
\begin{subequations}
\label{IFRK2}
\begin{align}
u^{(1)} & = \mathrm{e}^{{\tau} L} [u^n + {\tau} f(u^n)] \nonumber \\
& = \mathrm{e}^{{\tau} L} u^n + {\tau} \mathrm{e}^{{\tau} L} f(u^n), \\
u^{n+1} & = \frac{1}{2} \mathrm{e}^{{\tau} L} u^n + \frac{1}{2} [u^{(1)} + {\tau} f(u^{(1)})] \nonumber \\
& = \mathrm{e}^{{\tau} L} u^n + {\tau}\Big(\frac{1}{2}\mathrm{e}^{{\tau} L} f(u^n) + \frac{1}{2} f(u^{(1)})\Big).
\end{align}
\end{subequations}
Here, $\beta_{ij}\ge0$ and $\mathcal{C}=1$.
Thus, the condition \eqref{cond timestep} is the same as \eqref{cond_timestep_IF1}.
\end{scheme}
\begin{scheme}[IFRK3]
A third-order integrating factor Runge--Kutta (IFRK3) scheme
for solving \eqref{model_eq} reads \cite{IsGrGo18}
\begin{subequations}
\label{IFRK3}
\begin{align}
u^{(1)} & = \frac{1}{2} \mathrm{e}^{\frac{2{\tau}}{3} L}u^n
+ \frac{1}{2} \mathrm{e}^{\frac{2{\tau}}{3} L} \Big[u^n + \frac{4{\tau}}{3}f(u^n)\Big] \nonumber \\
& = \mathrm{e}^{\frac{2{\tau}}{3}L} u^n + \frac{2{\tau}}{3} \mathrm{e}^{\frac{2{\tau}}{3}L} f(u^n),\\
u^{(2)} & = \frac{2}{3} \mathrm{e}^{\frac{2{\tau}}{3}L}u^n
+ \frac{1}{3} \Big[u^{(1)} + \frac{4{\tau}}{3}f(u^{(1)})\Big] \nonumber \\
& = \mathrm{e}^{\frac{2{\tau}}{3}L}u^n
+ \frac{2{\tau}}{3} \Big(\frac{1}{3}\mathrm{e}^{\frac{2{\tau}}{3}L}f(u^n)+\frac{2}{3}f(u^{(1)})\Big), \\
u^{n+1} & = \frac{59}{128}\mathrm{e}^{{\tau} L}u^n + \frac{15}{128}\mathrm{e}^{{\tau} L} \Big[u^n + \frac{4{\tau}}{3}f(u^n)\Big]
+ \frac{27}{64}\mathrm{e}^{\frac{{\tau}}{3}L} \Big[u^{(2)} + \frac{4{\tau}}{3}f(u^{(2)})\Big] \nonumber \\
& = \mathrm{e}^{{\tau} L}u^n
+ {\tau}\Big(\frac{4}{16}\mathrm{e}^{{\tau} L}f(u^n) + \frac{3}{16}\mathrm{e}^{\frac{{\tau}}{3}L}f(u^{(1)}) + \frac{9}{16}\mathrm{e}^{\frac{{\tau}}{3}L}f(u^{(2)})\Big).
\end{align}
\end{subequations}
Here, $\beta_{ij}\ge0$ and $\mathcal{C}=\dfrac{3}{4}$.
Thus, the condition \eqref{cond timestep} leads to
\begin{equation}
\label{cond_timestep_IFRK3}
{\tau} \le \frac{3\omega_0^+}{4}.
\end{equation}
\end{scheme}
\begin{remark}
\label{rmk_ShuOsher}
Applying the third-order Shu--Osher method \cite{ShOs88} to \eqref{model_eq_w} gives
\begin{subequations}
\label{IFRK3n}
\begin{align}
u^{(1)} & = \mathrm{e}^{{\tau} L} [u^n + {\tau} f(u^n)] \nonumber \\
& = \mathrm{e}^{{\tau} L} u^n + {\tau} \mathrm{e}^{{\tau} L} f(u^n), \\
u^{(2)} & = \frac{3}{4} \mathrm{e}^{\frac{{\tau}}{2}L}u^n + \frac{1}{4} \mathrm{e}^{-\frac{{\tau}}{2}L} [u^{(1)} + {\tau} f(u^{(1)})] \nonumber \\
& = \mathrm{e}^{\frac{{\tau}}{2}L}u^n
+ \frac{{\tau}}{2} \Big(\frac{1}{2}\mathrm{e}^{\frac{{\tau}}{2}L} f(u^n) + \frac{1}{2}\mathrm{e}^{-\frac{{\tau}}{2}L} f(u^{(1)})\Big), \\
u^{n+1} & = \frac{1}{3}\mathrm{e}^{{\tau} L}u^n + \frac{2}{3}\mathrm{e}^{\frac{{\tau}}{2}L}[u^{(2)}+{\tau} f(u^{(2)})] \nonumber \\
& = \mathrm{e}^{{\tau} L}u^n
+ {\tau} \Big(\frac{1}{6}\mathrm{e}^{{\tau} L}f(u^n) + \frac{2}{3}\mathrm{e}^{\frac{{\tau}}{2}L}f(u^{(2)}) + \frac{1}{6}f(u^{(1)})\Big),
\end{align}
\end{subequations}
which gives $\beta_{ij}\ge0$ and $\mathcal{C}=1$.
However, there is a matrix exponential with a negative exponent, $\mathrm{e}^{-\frac{\tau}{2}L}$, caused by the decreasing abscissas ($c_1=1>c_2=\frac{1}{2}$),
so this scheme may not preserve the MBP.
We will show in Section \ref{sect_numexp} that
the scheme \eqref{IFRK3n} does not preserve the MBP
even though a small time step size is used.
This suggests the necessity of the property of non-decreasing abscissas.
\end{remark}
Apart from the three-stage IFRK3 scheme \eqref{IFRK3},
one can give more MBP-preserving third-order IFRK schemes
by combining the exponential integrating factor approach with the RK method with non-decreasing abscissas,
e.g., eSSPRK$^+$(4,3) in \cite{IsGrGo18}.
\begin{scheme}[IFRK4]
Applying the classic fourth-order Runge--Kutta method to \eqref{model_eq_w} gives
the fourth-order integrating factor Runge--Kutta (IFRK4) scheme:
\begin{subequations}
\label{IFRK4}
\begin{align}
u^{(1)} & = \mathrm{e}^{\frac{{\tau}}{2}L} \Big[u^n + \frac{{\tau}}{2} f(u^n)\Big] \nonumber \\
& = \mathrm{e}^{\frac{{\tau}}{2}L} u^n + \frac{{\tau}}{2} \mathrm{e}^{\frac{{\tau}}{2}L} f(u^n), \\
u^{(2)} & = \frac{1}{2} \mathrm{e}^{\frac{{\tau}}{2}L} \Big[u^n - \frac{{\tau}}{2} f(u^n)\Big]
+ \frac{1}{2} [u^{(1)} + {\tau} f(u^{(1)})] \nonumber \\
& = \mathrm{e}^{\frac{{\tau}}{2}L} u^n + \frac{{\tau}}{2} f(u^{(1)}), \\
u^{(3)} & = \frac{1}{9} \mathrm{e}^{{\tau} L} [u^n - {\tau} f(u^n)]
+ \frac{2}{9} \mathrm{e}^{\frac{{\tau}}{2}L} \Big[u^{(1)} - \frac{3{\tau}}{2} f(u^{(1)})\Big]
+ \frac{2}{3} \mathrm{e}^{\frac{{\tau}}{2}L} \Big[u^{(2)} + \frac{3{\tau}}{2} f(u^{(2)})\Big] \nonumber \\
& = \mathrm{e}^{{\tau} L} u^n + {\tau} \mathrm{e}^{\frac{{\tau}}{2}L} f(u^{(2)}), \\
u^{n+1} & = \frac{1}{3} \mathrm{e}^{\frac{{\tau}}{2}L} \Big[u^{(1)} + \frac{{\tau}}{2} f(u^{(1)})\Big]
+ \frac{1}{3} \mathrm{e}^{\frac{{\tau}}{2}L} u^{(2)}
+ \frac{1}{3} \Big[u^{(3)} + \frac{{\tau}}{2} f(u^{(3)})\Big] \nonumber \\
& = \mathrm{e}^{{\tau} L} u^n + {\tau} \Big(\frac{1}{6}\mathrm{e}^{{\tau} L}f(u^n) + \frac{1}{3}\mathrm{e}^{\frac{{\tau}}{2}L}f(u^{(1)})
+ \frac{1}{3}\mathrm{e}^{\frac{{\tau}}{2}L}f(u^{(2)}) + \frac{1}{6}f(u^{(3)})\Big).
\end{align}
\end{subequations}
Here, $\mathcal{C}=\dfrac{2}{3}$ and $\beta_{ij}$ are not all nonnegative.
Thus, the conditions \eqref{cond timestep} and \eqref{cond timestep2} yield
\begin{equation}
\label{cond_timestep_IFRK4}
{\tau} \le \frac{2\omega_0^*}{3},
\end{equation}
where $\omega_0^*=\min\{\omega_0^+,\omega_0^-\}>0$.
\end{scheme}
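As a concrete illustration, a minimal Python sketch of one IFRK4 step in the compact form of \eqref{IFRK4} is given below. Precomputing the two matrix exponentials $\mathrm{e}^{\frac{\tau}{2}L}$ and $\mathrm{e}^{\tau L}$ is our own implementation choice rather than part of the scheme, and a dense \texttt{expm} is used only for readability.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def ifrk4_step(u, tau, L, f):
    # One step of the compact IFRK4 scheme (IFRK4).
    E2 = expm(0.5 * tau * L)   # e^{tau L / 2}
    E1 = E2 @ E2               # e^{tau L}
    u1 = E2 @ (u + 0.5 * tau * f(u))
    u2 = E2 @ u + 0.5 * tau * f(u1)
    u3 = E1 @ u + tau * (E2 @ f(u2))
    return E1 @ u + tau * (E1 @ f(u) / 6.0 + E2 @ f(u1) / 3.0
                           + E2 @ f(u2) / 3.0 + f(u3) / 6.0)
\end{verbatim}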
As illustrated in \cite{GoSh98},
the constraint of the nonnegativity of $\beta_{ij}$
leads to the nonexistence of four-stage, fourth-order SSP-RK schemes, and hence of the corresponding IFRK schemes.
In our work, however, the
conditions \eqref{cond_f} and \eqref{cond_f2} relax the requirement of nonnegative $\beta_{ij}$.
Therefore, the IFRK4 scheme \eqref{IFRK4} is able to preserve the MBP
without any extra computations for modifying $f$ as done in \cite{ShOs88}.
More MBP-preserving fourth-order IFRK schemes with larger numbers of stages
could be obtained by using, for example, eSSPRK$^+$(5,4) and eSSPRK$^+$(10,4) presented in \cite{IsGrGo18}.
Since $\omega_0^+$ and $\omega_0^-$ are completely determined by $f$ via \eqref{cond_f} and \eqref{cond_f2},
from the inequalities \eqref{cond_timestep_IF1},
\eqref{cond_timestep_IFRK3}, and \eqref{cond_timestep_IFRK4},
we find that the constraints on the time step sizes depend only on $f$ but not on the size of $L$.
This means that the choice of the time step sizes is independent of the spatial mesh size.
Moreover, we point out that
these constraints are all sufficient but not necessary conditions for the MBP-preserving property.
\subsection{Examples of the matrix $L$ and the function $f$}
A large number of examples of linear and nonlinear operators in \eqref{model_pde}
have been shown to satisfy the assumptions made in \cite{DuJuLiQi20review},
including the space-continuous and space-discrete cases,
and thus, we can check the matrix $L$ in \eqref{model_eq} in the same way.
Here, we present a more direct criterion for $L$ from the point of view of matrices.
\begin{lemma}[\cite{Dahlquist58,Soderlind06}]
\label{lem_expnorm}
For any matrix $A=(a_{ij})\in\mathbb{R}^{m\times m}$ and any constant $s\ge 0$, we have
\[
\|\mathrm{e}^{sA}\|_\infty \le \mathrm{e}^{s\mu_\infty(A)},
\]
where $\mu_\infty(A)$ is the logarithmic norm of $A$ with respect to the $\infty$-norm, i.e.,
\[
\mu_\infty(A) = \max_{1\le i\le m} \bigg(a_{ii} + \sum_{\substack{j=1\\ j\not=i}}^m|a_{ij}|\bigg).
\]
\end{lemma}
\begin{remark}
If $A$ is diagonally dominant with all diagonal entries negative,
then we have $\|\mathrm{e}^{sA}\|_\infty \le 1$ for any $s\ge0$ since $\mu_\infty(A)\le0$.
See the following example.
\end{remark}
\begin{example}
\rm If $L$ is given by the second-order central difference discretization of $\Delta$, i.e.,
\[
L=\frac{1}{h^2}
\begin{pmatrix}
-2 & 1 & ~ & ~ & c\\
1 & -2 & 1 \\
~ & \ddots & \ddots & \ddots \\
~ & ~ & 1 & -2 & 1 \\
c & ~ & ~ & 1 & -2
\end{pmatrix}
\quad\text{with $c=0$ or $c=1$},
\]
then $\mu_\infty(L)=0$ and $\mu_\infty(-L)=4/h^2$.
According to Lemma \ref{lem_expnorm}, we have
\[
\|\mathrm{e}^{{\tau} L}\|_\infty \le 1, \qquad \|\mathrm{e}^{-{\tau} L}\|_\infty \le \mathrm{e}^{\frac{4{\tau}}{h^2}}.
\]
Denoting by $I$ the identity matrix with the same size as $L$ and letting
\begin{equation}
\label{eg_L2}
L^{(2)} = I \otimes L + L \otimes I,
\end{equation}
we have $\mu_\infty(L^{(2)})=0$ and $\mu_\infty(-L^{(2)})=8/h^2$, and thus,
\begin{equation}
\label{eg_Lexp}
\|\mathrm{e}^{{\tau} L^{(2)}}\|_\infty \le 1, \qquad \|\mathrm{e}^{-{\tau} L^{(2)}}\|_\infty \le \mathrm{e}^{\frac{8{\tau}}{h^2}}.
\end{equation}
The three-dimensional case $L^{(3)} = I \otimes I \otimes L + I \otimes L \otimes I + L \otimes I \otimes I$ is quite similar.
\end{example}
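The following Python sketch, given only as an illustration, assembles the periodic one-dimensional matrix $L$ (the case $c=1$), forms $L^{(2)}$ via \eqref{eg_L2}, and checks the logarithmic-norm values and the first bound of \eqref{eg_Lexp} numerically; the helper name \texttt{mu\_inf} is our own.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def mu_inf(A):
    # Logarithmic norm w.r.t. the infinity norm (Lemma lem_expnorm):
    # max over rows of a_ii + sum_{j != i} |a_ij|.
    B = np.abs(A).astype(float)
    np.fill_diagonal(B, np.diag(A))   # keep the diagonal signed
    return B.sum(axis=1).max()

m = 8; h = 1.0 / m
L = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / h**2
L[0, -1] = L[-1, 0] = 1.0 / h**2      # periodic wrap-around (c = 1)
L2 = np.kron(np.eye(m), L) + np.kron(L, np.eye(m))   # eq. (eg_L2)

print(mu_inf(L2), mu_inf(-L2))        # 0 and 8 / h^2
print(np.linalg.norm(expm(0.1 * L2), np.inf))  # <= 1, as in (eg_Lexp)
\end{verbatim}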
To guarantee \eqref{cond_f} and \eqref{cond_f2} for the nonlinear function $f$,
we actually have the following result.
The proof is straightforward, so we omit it.
\begin{proposition}
If there exists $\rho>0$ such that $f(\pm\rho)=0$
and $f$ is continuously differentiable and nonconstant on $[-\rho,\rho]$,
then \eqref{cond_f} and \eqref{cond_f2} hold respectively for
\[
\omega_0^+ = - \frac{1}{\min\limits_{|\xi|\le\rho}f'(\xi)} \quad \text{and} \quad
\omega_0^- = \frac{1}{\max\limits_{|\xi|\le\rho}f'(\xi)}.
\]
\end{proposition}
\begin{example}
\rm The Allen--Cahn equation has the nonlinear term $f(u)=u-u^3$,
which satisfies \eqref{cond_f} and \eqref{cond_f2} with $\rho=1,\omega_0^+=\frac{1}{2}$, and $\omega_0^-=1$.
\end{example}
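These values can be double-checked numerically via the proposition above; a minimal sketch follows, in which the sampling grid is our own assumption.
\begin{verbatim}
import numpy as np

f  = lambda u: u - u**3           # Allen--Cahn nonlinearity
fp = lambda u: 1.0 - 3.0 * u**2   # f'
xi = np.linspace(-1.0, 1.0, 100001)
omega_plus  = -1.0 / fp(xi).min()   # = 1/2, since min f' = f'(+-1) = -2
omega_minus =  1.0 / fp(xi).max()   # = 1,   since max f' = f'(0)  =  1
print(omega_plus, omega_minus)      # 0.5 1.0
\end{verbatim}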
\begin{example}
\rm Consider the Flory--Huggins potential function
\begin{equation}
\label{F_FloryHuggins}
F(u) = \frac{\theta}{2} [(1+u)\ln(1+u) + (1-u)\ln(1-u)] - \frac{\theta_c}{2}u^2,
\end{equation}
where $\theta$ and $\theta_c$ are two positive constants satisfying $\theta<\theta_c$.
We set $f(u)=-F'(u)$, namely,
\begin{equation}
\label{f_log}
f(u) = \frac{\theta}{2}\ln\frac{1-u}{1+u} + \theta_cu.
\end{equation}
Denote by $\gamma$ the positive root of $f(\gamma)=0$.
Noting that
\[
\max_{|\xi|\le\gamma}f'(\xi) = \theta_c - \theta > 0, \qquad
\min_{|\xi|\le\gamma}f'(\xi) = \theta_c - \frac{\theta}{1-\gamma^2} < 0,
\]
we know $f$ satisfies \eqref{cond_f} and \eqref{cond_f2} with $\rho=\gamma$,
$\omega_0^+=\frac{1-\gamma^2}{\theta-\theta_c(1-\gamma^2)}$, and $\omega_0^-=\frac{1}{\theta_c-\theta}$.
\end{example}
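For the parameter values used later in Section~\ref{sect_numexp}, these quantities can be evaluated by a short script such as the sketch below; the bracketing interval for the root is our own assumption.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

theta, theta_c = 0.8, 1.6
f  = lambda u: 0.5 * theta * np.log((1.0 - u) / (1.0 + u)) + theta_c * u
fp = lambda u: -theta / (1.0 - u**2) + theta_c

gamma = brentq(f, 0.5, 1.0 - 1e-12)    # positive root of f
omega_plus  = -1.0 / fp(gamma)         # f' attains its minimum at +-gamma
omega_minus =  1.0 / fp(0.0)           # f' attains its maximum at 0
print(gamma, omega_plus, omega_minus)  # gamma is about 0.9575
\end{verbatim}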
\section{Numerical experiments}
\label{sect_numexp}
Let us consider the two-dimensional reaction-diffusion equation
\begin{equation}
\label{eg_AllenCahn}
u_t = \varepsilon^2\Delta u + f(u), \quad (x,y)\in\Omega=(0,1)^2, \ t\in(0,T],
\end{equation}
subject to the periodic boundary condition,
where $f(u)$ takes the form \eqref{f_log} with $\theta=0.8$ and $\theta_c=1.6$.
The positive root $\gamma$ of $f(\gamma)=0$ is approximately $0.9575$.
The energy functional corresponding to \eqref{eg_AllenCahn} is given by
\begin{equation}
\label{eg_energy}
E(u) = \int_{(0,1)^2} \Big( \frac{\varepsilon^2}{2}|\nabla u(\bm{x})|^2 + F(u(\bm{x})) \Big) \, \d \bm{x},
\end{equation}
where $F(u)$ is the Flory--Huggins potential \eqref{F_FloryHuggins}.
In all experiments,
we always adopt the five-point central difference matrix \eqref{eg_L2} to approximate the Laplace operator
on a uniform spatial mesh with the size $h$ specified later.
Since the approximating matrix is circulant,
the product of the matrix exponential and a vector is calculated via the fast Fourier transform.
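Because the periodic five-point matrix $L^{(2)}$ is diagonalized by the two-dimensional discrete Fourier transform, the action of $\mathrm{e}^{{\tau} L^{(2)}}$ on a grid function reduces to a pair of FFTs and a pointwise exponential. A minimal sketch follows; the function name and array layout are our own assumptions.
\begin{verbatim}
import numpy as np

def apply_expL2(v, tau, h):
    # e^{tau L^(2)} v for the periodic five-point Laplacian on an
    # m-by-m grid; the 1-D eigenvalues are (2 cos(2 pi k / m) - 2) / h^2.
    m = v.shape[0]
    lam = (2.0 * np.cos(2.0 * np.pi * np.arange(m) / m) - 2.0) / h**2
    lam2 = lam[:, None] + lam[None, :]    # eigenvalues of L^(2)
    return np.real(np.fft.ifft2(np.exp(tau * lam2) * np.fft.fft2(v)))
\end{verbatim}
This costs $O(m^2\log m)$ per application, compared with the prohibitive cost of a dense matrix exponential of size $m^2\times m^2$.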
\subsection{Convergence tests}
First, we test the convergence rates of the IFRK schemes \eqref{IFFE}, \eqref{IFRK2}, \eqref{IFRK3}, and \eqref{IFRK4}.
The initial data is set to be
\[
u_0(x,y) = 0.1 (\sin 3\pi x \sin 2\pi y + \sin 5\pi x \sin 5\pi y).
\]
We use the spatial mesh size $h=1/2048$.
For the cases $\varepsilon=0.1$ and $\varepsilon=0.01$,
we calculate the numerical solutions at $T=2$ with the time step sizes ${\tau}=2^{-k}$, $k=1,2,\dots,12$
and regard the solution obtained by IFRK4 with ${\tau}=0.1\times 2^{-12}$ as the benchmark
to compare the supremum-norm errors.
\figurename~\ref{fig_convergence} shows the results of the convergence tests
and the expected convergence rates are clearly observed.
\begin{figure}[h]
\centering
\subfigure[IF1 scheme]{\includegraphics[width=0.5\textwidth]{conv_1st.eps}}
\subfigure[IFRK2 scheme]{\includegraphics[width=0.5\textwidth]{conv_2nd.eps}}\\
\subfigure[IFRK3 scheme]{\includegraphics[width=0.5\textwidth]{conv_3rd.eps}}
\subfigure[IFRK4 scheme]{\includegraphics[width=0.5\textwidth]{conv_4th.eps}}
\caption{Convergence rates of the IFRK schemes.}
\label{fig_convergence}
\end{figure}
\subsection{Tests for MBP preservation}
Then, we simulate the process of the coarsening dynamics
by setting $\varepsilon=0.1$ and the spatial mesh size $h=1/512$,
where the initial data is given by random values between $-0.8$ and $0.8$ at each mesh point.
We first use the IFRK schemes \eqref{IFFE}, \eqref{IFRK2}, \eqref{IFRK3}, and \eqref{IFRK4}
with the uniform time step size ${\tau}=0.08$,
close to the upper bound of the time step sizes determined by \eqref{cond_timestep_IFRK4} for the IFRK4 scheme.
\figurename~\ref{fig_coarsen11} plots the evolutions of the supremum norm of the solution of the four schemes.
The red dashed horizontal line shows the theoretical upper bound $\gamma$ of the numerical solutions
and the black solid curve gives a benchmark obtained by using the IFRK4 scheme with ${\tau}=0.001$.
It can be observed that
the supremum norms of the numerical solutions are always bounded by the theoretical value;
more precisely, the vertical coordinates of every curve never exceed $\gamma$,
which suggests the preservation of the MBP.
In addition, the curves corresponding to the IF1 and IFRK2 schemes
produce obvious deviations from the benchmark due to the low accuracy,
while there is little difference between the curves for the IFRK3 and IFRK4 schemes and that for the benchmark.
This shows the convergence of the four IFRK schemes and the benefit of high-order accurate schemes.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{coarsen1_supremum12.eps}
\includegraphics[width=0.5\textwidth]{coarsen1_supremum34.eps}
\caption{Evolutions of the supremum norms of the solutions of IFRK schemes.}
\label{fig_coarsen11}
\end{figure}
As we mentioned in Remark \ref{rmk_ShuOsher},
the third-order IF Shu--Osher scheme \eqref{IFRK3n} may not preserve the maximum bound principle
due to its decreasing abscissas, which introduce a matrix exponential with a negative exponent.
Here, we simulate the coarsening dynamics described above
by using the scheme \eqref{IFRK3n} with a small time step size ${\tau}=0.005$
to explore the behavior of the numerical solution.
The left graph in \figurename~\ref{fig_coarsen12} shows the evolution of the supremum norm till $t=5$.
We can see that the supremum norm of the numerical solution (the solid line)
exceeds the theoretical upper bound (the dashed line) around $t=4.8$
and even grows larger than $1$ after $t=4.9$.
Note that there is a logarithmic term in the nonlinear part,
which yields complex values if $u$ goes outside the interval $(-1,1)$.
The right graph in \figurename~\ref{fig_coarsen12} shows that
the energy \eqref{eg_energy} decreases in time
until a spurious sharp corner arises around $t=4.9$.
We see that the simulation gives a completely wrong result
even though a small time step size is adopted for the scheme \eqref{IFRK3n},
which again suggests the necessity of the property of non-decreasing abscissas.
Actually, we also repeat the experiment by using a smaller time step size of $0.004$
and obtain a correct result similar to that shown in \figurename~\ref{fig_coarsen11}.
Thus, we conjecture that the scheme \eqref{IFRK3n} with a time step size ${\tau}\le0.004$
could give correct numerical solutions.
Moreover, according to the second inequality of \eqref{eg_Lexp},
we further conjecture that the scheme \eqref{IFRK3n} may preserve the MBP
when ${\tau}\le Ch^2$ for some constant $C$,
though we do not have a theoretical proof.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{coarsen1_supremum3n.eps}
\includegraphics[width=0.5\textwidth]{coarsen1_energy3n.eps}
\caption{Evolutions of the supremum norm (left) and the energy (right)
of the solution of the third-order IF Shu--Osher scheme \eqref{IFRK3n}.}
\label{fig_coarsen12}
\end{figure}
\subsection{Efficiency comparison for long-time simulations}
One may conclude from \figurename~\ref{fig_convergence}-(b) and (d) that
the numerical error of the IFRK2 scheme with the time step size $0.001$
has the same magnitude as the error of the IFRK4 scheme with a time step size of about $0.08$.
Thus, we repeat the simulation of the above coarsening dynamics with $\varepsilon=0.01$
by adopting the IFRK2 scheme \eqref{IFRK2} with $\tau=0.001$ and the IFRK4 scheme \eqref{IFRK4} with $\tau=0.08$
to compare the efficiencies of these two schemes at the same accuracy level.
The terminal time of the simulation is set to be $T=610$.
The computations are carried out in MATLAB
on a laptop with a four-core 2.70~GHz Intel processor and 8~GB memory.
The CPU time for the computation by the IFRK2 scheme is about $352.28$~minutes,
while that for the IFRK4 scheme is around $13.96$~minutes,
approximately $3.96\%$ of the former.
This demonstrates the higher efficiency of the IFRK4 scheme in comparison with the IFRK2 scheme.
The numerical results of the IFRK4 scheme are shown in the following pictures (the results by the IFRK2 scheme are almost identical).
\figurename~\ref{fig_coarsen21} shows the configurations of the solution at $t=4,6,10,30,100$, and $300$.
The simulated dynamics begins with a random state
and evolves toward the homogeneous steady state $u\equiv-\gamma$,
which is reached after about $t=600$ in our simulation.
The evolutions of the supremum norm and the energy are plotted in \figurename~\ref{fig_coarsen22}.
We observe that the energy decreases monotonically as expected
and the MBP is perfectly preserved
so that the solution is always located in the interval $[-\gamma,\gamma]$.
\begin{figure}[h]
\centerline{
\hspace{-0.4cm}
\includegraphics[width=0.37\textwidth]{coarsen2_t4.eps}\hspace{-0.5cm}
\includegraphics[width=0.37\textwidth]{coarsen2_t6.eps}\hspace{-0.5cm}
\includegraphics[width=0.37\textwidth]{coarsen2_t10.eps}}
\centerline{\hspace{-0.4cm}
\includegraphics[width=0.37\textwidth]{coarsen2_t30.eps}\hspace{-0.5cm}
\includegraphics[width=0.37\textwidth]{coarsen2_t100.eps}\hspace{-0.5cm}
\includegraphics[width=0.37\textwidth]{coarsen2_t300.eps}}
\caption{The snapshots of the evolution by the IFRK4 scheme at $t=4,6,10,30,100,300$, respectively
(left to right and top to bottom).}
\label{fig_coarsen21}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{coarsen2_supremum.eps}
\includegraphics[width=0.5\textwidth]{coarsen2_energy.eps}
\caption{Evolutions of the supremum norm (left) and the energy (right) by the IFRK4 scheme.}
\label{fig_coarsen22}
\end{figure}
\section{Concluding remarks}\label{sect_con}
In this work, we study the fully-discrete MBP-preserving IFRK method
for semilinear parabolic equations taking the form \eqref{model_pde}.
We show that the IFRK method in general form preserves the MBP under certain conditions,
present several practical and specific IFRK schemes up to fourth order,
and give the convergence analysis of the method.
As shown in Theorem \ref{thm_MBP},
the constraints on the time step size do not depend on the linear part,
which is a significant advantage in comparison with the standard explicit SSP-RK methods where a CFL restriction is often needed.
The four-stage IFRK4 scheme \eqref{IFRK4} provides,
for the first time, a fourth-order MBP-preserving numerical scheme,
and its high efficiency is verified by the numerical simulation of long-time evolutions.
There are some related problems worth exploring further as continuations of this paper.
On one hand, the IF1, IFRK2, and IFRK3 schemes presented in Section \ref{sect_IFRKschemes}
actually come from the SSP-RK schemes in \cite{GoShTa01} for the system \eqref{model_eq} with $L=0$,
while the IFRK4 scheme comes from the classic fourth-order RK method with the inevitable negative $\beta_{ij}$.
Thanks to the conditions \eqref{cond_f} and \eqref{cond_f2} for the nonlinear function,
the nonnegativity constraint of $\beta_{ij}$ is not necessary in our framework.
Therefore, one of our future works is to explore
whether it is possible to find a fifth-order (or even higher-order) MBP-preserving IFRK scheme
by using a routine similar to that for finding SSP schemes
without the requirement of the nonnegativity of $\beta_{ij}$.
On the other hand, the preservation of the MBP requires a constraint on the time step size
due to the conditions \eqref{cond_f} and \eqref{cond_f2}.
Thus, it is also expected to answer
whether one can use the stabilizing technique,
by adding a stabilization term as done in \cite{DuJuLiQi19,ShTaYa16},
to remove the requirement on the time step size.
In addition, the generalization to vector- and matrix-valued MBPs,
with the complex Ginzburg--Landau model \cite{DuGuPe92}
and orthogonal matrix-valued equations \cite{OsWa20} as examples,
could also be considered as done in \cite{DuJuLiQi20review}.
\section*{Acknowledgments}
We are grateful to Professor Chi-Wang Shu of Brown University for many valuable comments.
This work is supported by the CAS AMSS-PolyU Joint Laboratory of Applied Mathematics.
L. Ju's work is partially supported by US National Science Foundation grant DMS-1818438
and US Department of Energy grant DE-SC0020270.
X. Li's work is partially supported by National Natural Science Foundation of China grant 11801024.
Z. Qiao's work is partially supported by the Hong Kong Research Council GRF grants 15300417 and 15302919
and the Hong Kong Polytechnic University fund G-UAEY.
J. Yang's work is supported by National Natural Science Foundation of China grant 11871264,
Natural Science Foundation of Guangdong Province (2018A0303130123),
and NSFC/Hong Kong RGC Joint Research Scheme (NFSC/RGC 11961160718).
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Conclusion}
This paper proposes a tracking method referred to as the Adaptive Aggregation of Arbitrary trackers (AAA) for robust online tracking. The performance of individual trackers varies significantly across image sequences, and this variation destabilizes simple aggregation strategies. The proposed AAA is based on adaptive expert aggregation (AEA), which demonstrates strong theoretical support in terms of ``regret'', with a theoretically bounded performance difference between AAA and the best tracker (referred to as the best expert). It should be emphasized that the best tracker for an image sequence is identified at the end of the sequence. This means that it is unknown which tracker will be the best when AAA aggregates the trackers for each frame; nevertheless, this theoretical support guarantees that the performance of AAA will be close to that of the best tracker.\par
An exhaustive experimental study on the large variations of benchmark datasets and trackers to be aggregated demonstrated that the proposed method provides state-of-the-art performance. In line with the theoretical guarantee, AAA performed similarly to or better than the best tracker for each image sequence, and often outperformed trackers on average over a benchmark dataset. We also derived a condition under which AAA becomes the best tracker on average.\par
Future work will focus on extension to multiple object tracking. It might seem straightforward to aggregate multiple object trackers, but it is in fact challenging because it is not obvious how to determine anchor frames, define delayed feedback, and design a new loss function. Other potentially worthwhile work involves the development of a methodology for organizing expert sets that give better performance with the proposed aggregation for a certain dataset.
\appendices
\input{text/appendix.tex}
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work was supported by JSPS KAKENHI Grant Number JP17H06100 and JP18K18001.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{MCCT details\texorpdfstring{{}~\cite{Wang_2018}}{}\label{sec:append_mcct}}
An MCCT (Multi-Cue Correlation Tracker)~\cite{Wang_2018} evaluates and weights experts for every frame, and simply selects the one with the highest weight to determine the target location. For each expert, the MCCT evaluates the overlap ratio of bounding boxes from other experts and the variance of the overlap ratio in a short period. The expert with smaller variance is evaluated as more stable and therefore more reliable.
\par
However, with the employment of arbitrary experts, the MCCT may select a failed expert due to the nature of the above evaluation. More specifically, some experts that have already lost the target location will constantly have a zero overlap ratio and therefore zero variance; see Fig.~\ref{fig:mcct_compare} for examples. It can be seen that MCCT selects failed experts that maintain a zero overlap ratio. The authors' experiments revealed numerous such cases, and MCCT therefore demonstrated lower performance than other methods.
Accordingly, to fully utilize the potential of MCCT, it is necessary to carefully choose experts with bounding boxes that tend to be close together. In fact, the study reported in the original paper~\cite{Wang_2018} used specific experts with adaptive updating to allow ROI sharing.
\section{HDT* details\texorpdfstring{~\cite{Qi2019HedgingDF}}{}\label{sec:append_hdt}}
This section outlines the implementation of HDT*~\cite{Qi2019HedgingDF} and related issues encountered in the study's experimental setup (with tough adversarial environments). As the current HDT* source code has not been published, the experiments here involved careful implementation based on the source code of the first version of HDT*~\cite{Qi_2016}. However, ideal implementation was challenging because HDT* uses its own Siamese networks to evaluate similarity between the template and the predicted target bounding box, and it implicitly assumes that expert predictions are based on the same bounding box size. This assumption conflicts with the study's experimental setup using arbitrary experts. To make the comparison as fair as possible in light of the above issues, the same criteria were used for similarity evaluation. Specifically, $V_T()$ from (\ref{eq:offline}) was used rather than the Siamese network in HDT* implementation. In addition, as the same experts were used for HDT*, AUC-based evaluation for HDT* was omitted due to related influence from bounding box sizes. \par
HDT* evaluates experts by the following two criteria given for every frame. The first criterion is the difference in appearance between the template image of the target and the cropped image for the predicted location by an expert. The second is the location difference between expert prediction and feedback (i.e., the weighted average of expert prediction). Reliable evaluation using the first criterion is challenging when the target may be occluded and/or shows heavy deformation. Moreover, since the feedback in the second criterion is directly determined via expert prediction (rather than from more reliable information), it is often unreliable in tough tracking situations. \par
Fig.~\ref{fig:hdt_compare} shows two typical cases of HDT* failure. Since feedback is the weighted average of expert prediction (rather than another more reliable pseudo-ground-truth, like input from an offline tracker), performance will degrade when the weight of the wrong expert is high. In addition, the final location is determined via Hedge's prediction, and is sometimes distant from all other expert predictions.
\section{Offline tracker details\label{sec:append_offline}}
In the study's experiments, an offline tracker based on Dijkstra's algorithm was used, although any accurate offline tracker providing the delayed feedback can be applied. First, a graph linking two consecutive anchor frames, $u_q$ and $u_{q+1}$, was produced, with each node corresponding to an element of $\{f^{u_q}\} \cup\{f_i^\tau | \tau\in[u_q+1, u_{q+1}]\}$, where $f^{u_q}$ is the target location based on AAA at the previous anchor frame $u_q$, and $f_i^\tau$ is the target location based on the $i$th expert at $\tau$. The edges of the graph were assigned between nodes $f^{\tau-1}_i$ and $f^\tau_j$ with cost $C\left(f^{\tau-1}_i, f^\tau_j\right)$. According to~\cite{Li_Zhang_2008}, the cost was defined as
\begin{equation}
C\left(f^{\tau-1}_i, f^\tau_j\right) = -\log \left(\mathcal{P}(f^{\tau-1}_i, f^\tau_j) V_E(f^{\tau-1}_i, f^\tau_j)
V_T(f^\tau_j)\right), \label{eq:offline}
\end{equation}
where $\mathcal{P} \in [0,1]$ is the value of GIoU between two bounding boxes
and $V_E \in [0,1]$ is the normalized cosine similarity between the feature vectors for the bounding box regions. The feature vectors are given by ResNet, as per the anchor frame determination in Sec.~\ref{sec:anchor}.
$V_T \in [0,1]$ is the normalized cosine similarity to the feature vector of the given template image.
The globally minimum cost path between $f^{u_q}$ and one of $\{f^{u_{q+1}}_i\}$ is determined using Dijkstra's algorithm, and the node sequence of the path gives the pseudo-ground-truth $y^\tau$, $\tau\in [u_q+1, u_{q+1}]$, as delayed feedback.
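Since the graph between two anchor frames is layered (a node at frame $\tau$ connects only to nodes at frame $\tau+1$), the minimum cost path can equivalently be found by a simple dynamic program over the layers. The following Python sketch illustrates this under simplifying assumptions: the cost handle stands for (\ref{eq:offline}), and the start node is the single AAA box at the previous anchor frame.
\begin{verbatim}
import numpy as np

def min_cost_path(start_box, expert_boxes, cost):
    # expert_boxes[t][i]: box of expert i at the t-th frame after the
    # anchor; cost(b1, b2): edge cost as in eq. (offline).
    K, N = len(expert_boxes), len(expert_boxes[0])
    dist = np.full((K, N), np.inf)
    prev = np.zeros((K, N), dtype=int)
    for i in range(N):
        dist[0, i] = cost(start_box, expert_boxes[0][i])
    for t in range(1, K):
        for j in range(N):
            cands = [dist[t - 1, i]
                     + cost(expert_boxes[t - 1][i], expert_boxes[t][j])
                     for i in range(N)]
            prev[t, j] = int(np.argmin(cands))
            dist[t, j] = cands[prev[t, j]]
    path, j = [], int(np.argmin(dist[K - 1]))
    for t in range(K - 1, -1, -1):
        path.append(expert_boxes[t][j])
        j = int(prev[t, j])
    return path[::-1]   # pseudo-ground-truth boxes, oldest first
\end{verbatim}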
\section{Derivation of Theorem \ref{theo:regret_delay}\label{sec:append_regret}}
Quanrud et al.\cite{Quanrud2015OnlineLW} gives the regret bound for the {\em Online Mirror Descent (OMD) algorithm} (a versatile choice for AEA) with delayed feedback. Various AEA algorithms can be derived from OMD by changing its regularizer for the expert selection process. Based on Theorem A.5 of \cite{Quanrud2015OnlineLW}, the regret bound of OMD with delayed feedback is given as
$$
R_T = O \left( \frac { 1 } { \eta }\psi + \eta \frac { G ^ { 2 } ( T + D ) } { \sigma } \right),
$$
where the values $\psi$ and $\sigma$ are determined by the choice of regularizer, $\eta$ is the learning rate, and $G$ is the difference between the maximum and minimum values of the loss function.\par
The authors' expert selection algorithm based on the weights (\ref{eq:weight}) is also a special case of OMD. According to \cite{Quanrud2015OnlineLW}, if an entropic regularizer is applied, the update scheme (\ref{eq:weight}) for the weights is recovered. Moreover, $\sigma$ is 1 and $\psi$ is upper-bounded by $O(\ln N)$~\cite{Quanrud2015OnlineLW}.
The loss function (\ref{eq:loss}) takes values in $[0,1]$, and therefore $G=1$. Based on these values and the assumption that $\eta\propto\sqrt{\ln N/(T+D)}$, Theorem~\ref{theo:regret_delay} is supported.
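For illustration, a generic Hedge-style update of this kind, with the delayed losses applied in a batch once the pseudo-ground-truth is revealed at an anchor frame, can be sketched as follows. This is a standard exponentially weighted update and is not claimed to reproduce the exact form of (\ref{eq:weight}).
\begin{verbatim}
import numpy as np

def hedge_update(cum_loss, new_losses, eta):
    # cum_loss[i]: cumulative loss of expert i over frames whose
    # pseudo-ground-truth has been revealed; new_losses[t][i]: losses
    # for the frames revealed at the current anchor; eta: learning
    # rate, e.g., proportional to sqrt(ln N / (T + D)).
    cum_loss = cum_loss + np.sum(new_losses, axis=0)
    w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift for stability
    return cum_loss, w / w.sum()
\end{verbatim}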
\section{Determination of the anchor frame threshold \texorpdfstring{$\theta$}{}\label{sec:append_threshold}}
The hyper-parameter $\theta$, which is the threshold used to determine the anchor frame, was determined experimentally using the GOT10K dataset as noted in Sec.~\ref{sec:threshold}.
Fig.~\ref{fig:theta} shows the AUC score and the anchor frame ratio based on changing the hyper-parameter $\theta$ from 0.6 to 0.9 at intervals of 0.1. Here, AUC is normalized to the range between $[0,1]$ for easier observation. As a general trend, a smaller threshold value results in a high anchor ratio and, in turn, a higher AUC. However, detailed observation reveals that this trend does not always hold because anchor frames determined using lower values of $\theta$
are less reliable and will result in inaccurate feedback. Consequently, smaller threshold values do not always give better performance.
Specifically, the best $\theta$ values for each expert group were as follows: \textit{High}: 0.69; \textit{Low}: 0.60; \textit{Mix}: 0.65; \textit{SiamDW}: 0.65; and \textit{SiamRPN++}: 0.61.
\par
Evaluation was also performed to determine whether AAA can identify the target location with high confidence at anchor frames. AAA accuracy with experts in the \textit{High} group is shown in Table~\ref{table:high_anchor}, in addition to the accuracy of individual experts. The hyper-parameter $\theta$ was set at 0.69, as indicated by Fig.~\ref{fig:theta}. AAA achieved the best AUC score for all datasets. This proves that the proposed method can be applied to determine anchor frames appropriately.
\section{Proof of Proposition \ref{prop:ours}\label{sec:append_prop}}
\begin{proof}
If tracking performance evaluation is based on loss $\ell$ (rather than AUC or DP), the situation in which AAA outperforms experts on average over $\mathcal{V}$ is expressed as the inequality
\begin{equation}
\sum_{v\in \mathcal{V}}\sum_{t=1}^T\ell(p^{v,t}, y^{v,t}) \leq
\sum_{v\in \mathcal{V}}\sum_{t=1}^T\ell( f_{i^*}^{v,t}, y^{v,t}),
\label{eq:average-inequality}
\end{equation}
where $p^{v,t}$ is the result of location identification using AAA at $t$ for the sequence $v\in \mathcal{V}$ and $y^{v,t}$ is the pseudo-ground-truth given as delayed feedback. As defined in Sec.~\ref{sec:average-performance}, the $i^*$th expert is the overall-best expert having the best average performance over $\mathcal{V}$.
The left side of (\ref{eq:average-inequality}) is the total loss of AAA over $\mathcal{V}$. The right side is the total loss of the overall-best expert over $\mathcal{V}$, as defined by (\ref{eq:overall-best}).
Accordingly, the inequality (\ref{eq:average-inequality}) indicates the situation in which AAA outperforms all experts on average over $\mathcal{V}$.\par
From (\ref{eq:average-inequality}), the following inequality is derived:
\begin{eqnarray}
\sum_{v\in \mathcal{V}}\left[\sum_{t=1}^T\ell(p^{v,t}, y^{v,t})-\min_i\sum_{t=1}^T\ell(f_i^{v,t}, y^{v,t})\right]\nonumber \\
\leq
\sum_{v\in \mathcal{V}}\left[\sum_{t=1}^T\ell( f_{i^*}^{v,t}, y^{v,t})-\min_i\sum_{t=1}^T\ell(f_i^{v,t}, y^{v,t})\right].\label{eq:pp0}
\end{eqnarray}
The left side is the sum of AAA regrets (see (\ref{eq:regret})) over $\mathcal{V}$, and is therefore equal to $|\mathcal{V}|\overline{R}^\mathcal{V}$. The right side of (\ref{eq:pp0}) can be decomposed into two terms by splitting $\mathcal{V}$ into $\mathcal{S}$ and $\mathcal{V}\setminus\mathcal{S}$. Then, the first term is
\begin{equation}
\sum_{v\in \mathcal{S}}\left[\sum_{t=1}^T\ell( f_{i^*}^{v,t}, y^{v,t})-\min_i\sum_{t=1}^T\ell(f_i^{v,t}, y^{v,t})\right]=0, \label{eq:pp1}
\end{equation}
because the overall-best (i.e., the $i^*$th) expert is the best expert in the image sequence $v\in\mathcal{S}$.
By the definition of $\delta$ in (\ref{eq:prop_cond}), the second term is
\begin{equation}
\sum_{v\in \mathcal{V}\setminus\mathcal{S}}\left[\sum_{t=1}^T\ell( f_{i^*}^{v,t}, y^{v,t})-\min_i\sum_{t=1}^T\ell(f_i^{v,t}, y^{v,t})\right] \geq |\mathcal{V}\setminus\mathcal{S}|\delta.
\label{eq:pp2}
\end{equation}
From (\ref{eq:pp1}) and (\ref{eq:pp2}), we have: \begin{equation}
|\mathcal{V}\setminus\mathcal{S}|\delta \leq \sum_{v\in \mathcal{V}}\left[\sum_{t=1}^T\ell( f_{i^*}^{v,t}, y^{v,t})-\min_i\sum_{t=1}^T\ell(f_i^{v,t}, y^{v,t})\right].\label{eq:ppR}
\end{equation}
\par
From (\ref{eq:pp0}) and (\ref{eq:ppR}), {\em if the condition} $|\mathcal{V}|\overline{R}^\mathcal{V}\leq$ $|\mathcal{V}\setminus\mathcal{S}|\delta$ {\em is satisfied}, this gives \begin{eqnarray*}
|\mathcal{V}|\overline{R}^\mathcal{V}
&=&
\sum_{v\in \mathcal{V}}\left[\sum_{t=1}^T\ell(p^{v,t}, y^{v,t})-\min_i\sum_{t=1}^T\ell(f_i^{v,t}, y^{v,t})\right] \\
&\leq& |\mathcal{V}\setminus\mathcal{S}|\delta \\
&\leq& \sum_{v\in \mathcal{V}}\left[\sum_{t=1}^T\ell( f_{i^*}^{v,t}, y^{v,t})-\min_i\sum_{t=1}^T\ell(f_i^{v,t}, y^{v,t})\right].
\end{eqnarray*}
This means that if the condition holds,
(\ref{eq:pp0}) holds and immediately
(\ref{eq:average-inequality}) holds. Consequently, AAA outperforms other experts on average over $\mathcal{V}$ if $|\mathcal{V}|\overline{R}^\mathcal{V}\leq |\mathcal{V}\setminus\mathcal{S}|\delta$, and Proposition \ref{prop:ours} is derived.
\end{proof}
\section{When does AAA perform best on a dataset? -- A theoretical inspection\label{sec:average-performance}}
It is considered useful to know the conditions in which AAA outperforms experts as seen in the above experiments. However, Theorem \ref{theo:regret_cdelay}
simply implies that the performance of AAA is similar to that of the best expert for an image sequence based on its regret bound. It does not indicate the conditions in which AAA outperforms experts for a certain sequence.\par
However, it is still possible to determine the conditions in which AAA outperforms experts {\em on average over an image sequence set $\mathcal{V}$}. Four notations are used to state these conditions. First, $\overline{R}^\mathcal{V}$ denotes the average regret of AAA over $\mathcal{V}$.
Second, {\em the overall-best expert $i^*$} is the expert whose total loss (or, equivalently, average loss) over $\mathcal{V}$ is the minimum among the $N$ experts, that is:
\begin{equation}
i^* = \mathop{\rm argmin}\limits_{i\in[1,N]} \sum_{v\in \mathcal{V}}\sum_{t=1}^T\ell( f_{i}^{v,t}, y^{v,t}),
\label{eq:overall-best}
\end{equation}
where a new suffix $v$ is attached to $f_i^t$ and $y^t$. Third, $\mathcal{S}\subset\mathcal{V}$ is the set of image sequences where the overall-best expert is the best expert. Finally, a value $\delta$ is defined as:
\begin{equation}
\delta = \min_{v\in\mathcal{V}\setminus\mathcal{S}}\left[\sum_{t=1}^T\ell( f_{i^*}^{v,t}, y^{v,t})-\min_i\sum_{t=1}^T\ell(f_i^{v,t}, y^{v,t})\right].
\label{eq:delta}
\end{equation}
In this definition, for each sequence $v\in\mathcal{V}\setminus\mathcal{S}$ (i.e., each sequence $v$ for which the overall-best expert is not the best expert), the overall-best expert performs worse than the best expert by a loss margin of $\delta$ or more.\par
From the proof given in Appendix~\ref{sec:append_prop}, the following proposition holds:
\begin{prop}\label{prop:ours}
AAA outperforms all the experts on average over the image sequence set ${\mathcal{V}}$ if the following condition is satisfied:
\begin{equation}
\overline{R}^\mathcal{V} \leq \frac{|\mathcal{V}\setminus\mathcal{S}|}{|\mathcal{V}|}\delta.
\label{eq:prop_cond}
\end{equation}
\end{prop}
\par
This proposition states that AAA performs better on average (i.e., $\overline{R}^\mathcal{V}$ is smaller) when it satisfies the condition (\ref{eq:prop_cond}) with a smaller $\mathcal{S}$ and/or a larger $\delta$.
The set $\mathcal{S}$ is smaller if the overall-best expert is the best expert for only a smaller number of sequences. The difference $\delta$ is larger if the performance of the overall-best expert degrades drastically at
$v\in\mathcal{V}\setminus\mathcal{S}$.\par
From these discussions it can be concluded that {\em when there is no almighty tracker} (i.e., where $\mathcal{S}$ is small), AAA performs better than all experts over $\mathcal{V}$ based on appropriate aggregation. A large $\delta$ also indicates that AAA is {\em better with employment of various experts}, each of which can be an outstanding expert for certain sequences in $\mathcal{V}$
\section{Experiments\label{sec:experiments}}
\input{figs/rank.tex}
\subsection{Experimental setup}
\subsubsection{Experts and comparative methods}
In relation to experts and comparative methods,
twelve state-of-the-art online trackers were employed (ATOM~\cite{Danelljan_2019}, DaSiamRPN~\cite{Zhu_2018}, GradNet~\cite{Li_2019G}, MemTrack~\cite{Yang_2018}, SiamDW~\cite{Zhang_2019}, SiamFC~\cite{bertinetto2016fully}, SiamMCF~\cite{Morimitsu2018MultipleCF}, SiamRPN~\cite{Li_2018}, SiamRPN++~\cite{Li_2019}, SPM~\cite{Wang_2019}, Staple~\cite{Bertinetto_2016}, and THOR~\cite{Sauer2019BMVC}).
For fair comparison and better performance, parameters optimized by the authors of the individual experts were applied.\par
In addition to these online trackers, AAA was also compared with the MCCT~\cite{Wang_2018} and HDT*~\cite{Qi2019HedgingDF}, aggregation-based tracking methods as detailed in Appendix~\ref{sec:append_mcct}
and Appendix~\ref{sec:append_hdt}, respectively.
Other naive aggregation-based methods referred to as ``Random'' and ``Max'' were also examined. ``Random'' randomly selects an expert estimation for each frame, while ``Max'' selects the estimation most similar to the template image for each frame. ``Max'' is the same as AAA when each frame is an anchor frame and thus feedback is given for each frame.
\subsubsection{Benchmark datasets}
The proposed AAA and the comparative methods were evaluated with OTB2015~\cite{Wu2015ObjectTB}, TColor128~\cite{Liang_2015}, UAV123~\cite{Mueller_2016}, NFS~\cite{Galoogahi_2017}, and LaSOT~\cite{Fan_2019}. OTB2015 is a popular benchmark dataset for evaluating online trackers, consisting of 100 image sequences including gray-scale image sequences. TColor128 contains 128 color image sequences, and is specifically designed for evaluation of color-enhanced trackers. VOT2018 is a dataset produced for competition, and consists of 60 image sequences.
UAV123 consists of 123 image sequences taken from unmanned aerial vehicles. NFS consists of 100 image sequences captured at a higher frame rate of 240fps. LaSOT is the largest benchmark dataset among the above, and is divided into ``training'' and ``testing'' subsets. Here, image sequences from the testing subset containing 280 image sequences were used.
\subsubsection{Performance monitoring}
For performance monitoring, the area-under-the-curve (AUC) score and average distance precision (DP) were used as standard metrics~\cite{Wu_2013}. AUC (referred to as AO in VOT2018) is derived from the ``success plot'' of the performance curve. This plot is based on evaluation of the ratio of frames where the IoU with the ground truth is larger than the threshold value ($\in [0,1]$). Fig.~\ref{fig:high_curve} shows examples of the success plot.\par
DP is derived from ``precision plot'' based on evaluation of the ratio of frames where the geometric distance between the location determined and the ground-truth location is less than the threshold value ($\in [0,50]$ pixels). Fig.~\ref{fig:high_curve} also shows examples of the precision plot. DP is eventually determined as the value of the precision plot at the threshold 20 based on \cite{Wu_2013}.\par
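For concreteness, the following Python sketch computes both metrics from per-frame IoU and center-distance arrays; the threshold sampling is an assumption made here for illustration.
\begin{verbatim}
import numpy as np

def auc_and_dp(ious, dists):
    # ious[t]: IoU with the ground truth at frame t; dists[t]: center
    # distance in pixels.  AUC integrates the success rate over IoU
    # thresholds in [0, 1]; DP is the precision at 20 pixels.
    ious, dists = np.asarray(ious), np.asarray(dists)
    thr = np.linspace(0.0, 1.0, 101)
    success = np.array([(ious > s).mean() for s in thr])
    auc = np.trapz(success, thr)   # area under the success plot
    dp = (dists <= 20.0).mean()    # precision-plot value at 20 pixels
    return auc, dp
\end{verbatim}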
In addition to AUC and DP, the performance rank of individual trackers for each image sequence was used. Since there are $N$ experts and AAA, the rank varies from 1 (the best) to $N+1$ (the worst). If AAA successfully follows the best expert for arbitrary image sequences, it will be frequently ranked second. \par
Performance was evaluated under the ``strictly-online'' conditions described here. First, no tracker has any prior information on the total frame length $T$. Second, the no-reset evaluation protocol was used; in some experimental evaluations (e.g., VOT2018), a reset-based evaluation protocol is employed, where failed trackers (with zero IoU with the ground truth) can restart from the correct location a fixed number of frames later. In no-reset evaluation, however, a failed tracker continues tracking rather than being reset.
\input{tables/table2.tex}
\input{tables/table3.tex}
\subsubsection{Hyper-parameter search}\label{sec:threshold}
The threshold $\theta$, which is just one hyper-parameter in the proposed method, is optimized using the GOT10K~\cite{Huang_2019}, Generic Object Tracking Benchmark.
Specifically, the AUC score of AAA with the expert group is first evaluated by changing $\theta$ from 0.6 to 0.9 at 0.01 intervals with GOT10K. Then, the $\theta$ for the highest AUC score is chosen for evaluation of the other datasets. Appendix~\ref{sec:append_threshold} details the procedure.
It should be noted that GOT10K was not used in the performance evaluation experiments described below or in training of individual experts.
\subsection{Quantitative evaluation using three expert groups with different performance}
Comprehensive experiments were conducted to determine whether the proposed method can be applied to properly aggregate various experts and achieve near-best performance (i.e., the performance similar to that of the best expert) without attentive expert selection. This section outlines quantitative evaluation conducted using three expert groups (referred to as \textit{High}, \textit{Low}, and \textit{Mix}) with different performance characteristics as follows:
\begin{itemize}
\item \textit{High} group consists of six higher-performance experts (ATOM, DaSiamRPN, SiamMCF, SiamRPN++, SPM, and THOR).
\item \textit{Low} group consists of six lower-performance experts (GradNet, MemTrack, SiamDW, SiamFC, SiamRPN, and Staple), and was examined to determine whether the proposed method can be applied to follow the best experts among lower-performance experts.
\item \textit{Mix} group consists of the three higher-performance experts (ATOM, SiamRPN++, and SPM) and the three lower-performance experts (MemTrack, SiamFC, and Staple). Observing the proposed method with this group is important in verifying that the method supports automatic selection of higher-performance experts while helping to eliminate erroneous estimations from lower-performance experts.
\end{itemize}
\par
Table~\ref{table:high_score} shows the average performance of the six experts in the \textit{High} group and their aggregations. As noted, AUC and DP are derived
from the success plot and the precision plot as shown in Fig.~\ref{fig:high_curve}.
AAA outperformed the other state-of-the-art trackers, demonstrating the best performance on all datasets except UAV123, where it was only marginally second best.
\par
Fig.~\ref{fig:rank}~(a) shows an image sequence-level rank histogram for the six trackers in the \textit{High} group and AAA. The histogram is normalized by the number of image sequences in each dataset. As seen in Fig.~\ref{fig:hard_tracking}, no tracker consistently achieved the best performance, and the best expert changed drastically over image sequences even within the same dataset. As also shown in Fig.~\ref{fig:rank}~(a), even a very good tracker (such as SiamRPN++) is sometimes worst-ranked (i.e., 7th). In contrast, AAA was the second- or third-best tracker for most image sequences, although it was not often the best. More importantly, it rarely received a low rank. These experimental results highlight the importance of a regret bound guaranteeing a minimal difference between the performance of the proposed method and the best expert.\par
Table~\ref{table:high_score} also shows that the performance of the other aggregation-based trackers (HDT*, MCCT, Random, and Max) was lower than that of AAA despite their aggregation of the same experts. As detailed in Appendix~\ref{sec:append_hdt} and Appendix~\ref{sec:append_mcct}, HDT* and MCCT did not show ideal performance in the study's stringent experimental setup with aggregation of arbitrary experts. Max, which relies on the feedback given for each frame, sometimes demonstrated the worst performance.
\input{tables/table4.tex}
\input{tables/table5.tex}
Table~\ref{table:low_score} and Fig.~\ref{fig:rank}~(b) show quantitative evaluation of the trackers in the \textit{Low} group and their aggregations. AAA again demonstrated at least the second-best average performance for all datasets even with aggregation of experts in the \textit{Low} group. This means that AAA automatically finds and follows the best among lower-performance experts as expected. It should be emphasized that the other aggregation strategies suffered from lower expert performance.\par
As shown in Table~\ref{table:mix_score} and Fig.~\ref{fig:rank}~(c), AAA was at least the third-best of all trackers with experts in the \textit{Mix} group, and there were significant performance gaps between the three higher- and lower-performance experts in this group. Here, a selection of lower-performance experts drastically degrades aggregation performance. AAA, however, automatically and successfully selected higher-performance experts and was the second- or third-best tracker for most image sequences. This indicates that AAA is not disturbed even when several experts do not perform well, making it a very practical aggregation-based tracker.
\input{figs/example.tex}
\subsection{Quantitative evaluation with experts generated from a single tracking method}
As explained in Sec.~\ref{sec:effect_of_N}, Theorem \ref{theo:regret_cdelay} guarantees that the use of more experts does not result in any significant degradation of AAA performance. One strategy here is to employ a wide variety of tracking methods based on different algorithms, as in the experiments detailed in the previous section. However, due to the difficulty of arranging such a variety, it is preferable to employ a single tracking method and generate various versions thereof by changing its internal parameters, as in~\cite{Zhang_2019, Li_2019}.\par
For evaluation of AAA with the second strategy, different versions of the lower-performance SiamDW (collectively referred to as the \textit{SiamDW} group) were generated by changing the related parameter sets (e.g., backbone networks, network weights, and hyper-parameters) based on the suggestions of the original paper~\cite{Zhang_2019}. The higher-performance tracker SiamRPN++ was also adapted to generate the \textit{SiamRPN++} group. \par
Tables~\ref{table:siamdw_score} and \ref{table:siamrpn_score} show the results from the \textit{SiamDW} and \textit{SiamRPN++} groups.
Even with experts generated from a single tracking method, AAA outperformed the individual experts in these groups, and the negative effect of increasing the number of experts $N$ on the regret bound was therefore not significant. These results are promising for practical application. Rather than focusing on the internal parameters of individual experts, it is simply necessary to have more experts generated with different internal parameters.
\subsection{Tracking examples}
In Fig.~\ref{fig:example}, two tracking examples from the \textit{High} group are shown to demonstrate how AAA tries to follow the best expert and achieve similar performance by updating the weights $w_i^t$ adaptively. Specifically, it shows the change in the overlap errors (IoU) and the weights of the experts and AAA for ``Girl2'' in OTB2015 and ``Yo-yos\_ce1'' in TColor128.\par
For ``Girl2'', all experts successfully tracked the target object at (a). However, because of the occlusion between (b) and (c), the experts failed to track at (c) and only ATOM properly tracked the target object at (d).
AAA also tracked the object well at (d), with an anchor frame being determined after the target object reappeared and high weight given to ATOM via appropriate feedback. At (e), (f) and (g), other experts were also able to properly track the target object, and AAA still achieved high performance. \par
For ``Yo-yos\_ce1'', tracker accuracy frequently varied throughout the sequence. For example, SiamMCF was accurate and ATOM was inaccurate at (l), but this situation was reversed at (o). Even here, AAA tried to follow the better tracker by changing the weights of the experts at the anchor frames, resulting in lower errors for most parts (except the period around (k), when all experts failed). AAA outperformed all the experts in this image sequence.
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{V}{isual} object tracking (VOT) is a research field of significant interest, and is widely applied in fields such as video surveillance~\cite{Mangawati_2018}, traffic flow monitoring~\cite{Tian_2011} and autonomous driving~\cite{Yurtsever_2020}. Various tracking methods are proposed every year~\cite{Fiaz_2019}, but VOT still involves issues such as target appearance change, target motion change, occlusion, camera motion, and environment illumination change~\cite{Kristan_2016}. \par
These issues are amplified in {\em online} tracking tasks, where the target location needs to be determined in a frame-by-frame manner. Even when there is no target-like region in the current frame due to heavy appearance changes or occlusion of the target object, it is necessary to determine the target location before consideration of the next frame. Once an erroneous determination is made, it is difficult to recover and track the target object properly again.\par
Fig.~\ref{fig:hard_tracking} illustrates the difficulty of online tracking. Twelve state-of-the-art trackers are applied to six benchmark datasets, and for each tracker the ratio of sequences in which it achieves the best performance in each dataset is monitored. The results indicate the difficulty of realizing an ``almighty'' tracker even for a single dataset. That is, no single tracker based on a specific criterion can handle the extended variation of tracking tasks.
\par
A promising strategy for more robust online tracking is to use multiple trackers~\cite{Avidan, Grabner_2006, Zhang_2014, Wang_2018, Qi_2016, Qi2019HedgingDF}. Aggregation of various trackers with different characteristics can be expected to produce complementary interaction. In particular, the potential to combine and/or select trackers based on their reliability will make the results more robust than with a single one.\par
However, estimating the reliability of each tracker is not straightforward, and reliability should be updated during the image sequence because the target condition (i.e., the appearance of the target and the background) will change frame-by-frame. Fig.~\ref{fig:change_reliable} shows two examples in which the most reliable tracker (i.e., the one that determines the target) for one frame becomes totally unreliable in a later frame.\par
\input{figs/hard_tracking.tex}
\input{figs/change_reliable.tex}
In this paper, the authors propose a novel tracking method called Adaptive Aggregation of Arbitrary (AAA) trackers based on {\em Adaptive Expert Aggregation} (AEA). AEA has been studied in the field of theoretical machine learning~\cite{vovk1998game}\footnote{Adaptive expert aggregation is often called \textit{online prediction}~\cite{vovk1998game} or \textit{online learning}~\cite{Shalev_Shwartz_2011} in theoretical machine learning research. In this paper, these terms are avoided in order to avoid confusion with their different meanings in computer vision and pattern recognition research.}, and is a problem involving the aggregation of {\em experts} online.
More specifically, individual {\em experts} give their own solutions to the given task at each time step, and these solutions are then aggregated using a particular algorithm. In AAA, each expert corresponds to an online tracker that estimates the location of the target as its solution, as shown in Fig.~\ref{fig:overview}. $N$ different online trackers will produce $N$ location predictions (i.e., $N$ bounding boxes, often with different sizes) for each frame $t$. These predictions are then aggregated into a single solution (i.e., the target location) for each frame $t$ using a weighted random selection algorithm.
\par
The strength of AAA is that its performance is theoretically guaranteed in terms of {\em regret} due to the solid theoretical background of AEA. Regret is defined as the difference between the performance of the {\em best expert}\footnote{As noted below, the best expert is unknown during tracking; only at the very end of the image sequence, i.e., at $t=T$, can it be known which of the $N$ experts is the best for the sequence.} and the performance of the aggregation result.
In VOT with $N$ trackers, the best expert among them gives the best tracking accuracy over an image sequence. If regret is bounded, the accuracy difference between AAA and the best expert can also be bounded.
It is practically meaningful to have a theoretical guarantee (i.e., a regret bound) because this means that the target location estimated using AAA at frame $t (<T)$ is often not far away from the estimation of the best expert.
\par
The strength of AAA is further emphasized by the following points.
First, this theoretical bound holds for arbitrary experts. Arbitrary trackers (especially state-of-the-art trackers) can therefore be used as experts, whereas traditional methods with multiple trackers can often employ only specific trackers. The bound holds even in an {\em adversarial environment}~\cite{Shalev_Shwartz_2011} in which, for example,
a tracker that was reliable until $t$ becomes totally unreliable at $t+1$. This is not an unrealistic environment, as already observed in Fig.~\ref{fig:change_reliable}. Even for image sequences with such extreme situations, the proposed method is still guaranteed in terms of regret. Even though the best expert for an image sequence can be known only at the last ($T$th) frame, it is still theoretically possible to bound the regret of AAA.\par
\input{figs/overview.tex}
In AAA, the reliability (i.e., the weight) of each tracker is better evaluated by the proximity of tracker estimation to the true target location (i.e., the ground truth); in theoretical machine learning research, the ground truth for expert evaluation is called {\em feedback}. In VOT tasks, however, it is impossible to obtain exact feedback for each frame because the true target location is not given during online tracking. Accordingly, a practical strategy called {\em delayed feedback} is adopted in AAA. As shown in Fig.~\ref{fig:overview}, experts receive feedback for {\em anchor frames}, where the target location is determined with high reliability. At the $q$th anchor frame $u_q$, feedback for the $i$th tracker is calculated as the difference from a very reliable {\em offline} tracking result between the previous and current anchor frames $u_{q-1}$ and $u_q$. Since feedback at the frame $\tau\in [u_{q-1}+1,u_q]$ is postponed until $u_q$, this is known as delayed feedback.
\par
It should be noted that the performance of the proposed method is still guaranteed in terms of regret, even with the delayed feedback strategy. Additionally, although offline tracking results are not always exact and thus the delayed feedback is not calculated from the true target location, the theoretical guarantee of the proposed method still holds. These strong theoretical guarantees underpin the very promising performance of the proposed method in various practical situations, as experimentally proved in the later sections.
\par
The main contributions of this paper are as follows:
\begin{itemize}
\item The authors propose an online tracking algorithm called AAA, by which arbitrary experts (online trackers) are aggregated with promising theoretical performance guarantees.
\item To the best of the authors' knowledge, this is the first application of an AEA-based algorithm with delayed feedback in a computer vision task.
\item To demonstrate the experimental performance of the proposed method in an adversarial environment, various experiments were conducted with combinations of arbitrary experts on various datasets.
\item The experimental results show that the proposed method produced quasi-optimal or optimal performance among state-of-the-art trackers.
\end{itemize}
\par
This work substantially extends the authors' preliminary publication~\cite{Song2020}. A new important theoretical investigation (Proposition \ref{prop:ours}) is presented to clarify how AAA outperforms other trackers.
The paper outlines more up-to-date experimental validations based on six recent benchmark datasets as well as state-of-the-art online trackers and ensemble tracking methods. The experimental results show that AAA can be used to achieve state-of-the-art performance, and several new experimental setups are added. By way of example, different expert sets were used to allow observation of related effects on performance, with results revealing that AAA is very stable in relation to expert choice.
\section{Adaptive Aggregation of Arbitrary trackers (AAA)\label{sec:method}}
\subsection{Overview}\label{sec:AAA-overview}
As shown in Fig.~\ref{fig:overview}, the proposed method, AAA, assumes $N$ arbitrary experts (online trackers). At each frame $t$, the method stochastically selects an expert based on the weights $w_1^t,\ldots, w_N^t \in \mathbb{R}$, which form a probability distribution, i.e., $\sum_{i=1}^N w_i^t=1$ and $w_i^t \geq 0$ for any $i$. That is, an expert with a greater weight has a higher chance of being selected. The target location $p^t$ estimated by the selected expert is then used as the output of AAA at $t$.
Through this simple process, the proposed method enables performance similar to that of the best expert over an image sequence in any situation, even for extreme conditions.
\par
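A minimal sketch of this selection step is given below, assuming the experts' bounding boxes and the current weights are available:
\begin{verbatim}
import numpy as np

def select_location(boxes, weights, rng=None):
    # boxes: N predictions f_1^t, ..., f_N^t (one bounding box per expert);
    # weights: probability distribution w_1^t, ..., w_N^t.
    rng = rng or np.random.default_rng()
    i = rng.choice(len(boxes), p=weights)
    return boxes[i]  # the output p^t of AAA at frame t
\end{verbatim}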
From an algorithmic viewpoint, the main concern is how and when to update the weights $w_1^t,\ldots, w_N^t$. Since $w_i^t$ indicates the reliability of the $i$th tracker, it should be updated using reliable information. Ideally, if the true target location were available at $t$, a greater weight could be assigned to a tracker that estimates a similar location. However, determination of the true target location is practically impossible. \par
Accordingly, a reliable {\em offline} tracker between two {\em anchor frames} was used to update weights. The frame $t$ is defined as the $q$th anchor frame $u_q$ if the target object is found at a certain location $y^t$ by an object identifier (or tracker) with very high confidence. Connecting two reliable target locations $y^{u_{q-1}}$ and $y^{u_q}$ using an accurate offline tracker produces the pseudo-ground-truth sequence $y^\tau, \tau \in [u_{q-1}+1, u_q]$. If the target location estimated via the $i$th expert is similar to the pseudo-ground-truth during $[u_{q-1}+1, u_q]$, the weight of the expert will be increased at the anchor frame $u_q$.
Globally-optimal offline tracking is used based on Dijkstra's algorithm, as detailed in Appendix~\ref{sec:append_offline}.
\par
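The offline step admits a compact formulation as a shortest-path problem on a layered graph with one layer per frame; the sketch below solves it by dynamic programming, which on such a layered graph is equivalent to Dijkstra's algorithm. It assumes, for illustration only, that the candidate boxes at each intermediate frame are the experts' predictions and that the transition cost is $1-\mathrm{GIoU}$; the actual construction is detailed in Appendix~\ref{sec:append_offline}.
\begin{verbatim}
import numpy as np

def offline_feedback(candidates, start_box, end_box, cost_fn):
    # candidates[k]: candidate boxes at the k-th frame strictly between the
    # anchor frames; cost_fn(a, b): transition cost, e.g. 1 - GIoU(a, b).
    layers = [[start_box]] + candidates + [[end_box]]
    best, back = [np.zeros(1)], []
    for prev, cur in zip(layers, layers[1:]):
        step = np.array([[best[-1][j] + cost_fn(p, b)
                          for j, p in enumerate(prev)] for b in cur])
        back.append(step.argmin(axis=1))   # best predecessor per candidate
        best.append(step.min(axis=1))      # cumulative cost per candidate
    # Trace the globally optimal path backwards from the final anchor box.
    idx, path = 0, []
    for bp, layer in zip(reversed(back), reversed(layers[1:])):
        path.append(layer[idx])
        idx = bp[idx]
    return path[::-1]  # pseudo-ground-truth y^tau, tau in [u_{q-1}+1, u_q]
\end{verbatim}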
The pseudo-ground-truth is referred to as feedback based on the terminology of theoretical AEA research because it is used for posterior evaluation in relation to individual experts. Specifically in AAA, it is referred to as delayed feedback because feedback during $\tau\in [u_{q-1}+1, u_q]$ is given in a later frame $u_q$, as shown in Fig.~\ref{fig:overview}. It should be emphasized again that the performance of the proposed method is still guaranteed even with the use of the pseudo-ground-truth as delayed feedback, as detailed in Sec.~\ref{sec:theory}.\par
\begin{algorithm}[t]
\caption{The proposed method: AAA.\label{alg:AAA}}
\begin{algorithmic}
\Inputs{The initial target location $f_0$.\par $N$ arbitrary experts.}
\Outputs{Predicted target location $p^1,\ldots,p^t,\ldots$}
\Initialize{The weights of the experts $w_i^1 \gets 1/N, \forall i$.\par
The initial anchor frame $u_1 \gets 1$ and $q \gets 1$.\par
The initial learning rate $\eta \gets \sqrt{\ln N}$.}
\For {$t=2,\dots $}
\State Get estimation $f^t_1, \ldots, f^t_N$ from the experts.
\If{$t$ is determined to be an anchor frame}
\State Increase the number of anchor frames $q \gets q+1$.
\State Store $t$ as the last anchor frame $u_q \gets t$.
\State Obtain delayed feedback $y^{u_{q-1}+1}, \ldots, y^{u_q}$.
\State Calculate each cumulative loss by using (\ref{eq:loss}).
\State Update $\eta$ using doubling trick.
\State Update weights by using (\ref{eq:weight}).
\State Set the target location $p^{t} \gets y^t$.
\Else
\State Do not update weights $w_i^{t} \gets w_i^{t-1}, \forall i$
\State \multilinestate{Select the target location $p^{t}$ from $f^t_1, \ldots, f^t_N$ \newline stochastically using $w_1^t, \ldots, w_N^t$.}
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Updating of expert weights}
Expert weights are updated at each anchor frame using the delayed feedback given at the frame as shown in Fig.~\ref{fig:overview}.
At the anchor frame $t$, the {\em loss} $L_i$ of the expert $i$ is first calculated using the delayed feedback $y^\tau, \tau\in [u_{q-1}+1, u_q]$ via:
\begin{equation}
\label{eq:loss}
L_i = \sum_{\tau=u_{q-1}+1}^{u_q} \ell(f_i^\tau, y^\tau),
\end{equation}
where
\begin{equation}
\ell(f_i^\tau, y^\tau) = 1 - {\mathcal{P}}(f_i^\tau, y^\tau),
\end{equation}
and $f_i^\tau$ is the $i$th expert's estimation at frame $\tau$. Locations such as $f_i^\tau$ and $y^\tau$ are represented as a bounding box, and the function $\mathcal{P}$ provides evaluation of proximity between two bounding boxes.
Based on this loss, the weight of the expert $i$ is updated at $t=u_q$ via the following equation and used from $t+1$:
\begin{equation}
\label{eq:weight}
w^{t+1}_i = \frac{w^t_i \exp\left(-\eta L_i\right)}
{\sum_{j=1}^N w_j^t \exp\left(-\eta L_j\right)}.
\end{equation}
In (\ref{eq:weight}), $\eta$ is the learning rate, and its value is carefully and automatically controlled for performance guarantee as detailed in Sec.~\ref{sec:AAA-bound}.
\par
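In code, the update of (\ref{eq:weight}) is a standard multiplicative-weights step; a minimal sketch:
\begin{verbatim}
import numpy as np

def update_weights(weights, losses, eta):
    # w_i <- w_i * exp(-eta * L_i), renormalized so that the weights
    # remain a probability distribution over the N experts.
    w = np.asarray(weights) * np.exp(-eta * np.asarray(losses))
    return w / w.sum()
\end{verbatim}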
For clarity, AAA is briefly summarized in Algorithm~\ref{alg:AAA}.
The first frame is treated as the first anchor frame $u_1$. The weights are initialized as $w^1_i = 1/N, \forall i$. If the current frame $t$ is not an anchor frame, the weight is not updated, i.e., $w^{t+1}_i=w^t_i$.\par
To evaluate the proximity ${\mathcal{P}}(f_i^\tau,y^\tau)$ of the two bounding boxes specified by $f_i^\tau$ and $y^\tau$, IoU~\cite{Wu_2013} is a possible choice. However, if the bounding boxes do not overlap, the IoU score is zero regardless of their distance. Accordingly, GIoU~\cite{Rezatofighi_2019} is employed to evaluate both overlap and distance. Any function can be used as the loss function $\ell$ while retaining the theoretical performance guarantee, provided its values lie in the interval $[0,1]$.
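For reference, GIoU subtracts from IoU the fraction of the smallest enclosing box not covered by the union, so that distant non-overlapping boxes are penalized. A sketch for axis-aligned boxes $(x_1,y_1,x_2,y_2)$ follows; note that raw GIoU lies in $[-1,1]$, so an affine rescaling such as $(1+\mathrm{GIoU})/2$ (an assumption of this sketch, not fixed by the text) keeps the loss in $[0,1]$.
\begin{verbatim}
def giou(a, b):
    # Generalized IoU (Rezatofighi et al., 2019) for boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    # Smallest enclosing box C accounts for distance when boxes do not overlap.
    c = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    return inter / union - (c - union) / c
\end{verbatim}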
\subsection{Anchor frame determination\label{sec:anchor}}
For accurate delayed feedback close to the exact target location, it is necessary to carefully determine anchor frames because these give the boundary conditions of the offline tracker for such feedback. An anchor frame is determined when the target location is determined with very high confidence. There are several approaches to highly-confident determination. For example, if the target is a specific pedestrian or athlete, the face or jersey number can be used for determination with the help of person re-identification or scene text OCR techniques. \par
For application of AAA to various datasets with various targets, a more general approach can be used for highly-confident target determination that relies on the target template image. In VOT tasks, the template is generally given as the bounding box at the initial frame. If one or more of the $N$ experts determine a bounding box whose normalized cosine similarity\footnote{``Normalized'' cosine similarity $\in [0,1]$ is simply given by $(1 + \mathrm {cosine\ similarity})/2$. } to the template is greater than the threshold $\theta$, the current frame $t$ is determined as an anchor frame. The bounding box of the expert with the maximum similarity is used as $y^t$. The threshold $\theta$ is the only hyper-parameter of the proposed method. As discussed in Sec.~\ref{sec:threshold}, an appropriate value of $\theta$ is experimentally determined for the experts.\par
The cosine similarity between the template and the bounding box determined is evaluated using the feature vectors given by ResNet~\cite{He_2016}.
Specifically, as in \cite{Lopes_2017}, the output of the average pooling layer of ResNet is used as a feature vector. Here, ResNet pre-trained on ImageNet~\cite{Russakovsky_2015} is used with no extra training.
Both the template and the bounding box are converted to feature vectors using ResNet, and their normalized cosine similarity is calculated.
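A minimal PyTorch sketch of this similarity computation is shown below; the choice of ResNet-50 is an assumption made here for illustration, as the text fixes only the architecture family:
\begin{verbatim}
import torch
import torchvision.models as models
import torchvision.transforms as T

resnet = models.resnet50(pretrained=True)  # ImageNet weights, no extra training
resnet.fc = torch.nn.Identity()            # expose the average-pooling features
resnet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def normalized_cosine_similarity(template_img, box_img):
    # Returns (1 + cosine similarity) / 2, a value in [0, 1].
    feats = resnet(torch.stack([preprocess(template_img),
                                preprocess(box_img)]))
    cos = torch.nn.functional.cosine_similarity(feats[0], feats[1], dim=0)
    return (1.0 + cos.item()) / 2.0
\end{verbatim}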
\section{Related Work\label{sec:related}}
\subsection{Adaptive Expert Aggregation (AEA)}
The goal of AEA is to make predictions with low regret by aggregating experts. The theories around AEA are discussed in Sec.~\ref{sec:AEA-outline} and~\ref{sec:AEA-regret}, as AEA is not popular in the computer vision field. The Hedge algorithm~\cite{Littlestone_1994, Freund_1997} is a popular AEA algorithm that predicts the weighted average of expert solutions. It can be easily applied to expert selection by using the weights as a probability distribution, and achieves a good regret bound for expert selection tasks. \par
One application of the Hedge algorithm involves an adaptive disk spin-down problem. Helmbold et al.~\cite{Helmbold_1996} proposed a method to minimize the energy cost of the disk by aggregating experts estimating the timing of this spin-down. Recently, an algorithm called Follow the Regularized Leader (FTRL) has been applied to meta-learning~\cite{finn2017model}. On the other hand, algorithms with delayed feedback have mainly been discussed from a theoretical perspective. Specifically, Quanrud et al.~\cite{Quanrud2015OnlineLW} proposed various algorithms, including the Hedge algorithm with delayed feedback. However, neither practical application nor experimental validation was presented in~\cite{Quanrud2015OnlineLW}.
\subsection{Ensemble tracking methods}
Many of the various ensemble tracking methods previously proposed~\cite{Wang2014EnsembleBasedTA, Han2017BranchOutRF, Zhou2014AnEO, Tian2007OnLineES} have leveraged carefully designed synergy among trackers, making it difficult to aggregate arbitrary trackers. By way of example, Avidan et al.~\cite{Avidan} and Grabner et al.~\cite{Grabner_2006} aggregated trackers complementarily trained using AdaBoost~\cite{Freund_1997}. Zhang et al.~\cite{Zhang_2014} proposed a tracking method whose experts are trackers derived from the same (e.g., SVM-based) tracking algorithm; the experts differ in the frames used for model updating, so as to deal with past appearances of the target object. \par
Some ensemble methods can be applied to aggregate arbitrary trackers. Wang et al.~\cite{Wang_2018} proposed a tracking method called the Multi-Cue Correlation filter based Tracker (MCCT). Similar to the proposed AAA, any tracker that outputs a bounding box as its prediction can be employed as an expert. However, the practical success of MCCT rests on the strong assumption that the bounding boxes given by the experts are close together. Accordingly, MCCT used specific experts that satisfy this assumption through inter-expert sharing of the ROI.
In aggregation of arbitrary trackers, this assumption may not be met because some trackers may predict a completely different location from others, as described in Appendix~\ref{sec:append_mcct}. The authors' experimental results show that deviation from the assumption degrades MCCT performance.\par
Qi et al.~\cite{Qi2019HedgingDF} proposed an AEA-based tracking method called the Hedged Deep Tracker (HDT*) based on the Hedge algorithm with multiple experts. To the best of the authors' knowledge, this is the only trial to have applied AEA-based aggregation for VOT. As detailed in the Appendix~\ref{sec:append_hdt}, HDT* uses feedback based on expert predictions (rather than other reliable resources such as offline trackers). HDT* also assumes that feedback is given for every frame regardless of reliability. Accordingly, the feedback may be relatively unreliable when a majority of experts do not perform well, and such unreliability degrades overall performance.
\section{Theoretical guarantee of AAA\label{sec:theory}}
\subsection{General preliminaries of AEA}\label{sec:AEA-outline}
Before an explanation of the theoretical guarantee of AAA, there is a need for a brief introduction to the general theories surrounding AEA, which is generally considered as a {\em repeated game} between a player and an adversarial environment. At each round $t=1,\ldots,T$, the player receives advice $f^t_1,\ldots,f^t_N$ from $N$ experts. The player makes a prediction $p^t$ based on this advice, and the environment gives its feedback $y^t$ to $p^t$. The player suffers the loss $\ell\left(p^t, y^t\right)$. \par
Here, AAA is based on AEA and thus has clear correspondence with the above terminologies. Specifically, the round, player and experts correspond to the frame $t$, AAA (i.e., the proposed tracker) and the online trackers to be aggregated, respectively. The prediction $p^t$, advice $f_i^t$, and feedback $y^t$ correspond to the target location determined from AAA, the target locations estimated from the $N$ online trackers, and the offline tracking result, respectively.\par
The goal of AEA is to minimize the regret $R_T$:
\begin{equation}
R_T = \mathbb E\left[\sum_{t=1}^T \ell(p^t, y^t)\right] - \min_{i= 1,\ldots, N} \sum_{t=1}^T \ell(f_i^t, y^t),
\label{eq:regret}
\end{equation}
where the first and second terms represent the cumulative loss of the player and the {\em best expert}, respectively. The best expert is the one with the minimum cumulative loss among $N$ experts. Intuitively, lower regret means the performance of the player is close to that of the best expert. \par
In AEA with delayed feedback, the player suffers a loss only at $Q (\leq T)$ rounds $u_1, \ldots, u_q, \ldots, u_Q$ rather than at each $t$. At the round $u_q$, the environment gives feedback $y^\tau, \tau\in[u_{q-1}+1, u_q]$ for the predictions of the player between $u_{q-1}$ and $u_q$. In Sec.~\ref{sec:AAA-overview}, it can be seen that $u_q$ corresponds to the $q$th anchor frame in AAA.
\subsection{Regret bound of AEA with delayed feedback}\label{sec:AEA-regret}
The regret bound of AEA with delayed feedback is given as follows:
\begin{theorem}[From Theorem A.5 of \cite{Quanrud2015OnlineLW}]
\label{theo:regret_delay}
Assume an AEA algorithm with the weight-updating strategy of (\ref{eq:weight}) and delayed feedback. Also assume the loss function $\ell\in [0, 1]$ and the learning rate $\eta \propto \sqrt{\ln N/(T+D)}$. The regret of the AEA algorithm after $T$ frames is then bounded as follows:
\begin{equation}
R_T = O \left(\sqrt{\left(T+D\right)\ln N} \right), \label{eq:bound1}
\end{equation}
where $D$ is the total delay.
\end{theorem}
\noindent The {\em total delay} $D$ is defined as $D=\sum_{q=2}^Q\sum_{\tau=1}^{u_q - u_{q-1}}\tau=\sum_{q=2}^{Q}\left(u_q - u_{q-1}\right)\left(u_q - u_{q-1}+1\right)/ 2$.
Thus, $D$ takes its minimum value $T$ when feedback is given at every frame and its maximum value $(T^2+T)/2$ when no feedback is given until $t=T$ after $t=1$ (i.e., $u_1=1, u_2=T$, and $Q=2$). Appendix~\ref{sec:append_regret} details the derivation of Theorem \ref{theo:regret_delay} from Theorem A.5 of \cite{Quanrud2015OnlineLW}.
\par
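As a concrete illustration with hypothetical numbers, suppose anchor frames occur every $k=10$ frames over $T=1000$ frames. Each gap then contributes $k(k+1)/2=55$ to the total delay, so $D \approx (T/k)\cdot k(k+1)/2 = 100\times 55=5500$, which lies far below the worst case $(T^2+T)/2 \approx 5\times 10^5$ but above the minimum $T=1000$.\par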
Theorem~\ref{theo:regret_delay} states that regret $R_T$ increases according to $D$. In the worst case, when $D$ takes its maximum value $(T^2+T)/2$, the regret bound is linearly proportional to $T$. This means that the performance difference between AEA and the best expert will increase drastically with $T$.
\input{figs/high_curve.tex}
\input{tables/table1.tex}
\subsection{Regret bound of AAA\label{sec:AAA-bound}}
Based on Theorem~\ref{theo:regret_delay}, the regret bound of AAA can be derived. This bound can be tighter than (\ref{eq:bound1}) in general because delayed feedback will be far more frequent than in the worst case in real-world tracking. Denoting by $r$ the {\em anchor frame ratio}, i.e., the probability that a frame is determined as an anchor frame, the regret bound outlined below is derived.
\begin{theorem}
\label{theo:regret_cdelay}
Assume the AEA algorithm of Theorem \ref{theo:regret_delay} can have delayed feedback with the anchor frame ratio $r \in (0,1]$ at each frame.
The expectation of the regret is then upper-bounded as follows:
\begin{equation}
\mathbb E_{r}\left[R_{T}\right] = O \left(\sqrt{{1 \over r}T \ln N} \right).
\label{eq:bound2}
\end{equation}
\end{theorem}
\begin{proof}
The expected delay length $\mathbb E_r[u_q - u_{q-1}]$ is $1/r$
and the expected number of anchor frames is $rT$. Thus, the expectation of the total delay is $\mathbb E_r[D]= \sum_{q=2}^{Q}\left(1/r\right)\left(1/r+1\right)/2 =rT \left(1/r\right)\left(1/r+1\right)/2 = O(T/r)$. Finally, consideration for the expectation of the regret of $R_T$ with $r$ produces the above bound.
\end{proof}
In contrast to Theorem \ref{theo:regret_delay}, where the regret can increase linearly with $T$ in the worst case, Theorem \ref{theo:regret_cdelay} guarantees that the regret of AAA increases only as $O(\sqrt{T})$.
This indicates that the regret of AAA is bounded more tightly, and stable performance of AAA can therefore be expected even for a longer sequence.
\subsubsection{Impact of the regret bound for VOT}
Theorem \ref{theo:regret_cdelay} guarantees that the performance difference between the proposed AAA and the best expert is upper-bounded by (\ref{eq:bound2}). Intuitively speaking, this bound is significant in a number of ways. First, it means that the performance of AAA is not far from that of the best expert.
Second, this guarantee holds for arbitrary experts, arbitrarily delayed feedback (i.e., arbitrary offline trackers and the determination rule for anchor frames) and arbitrary image sequences.
Third, and most interestingly, AAA may demonstrate performance similar to that of the best expert {\em even though the best expert and its performance is unknown until $T$, i.e., the end of the image sequence}. Expert selection is made at each frame $t$ in a strictly online condition, and it remains unknown which expert will be the best; nevertheless, these theorems guarantee that the performance of AAA will not be far from that of the best expert.
\subsubsection{Effect of the anchor frame ratio \texorpdfstring{$r$}{}}
One might expect that it is better to set $r=1$ (making all frames anchor frames), since the regret bound of (\ref{eq:bound2}) is minimized when $r=1$. However, it must be remembered that anchor frames should be set so as to give reliable delayed feedback from a reliable offline tracking result. All the above theorems rely on the loss function $\ell$, which treats $y^t$ given by the offline tracking result as a pseudo-ground-truth.
Accordingly, using unreliable feedback eventually selects, as the best expert, one that is far from the true ground-truth. AAA then tries to follow this false best expert to maintain the regret bound. Consequently, for better AAA performance, anchor frames must be carefully determined with higher settings of $\theta$, even though this makes $r$ smaller. This is experimentally demonstrated in Appendix~\ref{sec:append_threshold}.
\subsubsection{Effect of the number of experts \texorpdfstring{$N$}{}\label{sec:effect_of_N}}
The theorems remove the need for concern over the choice of experts, because the regret bound depends only logarithmically on the number of experts $N$. In other words, even if many experts are employed, the regret bound will increase only slightly. This increase will not be problematic for practical tracking performance. As regret represents the difference from the best expert, using more experts with different characteristics increases the chance of including a best expert with better performance. Since the difference is bounded by (\ref{eq:bound2}), the presence of a better best expert will enhance AAA performance.
\subsubsection{Learning rate \texorpdfstring{$\eta$}{}}
\label{subsubsec:doubling_trick}
Theorem \ref{theo:regret_cdelay}, like Theorem \ref{theo:regret_delay}, assumes that the learning rate $\eta$ is proportional to $\sqrt{\ln N/(T+D)}$. However, $T$ and $D$ are usually unknown until the end of the sequence in online tracking tasks. Fortunately, the {\em doubling trick}~\cite{Quanrud2015OnlineLW} allows adaptive control of $\eta$ with a regret guarantee of the same order as in Theorem~\ref{theo:regret_cdelay}.
Roughly speaking, this trick uses a tentative value $Z$ instead of $T+D$.
Thus, the parameter $\eta$ is initially set as $\eta = \sqrt{\ln N/Z}$.
Then, if the actual value of $T+D$ reaches $Z$ at the current frame $t$,\footnote{Specifically, this condition means that $t+D_t$ is the same as $Z$, where $D_t$ is the total delay until $t$ defined as
$D_t=\sum_{\tau=1}^{t - u_{\bar{q}}}\tau + \sum_{q=2}^{\bar{q}}\sum_{\tau=1}^{u_q - u_{q-1}}\tau$, and $\bar{q}$ is the index of the latest anchor frame up to $t$.} the value of $Z$ is doubled and $\eta$ is updated using the new value of $Z$. The proof that the doubling trick still guarantees the regret bound of Theorem \ref{theo:regret_delay} is detailed in~\cite{mohri2018foundations} and~\cite{Quanrud2015OnlineLW}, and the proof for Theorem \ref{theo:regret_cdelay} follows directly from this.
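A minimal sketch of this adaptive control is given below; note that with the initial $Z=1$ it reproduces the initialization $\eta=\sqrt{\ln N}$ used in Algorithm~\ref{alg:AAA}:
\begin{verbatim}
import math

class DoublingTrick:
    # Controls eta = sqrt(ln N / Z) without knowing T or D in advance;
    # Z is doubled whenever the running total t + D_t reaches it.
    def __init__(self, n_experts, z_init=1.0):
        self.n, self.z = n_experts, z_init

    def eta(self, t_plus_dt):
        while t_plus_dt >= self.z:
            self.z *= 2.0
        return math.sqrt(math.log(self.n) / self.z)
\end{verbatim}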
\section{INTRODUCTION}
The discovery of the H mode with enhanced confinement
in ASDEX [1] opened a new era of tokamak fusion research.
Extensive experimental works were performed
to identify the key procedures and signatures
that carried the plasma from the L to the H confinement mode [2,3].
These procedures included pellet injection for density enhancement,
neutral beam or radio frequency heating for temperature
and electric conductivity profiling, etc.~[4,5].
The experimental signatures of L/H transition
were a drastic reduction of the $D_{\alpha}$ hydrogen emission
and a sudden decrease of the plasma floating potential.
These were accompanied by a marked enhancement
of plasma density and energy confinements,
plus a steepening of the edge plasma profile
and a reduction of magnetohydrodynamic (MHD) activities [6,7].
Observationally, the L/H transition was accompanied
by an increase in the toroidal and poloidal rotations [3].
Nevertheless, it was not clear whether this velocity increase
was the cause or the consequence of the transition.
The L/H transition is believed to be caused
by divertor shaping of the edge plasma,
under high levels of heating power.
Action of the divertor plus the edge gradient
pumps up the static radial electric field
that drives a zonal flow near the
edge rational magnetic surface [8,9].
This zonal flow is thought to cause a transport barrier
that enhances density and energy confinements.
The plasma is thought to self-organize gradually
under this scenario to reach the H mode [10].
Here, we take a different approach
to view the L/H transition.
We consider the toroidal and poloidal rotations
on the equilibrium scaling, not transport scaling,
as equilibrium parameters.
We solve for rotational equilibria,
under specific source functions,
in spherical coordinates by seeking toroidal solutions [11].
There are two independent solutions
for rotational equilibrium,
and we associate them to the L and H modes.
Under this equilibrium configuration approach,
the transition is seen as a bifurcation
from one equilibrium to another,
under external drivings
such as pellet injection for plasma density
and strong heating for temperature profiling.
Because of the steep edge gradient of the H mode,
the large static radial electric field,
the zonal flow, and the associated confinements,
come as the natural consequences, not causes,
of the H mode solution.
\newpage
\section{ROTATIONAL EQUILIBRIA}
To discuss rotational equilibria,
we begin with the time-dependent MHD equations
\begin{eqnarray}
\label{eqno1}
{\partial\rho\over\partial t}+\nabla\cdot(\rho\vec v)\,
=\,0\,\,\,,
\\
\label{eqno2}
\rho\{{\partial\vec v\over\partial t}
+(\vec v\cdot\nabla)\vec v\}\,
=\,\vec J\times\vec B-\nabla p\,\,\,,
\\
\label{eqno3}
{\partial\vec B\over\partial t}\,
=\,-\nabla\times\vec E\,
=\,\nabla\times(\vec v\times\vec B)\,\,\,,
\\
\label{eqno4}
\nabla\times\vec B\,=\,\mu\vec J\,\,\,,
\\
\label{eqno5}
\nabla\cdot\vec B\,=\,0\,\,\,,
\\
\label{eqno6}
p\,=\,\rho v_{s}^{2}\,\,\,.
\end{eqnarray}
\noindent Here, $\rho$ is the mass density,
$p$ is the plasma pressure,
$\vec v$ is the bulk velocity,
$\vec J$ is the current density,
$\vec B$ is the magnetic field,
$v_{s}$ is the ion acoustic speed,
$\mu$ is the free space permeability.
With axisymmetry, the magnetic field and the current density
can be represented by two scalar functions
in standard spherical coordinates
\begin{eqnarray}
\label{eqno7}
\vec B\,
=\,A_{0}(\nabla P\times\nabla\phi+Q\nabla\phi)\,
=\,{A_{0}\over r\sin\theta}
\{+{1\over r}{\partial P\over\partial\theta},
-{\partial P\over\partial r},
+Q\}\,\,\,,
\\
\label{eqno8}
\mu\vec J\,
=\,{A_{0}\over r\sin\theta}
\{+{1\over r}{\partial Q\over\partial\theta},
-{\partial Q\over\partial r},
-{\partial^2 P\over\partial r^2}
-{1\over r^2}\sin\theta
{\partial\over\partial\theta}
({1\over\sin\theta}{\partial P\over\partial\theta})\}
\,\,\,.
\end{eqnarray}
\noindent Here, $A_{0}$ carries the physical dimension
of poloidal magnetic flux
such that $P$ is a dimensionless function.
Also, we can write the axisymmetric
poloidal and toroidal rotations as
\begin{eqnarray}
\label{eqno9}
\vec v\,
=\,A'_{0}(\nabla P'\times\nabla\phi+Q'\nabla\phi)\,\,\,.
\end{eqnarray}
\noindent Likewise, $A'_{0}$ carries the physical dimension
of poloidal velocity flux
such that $P'$ is a dimensionless function.
With axisymmetry and incompressible fluid condition,
$\nabla\cdot\vec v=0$, steady state in Eq.~\ref{eqno1} requires
\begin{eqnarray}
\nonumber
\nabla\rho\cdot\vec v\,
=\,A'_{0}\nabla\rho\cdot(\nabla P'\times\nabla\phi)\,
=\,A'_{0}(\nabla\rho\times\nabla P')\cdot\nabla\phi\,
=\,0\,\,\,,
\\
\label{eqno10}
\rho\,=\,\rho_{0}\rho(P')\,\,\,,
\end{eqnarray}
\noindent which requires that the poloidal velocity
and the mass density have the same level contours.
Here, $\rho_{0}$ carries the dimension and amplitude of mass density,
and $\rho(P')$ is a dimensionless function.
As for Eq.~\ref{eqno3}, by Eq.~\ref{eqno7} and Eq.~\ref{eqno9},
we note that $\vec v\times\vec B$ would be null
and steady state in Eq.~\ref{eqno3} would be warranted with
\begin{mathletters}
\begin{eqnarray}
\label{eqno11a}
P'\,=\,\alpha P\,\,\,,
\\
\label{eqno11b}
Q'\,=\,\alpha Q\,\,\,.
\end{eqnarray}
\end{mathletters}
\noindent As a result, the velocity field and the magnetic field
are parallel, generating an emf-free velocity field
\begin{eqnarray}
\label{eqno12}
\vec v\,=\,\alpha {A'_{0}\over A_{0}}\vec B\,
=\,g\vec B\,\,\,.
\end{eqnarray}
\noindent With Eq.~\ref{eqno4} and Eq.~\ref{eqno6},
toroidal plasma equilibria of Eq.~\ref{eqno2}
with full axisymmetric rotations are described by
\begin{eqnarray}
\label{eqno13}
(1-\mu\rho_{0}g^{2}\rho(P))(\nabla\times\vec B)\times\vec B
-\mu\rho_{0}v_{s}^{2}\nabla\rho(P)\,
=\,{1\over 2}\mu\rho_{0}g^{2}\rho(P)\nabla B^{2}\,
=\,0\,\,\,.
\end{eqnarray}
\noindent We consider the rotational scalar pressure
to be much smaller than the plasma pressure,
thereby justifying the second equality in the above equation.
\newpage
\section{ROTATIONAL GRAD-SHAFRANOV EQUATION}
We seek to solve Eq.~\ref{eqno13} for toroidal solutions.
This equation renders three components.
By axisymmetry, the $\phi$ component
contains only the magnetic force, and it is
\begin{mathletters}
\begin{eqnarray}
\label{eqno14a}
{\partial P\over\partial r}
{\partial Q\over\partial\theta}
-{\partial P\over\partial\theta}
{\partial Q\over\partial r}\,
=\,0\,\,\,,
\\
\label{eqno14b}
Q(r,\theta)\,=\,Q(P(r,\theta))\,\,\,.
\end{eqnarray}
\end{mathletters}
\noindent As for the $\theta$ component, it reads
\begin{equation}
\label{eqno15}
A_{0}^{2}\{{\partial^2 P\over\partial r^2}
+{1\over r^2}\sin\theta{\partial\over\partial\theta}
({1\over\sin\theta}{\partial P\over\partial\theta})
+{1\over 2}{\partial Q^{2}\over\partial P}\}\,
=\,+({v_{s}\over g})^{2}r^2\sin^2\theta
{\partial\over\partial P}\ln (1-\mu\rho_{0}g^{2}\rho(P))
\,\,\,.
\end{equation}
\noindent This equation is the rotational counterpart
of the Grad-Shafranov equation
of axisymmetric toroidal plasma equilibrium,
represented in spherical coordinates.
The three terms on the left side
represent the nonlinear force-free field
with $\mu\vec J=K(P)\vec B$,
where $K(P)={\partial Q/\partial P}$ is a scalar function.
This can be verified from Eq.~\ref{eqno7} and Eq.~\ref{eqno8}
when we impose $\mu\vec J=K(P)\vec B$.
In particular, we would have the linear force-free field
should we take $Q^{2}(P)=(aP)^{2}$ with constant $K(P)=a$.
The term on the right side is the plasma pressure balance.
The magnetic function $Q^{2}(P)$ and the mass density $\rho(P)$
are source functions that need to be specified.
This second order partial differential equation
has two independent solutions.
Finally, the $r$ component of Eq.~\ref{eqno13} reads
\begin{eqnarray}
\nonumber
(1-\mu\rho_{0}g^{2}\rho(P)){\partial P\over\partial r}
A_{0}^{2}\{{\partial^2 P\over\partial r^2}
+{1\over r^2}\sin\theta{\partial\over\partial\theta}
({1\over\sin\theta}{\partial P\over\partial\theta})
+{1\over 2}{\partial Q^{2}\over\partial P}\}\,
\\
\label{eqno16}
=\,+({v_{s}\over g})^{2}r^2\sin^2\theta
{\partial\over\partial r}(1-\mu\rho_{0}g^{2}\rho(P))
\,\,\,.
\end{eqnarray}
\noindent Comparing Eq.~\ref{eqno16} to Eq.~\ref{eqno15},
we note that these two equations are identical.
The $r$ component is simply the self-consistent
condition of the $\theta$ component.
To solve Eq.~\ref{eqno15} analytically, we take the source
functions as
\begin{mathletters}
\begin{eqnarray}
\label{eqno17a}
Q^2(P)\,=\,a^2P^2+Q^{2}_{0}\,\,\,,
\\
\label{eqno17b}
\ln (1-\mu\rho_{0}g^{2}\rho(P))\,=\,-P^{q}\,\,\,,
\\
\nonumber
\mu\rho_{0}g^{2}\rho(P)\,=\,1-e^{-P^{q}}\,\,\,.
\end{eqnarray}
\end{mathletters}
\noindent Writing $P(r,\theta)=R(r)\Theta(\theta)$,
the rotational Grad-Shafranov equation reads
\begin{eqnarray}
\nonumber
r^{2}{1\over R}{\partial^2 R\over\partial r^2}
+(ar)^{2}
+{1\over\Theta}\sin\theta{\partial\over\partial\theta}
({1\over\sin\theta}{\partial\Theta\over\partial\theta})\,
\\
\label{eqno18}
=\,-{v_{s}^{2}\over A_{0}^{2}g^{2}}q(R\Theta)^{q-2}
r^{4}\sin^2\theta\,
=\,-{1\over\alpha^{2}}{v_{s}^{2}\over A_{0}^{'2}}q(R\Theta)^{q-2}
r^{4}\sin^2\theta\,\,\,.
\end{eqnarray}
\noindent The variables of this equation could be separated
by taking $q=1$ to give
\begin{mathletters}
\begin{eqnarray}
\label{eqno19a}
(1-x^2){d^2\Theta(x)\over dx^2}
+n(n+1)\Theta(x)\,=\,0\,\,\,,
\\
\label{eqno19b}
r^{2}{d^2R\over dr^2}
+[(ar)^{2}-n(n+1)]R\,
=\,-{1\over\alpha^{2}}{v_{s}^{2}\over (A'_{0}a^{2})^{2}}(ar)^{4}
{(1-x^2)\over\Theta}\,
=\,-A_{1}(ar)^{4}{(1-x^2)\over\Theta}\,\,\,,
\end{eqnarray}
\end{mathletters}
\noindent where we have denoted $x=\cos\theta$,
and used $n(n+1)$ as the separation constant.
The factor $v_{s}^{2}/(A'_{0}a^{2})^{2}$
is proportional to $v_{s}^{2}/v_{pol}^{2}$,
where $v_{pol}$ is the average poloidal rotational velocity.
The first equation gives
\begin{equation}
\label{eqno20}
\Theta(x)\,=\,(1-x^2){dP_{n}(x)\over dx}\,
=\,(1-x^2)\,\,\,,
\end{equation}
\noindent where $P_{n}(x)$ is the Legendre polynomial.
We have taken $n=1$ to get the second equality.
As for the second equation, the $\theta$-dependent factor
$(1-x^2)/\Theta$ on the right side reduces to unity for $n=1$.
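Indeed, substituting $\Theta(x)=1-x^2$ into Eq.~\ref{eqno19a} with $n=1$,
for which $d^2\Theta/dx^2=-2$, gives
\begin{eqnarray}
\nonumber
(1-x^2){d^2\Theta\over dx^2}+n(n+1)\Theta\,
=\,-2(1-x^2)+2(1-x^2)\,=\,0\,\,\,,
\end{eqnarray}
\noindent so the stated solution is readily verified.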
The solution is given by $R(r)=R_{0}(r)+R_{1}(r)$,
where $R_{0}(r)$ and $R_{1}(r)$
are the homogeneous and particular solutions.
The homogeneous solution is described by
\begin{equation}
\label{eqno21}
R_{0}(r)\,=\,arj_{n}(ar)+\lambda_{0} ary_{n}(ar)\,\,\,,
\end{equation}
\noindent where $j_{n}(z)$ and $y_{n}(z)$
are the oscillating spherical Bessel functions,
and $\lambda_{0}$ is a constant.
Together with $A_{0}$ defined in Eq.~\ref{eqno7},
there are two constants for $R_{0}(r)$.
As for the particular solution, we have
\begin{eqnarray}
\nonumber
z^{2}{d^2R_{1}\over dz^2}+[z^{2}-n(n+1)]R_{1}\,
=\,-A_{1}z^{4}\,\,\,,
\\
\label{eqno22}
R_{1}(r)\,=\,-A_{1}(ar)^{2}\,=\,-A_{1}z^{2}\,\,\,.
\end{eqnarray}
\noindent We note that the homogeneous solutions,
$R_{0}(r)$ and $\Theta(x)$,
correspond to the linear or nonlinear force-free solutions
of the left side of Eq.~\ref{eqno15}.
The plasma pressure term on the right side
appears only in the particular solution, $R_{1}(r)$,
that keeps the pressure balance.
The homogeneous radial solution
is an oscillating function in $z=ar$,
which has successive maxima,
and the homogeneous meridian solution
has a lobe peaked at $x=0$.
The superposition of the particular radial solution
only slightly modifies the homogeneous solutions.
We could use the region between $z=0$
and the first root of $j_{n}(z)$, with $n=1$,
to describe low aspect ratio high $\beta$
toroidal plasma equilibria.
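For concreteness, the following minimal Python sketch (an illustration added here, not part of the analytic development) evaluates the radial solution and the flux function $P=R\Theta$ whose level contours give the poloidal field lines:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def R(z, lam0=0.0, A1=0.0, n=1):
    # Homogeneous solution z j_n(z) + lam0 * z y_n(z) plus the
    # particular solution -A1 * z**2, with z = a*r the normalized radius.
    return (z * spherical_jn(n, z) + lam0 * z * spherical_yn(n, z)
            - A1 * z**2)

z = np.linspace(0.1, 7.0, 400)    # exclude z = 0, where y_n diverges
x = np.linspace(-1.0, 1.0, 200)   # x = cos(theta)
P = np.outer(R(z), 1.0 - x**2)    # flux function P(z, x) = R(z) * Theta(x)
\end{verbatim}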
\newpage
\section{MAGNETIC AND CURRENT STRUCTURES}
With the spatial structure solved,
the magnetic field components are given by
\begin{mathletters}
\begin{eqnarray}
\label{eqno23a}
B_{r}\,=\,+{1\over r\sin\theta}
{1\over r}{\partial P\over\partial\theta}\,
=\,-{1\over r^2}R(r)
{d\Theta(x)\over dx}\,\,\,,
\\
\label{eqno23b}
B_{\theta}\,=\,-{1\over r\sin\theta}
{\partial P\over\partial r}\,
=\,-{1\over r}{dR(r)\over dr}
{1\over (1-x^2)^{1/2}}\Theta(x)\,\,\,,
\\
\label{eqno23c}
B_{\phi}\,=\,+{1\over r\sin\theta} Q(P)\,\,\,.
\end{eqnarray}
\end{mathletters}
\noindent The solution $R(r)$ vanishes at some $r$
where we have $B_{r}(r)=0$.
The solution $\Theta(x)$ also vanishes at some $x$.
Together they describe the magnetic fields.
Within this region of $(r,x)$, the topological center
defined by $dR(r)/dr=0$ and $d\Theta(x)/dx=0$
has $B_{r}=0$ and $B_{\theta}=0$.
This is the magnetic axis, $r=r_{*}$,
where the magnetic field is entirely toroidal.
The field lines about this center are given by
\begin{equation}
\label{eqno24}
{B_{r}\over dr}\,=\,{B_{\theta}\over rd\theta}\,
=\,{B_{\phi}\over r\sin\theta d\phi}\,\,\,.
\end{equation}
\noindent By axisymmetry, the third group
is decoupled from the first two groups.
For the field lines on an $(r-\theta)$ plane,
we consider the first equality between $B_{r}$ and $B_{\theta}$
which gives
\begin{equation}
\label{eqno25}
P(r,x)\,=\,R(r)\Theta(x)\,=\,C\,\,\,.
\end{equation}
\noindent The nested poloidal field lines
are given by the contours of $P(r,x)$ on the $(r-x)$ plane.
At the topological center, we have $\Theta(x)$ maximum and
$R(r)$ maximum, so that $P(r,x)$ is maximum.
Since $r\sin\theta$ is the distance
of a point on the $(r-x)$ plane to the z axis,
Eq.~\ref{eqno23c} states that the line integral
of $B_{\phi}$ around the circle on the azimuthal plane
is measured by $2\pi Q$,
\begin{eqnarray}
\nonumber
2\pi r\sin\theta B_{\phi}\,=\,2\pi Q\,=\,\mu I_{z}\,\,\,.
\end{eqnarray}
\noindent This line integral about the axis of symmetry
is maximum at the topological center.
Also, it is evident that $Q$ is equivalent to the axial current,
where the constant part $Q_{0}$ amounts to a uniform component.
As for $P$, we evaluate the poloidal magnetic flux
by integrating Eq.~\ref{eqno23b} on the $x=0$ plane
over a cross section to give
\begin{eqnarray}
\nonumber
\int_{r_{*}}^{r} 2\pi r B_{\theta}dr\,
=\,-2\pi (P(z)-P(z_{*}))\,\,\,.
\end{eqnarray}
\noindent As for the current density of Eq.~\ref{eqno8},
making use of the Grad-Shafranov equation
of Eq.~\ref{eqno15} gives
\begin{eqnarray}
\label{eqno26}
\mu\vec J\,
=\,{A_{0}\over r\sin\theta}
\{+{1\over r}{\partial Q\over\partial\theta},
-{\partial Q\over\partial r},
+(a^2P-{1\over\alpha^{2}}({v_{s}\over A'_{0}})^{2}r^2\sin^2\theta
{\partial\over\partial P}\ln (1-\mu\rho_{0}g^{2}\rho(P)))\}
\,\,\,.
\end{eqnarray}
\noindent Analogous to the magnetic field lines,
the current density field lines are given by
\begin{equation}
\label{eqno27}
{J_{r}\over dr}\,=\,{J_{\theta}\over rd\theta}\,
=\,{J_{\phi}\over r\sin\theta d\phi}\,\,\,.
\end{equation}
\noindent Considering the first equality,
the poloidal current density contours are given by
\begin{eqnarray}
\label{eqno28}
Q(r,x)\,=\,C\,\,\,.
\end{eqnarray}
\newpage
\section{L AND H MODES}
We note that there are two independent solutions
for $R_{0}(r)$ in Eq.~\ref{eqno21}.
The first one is $zj_{1}(z)$
which vanishes at $z=0$.
With $z_{1}$ and $z_{2}$ as the first and second zeros,
the region bounded by $0<z<z_{1}$
could be used to describe spheromak and
high $\beta$ low aspect ratio tokamak equilibria.
The second one is $zy_{1}(z)$
which diverges at $z=0$.
Since our domain of interest in tokamak plasmas
excludes $z=0$,
the singularity of $y_{1}$ is irrelevant.
The region bounded by $z_{1}<z<z_{2}$
could also be used to describe
high $\beta$ low aspect ratio tokamak equilibria.
The functions $zj_{1}(z)$ and $zy_{1}(z)$ are shown in Fig.1.
The poloidal magnetic contours of Eq.~\ref{eqno25}
for $zj_{1}(z)$ in the interval
$0<z<z_{1}$ are shown in Fig.2.
In particular, this solution
could also be applied to spheromaks
where $z=0$ is accessible to plasma equilibria.
Similar contours for $z_{1}<z<z_{2}$,
$z_{2}<z<z_{3}$, and so on, can be obtained
to represent tokamak plasmas of different aspect ratios.
The contours for $zy_{1}(z)$ in the interval
$z_{1}<z<z_{2}$ are shown in Fig.3.
In order to illustrate the essential features,
we have neglected the particular solution $R_{1}(r)$,
and have taken $R(r)=R_{0}(r)$.
The contour levels are taken at
$0.95, 0.9, 0.7, 0.5, 0.3, 0.1$ of the respective peak value.
The external contours indicate high poloidal fields,
and internal contours for low poloidal fields.
These contours also indicate
the poloidal rotations with rotation velocity
high on the outside and low on the inside.
Bounded by a smaller interval $z_{1}<z<z_{2}$,
we note that Fig.3 of $zy_{1}(z)$
has a more localized domain and steeper edge profile
than the equilibrium in Fig.2,
which is described by $zj_{1}(z)$
in the larger interval $0<z<z_{1}$.
Including the negative valued particular solution
$R_{1}(r)$ further steepens the edge gradient.
We associate $zj_{1}(z)$ and $zy_{1}(z)$
to the L and H mode respectively.
We also note that the poloidal and toroidal magnetic fields
are plotted in normalized radial coordinate $z=ar$.
In laboratory plasmas,
the fields are measured in terms of radius $r$.
To connect to our normalized results,
we need to determine the normalizing parameter $a$.
This can be done by considering
the magnetic axis $r_{*}$ of a laboratory plasma,
say in the $zj_{1}(z)$ mode bounded by $0<z<z_{1}$,
through $a_{j}r_{*}=z_{*}=2.7$.
Defined by the divertor scrape-off,
the radial range of laboratory plasma, $r_{a}<r<r_{b}$,
can now be converted to $0<z_{a}<z<z_{b}<z_{1}$
with $z_{1}=4.5$.
In the case of $zy_{1}(z)$ mode, we have $a_{y}r_{*}=z_{*}=4.5$,
giving $a_{y}/a_{j}=4.5/2.7$.
The radial range can be converted to
$z_{1}<z_{a}<z<z_{b}<z_{2}$ with $z_{1}=2.8$ and $z_{2}=6.1$.
Experimentally, this change of the normalizing parameter
from $a_{j}$ to $a_{y}$ could be accomplished
by pellet injection and high power external heating.
In the L mode, the $zj_{1}(z)$ profile is more diffuse,
spanning a larger interval between the zeros.
The divertor action removes the edge plasma
to the $z_{a}<z<z_{b}$ domain,
with poloidal rotation velocity contours
corresponding to such domain.
In the H mode, due to the compactness of the interval
between zeros of $zy_{1}(z)$ profile,
the plasma equilibrium fits within the toroidal machine vessel
naturally with much less divertor shaping.
The plasma equilibrium occupies
probably the entire domain $z_{1}<z<z_{2}$,
or a large part of it.
As a result, poloidal rotation contours
of the H mode cover not just the central part
but also the high velocity part on the outside.
By going from L to H mode,
the rotation contours within the plasma cross-section,
defined by the divertor action,
are enlarged from a partial central profile
to an almost complete profile.
Observed at a fixed position at the plasma edge,
we would have the impression
that the rotation velocity has been speeded up.
The corresponding mass density contours of Eq.~\ref{eqno17b}
for L and H modes are shown in Fig.4 and Fig.5 respectively.
The profiles along $x=0$ horizontal cut are shown in Fig.6.
Although the two modes are presented in one same figure
showing approximately the same dimensionless amplitudes,
the physical amplitude and dimension
is given by $\rho_{0}$ defined in Eq.~\ref{eqno10}.
As a result, the mass density of the H mode
could be much larger than that of the L mode.
The essence of Fig.6 is to show the relative shape
of the mass density profiles for the two modes.
We have suggested the identification
of L mode to the $zj_{1}(z)$ solution in the $(0,z_{1})$ domain,
and H mode to the $zy_{1}(z)$ solution in the $(z_{1},z_{2})$ domain.
To map these $z$ domains to the same $r$ domain of machine vessel,
we have used two different normalizing parameters
$a_{j}$ and $a_{y}$ for the source function $Q^{2}(P)$
of Eq.~\ref{eqno17a}.
Since $a_{y}=1.7a_{j}$, this would require
a substantial increase of toroidal magnetic field
according to Eq.~\ref{eqno23c}.
To avoid this substantial toroidal field enhancement,
the H mode could be generated by superimposing
the $zy_{1}(z)$ solution to the $zj_{1}(z)$ solution,
without displacing significantly the $z$ domain.
As an example, with $\lambda_{0}=1$,
the profile of $(zj_{1}(z)+zy_{1}(z))$ is shown
in Fig.7 indicating $z_{1}=1.9$ and $z_{2}=5.3$
with a maximum at $z_{*}=3.6$.
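These roots are easy to verify numerically; a brief SciPy check (an illustration for the quoted values, with $\lambda_{0}=1$ for the superposed profile) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn, spherical_yn

def f_j(z): return z * spherical_jn(1, z)   # L-mode radial profile
def f_y(z): return z * spherical_yn(1, z)   # H-mode radial profile
def f_s(z): return f_j(z) + f_y(z)          # superposition, lambda_0 = 1

print(brentq(f_j, 3.0, 6.0))  # ~4.5: first nonzero root z_1 of z j_1(z)
print(brentq(f_y, 2.0, 4.0))  # ~2.8: z_1 of z y_1(z)
print(brentq(f_y, 5.0, 7.0))  # ~6.1: z_2 of z y_1(z)
print(brentq(f_s, 1.0, 3.0))  # ~1.9: z_1 of the superposed profile
print(brentq(f_s, 4.0, 6.0))  # ~5.3: z_2 of the superposed profile
\end{verbatim}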
\newpage
\section{DISCUSSIONS AND CONCLUSIONS}
Experimentally, due to the divertor action on the edge plasma,
there is an electrostatic field normal to the magnetic surfaces.
This field is particularly large in the H mode configuration
because of the steep edge gradient.
Since the poloidal field lines are described by the $P$ contours,
this field can be written as $\vec E=-\nabla\Phi(P)$,
which warrants $\partial\vec B/\partial t=0$ for equilibrium.
Interacting with magnetic islands on a rational surface,
this electric field drives zonal flows
that establish transport barriers
for better plasma density and energy confinements.
With the rotational toroidal equilibrium approach,
the L/H transition amounts to a bifurcation
from the L equilibrium to the H equilibrium,
under the action of external pumping
through pellet injection and strong auxiliary heating
for density and temperature profile shaping.
The normal electric field, zonal flows, and transport barriers
come as consequences, not causes,
of the steep edge gradient of the H mode.
We have solved toroidal plasma equilibria
with axisymmetric toroidal and poloidal rotations
that are self-similar to the corresponding magnetic fields.
The rotational Grad-Shafranov equation
in spherical coordinates is solved for toroidal solutions,
under the assumption that the scalar rotational pressure
is much less than the plasma pressure.
With a specific set of source functions,
there are two independent homogeneous radial modes
given by $zj_{1}(z)$ and $zy_{1}(z)$.
The $zj_{1}(z)$ mode in the region $0<z<z_{1}$ and
the $zy_{1}(z)$ mode in the region $z_{1}<z<z_{2}$
could be applied to current large scale tokamaks.
The $zj_{1}(z)$ mode has a diffuse edge profile
over a larger $z$ domain,
and the $zy_{1}(z)$ mode has a steep edge profile
over a smaller $z$ domain.
We associate them with the L and H modes, respectively.
The L/H transition amounts to a bifurcation
from one equilibrium configuration to another,
following a change of the normalizing parameter
from $a_{j}$ to $a_{y}$.
Experimentally, this change of parameter could be achieved
by pellet injection and large external heating.
\newpage
\section{Introduction}
A standard tool in quantum field theory (QFT) is to probe the theory with non-dynamical sources, or background fields. The consequences of symmetries can then be systematically analyzed by assigning spurious transformation rules to the background fields. In supersymmetric theories, all sources must therefore reside in multiplets of supersymmetry, or superfields. This constrains the extent to which they can affect protected supersymmetric, or BPS, quantities. A typical example is the effective superpotential in four-dimensional theories with~$\mathcal{N}=1$ supersymmetry, which must be a locally holomorphic function of coupling constants that reside in background chiral superfields~\cite{DUSeiberg:1993vc}. This constraint makes it possible to determine the effective superpotential exactly in a large class of theories; see~\cite{DUIntriligator:1995au} for a classic exposition of this powerful approach to analyzing the dynamics of supersymmetric field theories.
Much recent work has involved placing supersymmetric field theories on a manifold~$\mathcal{M}$ with a non-trivial metric or topology, while preserving some (though generally not all) supercharges.\footnote{~The study of supersymmetric field theories on non-trivial manifolds was pioneered by Witten, see for instance~\cite{DUWitten:1982df,DUWitten:1988ze}.} The partition function~$Z_\mathcal{M}$ on~$\mathcal{M}$ (which may be decorated with suitable background fields or operator insertions) is BPS and can sometimes be computed exactly, e.g.~using supersymmetric localization techniques.\footnote{~The basic idea behind supersymmetric localization is reviewed below; see~\volcite{PZ} for a broader and more detailed exposition.} A systematic approach to constructing and analyzing supersymmetric field theories on curved manifolds~$\mathcal{M}$ was presented in~\cite{DUFestuccia:2011ws}. It extends the principle that all background fields should reside in superfields to the metric~$g_{\mu\nu}$ on~$\mathcal{M}$ by embedding it in an off-shell supergravity multiplet.
The purpose of this review is twofold: first, to outline in broad strokes the supergravity-based approach of~\cite{DUFestuccia:2011ws}, which is very general and applies to all supersymmetric field theories. Second, to present some applications to four-dimensional~$\mathcal{N}=1$ theories (section~\ref{DUsec:4d}) and their three-dimensional cousins with~$\mathcal{N}=2$ supersymmetry (section~\ref{DUsec:3d}). These examples illustrate the general framework and showcase its utility for deriving exact results, often without recourse to explicit localization computations, or even a Lagrangian.
\subsection{Background fields and partition functions}
\label{DUsec:bfpf}
Throughout, background gauge fields coupling to conserved currents will play a crucial role. As an example, consider a theory with a~$U(1)$ flavor symmetry. The corresponding conserved current~$j_\mu$ can be coupled to a background gauge field~$a_\mu$,
\begin{equation}
\label{DUajlag}
\Delta \mathscr{L} = a^\mu j_\mu + \mathcal{O}(a^2)~.
\end{equation}
The~$\mathcal{O}(a^2)$ seagull terms are tuned to ensure invariance of the Lagrangian under gauge transformations of~$a_\mu$, which enforces current conservation, $\partial^\mu j_\mu = 0$. Small field variations around $a_\mu = 0$ are captured by correlation functions of~$j_\mu$ in the undeformed theory.
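A familiar illustration is a free complex scalar of unit charge, where gauge-completing the derivatives fixes the seagull term,
\begin{equation}
\partial^\mu \bar\phi \, \partial_\mu \phi \;\longrightarrow\; \left(\partial^\mu + i a^\mu\right) \bar\phi \left(\partial_\mu - i a_\mu\right) \phi = \partial^\mu \bar\phi \, \partial_\mu \phi + a^\mu j_\mu + a^\mu a_\mu \, \bar\phi \phi~, \qquad j_\mu = i \left(\bar\phi \, \partial_\mu \phi - \phi \, \partial_\mu \bar\phi\right)~.
\end{equation}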
Every relativistic QFT possesses a conserved, symmetric stress tensor~$T_{\mu\nu}$. (If the theory is also conformally invariant, then~$T_{\mu\nu}$ can be chosen such that~$T^\mu_\mu =0$.) The appropriate source is a background spacetime metric~$g_{\mu\nu}$. Depending on the signature of spacetime, it may be a Lorentzian or a Riemannian metric. Below, we will mostly discuss field theories on compact, Euclidean spacetime manifolds, which require a Riemannian~$g_{\mu\nu}$. Around flat space, $g_{\mu\nu} = \delta_{\mu\nu}$, the theory couples to a metric deformation~$\Delta g_{\mu\nu}$ via the stress tensor,\footnote{~Unless stated otherwise, we follow the conventions of~\cite{DUClosset:2013vra}. Whenever possible, they coincide with those of~\cite{DUBaggerQH}.}
\begin{equation}
\label{DUflatspace}
g_{\mu\nu} = \delta_{\mu\nu} + \Delta g_{\mu\nu}~, \qquad \Delta \mathscr{L} = - \frac{1}{2} \, \Delta g^{\mu\nu} \, T_{\mu\nu} + \mathcal{O}\left(\Delta g^2\right)~.
\end{equation}
Here the indices are raised and lowered using the flat metric~$\delta_{\mu\nu}$. When the perturbation~$\Delta g_{\mu\nu}$ is small, its effect is captured by correlation functions of~$T_{\mu\nu}$ in flat space. The conservation equation~$\partial^\mu T_{\mu\nu} = 0$ is enforced by choosing the~$\mathcal{O}\left(\Delta g^2\right)$ gravitational seagull terms so that the Lagrangian is invariant under diffeomorphisms that also act on the background metric~$g_{\mu\nu}$. Such a diffeomorphism-invariant Lagrangian can then be studied on an arbitrary Riemannian manifold~$\mathcal{M}$, which may be curved or possess non-trivial topology.\footnote{~Additional care is required if the field theory has gravitational anomalies~(see for instance~\cite{DUAlvarezGaume:1983ig}).}
The stress tensor is not unique: it can be redefined by improvement terms, such as
\begin{equation}
\label{DUtimp}
T'_{\mu\nu} = T_{\mu\nu} + \left(\partial_\mu \partial_\nu - \delta_{\mu\nu} \partial^2\right) \mathcal{O}~,
\end{equation}
where~$\mathcal{O}$ is a well-defined scalar operator. Both~$T_{\mu\nu}$ and~$T'_{\mu\nu}$ are acceptable stress tensors: they are symmetric, conserved, and integrate to the momentum operators~$P_\mu$. Consequently, we can use either one to place the theory in curved space. The improvement terms in~\eqref{DUtimp} then give rise to curvature couplings,
\begin{equation}
\label{DUriccicoup}
\mathscr{L}' = \mathscr{L} - \frac{1}{2} R[g] \mathcal{O}~,
\end{equation}
where~$R[g]$ is the Ricci scalar of the metric~$g_{\mu\nu}$.\footnote{~In the conventions of~\cite{DUBaggerQH}, a round~$S^d$ of radius~$r$ has constant negative scalar curvature~$R = - \frac{d(d-1)}{r^2}$.} More general improvements can involve a four-index tensor~$\mathcal{O}_{\mu\nu\rho\lambda}$ that couples to the full Riemann tensor~$R_{\mu\nu\rho\lambda}$. We can also modify the Lagrangian by adding local, diffeomorphism-invariant terms that only involve the background metric. These do not change the correlation functions of~$T_{\mu\nu}$ at separated points, but they can give rise to contact terms at coincident points.
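It is straightforward to verify that the improvement~\eqref{DUtimp} preserves conservation and only shifts the trace,
\begin{equation}
\partial^\mu \left(\partial_\mu \partial_\nu - \delta_{\mu\nu} \partial^2\right) \mathcal{O} = 0~, \qquad T'^\mu_\mu = T^\mu_\mu - (d-1) \, \partial^2 \mathcal{O}
\end{equation}
in~$d$ dimensions. For a free scalar~$\phi$, a suitable choice~$\mathcal{O} \sim \phi^2$ produces the traceless, conformally improved stress tensor.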
Given a QFT on a manifold~$\mathcal{M}$, it is interesting to study its partition function,
\begin{equation}
\label{DUpartfndef}
Z_\mathcal{M}\left[\, g_{\mu\nu} \, , \, a_\mu \, , \, \ldots \,\right] = \int \mathcal{D} \Psi \, e^{- \int \mathscr{L}_\mathcal{M} \left[\, \Psi \,; \, g_{\mu\nu} \, , \, a_\mu \, , \, \ldots \,\right]}~.
\end{equation}
In addition to the metric~$g_{\mu\nu}$ on~$\mathcal{M}$, we can also couple a background gauge field~$a_\mu$ to every flavor current of the theory, as in~\eqref{DUajlag}. The ellipses in~\eqref{DUpartfndef} denote other background fields. Below, we will see that supersymmetric theories are naturally equipped with a variety of other background fields that must be considered in conjunction with~$g_{\mu\nu}$ and~$a_\mu$. In general, $Z_\mathcal{M}$ suffers from IR and UV divergences. The IR divergences can often be cured by taking~$\mathcal{M}$ to be a compact manifold.\footnote{~This is not sufficient to ensure that~$Z_\mathcal{M}$ is IR finite, since the integral in~\eqref{DUpartfndef} may have bosonic zero modes even if~$\mathcal{M}$ is compact.} As in flat space, the UV divergences are regulated by introducing a short-distance cutoff. The resulting dependence of~$Z_\mathcal{M}$ on the regularization scheme is captured by local counterterms in the background fields. In UV-complete quantum field theories, only finitely many such counterterms are needed. Given a set of background fields, the possible counterterms can be enumerated once and for all. If the regulator preserves certain symmetries, e.g.~diffeomorphisms, the counterterms must also respect these symmetries.
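For instance, if the only background field is a metric in four dimensions, the counterterms are schematically exhausted by
\begin{equation}
\Lambda^4 \int_\mathcal{M} \sqrt g~, \qquad \Lambda^2 \int_\mathcal{M} \sqrt g \, R~, \qquad \int_\mathcal{M} \sqrt g \left(c_1 \, C_{\mu\nu\rho\lambda} C^{\mu\nu\rho\lambda} + c_2 \, E_4 + c_3 \, R^2\right)~,
\end{equation}
where~$\Lambda$ is the UV cutoff, $C_{\mu\nu\rho\lambda}$ is the Weyl tensor, and~$E_4$ is the Euler density. (This list is a sketch; the marginal coefficients~$c_i$ may depend logarithmically on~$\Lambda$.)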
The scheme-independent part of the partition function~$Z_\mathcal{M}$ captures the universal long-distance physics of the QFT. For instance, the functional dependence of~$Z_\mathcal{M}$ on the sources $g_{\mu\nu}, a_\mu,$ etc.~encodes correlation functions of the corresponding local operators~$T_{\mu\nu}, j_\mu$ etc.~on~$\mathcal{M}$. Partition functions can also detect non-local degrees of freedom, which are activated by the topology of~$\mathcal{M}$. A typical example is Chern-Simons theory on a three-manifold, which possesses no local operators but leads to non-trivial partition functions~\cite{DUWitten:1988hf}.
In conformal field theories (CFTs), the conformal symmetry can be used to relate properties of the theory on different manifolds. A typical example is the operator-state correspondence, which identifies states on~$S^{d-1} \times \mathbb{R}$ in Hamiltonian (i.e.~radial) quantization with local operators. Similarly, correlation functions of local operators on~$\mathbb{R}^d$ are conformally related to correlation functions on~$S^d$, where the IR fluctuations of the CFT are naturally regulated by the finite spacetime volume. Note that conformal symmetry fixes the improvement terms~\eqref{DUtimp}, and hence the curvature couplings~\eqref{DUriccicoup}, by singling out a preferred, traceless stress tensor.
A quantity that has received much recent attention is the entanglement entropy. For the special case of vacuum entanglement across a spherical entangling surface in a CFT, the entanglement entropy can be obtained from the partition function~$Z_{S^d}$ on a round sphere~\cite{DUCasini:2011kv}. More precisely, the statement applies to the universal, scheme-independent parts of both quantities. These can in turn be used to define a quantity that is known (in~$1 \leq d \leq 4$ dimensions) or believed to decrease monotonically under renormalization-group (RG) flow (see for instance~\volcite{PU} and references therein).
\subsection{Supersymmetric theories}
\label{DUsec:susyth}
As is the case for most observables in interacting QFTs, the partition functions discussed in section~\ref{DUsec:bfpf} are generally not (exactly) computable. The situation is better in supersymmetric theories: BPS observables, which are annihilated by some of the supercharges, are often tightly constrained; in favorable situations, they can even be determined exactly.
Placing supersymmetric field theories on a non-trivial manifold~$\mathcal{M}$ with a curved metric~$g_{\mu\nu}$ generally breaks all flat-space supercharges. Intuitively, this can be understood from the linearized coupling~\eqref{DUflatspace} of the stress tensor to the background metric~$g_{\mu\nu}$, since~$T_{\mu\nu}$ is not a BPS operator, i.e.~$[Q, T_{\mu\nu}] \neq 0$ for every flat-space supercharge~$Q$. More precisely, placing a flat-space theory on~$\mathcal{M}$ by minimally coupling it to the metric~$g_{\mu\nu}$ leads to a curved-space supercharge for each covariantly constant spinor~$\zeta$ on~$\mathcal{M}$,
\begin{equation}
\label{DUccsonm}
\nabla_\mu \zeta = 0~.
\end{equation}
This equation is very restrictive. For instance, the only compact four-manifolds that admit covariantly constant spinors are flat tori~$T^4$ and K3 surfaces with Ricci-flat K\"ahler metrics. Similar statements apply to background flavor gauge fields~$a_\mu$, which typically break supersymmetry because the associated flavor current~$j_\mu$ is not a BPS operator. A notable exception occurs for flat connections, which can always be turned on without breaking supersymmetry.\footnote{~This is not true for flat~$R$-symmetry background gauge fields, which can break supersymmetry.}
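The restrictiveness of~\eqref{DUccsonm} follows from its integrability condition: acting with a second covariant derivative and antisymmetrizing gives, up to normalization,
\begin{equation}
\left[\nabla_\mu, \nabla_\nu\right] \zeta \sim R_{\mu\nu\rho\lambda} \, \sigma^{\rho\lambda} \zeta = 0~,
\end{equation}
so the curvature must annihilate~$\zeta$. This forces the Riemannian holonomy to be reduced ($SU(2)$ or trivial in four dimensions), which leads to the short list of compact four-manifolds quoted above.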
In this review we will follow~\cite{DUFestuccia:2011ws} and explain how the condition~\eqref{DUccsonm} can be relaxed in a systematic way. Consequently, some supersymmetry can be preserved for a much larger class of manifolds~$\mathcal{M}$ and background fields~$g_{\mu\nu}, a_\mu$. If~$\mathcal{M}$ does not admit covariantly constant spinors,
this is achieved by coupling the flat-space field theory to background fields in a special, non-minimal way. As we will see, a crucial role is played by additional background fields that are necessarily present in supersymmetric theories. The resulting curved-space Lagrangian~$\mathscr{L}_\mathcal{M}$ is invariant under the action of one or several supercharges, whose algebra may be deformed. The corresponding spinor parameters satisfy equations that generalize~\eqref{DUccsonm}.
Under favorable conditions, the partition function~$Z_\mathcal{M}$ of a supersymmetric field theory on a curved manifold~$\mathcal{M}$ can be computed exactly using supersymmetric localization. (See~\volcite{PZ} for an overview with references.) The theory is frequently assumed to have a presentation in terms of fields and a Lagrangian. In the simplest case, the curved-space Lagrangian~$\mathscr{L}_\mathcal{M}$ is invariant under a nilpotent supercharge~$Q$, i.e.~$Q^2 = 0$, which can be used to deform the path integral expression~\eqref{DUpartfndef} for the partition function while preserving~$Q$,
\begin{equation}
\label{DUlocpart}
Z_\mathcal{M}(t) = \int \mathcal{D} \Psi \, e^{-\int \mathscr{L}_\mathcal{M} + t \{Q, \mathcal{O}\}}~,
\end{equation}
for some fermionic operator~$\mathcal{O}$. In order to ensure that the deformed action in the exponent of~\eqref{DUlocpart} is~$Q$-invariant for every value of~$t$, it is convenient (but not necessary) to realize the supercharge~$Q$ off shell. The variation of~$Z_\mathcal{M}(t)$ with respect to the parameter~$t$ vanishes, because the change in the integrand is~$Q$-exact,\footnote{~This argument requires the path integral to converge sufficiently rapidly so that it is legitimate to integrate by parts in field space. See~\cite{DUMoore:1997pc} for a detailed discussion of some examples where this assumption breaks down.}
\begin{equation}
\label{DUzvar}
\frac{d}{d t} Z_\mathcal{M}(t) = \langle \left\{Q, \mathcal{O}\right\}\rangle = 0~.
\end{equation}
This shows that~$Z_\mathcal{M} = Z_\mathcal{M}(0)$ can be computed by evaluating~\eqref{DUlocpart} for any choice of~$t$, including~$t \rightarrow \infty$. For suitable choices of the operator~$\mathcal{O}$, this limit localizes the path integral to semiclassical field configurations, with~$t^{-1} \rightarrow 0$ playing the role of Planck's constant. The semiclassical saddle points depend on the choice of~$Q$ and~$\mathcal{O}$, i.e.~they are typically not saddle points of the undeformed theory with Lagrangian~$\mathscr{L}_\mathcal{M}$.
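A standard (though not unique) choice of localizing term is built schematically from the fermions~$\psi$ of the theory,
\begin{equation}
\mathcal{O} = \sum_{\psi} \left(Q \psi\right)^\dagger \psi~,
\end{equation}
so that the bosonic part of~$\{Q, \mathcal{O}\}$ is the positive semi-definite expression~$\sum_\psi |Q\psi|^2$. In the limit~$t \rightarrow \infty$, the path integral then localizes onto configurations with~$Q\psi = 0$ for all fermions.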
The bulk of this review volume is dedicated to explicit localization computations of supersymmetric partition functions~$Z_\mathcal{M}$, perhaps in the presence of additional insertions (see~\volcite{PZ} and references therein). The techniques and results reviewed below serve as a basis for such calculations. In particular, we will address the following questions:
\begin{itemize}
\item[1.)] When and how can a supersymmetric field theory be placed on a curved manifold~$\mathcal{M}$ while preserving some supersymmetry?
\item[2.)] What additional data does the resulting supersymmetric Lagrangian~$\mathscr{L}_\mathcal{M}$ on~$\mathcal{M}$ depend on, beyond the data that was already present in flat space?
\item[3.)] How does supersymmetry constrain the dependence of the partition function~$Z_\mathcal{M}$ on this data?
\end{itemize}
\noindent As we will see, these questions can be answered within a uniform, largely model-independent framework, which crucially relies on supersymmetry, but not explicit localization computations. In fact, most of the results reviewed below do not require a Lagrangian description of the field theory.\footnote{~See~\volcite{TA} for some examples of localization calculations in non-Lagrangian theories.} Before outlining the general framework in section~\ref{DUsec:overviewgf}, we will examine a few representative examples of supersymmetric field theories in non-trivial backgrounds.
The only way to preserve all flat-space supercharges on a compact manifold~$\mathcal{M}$ without turning on any background fields other than the metric is to take~$\mathcal{M}$ to be a flat torus~$T^d$, with periodic boundary conditions for fermions. The corresponding partition function~$Z_{T^d}$ is the Witten index~\cite{DUWitten:1982df}, which counts the supersymmetric vacua of the theory on~$T^{d-1} \times \mathbb{R}$, weighted by their fermion number.\footnote{~The Witten index may be ill defined if there are bosonic zero modes that are not lifted when the flat-space theory is compactified on a torus.}
As was already discussed around~\eqref{DUccsonm} above, a covariantly constant spinor leads to a supercharge on~$\mathcal{M}$, but such spinors only exist for very special choices of~$\mathcal{M}$, such as Calabi-Yau manifolds. A more general prescription for preserving supersymmetry, which applies to a larger class of manifolds, is known as twisting~\cite{DUWitten:1988ze}: assume that the supersymmetric theory has a continuous~$R$-symmetry~$G_R$, and that the Riemannian holonomy group of the metric~$g_{\mu\nu}$ on~$\mathcal{M}$ is~$G_\text{hol}$. If a given flat-space supercharge~$Q$ is a singlet under the diagonal subgroup~$(G_R \times G_\text{hol}) \, |_\text{diag}$, then~$Q$ can be preserved on~$\mathcal{M}$. (In flat space, the holonomy group~$G_\text{hol}$ acts via Euclidean rotations.) A prototypical example is topologically twisted~$\mathcal{N}=2$ Yang-Mills theory on an oriented Riemannian four-manifold~\cite{DUWitten:1988ze}. Here~$G_R = SU(2)_R$ and~$G_\text{hol} = SO(4) = SU(2) \times SU(2)$. One of the~$SU(2)$ factors of~$G_\text{hol}$ is twisted by the~$SU(2)_R$ symmetry to yield a single scalar supercharge on~$\mathcal{M}$, which can be used to show that the partition function~$Z_\mathcal{M}$ is independent of the metric~$g_{\mu\nu}$ on~$\mathcal{M}$. For this reason, the twist is referred to as topological. However, not all twists give rise to topological theories. For instance, four-dimensional~$\mathcal{N}=1$ theories with a~$U(1)_R$ symmetry can be twisted on an arbitrary K\"ahler surface~$\mathcal{M}$, for which~$G_\text{hol} = U(2)$~\cite{DUWitten:1994ev,DUJohansen:1994aw}. Now the twisted theory depends on the complex structure of~$\mathcal{M}$, and hence it is not topological.
Twisted theories are often described by performing a field redefinition to variables that are adapted to the geometric structure that underlies the twist. For instance, topologically twisted~$\mathcal{N}=2$ theories can be described by fields that are differential forms on~$\mathcal{M}$, while holomorphically twisted~$\mathcal{N}=1$ theories on a K\"ahler surface~$\mathcal{M}$ lead to fields that are complex~$(p,q)$ forms on~$\mathcal{M}$. However, the twisting procedure can also be implemented by coupling the original, untwisted supersymmetric field theory to a background~$R$-symmetry gauge field~$A_\mu^{(R)}$, which is tuned to cancel part of the spin connection~\cite{DUKarlhede:1988ax,DUJohansen:1994aw}. The preserved supercharge on~$\mathcal{M}$ is parametrized by an~$R$-charged spinor~$\zeta$ that satisfies,
\begin{equation}
\label{DUccsp}
\big(\nabla_\mu - i A_\mu^{(R)} \big) \zeta = 0~,
\end{equation}
which generalizes~\eqref{DUccsonm}.
Much recent activity has revolved around supersymmetric field theories on backgrounds that go beyond the basic twisting paradigm. Two prototypical examples of such backgrounds arose in the study of four-dimensional~$\mathcal{N}=2$ theories with an~$SU(2)_R$ symmetry. (We will encounter additional examples below.) The first is the~$\Omega$-background of~\cite{DUNekrasov:2002qd,DUNekrasov:2003rj}, which can be viewed as an equivariant deformation of the topological twist on~$\mathbb{R}^4 = \mathbb{R}^2_{\varepsilon_1} \times \mathbb{R}^2_{\varepsilon_2}$ by an isometry that rotates two orthogonal~$\mathbb{R}^2$ planes inside~$\mathbb{R}^4$. The rotation angles are determined by the equivariant parameters~$\varepsilon_{1,2}$. This background preserves more supercharges than the topological twist, and the corresponding partition function~$Z_\Omega$ explicitly depends on~$\varepsilon_{1,2}$, as well as some flat-space coupling constants, in a complicated and interesting way. The second example is a background on a round~$S^4$, which preserves all eight supercharges~\cite{DUPestun:2007rz}. The supersymmetry algebra is deformed to~$OSp(2|4)$, whose bosonic subalgebra contains the~$SO(2)_R$ Cartan subalgebra of the~$SU(2)_R$ symmetry and the~$Sp(4) = SO(5)$ isometries of~$S^4$. The partition function~$Z_{S^4}$ can depend on some flat-space couplings and the radius of the sphere. See~\cite{DUPestun:2014mja} and~\volcite{HO} for a review of these two backgrounds and some of their applications.
\subsection{Overview of the formalism}
\label{DUsec:overviewgf}
As was noted at the beginning of section~\ref{DUsec:susyth}, the obstruction to preserving supersymmetry on an arbitrary curved manifold~$\mathcal{M}$ is due to the fact that the stress tensor~$T_{\mu\nu}$ is not a BPS operator. In supersymmetric theories~$T_{\mu\nu}$ resides in a supermultiplet, together with other bosonic and fermionic operators~$\mathcal{J}_B^i$ and~$\mathcal{J}_F^i$. As we will review in section~\ref{DUsec:4dstmult}, the structure of the stress-tensor multiplet reflects very general properties of the field theory (e.g.~the spacetime dimension, the amount of supersymmetry, the presence or absence of possible~$R$-symmetries, or whether the theory is superconformal), but is otherwise largely model independent. Moreover, every supersymmetric field theory must have a stress tensor multiplet, even if the theory is strongly coupled or does not have a Lagrangian description.
The bosonic superpartners~$\mathcal{J}_B^i$ of the stress tensor can be coupled to suitable bosonic background fields~$\mathcal{B}_B^i$ and added to the Lagrangian~\eqref{DUflatspace},
\begin{equation}
\label{DUgenlagdef}
\Delta \mathscr{L} = - \frac{1}{2} \, \Delta g^{\mu\nu} \, T_{\mu\nu} + \sum_i \mathcal{B}_B^i \mathcal{J}_B^i + \left(\text{seagull terms}\right)~,
\end{equation}
where we casually refer to all higher-order terms in the background fields as seagull terms. For special choices of~$\Delta g^{\mu\nu}$ and the other bosonic sources~$\mathcal{B}_B^i$, the deformation~$\Delta \mathscr{L}$ can preserve some supersymmetry, due to cancellations between the supersymmetry transformations of~$T_{\mu\nu}$ and~$\mathcal{J}_B^i$. At higher order, we must also ensure supersymmetry of the seagull terms, which can lead to additional conditions.\footnote{~A well-known example arises in four-dimensional~$\mathcal{N}=2$ theories with a continuous flavor symmetry~$G$. We can turn on complex mass parameters~$m$ that are valued in the (complexified) Lie algebra of~$G$. At linear order, all such~$m$ are supersymmetric, but at quadratic order supersymmetry requires that~$\left[m, m^\dagger\right] = 0$. See~\cite{DUCordova:2016xhm} for a recent discussion with references.}
Following~\cite{DUSeiberg:1993vc}, it was explained in~\cite{DUFestuccia:2011ws}\ that the constraints of supersymmetry on the bosonic sources~$g_{\mu\nu} \,, \,\mathcal{B}_B^i$ are best understood by embedding them into a supermultiplet. Their fermionic superpartners~$\mathcal{B}_F^i$\,, which source the operators~$\mathcal{J}_F^i$ in the stress-tensor multiplet, are set to zero in the Lagrangian~\eqref{DUgenlagdef}. As was emphasized in~\cite{DUFestuccia:2011ws}, the sources must reside in an off-shell supergravity multiplet, because they are non-dynamical background fields that couple to the stress-tensor supermultiplet. This construction can be viewed as a rigid limit of dynamical off-shell supergravity, where the fluctuations of the supergravity fields are frozen by scaling the Planck mass to infinity, $M_p \rightarrow \infty$. We will therefore refer to this construction of supersymmetric field theories on~$\mathcal{M}$ as rigid supersymmetry.
The requirement that~$\Delta \mathscr{L}$ in~\eqref{DUflatspace} should preserve a supercharge~$Q$ amounts to the statement that the~$Q$-variation of all fermionic sources should vanish,
\begin{equation}
\label{DUdeltaqferm}
\delta_Q \mathcal{B}_F^i = 0~.
\end{equation}
The left-hand side of this equation is a non-trivial bosonic expression, which involves the sources~$g_{\mu\nu} \, , \, \mathcal{B}_B^i$ and the spinor~$\zeta$ that parametrizes the supercharge~$Q$. The equations~\eqref{DUdeltaqferm} simultaneously determine the allowed supersymmetric configurations for the bosonic background fields and the corresponding spinor parameter~$\zeta$.
Even at this level of generality, we can make the following observations:
\begin{itemize}
\item The fermionic sources~$\mathcal{B}_F^i$ always include at least one background gravitino~$\Psi_\mu$, whose supersymmetry variation takes the schematic form~$\delta_Q \, \Psi_\mu = \nabla_\mu \zeta +~\cdots$~. Imposing~\eqref{DUdeltaqferm} then leads to a differential equation for the spinor parameter~$\zeta$ that generalizes~\eqref{DUccsonm} and~\eqref{DUccsp}. We will follow standard practice and refer to such equations as (generalized) Killing spinor equations. A given configuration of background fields admits multiple supercharges if it satisfies~\eqref{DUdeltaqferm} for each supercharge~$Q$, i.e.~if the Killing spinor equations in this background admit multiple independent solutions.
\item Both the generalized Killing spinor equations and the rigid supersymmetry algebra on~$\mathcal{M}$ follow from the structure of the background off-shell supergravity multiplet. The rigid supersymmetry algebra is realized as a subalgebra of the (infinite-dimensional) algebra of supergravity gauge transformations. As we will review in section~\ref{DUsec:4d}, a given field theory may admit several inequivalent stress-tensor supermultiplets. In this case it can be coupled to different off-shell supergravities,\footnote{~Under certain conditions, distinct off-shell supergravities may be equivalent on shell, but this will not play a role in our discussion.} which generally lead to inequivalent Killing spinor equations, and hence to different supersymmetric backgrounds.
\item A rigid supersymmetric background is characterized by a full set of bosonic supergravity background fields, i.e.~specifying only the metric does not determine the background. In particular, there are distinct backgrounds that have the same metric but lead to different partition functions. In general, they may arise from different off-shell supergravities, preserve different amounts of supersymmetry, or lead to different supersymmetry algebras.
\item In Lorentzian signature, unitarity fixes the reality properties of the fields in the supergravity multiplet so that the Lagrangian~\eqref{DUgenlagdef} is real. In Euclidean signature, we are free to contemplate background fields that do not satisfy the reality conditions needed for unitarity (more precisely, reflection positivity). This greatly enriches the set of Euclidean backgrounds, and some interesting supersymmetric backgrounds can only be obtained in this way. (We will, however, always assume that the background metric~$g_{\mu\nu}$ is a standard Riemannian metric.) Observables (e.g.~partition functions) computed in such non-unitary backgrounds in general do not possess standard reality properties. Nevertheless, they often encode interesting information about the underlying unitary field theory.
\item If the flat-space field theory has a Lagrangian description in terms of fields, then the Lagrangian and the supersymmetry transformation rules for the fields in curved space follow from the corresponding formulas in the appropriate matter-coupled off-shell supergravity.\footnote{~The formalism only requires the supergravity fields to be off shell. For explicit computations, it is often convenient to also realize some supercharges off shell in the matter sector (see for instance~\volcite{HO}).} These formulas are universal, i.e.~they apply for arbitrary configurations of the supergravity fields. Once a given supersymmetric background has been found, the Lagrangian and the transformation rules in this background can be obtained by specializing the general formulas.
\end{itemize}
\noindent It is straightforward to extend the preceding discussion to supersymmetric configurations of bosonic background fields residing in other supermultiplets. Supersymmetry requires the variations of all fermionic sources in the multiplet to vanish, as in~\eqref{DUdeltaqferm}. Below we will apply this to background gauge fields that couple to conserved flavor currents. However, the supergravity multiplet enjoys a special status, since it determines the number of supercharges and their algebra. Activating additional background fields that reside in other supermultiplets may preserve these supercharges, or it may break them to a (possibly trivial) subalgebra.
\subsection{Outline}
In the remainder of this review, we will illustrate the rigid supersymmetry formalism using $\mathcal{N}=1$ theories in four dimensions (section~\ref{DUsec:4d}) and~$\mathcal{N}=2$ theories in three dimensions (section~\ref{DUsec:3d}). We discuss different stress-tensor and supergravity multiplets, and describe some of the corresponding supersymmetric backgrounds. We explain how to construct supersymmetric Lagrangians on these backgrounds and describe the data they depend on, paying particular attention to the data that originates from the coupling to the curved manifold~$\mathcal{M}$. Finally, we explain to what extent this data can affect the partition function~$Z_\mathcal{M}$. We will mostly focus on theories with a~$U(1)_R$ symmetry, but we also mention some results for theories that do not have such a symmetry.
We consider two examples in detail: $\mathcal{N}=1$ theories on~$S^3 \times S^1$ and~$\mathcal{N}=2$ theories on a round or squashed~$S^3$. The former background can be used to define an index that tracks supersymmetric operators along RG flows (it is closely related to the superconformal index, see~\volcite{RR}). The latter backgrounds play a crucial role in~$F$-maximization and can be used to compute correlation functions of conserved currents (see~\volcite{PU}).
\section{Four-dimensional~$\mathcal{N}=1$ theories}
\label{DUsec:4d}
\subsection{Stress-tensor multiplets and off-shell supergravities}
\label{DUsec:4dstmult}
As was explained in section~\ref{DUsec:overviewgf}, the procedure of placing a supersymmetric theory on a curved manifold commences with a choice of stress-tensor supermultiplet in flat space. The different possibilities that can arise in four-dimensional~$\mathcal{N}=1$ theories were described in~\cite{DUKomargodski:2010rb,DUDumitrescu:2011iu}. (See also~\cite{DUGates:1983nr} for an early discussion.) Here we will restrict ourselves to the three most common multiplets. We will describe them in superspace (using the conventions of~\cite{DUBaggerQH}) as well as in components. In all cases, the supersymmetry transformations of the component fields implicitly follow from the superspace description. In section~\ref{DUsec:conpartfun4d} we will explicitly write out some of these transformation rules for theories with a~$U(1)_R$ symmetry.
\begin{itemize}
\item[1.)] The stress-tensor multiplet of an~$\mathcal{N}=1$ superconformal theory (SCFT) is a real superfield~$\mathcal{J}_\mu$ that satisfies
\begin{equation}
\label{DUscftmult}
\overline D^{\dot \alpha} \mathcal{J}_{\alpha{\dot \alpha}} = 0~, \qquad \mathcal{J}_{\alpha{\dot \alpha}} = \sigma^\mu_{\alpha{\dot \alpha}} \mathcal{J}_\mu~.
\end{equation}
The component fields in~$\mathcal{J}_\mu$ are given by
\begin{equation}
\label{DUscftcomp}
\mathcal{J}_\mu = \left(j_\mu^{(R)}\,, S_{\mu\alpha}\,, T_{\mu\nu}\right)~,
\end{equation}
where~$j_\mu^{(R)}$ is the superconformal~$U(1)_R$ current, $S_{\mu\alpha}$ is the supersymmetry current, and~$T_{\mu\nu}$ is the stress tensor. All three currents are conserved, and the currents~$S_{\mu\alpha}, T_{\mu\nu}$ are traceless, i.e.~$\overline \sigma^{\mu{\dot \alpha}\alpha} S_{\mu\alpha} = T^\mu_\mu = 0$.
\item[2.)] The majority of four-dimensional~$\mathcal{N}=1$ theories (with or without an~$R$-symmetry) admit a Ferrara-Zumino (FZ) stress-tensor multiplet~\cite{DUFerrara:1974pz}.\footnote{~The only known exceptions are abelian gauge theories with Fayet-Iliopoulos terms, and their analogues in the context of (gauged) sigma models~\cite{DUKomargodski:2009pc,DUKomargodski:2010rb,DUDumitrescu:2010ca,DUDumitrescu:2011iu}.} The FZ-multiplet is given by a real superfield~$\mathcal{J}_\mu^\text{FZ}$, such that
\begin{equation}
\label{DUfzmultdef}
\overline D^{\dot \alpha} \mathcal{J}_{\alpha{\dot \alpha}}^{\text{FZ}} = D_\alpha X~, \qquad \overline D_{\dot \alpha} X = 0~,
\end{equation}
where~$\mathcal{J}_{\alpha{\dot \alpha}}^{\text{FZ}} = \sigma^\mu_{\alpha{\dot \alpha}} \mathcal{J}_\mu^\text{FZ}$, as in~\eqref{DUscftmult}. The component fields in the FZ-multiplet are
\begin{equation}
\label{DUfzcomp}
\mathcal{J}_\mu^\text{FZ} = \left(j_\mu \,, S_{\mu\alpha}\,, x\,, T_{\mu\nu}\right)~.
\end{equation}
Here~$j_\mu$ is a non-conserved vector operator, $S_{\mu\alpha}$ is the conserved supersymmetry current, $x$ is a complex scalar, and~$T_{\mu\nu}$ is the conserved, symmetric stress tensor. The chiral superfield~$X$ is the trace submultiplet of the FZ-multiplet,\footnote{~Unlike unitary superconformal multiplets, which possess a unique lowest-weight state, multiplets of Poincar\'e supersymmetry may be reducible (i.e.~they may contain non-trivial submultiplets) without being decomposable into smaller multiplets. See~\cite{DUDumitrescu:2011iu} for a discussion in the context of stress-tensor multiplets.}
\begin{equation}
\label{DUxfielddef}
X = \left(x \, ,~\sigma^\mu_{\alpha{\dot \alpha}} \overline S_\mu^{\dot \alpha} \, ,~T^\mu_\mu + i \partial^\mu j_\mu \right)~.
\end{equation}
When~$X = 0$, the FZ-multiplet reduces to the superconformal multiplet, as can be seen by comparing~\eqref{DUfzmultdef} and~\eqref{DUscftmult}. In this case the vector operator~$j_\mu$ in the FZ-multiplet becomes the conserved superconformal~$U(1)_R$ current, and~$S_{\mu\alpha}, T_{\mu\nu}$ become traceless.
\item[3.)] Non-conformal theories with a~$U(1)_R$ symmetry possess a stress-tensor multiplet~$\mathcal{R}_\mu$, whose bottom component is the conserved~$R$-current~$j_\mu^{(R)}$. In superspace,
\begin{equation}
\label{DUrmult}
\overline D^{\dot \alpha} \mathcal{R}_{\alpha{\dot \alpha}} = \chi_\alpha~, \qquad \overline D_{\dot \alpha} \chi_\alpha = 0~, \qquad D^\alpha \chi_\alpha = \overline D_{\dot \alpha} \overline \chi^{\dot \alpha}~.
\end{equation}
The component fields residing in the~$\mathcal{R}$-multiplet are given by
\begin{equation}
\label{DUrmultcomp}
\mathcal{R}_\mu = \left(j_\mu^{(R)}\,, S_{\mu\alpha}\,, T_{\mu\nu}\,, C_{\mu\nu}\right)~.
\end{equation}
Here~$C_{\mu\nu} = C_{[\mu\nu]}$ is a conserved two-form current, which can give rise to a string charge in the supersymmetry algebra~\cite{DUDumitrescu:2011iu}. The superfield~$\chi_\alpha$, which satisfies the same constraints as an abelian field-strength multiplet, is the trace submultiplet of the~$\mathcal{R}$-multiplet. Setting~$\chi_\alpha = 0$ leads to the superconformal multiplet~\eqref{DUscftmult}.
\end{itemize}
\medskip
\noindent Some theories have more than one stress-tensor multiplet. For instance, a theory with an FZ-multiplet may possess a~$U(1)_R$ symmetry, in which case it also admits an~$\mathcal{R}$-multiplet. In this case the two multiplets are related by a supersymmetric analogue of the improvement transformation~\eqref{DUtimp} for the stress tensor.
The off-shell supergravity multiplets that couple to the conformal stress-tensor multiplet, the FZ-multiplet and the~$\mathcal{R}$-multiplet are conformal supergravity~\cite{DUKaku:1978nz}, as well as the old~\cite{DUStelle:1978ye,DUFerrara:1978em} and new~\cite{DUSohnius:1981tp,DUSohnius:1982fw} minimal formulations of off-shell supergravity. (See~\cite{DUFreedmanZZ} for a recent discussion of conformal and old minimal supergravity; additional details on new minimal supergravity can be found in~\cite{DUFerrara:1988qxa}.) In principle, we can use any set of off-shell supergravity fields, as long as the flat-space theory admits the corresponding stress-tensor multiplet. In practice, it is often useful to consider non-conformal supergravity, even if the flat-space theory is conformal. The reason is that, quantum mechanically, even CFTs must be defined using a UV cutoff, which breaks conformal symmetry but can often be chosen to preserve supersymmetry. If the theory is conformal, we expect the non-conformal supergravity fields to decouple as the UV cutoff is taken to infinity. However, some remnants of the regulator, and hence of the non-conformal supergravity fields, may survive:
\begin{itemize}
\item The allowed supersymmetric counterterms that parametrize the UV ambiguities (i.e.~the scheme dependence) of the partition function~$Z_\mathcal{M}$ are governed by the non-conformal supergravity theory that couples to the combined SCFT-regulator system. The non-conformal supergravity fields can in principle be decoupled by fine-tuning these counterterms, but in practice one is typically left with an ambiguity parametrized by local counterterms that involve the non-conformal supergravity fields.\footnote{~Relevant counterterms are multiplied by positive powers of the UV cutoff~$\Lambda$, so that they are easily identified and adjusted. It is typically more difficult to isolate the effects of marginal counterterms.} This plays an important role in elucidating the properties of supersymmetric partition functions and interpreting the results of explicit localization computations. See for instance~\cite{DUClosset:2012vg,DUClosset:2012vp,DUGerchkovitz:2014gta,DUDiPietro:2014bca,DUGomis:2014woa,DUAssel:2014tba,DUKnodel:2014xea,DUAssel:2015nca,DUGomis:2015yaa} and references therein for a sampling of the recent literature.
\item The decoupling of the non-conformal supergravity fields can be spoiled by superconformal anomalies, which cannot (even in principle) be removed by fine-tuning the allowed supersymmetric counterterms. Examples are Weyl anomalies in even dimensions, which render~$T^\mu_\mu \neq 0$ in the presence of certain background fields. Such anomalies are, for instance, discussed in~\cite{DUAnselmi:1997am,DUCassani:2013dba,DUGomis:2015yaa}, as well as~\volcite{MO}. A different, global superconformal anomaly in three dimensions was described in~\cite{DUClosset:2012vp}.
\end{itemize}
\noindent In light of the above, we will only consider the non-conformal old and new minimal supergravity theories.\footnote{~Even though we will not do so here, it is often convenient to formulate non-conformal supergravity theories as coupled systems consisting of a conformal supergravity multiplet and one or several compensating matter multiplets that can be used to Higgs the conformal symmetry.} Moreover, most of our discussion will focus on the new minimal formulation, because field theories with a~$U(1)_R$ symmetry are typically under better theoretical control.
\subsection{Theories with an~$R$-symmetry}
\label{DUsec:4dthwithrsym}
The coupling of theories with a~$U(1)_R$ symmetry to supergravity background fields proceeds via the~$\mathcal{R}$-multiplet~\eqref{DUrmult} and~\eqref{DUrmultcomp}, whose component fields we repeat here for convenience,
\begin{equation}
\label{DUrmultcompii}
\mathcal{R}_\mu = \left(j_\mu^{(R)} \,, S_{\mu\alpha}\,, T_{\mu\nu}\,, C_{\mu\nu}\right)~.
\end{equation}
The appropriate background fields reside in the new minimal supergravity multiplet~\cite{DUSohnius:1981tp,DUSohnius:1982fw},
\begin{equation}
\label{DUnmsugra}
\mathcal{H}_\mu =\left(A_\mu^{(R)} \,, \Psi_{\mu\alpha} \,, g_{\mu\nu} \,, B_{\mu\nu}\right)~.
\end{equation}
In addition to the metric~$g_{\mu\nu}$ and the gravitino~$\Psi_{\mu\alpha}$, this multiplet contains a~$U(1)_R$ gauge field~$A_\mu^{(R)}$, which couples to the conserved~$R$-current~$j_\mu^{(R)}$, and a two-form gauge field~$B_{\mu\nu}$, which couples to the conserved two-form current~$C_{\mu\nu}$. We will often use the Hodge dual of its field strength, which is a covariantly conserved vector field,\footnote{~The factor of~$i$ in~\eqref{DUvdef} is absent in Lorentzian signature, where both~$B_{\mu\nu}$ and~$V^\mu$ are real.}
\begin{equation}
\label{DUvdef}
V^\mu = \frac{i}{2} \varepsilon^{\mu\nu\rho\lambda} \partial_\nu B_{\rho\lambda}~, \qquad \nabla_\mu V^\mu = 0~.
\end{equation}
The only fermionic field in the new minimal supergravity multiplet~\eqref{DUnmsugra} is the gravitino~$\Psi_{\mu\alpha}$.
As explained around~\eqref{DUdeltaqferm}, the supersymmetric configurations of the bosonic background fields are determined by setting the supersymmetry variations of the gravitino to zero. In new minimal supergravity, these variations take the following form,
\begin{align}
\label{DUnmgravitinovari}
& \delta \Psi_{\mu\alpha} = -2 \big(\nabla_\mu - i A_\mu^{(R)}\big) \zeta_\alpha -i V^\nu \sigma_{\mu\alpha{\dot \alpha}} \overline \sigma_\nu^{{\dot \alpha} \beta} \zeta_\beta~,\\
\label{DUnmgravitinovarii}
& \delta \overline \Psi_{\mu}^{\, \dot \alpha} = - 2 \left(\nabla_\mu + i A_\mu^{(R)}\right) \overline \zeta^{\dot \alpha} + i V^\nu \overline \sigma_\mu^{{\dot \alpha} \alpha} \sigma_{\nu \alpha{\dot \beta}} \overline \zeta^{\dot \beta}~.
\end{align}
These formulas are valid in Lorentzian signature, where the left-handed spinor~$\zeta_\alpha$ of~$R$-charge~$+1$ and the right-handed spinor~$\overline \zeta_{\dot \alpha}$ of~$R$-charge~$-1$ are related by complex conjugation, while~$A_\mu^{(R)}$ and~$V_\mu$ are real.
In Euclidean signature, the left-handed and right-handed spinors are independent and no longer related by complex conjugation. We will emphasize this by writing tildes instead of bars, e.g.~$\widetilde \zeta_{\dot \alpha}$ instead of~$\overline \zeta_{\dot \alpha}$ and~$\widetilde \sigma_\mu$ instead of~$\overline \sigma_\mu$. (In Euclidean signature, we follow the conventions of~\cite{DUClosset:2013vra}.) Moreover, the Lorentzian reality conditions on~$A_\mu^{(R)}$ and~$V_\mu$ may be relaxed at the expense of unitarity. In general, a supercharge~$Q$ is characterized by a pair~$(\zeta, \widetilde \zeta)$ of left- and right-handed Killing spinors, but in new minimal supergravity we can always consider supercharges~$(\zeta, 0)$ or~$(0, \widetilde \zeta)$ of definite~$R$-charge. (In section~\ref{DUsec:norsym} we will discuss theories without an~$R$-symmetry, where this decomposition of~$(\zeta, \widetilde \zeta)$ is generally not possible.) A supercharge~$Q$ of~$R$-charge~$-1$ corresponds to a Killing spinor~$\zeta$ for which the right-hand side of~\eqref{DUnmgravitinovari} vanishes,
\begin{equation}
\label{DUnmkse}
\big(\nabla_\mu - i A_\mu^{(R)}\big) \zeta = -\frac{i}{2} V^\nu \sigma_\mu \widetilde \sigma_\nu \zeta~.
\end{equation}
Similarly, a supercharge~$\widetilde Q$ of~$R$-charge~$+1$ corresponds to a Killing spinor~$\widetilde \zeta$ for which the right-hand side of~\eqref{DUnmgravitinovarii} vanishes,
\begin{equation}
\label{DUnmksebar}
\big(\nabla_\mu + i A_\mu^{(R)}\big) \widetilde \zeta = \frac{i}{2} V^\nu \widetilde \sigma_\mu \sigma_\nu \widetilde \zeta~.
\end{equation}
Note that these equations reduce to~\eqref{DUccsp}, which describes twisting, when the background field~$V^\mu$ vanishes.
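As a simple consistency check, in flat space with~$A_\mu^{(R)} = V_\mu = 0$, both equations reduce to
\begin{equation}
\partial_\mu \zeta = 0~, \qquad \partial_\mu \widetilde \zeta = 0~,
\end{equation}
whose constant solutions recover the four supercharges of flat-space~$\mathcal{N}=1$ supersymmetry.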
As explained in section~\ref{DUsec:overviewgf}, the rigid supersymmetry algebra satisfied by the supercharges~$Q$ or~$\widetilde Q$ descends from the algebra of local supergravity transformations. In new minimal supergravity, this algebra includes local supersymmetry transformations (parametrized by arbitrary spinors~$\zeta, \widetilde \zeta$), as well as diffeomorphisms, local Lorentz transformations, and~$R$-symmetry gauge transformations~\cite{DUSohnius:1981tp,DUSohnius:1982fw}. If we restrict to Killing spinors that satisfy~\eqref{DUnmkse} and~\eqref{DUnmksebar}, this algebra simplifies and reduces to the rigid supersymmetry algebra satisfied by the supercharges~$Q, \widetilde Q$. On a field~$\Phi$ with~$U(1)_R$ charge~$r$ and arbitrary spin, the algebra is given by
\begin{equation}
\label{DUnmalg}
\begin{aligned}
& \{\delta_Q, \delta_{\widetilde Q}\} \Phi = 2 i \mathcal{L}_K' \Phi~, \qquad K^\mu = \zeta \sigma^\mu \widetilde \zeta~,\cr
& \delta_Q^2 \Phi = \delta_{\widetilde Q}^2 \Phi = 0~.
\end{aligned}
\end{equation}
The infinitesimal variations anticommute because we take the spinors~$\zeta, \widetilde \zeta$ to be commuting. It follows from the Killing spinor equations~\eqref{DUnmkse} and~\eqref{DUnmksebar} that~$K^\mu$ is a Killing vector. The operator~$\mathcal{L}_K'$ denotes a modified Lie derivative along~$K$, which is twisted by the~$R$-symmetry,
\begin{equation}
\label{DUlprimedef}
\mathcal{L}_K' \Phi = \mathcal{L}_K \Phi - i r K^\mu \left(A_\mu^{(R)} + \frac{3}{2} V_\mu\right) \Phi~.
\end{equation}
Here~$\mathcal{L}_K$ is the ordinary Lie derivative.\footnote{~Its action on spinors~$\chi_\alpha, \widetilde \chi_{\dot \alpha}$ is given by
$$
\mathcal{L}_K \chi = K^\mu \nabla_\mu \chi - \frac{1}{2} \nabla_\mu K_\nu \sigma^{\mu\nu} \chi~, \qquad \mathcal{L}_K \widetilde \chi = K^\mu \nabla_\mu \widetilde \chi - \frac{1}{2} \nabla_\mu K_\nu \widetilde \sigma^{\mu\nu} \widetilde \chi~.
$$
}
Due to the twist, the~$R$-charge can appear on the right-hand side of the supersymmetry algebra, unlike in standard flat-space supersymmetry.
The solutions to the generalized Killing spinor equations~\eqref{DUnmkse} and~\eqref{DUnmksebar} were analyzed in~\cite{DUFestuccia:2011ws,DUKlare:2012gn,DUDumitrescu:2012ha}, and the conditions for the existence of one or several supercharges were deduced. In particular, it was found that a single supercharge~$Q$ of~$R$-charge~$-1$ exists if and only if~$\mathcal{M}$ is a complex manifold, i.e.~it admits an integrable complex structure~${J^\mu}_\nu$, and~$g_{\mu\nu}$ is a compatible Hermitian metric. Since there is only one supercharge, it follows from~\eqref{DUnmalg} that it must square to zero, i.e.~$\delta_Q^2 = 0$. In section~\ref{DUsec:s3s1ex} we will discuss complex manifolds with topology~$S^3 \times S^1$ that preserve up to four supercharges.
The Killing spinor~$\zeta$ corresponding to a single supercharge~$Q$ on a complex manifold~$\mathcal{M}$ is simply related to the complex structure~${J^\mu}_{\nu}$ on~$\mathcal{M}$,
\begin{equation}
\label{DUjzetarel}
{J^\mu}_\nu = -\frac{2i}{|\zeta|^2} \zeta^\dagger {\sigma^\mu}_\nu \zeta~.
\end{equation}
The background fields~$A_\mu^{(R)}$ and~$V_\mu$ are essentially determined by~${J^\mu}_\nu$ and the Hermitian metric~$g_{\mu\nu}$. Here we will only quote the formula for~$V^\mu$,
\begin{equation}
\label{DUvviaj}
V^\mu = \frac{1}{2} \nabla_\nu J^{\nu\mu}~,
\end{equation}
up to a freely adjustable piece that will play no role in our discussion. (See~\cite{DUClosset:2013vra} for additional details, including the formula for~$A_\mu^{(R)}$.) Note that~$V^\mu$ vanishes when~$\mathcal{M}$ is K\"ahler, so that~${J^\mu}_\nu$ is covariantly constant. As discussed around~\eqref{DUccsp}, this is precisely the case that allows for twisting by the~$U(1)_R$ symmetry. Therefore, the supergravity construction reduces to twisting in the appropriate limit, but it is more general. For instance, it allows complex manifolds~$\mathcal{M}$ that are not K\"ahler, such as the~$S^3 \times S^1$ backgrounds discussed in section~\ref{DUsec:s3s1ex}. This is only possible because of the additional field~$V^\mu$ supplied by new minimal supergravity.
An important fact that carries over from twisting is that the supercharge~$Q$ on the complex manifold~$\mathcal{M}$ transforms as a scalar under holomorphic coordinate changes~\cite{DUDumitrescu:2012ha}. This will play a crucial role in section~\ref{DUsec:conpartfun4d}, where we analyze the dependence of the partition function~$Z_\mathcal{M}$ on the geometry of~$\mathcal{M}$.
It is straightforward to extend the preceding discussion to background gauge fields~$a_\mu$, which couple to conserved flavor currents~$j_\mu$~\cite{DUClosset:2013vra,DUClosset:2014uda}. Here we will focus on a single~$U(1)$ current. In flat space, it resides in a real linear superfield~$\mathcal{J}$, which satisfies
\begin{equation}
\label{DUflcurr}
D^2 \mathcal{J} = \overline D^2 \mathcal{J} = 0~.
\end{equation}
In components,
\begin{equation}
\label{DUflcurrcomp}
\mathcal{J} = \left(J \, , j_\alpha \, , \overline j_{\dot \alpha} \, , j_\mu\right)~, \qquad \partial^\mu j_\mu = 0~.
\end{equation}
The corresponding background gauge field~$a_\mu$ resides in a vector multiplet~$\mathcal{V}$. In Wess-Zumino gauge,
\begin{equation}
\label{DUvmult}
\mathcal{V} = \left(D \, , \lambda_\alpha \, , \overline \lambda_{\dot \alpha} \, , a_\mu\right)~.
\end{equation}
Here~$D$ is a real auxiliary field and~$\lambda_\alpha$ is the gaugino. In order to determine the allowed supersymmetric configurations of the bosonic background fields~$a_\mu, D$ on a complex manifold~$\mathcal{M}$ with supercharge~$Q$, we follow the same logic as above and set
\begin{equation}
\label{DUdeltaliszero}
\delta_Q \lambda = i \zeta D + \sigma^{\mu\nu} \zeta f_{\mu\nu} = 0~, \qquad f_{\mu\nu} = \partial_\mu a_\nu - \partial_\nu a_\mu~.
\end{equation}
This leads to the following constraints,
\begin{equation}
\label{DUsusygf}
f^{0,2} = 0~, \qquad D = - \frac{1}{2} J^{\mu\nu} f_{\mu\nu}~,
\end{equation}
where~$f^{0,2}$ is the anti-holomorphic~$(0,2)$ component of the two-form~$f_{\mu\nu}$. Therefore, supersymmetric background gauge fields are in one-to-one correspondence with holomorphic line bundles over the complex manifold~$\mathcal{M}$.
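Explicitly, in local holomorphic coordinates adapted to~${J^\mu}_\nu$ (raising the index as~$J^{\mu\nu} = {J^\mu}_\lambda \, g^{\lambda\nu}$), the conditions~\eqref{DUsusygf} read
\begin{equation}
f_{\overline i \overline j} = \partial_{\overline i} \, a_{\overline j} - \partial_{\overline j} \, a_{\overline i} = 0~, \qquad D = -i \, g^{i \overline j} f_{i \overline j}~,
\end{equation}
so the~$(0,1)$ part of~$a_\mu$ endows the corresponding line bundle with a holomorphic structure, while~$D$ is fixed by its curvature.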
\subsection{Lagrangians}
As was emphasized in~section~\ref{DUsec:overviewgf}, the rigid supersymmetry approach cleanly separates the allowed supersymmetric backgrounds and their supersymmetry algebras (which were discussed in section~\ref{DUsec:4dthwithrsym}) from the supersymmetric Lagrangians on these backgrounds. These Lagrangians only depend on a choice of background supergravity multiplet, but not on the specific field configuration of the supergravity fields. They can be straightforwardly obtained from the corresponding formulas in new minimal supergravity~\cite{DUSohnius:1981tp,DUSohnius:1982fw,DUFerrara:1988qxa}.
Consider, for instance, a free chiral multiplet~$\Phi = (\phi, \psi_\alpha, F)$ of~$R$-charge~$r$, and its conjugate anti-chiral multiplet~$\widetilde \Phi = (\widetilde \phi, \widetilde \psi_{\dot \alpha}, \widetilde F)$ of~$R$-charge~$-r$, with flat-space Lagrangian
\begin{equation}
\label{DUflatchi}
\mathscr{L}_{\mathbb{R}^4} = \partial^\mu \widetilde \phi \partial_\mu \phi - i \widetilde \psi \widetilde \sigma^\mu \partial_\mu \psi - \widetilde F F~.
\end{equation}
The corresponding curved-space Lagrangian in the presence of supergravity background fields is given by~\cite{DUFestuccia:2011ws},
\begin{equation}
\label{DUnlchinmlag}
\mathscr{L}_\mathcal{M} = \mathscr{L}_{\mathbb{R}^4} \big|_{\text{covariant}} + V^\mu \left(i \widetilde \phi \, {\overleftrightarrow D_\mu} \phi + \widetilde \psi \widetilde \sigma_\mu \psi\right) - r \left(\frac{1}{4} R - 3V^\mu V_\mu\right) \widetilde \phi \phi~.
\end{equation}
Here~$D_\mu = \partial_\mu - i r A_\mu^{(R)}$ is the~$R$-covariant derivative, and~$\mathscr{L}_{\mathbb{R}^4} \big|_{\text{covariant}}$ is the covariantization of~\eqref{DUflatchi} with respect to diffeomorphisms and~$R$-symmetry gauge transformations. It describes the minimal coupling of~$\mathscr{L}_{\mathbb{R}^4}$ to background fields. However, supersymmetry requires the presence of additional, non-minimal terms in the Lagrangian~\eqref{DUnlchinmlag}. Moreover, these terms explicitly depend on the~$R$-charge~$r$ of~$\Phi$, i.e.~on the choice of~$\mathcal{R}$-multiplet that was used to couple the flat-space theory to background supergravity. This agrees with the general discussion in section~\ref{DUsec:overviewgf}: the coupling to~$\mathcal{M}$ proceeds through the stress-tensor multiplet and different multiplets lead to different theories in curved space. Here the ability to freely assign any~$R$-charge~$r$ to~$\Phi$ reflects the freedom to choose an~$\mathcal{R}$-multiplet from a continuous family of such multiplets. In other situations the~$R$-charge may be fixed, e.g.~in the presence of a superpotential~$W = \Phi^n$ we must set~$r = \frac{2}{n}$.\footnote{~Note that the curvature coupling~$\sim r R \widetilde \phi \phi$ in~\eqref{DUnlchinmlag} may lead to a tachyonic instability if the curvature~$R$ has a definite sign and~$|r|$ is too large. This is borne out in explicit examples, e.g.~some supersymmetric partition functions are only meaningful if the~$R$-charges are restricted to a certain range.}
The non-minimal terms in~\eqref{DUnlchinmlag} also require a corresponding modification of the supersymmetry transformations,
\begin{equation}
\label{DUdeltachinm}
\begin{aligned}
& \delta \phi = \sqrt 2 \zeta \psi~, \cr
& \delta \psi = \sqrt 2 \zeta F + i \sqrt 2 \sigma^\mu \widetilde \zeta \, \big(\partial_\mu - i r A_\mu^{(R)}\big) \phi~,\cr
& \delta F = \sqrt 2 \widetilde \zeta \widetilde \sigma^\mu \left(\nabla_\mu - i (r-1)A_\mu^{(R)} - \frac{i}{2} V_\mu\right)\psi~,
\end{aligned}
\end{equation}
and similarly for the conjugate fields in the anti-chiral multiplet~$\widetilde \Phi$. Given a solution~$\zeta$ of the Killing spinor equation~\eqref{DUnmkse}, we can substitute the corresponding background fields into the Lagrangian~\eqref{DUnlchinmlag} and verify that it is supersymmetric under~\eqref{DUdeltachinm}, provided we use~\eqref{DUnmkse}.
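For instance, the nilpotency~$\delta_Q^2 = 0$ in~\eqref{DUnmalg} can be checked directly from~\eqref{DUdeltachinm}: for a supercharge~$(\zeta, 0)$, the~$\widetilde \zeta$-dependent terms drop out, so that
\begin{equation}
\delta_Q^2 \, \phi = \sqrt 2 \, \zeta \, \delta_Q \psi = 2 \left(\zeta \zeta\right) F = 0~,
\end{equation}
since~$\zeta \zeta = \varepsilon^{\alpha\beta} \zeta_\beta \zeta_\alpha$ vanishes for a commuting spinor, while~$\delta_Q F = 0$ because the variation of~$F$ only involves~$\widetilde \zeta$.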
Broadly speaking, the curved-space Lagrangian~$\mathscr{L}_\mathcal{M}$ depends on three kinds of data:
\begin{itemize}
\item[1.)] Data that was already present in the flat-space Lagrangian~$\mathscr{L}_{\mathbb{R}^4}$.
\item[2.)] The choice of~$\mathcal{R}$-multiplet that is used to couple the flat-space theory to supergravity background fields. For a theory with a Lagrangian description, this amounts to a set of~$R$-charge assignments for the fields.
\item[3.)] Various geometric structures on~$\mathcal{M}$, i.e.~the complex structure~${J^\mu}_\nu$, the Hermitian metric~$g_{\mu\nu}$, and possibly background flavor gauge fields described by holomorphic line bundles over~$\mathcal{M}$. These structures emerge from the Killing spinor equations~\eqref{DUnmkse} and~\eqref{DUnmksebar}, as well as~\eqref{DUdeltaliszero} for background gauge fields.
\end{itemize}
\noindent We will now explain how supersymmetry constrains the dependence of the partition function~$Z_\mathcal{M}$ on this data, focusing on the curved-space data summarized in~$2.)$ and~$3.)$ above.
\subsection{Constraining the partition function}
\label{DUsec:conpartfun4d}
We can use supersymmetry to constrain the dependence of the partition function~$Z_\mathcal{M}$ on continuous data. The basic idea is to vary the data by a small amount, schematically denoted by~$\Delta \mathcal{M}$, and check whether the corresponding small change~$\Delta \mathscr{L}_\mathcal{M}$ in the Lagrangian is~$Q$-exact. If this is the case, the partition function does not depend on the deformation,
\begin{equation}
\label{DUqex}
\Delta \mathscr{L}_{\mathcal{M}} = \left(\Delta \mathcal{M}\right) \{Q, \mathcal{O}\}~, \qquad \Delta Z_\mathcal{M} \sim \langle \{Q, \mathcal{O}\}\rangle = 0~.
\end{equation}
The same logic underlies the localization argument, which was sketched around~\eqref{DUlocpart} and~\eqref{DUzvar}.
A head-on analysis of this problem is possible~\cite{DUClosset:2014uda}, but it is complicated by the fact that the curved-space Lagrangian and the supersymmetry transformations depend on the continuous data that we would like to vary. Here we will explain a simple but powerful method for sidestepping these complications, which has the added advantage of not requiring a Lagrangian. The simplification proceeds in two steps:
\begin{itemize}
\item[1.)] If we work around flat space, with a nearly flat metric, then the deformation Lagrangian $\Delta \mathscr{L}_\mathcal{M}$ consists of operators in the stress-tensor multiplet of the flat-space theory, i.e.~the~$\mathcal{R}$-multiplet~\eqref{DUrmultcomp}. The known supersymmetry transformations of these operators can be used to determine which terms in~$\Delta \mathscr{L}_\mathcal{M}$ are~$Q$-exact.
\item[2.)] These results can be extended to arbitrary complex manifolds by using the fact that the supercharge~$Q$ is a scalar under holomorphic coordinate transformations.
\end{itemize}
\noindent This logic is standard in the context of topological twisting~(see for instance~\cite{DUWitten:1988ze,DUWitten:1994ev}), where~$Q$ is a scalar under all coordinate changes and a suitably defined stress-tensor~$\hat T_{\mu\nu}$ is~$Q$-exact in flat space, $\hat T_{\mu\nu} = \{Q, \Lambda_{\mu\nu}\}$. This is generally sufficient to ensure that the partition function~$Z_\mathcal{M}$ on any four-manifold~$\mathcal{M}$ does not depend on the metric~$g_{\mu\nu}$.
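Schematically (and suppressing the overall normalization), the reasoning is that under a small change of the background metric,
\begin{equation}
\Delta \log Z_\mathcal{M} \sim \int_\mathcal{M} d^4 x \sqrt{g}\, \Delta g^{\mu\nu} \langle \hat T_{\mu\nu} \rangle = \int_\mathcal{M} d^4 x \sqrt{g}\, \Delta g^{\mu\nu} \langle \{Q, \Lambda_{\mu\nu}\} \rangle = 0~,
\end{equation}
since the expectation value of a~$Q$-exact operator vanishes whenever the background preserves the supercharge~$Q$.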
Following~\cite{DUClosset:2013vra}, we will now apply this argument to constrain the dependence of the partition function~$Z_\mathcal{M}$ on a complex manifold~$\mathcal{M}$ on the complex structure~${J^\mu}_\nu$ and the Hermitian metric~$g_{\mu\nu}$. To this end, we introduce local holomorphic coordinates~$z^i \; (i =1,2)$, in which the non-zero components of the complex structure and the metric are given by
\begin{equation}
\label{DUcsmetcomp}
{J^i}_j = i {\delta^i}_j~, \qquad {J^{\overline i}}_{\overline j} = - i {\delta^{\overline i}}_{\overline j}~, \qquad g_{i\overline j}~.
\end{equation}
In these coordinates, infinitesimal variations~$\Delta {J^\mu}_\nu, \Delta g_{\mu\nu}$ of the complex structure and the metric must satisfy the following constraints,
\begin{equation}
\label{DUdefcon}
\begin{aligned}
& \Delta {J^i}_j = \Delta {J^{\overline i}}_{\overline j} = 0~, \qquad \partial_{\overline j} \Delta {J^i}_{\overline k} - \partial_{\overline k} \Delta {J^i}_{\overline j} = 0~,\cr
& \Delta g_{i\overline j} = \text{anything}~, \qquad \Delta g_{ij } = \frac{i}{2} \left(\Delta J_{ij} + \Delta J_{ji}\right)~.
\end{aligned}
\end{equation}
The first line ensures that~${J^\mu}_\nu + \Delta {J^\mu}_\nu$ is also an integrable complex structure (at first order in the variation), while the second line is the statement that the deformed metric~$g_{\mu\nu} + \Delta g_{\mu\nu}$ should be Hermitian with respect to the deformed complex structure. Complex structure deformations of the form
\begin{equation}
\label{DUtrivialj}
\Delta {J^i}_{\overline j} = 2 i \partial_{\overline j} \varepsilon^i~,
\end{equation}
are induced by an infinitesimal diffeomorphism parametrized by the vector field~$\varepsilon^\mu$. This leads to a cohomology problem for non-trivial complex structure deformations: they correspond to classes in~$H^{0,1}(\mathcal{M}, T^{1,0}\mathcal{M})$. If~$\mathcal{M}$ is compact (as we are assuming here), this is a finite-dimensional vector space, i.e.~there is a finite number of complex structure moduli. See~\cite{DUKodairabook} for an introduction to the deformation theory of complex manifolds.
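As a simple illustration, consider a complex torus~$T^4 = \mathbb{C}^2/\Lambda$: its holomorphic tangent bundle is trivial of rank two and~$h^{0,1}(T^4) = 2$, so that~$\dim_\mathbb{C} H^{0,1}(T^4, T^{1,0} T^4) = 2 \times 2 = 4$. This matches the four complex entries of the period matrix that parametrizes the lattice~$\Lambda$, once two of its generators have been normalized.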
We begin with the linearized couplings of the bosonic operators~\eqref{DUrmultcompii} in the~$\mathcal{R}$-multiplet to the bosonic new minimal supergravity fields~\eqref{DUnmsugra} (this is~\eqref{DUgenlagdef}, specialized to new minimal supergravity),\footnote{~Our operator~$C_{\mu\nu}$ was denoted by~$\frac{i}{4} \varepsilon_{\mu\nu\rho\lambda} \mathcal{F}^{\rho\lambda}$ in~\cite{DUClosset:2013vra}.}
\begin{equation}
\label{DUlinnmcouplings}
\Delta \mathscr{L} = - \frac{1}{2} \Delta g^{\mu\nu} T_{\mu\nu}+ A^{(R) \mu} j_\mu^{(R)} + B^{\mu\nu} C_{\mu\nu}~.
\end{equation}
We can now substitute the deformations~\eqref{DUdefcon} into this formula. (This requires the formula for~$B_{\mu\nu}$ in~\eqref{DUvviaj} and the formula for~$A_\mu^{(R)}$ in~\cite{DUClosset:2013vra}.) We find that
\begin{equation}
\label{DUvarylag}
\Delta \mathscr{L} = - \Delta g^{i \overline j} \mathcal{T}_{i \overline j} - i \sum_j {\Delta J^{\overline i}}_j \mathcal{T}_{\overline j \overline i} + i \sum_j {\Delta J^i}_{\overline j} \left(\mathcal{T}_{ij } + i \partial_j j_i^{(R)}\right)~,
\end{equation}
where we have defined the following (complex) linear combination of operators in the~$\mathcal{R}$-multiplet,
\begin{equation}
\label{DUscripttdef}
\mathcal{T}_{\mu\nu} = T_{\mu\nu} +\frac{1}{4} C_{\mu\nu} -\frac{i}{4} \varepsilon_{\mu\nu\rho\lambda} \partial^\rho j^{(R) \lambda} - \frac{i}{2} \partial_\nu j_\mu^{(R)}~.
\end{equation}
We can now ask whether any of these operators are~$Q$-exact, and hence do not affect the partition function when they appear in~\eqref{DUvarylag}. The only fermionic operators in the~$\mathcal{R}$-multiplet are the supersymmetry current~$S_{\mu\alpha}$ and its conjugate~$\widetilde S_{\mu{\dot \alpha}}$, whose~$Q$-variations are given by
\begin{equation}
\label{DUqofs}
\{Q, S_{\mu\alpha}\} = 0~, \qquad \{Q, \widetilde S_{\mu{\dot \alpha}} \} = 2 i \left(\widetilde \sigma^\nu \zeta\right)_{\dot \alpha} \mathcal{T}_{\mu\nu}~.
\end{equation}
Using the relation~\eqref{DUjzetarel} between the Killing spinor~$\zeta$ and the complex structure~${J^\mu}_\nu$, it can be shown that the second relation in~\eqref{DUqofs} amounts to the statement that all operators of the form~$\mathcal{T}_{\mu \overline i}$, for any index~$\mu$, are~$Q$-exact. Comparing with~\eqref{DUvarylag} shows that:
\begin{itemize}
\item[1.)] The partition function~$Z_\mathcal{M}$ does not depend on the Hermitian metric~$g_{i\overline j}$.
\item[2.)] The partition function~$Z_\mathcal{M}$ depends on~$\Delta {J^i}_{\overline j}$, but not on its complex conjugate~$\Delta {J^{\overline i}}_j$, i.e.~it is a holomorphic function of the complex structure moduli.\footnote{~Note that~$Z_\mathcal{M}$ cannot depend on trivial deformations that vanish in cohomology, since these are induced by background diffeomorphisms.}
\end{itemize}
\noindent These results lead to the following observations:
\begin{itemize}
\item Since~$Z_\mathcal{M}$ does not depend on the metric, we can rescale~$g_{i \overline j} \rightarrow \lambda^2 g_{i \overline j}$ for some constant~$\lambda$. This uniform scale transformation can be identified with RG flow, and hence~$Z_\mathcal{M}$ can be computed in the UV or in the deep IR of any non-trivial RG flow. An immediate consequence is that~$Z_\mathcal{M}$ must be invariant under IR dualities, such as Seiberg duality~\cite{DUSeiberg:1994pq}.
\item The arguments above apply to small (infinitesimal) deformations, and hence they only show that~$Z_\mathcal{M}$ is a locally holomorphic function of the complex structure moduli. There are generally interesting singularities at certain loci in moduli space. Even the metric independence of~$Z_\mathcal{M}$ may only hold for sufficiently small deformations (see for instance~\cite{DUMoore:1997pc}).
\item We can repeat the preceding analysis for flavor current multiplets. The upshot is that~$Z_\mathcal{M}$ only depends on background gauge fields through the corresponding holomorphic line bundles~\cite{DUClosset:2013vra}. In particular, it is a locally holomorphic function of the bundle moduli. If~$\mathcal{M}$ is compact, there are finitely many of them.
\end{itemize}
So far we have discussed the dependence of~$Z_\mathcal{M}$ on the geometric structures supplied by the background fields. We can use similar methods to analyze its dependence on the choice of~$U(1)_R$ symmetry that is used to couple the flat-space field theory to~$\mathcal{M}$. A detailed discussion can be found in~\cite{DUClosset:2014uda}. Here we only recall that, in flat space, the~$R$-symmetry is not unique whenever there is an abelian flavor symmetry that can mix with it. However, in a non-trivial background the~$R$-charges may be quantized, and hence not continuously variable (see for instance~\cite{DUDumitrescu:2012ha,DUClosset:2013vra}). Only special classes of complex manifolds allow a continuously variable~$R$-symmetry.\footnote{~The precise condition is that the canonical bundle~$\mathcal{K}$ of the complex manifold~$\mathcal{M}$ must be topologically trivial, i.e.~its Chern class must vanish, $c_1(\mathcal{K}) = 0$.}
\subsection{Example: $S^3 \times S^1$}
\label{DUsec:s3s1ex}
We will now briefly summarize an application of the general results discussed above to complex manifolds with topology~$S^3 \times S^1$. (See~\cite{DUClosset:2013vra} for additional details.) It follows from results of Kodaira~\cite{DUKodairasone} that every such complex manifold must be a primary Hopf surface, which comes in two types. We will focus on a primary Hopf surface of the first type, $\mathcal{M}^{p,q}$, which is defined by the following holomorphic quotient,
\begin{equation}
\label{DUhopfdef}
\mathcal{M}^{p,q} = \left\{\mathbb{C}^2 -(0,0) \right\} / \left\{(w,z) \sim (p w, q z)\right\}~, \qquad 0 < |p| \leq |q| <1~.
\end{equation}
Here~$p,q$ are complex structure moduli of the Hopf surface. The results summarized in section~\ref{DUsec:conpartfun4d} imply that the partition function~$Z_{\mathcal{M}^{p,q}}$ is a locally holomorphic function of~$p,q$. If there are abelian background gauge fields, it must also be locally holomorphic in the corresponding bundle modulus~$u$. (It can be shown that there is only one such modulus on~$\mathcal{M}^{p,q}$.) Partition functions on Hopf surfaces were directly studied in~\cite{DUAssel:2014paa,DUNishioka:2014zpa} using localization techniques.
It can be shown~\cite{DUClosset:2013vra} that~$Z_\mathcal{M}(p, q, u)$ coincides with the supersymmetric index~$\mathcal{I}(p, q, u)$ for states on~$S^3 \times \mathbb{R}$ defined in~\cite{DURomelsberger:2005eg} (see also~\cite{DUDolan:2008qi, DUKinney:2005ej,DUFestuccia:2011ws}), with general complex fugacities~$p,q,u$.\footnote{~More precisely, the equality between~$Z_\mathcal{M}$ and~$\mathcal{I}$ holds up to a scheme-independent factor, which arises from anomalies and can be interpreted as a supersymmetric Casimir energy~\cite{DUAssel:2014paa,DUAssel:2015nca}.} If the theory is an SCFT, this index coincides with the superconformal index of~\cite{DUKinney:2005ej}, which counts BPS operators, but in general it is distinct. In particular, it is defined away from the conformal point and can be tracked along RG flows. See~\volcite{RR} for a more detailed discussion.
It is worth commenting on the~$S^3 \times \mathbb{R}$ background of new minimal supergravity that is used to define the index~\cite{DUSen:1985ph,DUFestuccia:2011ws}. It preserves four supercharges that anticommute to an~$SU(2|1)$ superalgebra. The bosonic subalgebra~$SU(2) \times U(1)$ contains one of the~$SU(2)$ factors of the~$SU(2)_\ell \times SU(2)_r$ isometry of~$S^3$, and a~$U(1)$ factor that is a linear combination of time translations along~$\mathbb{R}$ and the~$R$-charge. The supergravity background fields are given by
\begin{equation}
\label{DUindsgra}
ds^2 = d\tau^2 + r^2 d\Omega_3~, \qquad V = \pm \frac{i}{r} d\tau~, \qquad A^{(R)} = - \frac{1}{2} V~.
\end{equation}
Here~$r$ is the radius of the round~$S^3$, and the sign of~$V$ depends on whether the~$SU(2) \subset SU(2|1)$ is identified with~$SU(2)_\ell$ or~$SU(2)_r$. The choice of~$A^{(R)}$ is such that the supercharges are time independent. Note that the background fields are consistent with reflection positivity in Euclidean signature, since the~$\tau$-components of~$V$ and~$A^{(R)}$ are purely imaginary, i.e.~they would be real in Lorentzian signature. The non-conformal index~$\mathcal{I}(p, q, u)$ is defined as the Witten index of the theory on~$S^3 \times \mathbb{R}$ in Hamiltonian quantization,
\begin{equation}
\label{DUromberginddef}
\mathcal{I}(p, q, u) = \text{Tr}_{\mathcal{H}_{S^3}} \left((-1)^F p^{J_\ell + J_r - \frac{R}{2}} q^{J_\ell - J_r - \frac{R}{2}} u^{Q_\text{flavor}}\right)~.
\end{equation}
Here~$\mathcal{H}_{S^3}$ is the Hilbert space of states on~$S^3$, $J_\ell$ and~$J_r$ are the Cartan generators of~$SU(2)_\ell$ and~$SU(2)_r$, $R$ is the~$U(1)_R$ charge, and~$Q_\text{flavor}$ is the~$U(1)$ flavor charge associated with the fugacity~$u$.
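As a concrete illustration, we quote the standard result for a free chiral multiplet of~$R$-charge~$r$ and unit flavor charge (see e.g.~\cite{DUDolan:2008qi}): its index is given by an elliptic Gamma function,
\begin{equation}
\mathcal{I}_{\text{chiral}}(p, q, u) = \Gamma\big((pq)^{r/2}\, u\,; p, q\big)~, \qquad \Gamma(z; p, q) = \prod_{j, k = 0}^{\infty} \frac{1 - p^{j+1} q^{k+1} z^{-1}}{1 - p^{j} q^{k} z}~.
\end{equation}
The index of a gauge theory is then assembled from such factors by integrating over the gauge fugacities with the appropriate Haar measure.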
\subsection{Theories without an~$R$-symmetry}
\label{DUsec:norsym}
Theories without a~$U(1)_R$ symmetry do not possess an~$\mathcal{R}$-multiplet, and hence they cannot be coupled to the new minimal supergravity background fields. Consequently, the discussion in the preceding subsections does not apply to them. A prominent example of such a theory is pure~$\mathcal{N}=1$ supersymmetric Yang-Mills theory, where the~$U(1)_R$ symmetry is explicitly broken by an anomaly. However, even theories without an~$R$-symmetry typically possess an FZ-multiplet~\eqref{DUfzcomp}, which can be coupled to the old minimal supergravity background fields~\cite{DUStelle:1978ye,DUFerrara:1978em},
\begin{equation}
\label{DUombg}
\mathcal{H}_\mu = \left(b_\mu \,, \Psi_{\mu\alpha} \,, M \,, \widetilde M \,, g_{\mu\nu}\right)~.
\end{equation}
Here~$b_\mu$ is a well-defined (i.e.~non-gauge) vector field, and~$M, \widetilde M$ are complex scalars. In Lorentzian signature~$\widetilde M = \overline M$, but in Euclidean signature they may be independent.
The Killing spinor equations that follow from setting the supersymmetry variation of the gravitino~$\Psi_{\mu\alpha}$ to zero are given by~\cite{DUFestuccia:2011ws}
\begin{equation}
\label{DUomsgraks}
\nabla_\mu \zeta = \frac{i}{6} M \sigma_\mu \widetilde \zeta +\frac{i}{3} b_\mu \zeta +\frac{i}{3} b^\nu \sigma_{\mu\nu} \zeta~,
\end{equation}
and a similar equation with~$\zeta \leftrightarrow \widetilde \zeta$, $M \leftrightarrow - \widetilde M$, and~$i \leftrightarrow -i$. Note that, unlike in the new minimal case~\eqref{DUnmkse}, the Killing spinor equation mixes the left- and right-handed spinors~$\zeta$ and~$\widetilde \zeta$, which leads to new backgrounds that cannot arise in new minimal supergravity.
The supersymmetric backgrounds that satisfy~\eqref{DUomsgraks} were classified in~\cite{DUFestuccia:2011ws,DUSamtleben:2012gy,DULiu:2012bi,DUDumitrescu:2012at}. A simple background that highlights the qualitative differences between the old and new minimal cases is a round~$S^4$ of radius~$r$ with
\begin{equation}
\label{DUsfourbg}
M = \widetilde M = -\frac{3 i}{r}~, \qquad b_\mu = 0~.
\end{equation}
Since~$S^4$ is not a complex manifold, it cannot arise as a background in new minimal supergravity. Moreover, the non-zero values for~$M, \widetilde M$ necessarily break the~$R$-symmetry of the field theory, even if it was present in flat space. Finally, note that~$M, \widetilde M$ are not complex conjugates, and hence the background does not respect reflection positivity unless these fields decouple. This happens if the flat-space theory is superconformal, in which case it can be mapped to~$S^4$ by a conformal transformation that preserves unitarity. In a non-conformal theory, the violation of unitarity is necessary in order to avoid a no-go theorem that forbids unitary supersymmetric theories in de Sitter space, and hence reflection positive supersymmetric theories on compact spheres. The~$S^4$ background admits a squashing deformation that only preserves the isometry group~$SO(4) \subset SO(5)$. Unfortunately, neither the round nor the squashed~$S^4$ appears to be amenable to localization calculations (see for instance the recent discussion in~\cite{DUKnodel:2014xea}).
\section{Three-dimensional~$\mathcal{N}=2$ theories}
\label{DUsec:3d}
\subsection{Theories with an~$R$-symmetry on curved manifolds}
Here we briefly sketch extensions of the results summarized in section~\ref{DUsec:4d} to three-dimensional theories with~$\mathcal{N}=2$ supersymmetry. We only discuss theories with a~$U(1)_R$ symmetry. Now the~$\mathcal{R}$-multiplet consists of the following operators~\cite{DUDumitrescu:2011iu},
\begin{equation}
\label{DUtdrmult}
\mathcal{R} = \left(j_\mu^{(R)} \,, S_{\mu\alpha} \,, T_{\mu\nu} \,, j_\mu^{(Z)} \,, J\right)~.
\end{equation}
Here~$j_\mu^{(R)}$ is the~$R$-current, $S_{\mu\alpha}$ is the supersymmetry current, $T_{\mu\nu}$ is the stress tensor, $j_\mu^{(Z)}$ is the central charge current, and~$J$ is a scalar operator. All operators other than~$J$ are conserved currents. The corresponding background supergravity fields constitute the analogue of new minimal supergravity in three dimensions~(see for instance~\cite{DUKuzenko:2013uya} and references therein),
\begin{equation}
\label{DUtdsugra}
\mathcal{H} = \left(A_\mu^{(R)} \,, \Psi_{\mu\alpha} \,, g_{\mu\nu} \,, C_\mu \,, H\right)~.
\end{equation}
Now the condition~$\delta_Q \Psi_{\mu\alpha} = 0$ leads to the following generalized Killing spinor equation for the allowed supersymmetric backgrounds~\cite{DUKlare:2012gn,DUClosset:2012ru},
\begin{equation}
\label{DUtdkse}
\left(\nabla_\mu - A_\mu^{(R)}\right) \zeta = - \frac{1}{2} H \gamma_\mu \zeta + \frac{i}{2} V_\mu \zeta - \frac{1}{2} \varepsilon_{\mu\nu\rho} V^\nu \gamma^\rho \zeta~.
\end{equation}
Here~$V^\mu = - i \varepsilon^{\mu\nu\rho} \partial_\nu C_\rho$ is the dual field strength of~$C_\mu$ in Euclidean signature. A solution~$\zeta$ to these equations exists if and only if the three-manifold~$\mathcal{M}$ admits a geometric structure known as a transversely holomorphic foliation (THF), and the metric is a compatible transversely Hermitian metric (see~\cite{DUClosset:2013vra} for additional details). This structure consists of the following ingredients:
\begin{itemize}
\item[1.)] A nowhere vanishing unit vector field~$\xi^\mu$, which provides a local~$2+1$ decomposition of the manifold~$\mathcal{M}$.
\item[2.)] An integrable complex structure~$J$ on the two-dimensional spaces transverse to~$\xi^\mu$, such that~$J$ is invariant along~$\xi^\mu$, i.e.~$\mathcal{L}_\xi J = 0$.
\end{itemize}
\smallskip
\noindent In the compact case, such manifolds have been classified~\cite{DUBG,DUBrunella,DUGhys}. Topologically, they must be Seifert manifolds or~$T^2$ bundles over~$S^1$. Compact hyperbolic three-manifolds are not allowed.
As is already clear from the definition, manifolds that carry a THF are very similar to complex manifolds. For instance, both admit complex~$(p,q)$ differential forms, a~$\overline \partial$-operator, a corresponding Dolbeault cohomology, and holomorphic line bundles. As in four dimensions, these holomorphic line bundles correspond to supersymmetric configurations of background gauge fields for abelian flavor symmetries. Both a THF and the holomorphic line bundles over it generally come in infinite families labeled by a finite number of holomorphic moduli. As in the discussion around~\eqref{DUtrivialj}, these moduli (which are finite in number if~$\mathcal{M}$ is compact) correspond to certain~$\overline \partial$-cohomology classes. See section~5 of~\cite{DUClosset:2013vra} for an introduction to THFs and their moduli.
\subsection{Constraining the partition function}
\label{DUsec:conspf3d}
In addition to the flat-space couplings and the choice of~$R$-symmetry, the Lagrangian on~$\mathcal{M}$ now depends on a choice of THF, a transversely Hermitian metric, and holomorphic line bundles corresponding to background flavor gauge fields. Repeating the arguments in section~\ref{DUsec:conpartfun4d} in this case, we find that (see \cite{DUClosset:2013vra} for a detailed discussion):
\begin{itemize}
\item The partition function~$Z_\mathcal{M}$ does not depend on the transversely Hermitian metric.
\item $Z_\mathcal{M}$ is a locally holomorphic function of the THF moduli.
\item The partition function depends holomorphically on line bundle moduli corresponding to background flavor gauge fields.
\end{itemize}
\subsection{Example: round and squashed~$S^3$}
\label{DUsec:3drsqs3}
In~$\mathcal{N}=2$ theories with a~$U(1)_R$ symmetry, the partition function on a round~$S^3$ is computable using supersymmetric localization techniques~\cite{DUKapustin:2009kz,DUJafferis:2010un,DUHama:2010av} (see also~\volcite{WI}). This result has been generalized to a large variety of squashed spheres, see for instance~\cite{DUHama:2011ea,DUImamura:2011uw,DUImamura:2011wg,DUMartelli:2011fu,DUNishioka:2013haa,DUMartelli:2013aqa,DUAlday:2013lba,DUNian:2013qwa,DUTanaka:2013dca}. These squashed spheres often have the feature that their metric contains arbitrary functions, in addition to various continuous parameters. Explicit localization computations of partition functions on these squashed spheres indicate that:
\begin{itemize}
\item The partition function only depends on the background geometry through a single complex parameter~$b$, known as the squashing parameter. We will therefore denote the partition function by~$Z_{S^3_b}$.
\item Some deformations of the background fields do not affect~$Z_{S^3_b}$ (i.e.~they do not change~$b$), even though the metric changes.
\end{itemize}
These observations can be understood using the results of~\cite{DUClosset:2013vra} summarized in section~\ref{DUsec:conspf3d} above.\footnote{~Some of these results (for special backgrounds and theories) were subsequently reproduced from a different point of view in~\cite{Imbimbo:2014pla}. We thank the authors for emphasizing their work to us.} It follows from the classification of~\cite{DUBG,DUBrunella,DUGhys} that the moduli space of THFs on three-manifolds diffeomorphic to~$S^3$ (i.e.~squashed spheres) is one complex dimensional.\footnote{~There is another, isolated branch of the moduli space, which consists of a single point, but it will not be important for us here (see \cite{DUClosset:2013vra} for additional details).} Therefore all squashed-sphere partition functions should only depend on one complex modulus, which can be identified with the squashing parameter~$b$. It also shows that more complicated squashings will not lead to new partition functions.
Similarly, distinct squashed spheres that give rise to the same value of~$b$ correspond to the same choice of THF, but possibly different transversely Hermitian metrics, which do not affect the partition function.
\subsection{$F$-maximization and correlation functions}
The SUSY theories on~$S^3 \times S^1$ and~$S^3$ discussed in sections~\ref{DUsec:s3s1ex} and~\ref{DUsec:3drsqs3} above explicitly depend on a choice of~$U(1)_R$ symmetry, which affects their curvature couplings. In a superconformal theory, there is a distinguished choice of~$U(1)_R$ symmetry, which resides in the superconformal algebra. In four-dimensional~$\mathcal{N}=1$ theories, it can be determined in flat space using anomalies and~$a$-maximization~\cite{DUAnselmi:1997am,DUIntriligator:2003jj}.
The analogous principle for three-dimensional $\mathcal{N}=2$ theories is $F$-maximization \cite{DUJafferis:2010un}. Since this is the subject of~\volcite{PU}, we will only make a few remarks. Consider the partition function~$Z_{S^3}$ on a round~$S^3$, together with a supersymmetric background gauge field for the conserved flavor current~$j_\mu$. This partition function only depends on one holomorphic line bundle modulus~$u$,
\begin{equation}
\label{DUsthreepf}
Z_{S^3} = e^{-F(u)}~, \qquad F(u) = F(m + i t)~.
\end{equation}
Here~$t \in \mathbb{R}$ controls the mixing of the flavor symmetry with the~$R$-symmetry, while~$m$ is a real mass parameter associated with the flavor symmetry. The fact that the~$m$- and~$t$-dependence of~$F$ descends from a single holomorphic function of~$u$ was first observed in~\cite{DUJafferis:2010un}. A general explanation was given in~\cite{DUClosset:2014uda}.
Derivatives of the free energy~$F$ with respect to~$t$ compute integrated correlation functions of~$j_\mu$ or its superpartners on~$S^3$. In an SCFT, one-point functions should vanish, so that
\begin{equation}
\label{DUfext}
\partial_t \, \text{Re} F \big|_\text{SCFT} = 0~.
\end{equation}
Surprisingly, the first derivative of the imaginary part~$\text{Im} F$ need not vanish, due to a global superconformal anomaly that can arise in three dimensions~\cite{DUClosset:2012vg,DUClosset:2012vp}.
Taking more derivatives with respect to~$t$ leads to higher-point correlation functions of~$j_\mu$, for instance
\begin{equation}
\label{DUtwopointjj}
\partial_t^2 \, \text{Re} F \big|_\text{SCFT} = -\frac{\pi^2}{2} \,\tau~.
\end{equation}
Here~$\tau$ is the coefficient of the current two-point function at separated points in flat space. In a unitary theory~$\tau$ must be positive,
\begin{equation}
\label{DUtaudef}
\langle j_\mu(x) j_\nu(0)\rangle = \frac{\tau}{16 \pi^2} \left(\delta^{\mu\nu} \partial^2 - \partial^\mu \partial^\nu\right) \frac{1}{x^2}~, \qquad \tau > 0~.
\end{equation}
The conditions in~\eqref{DUfext}, \eqref{DUtwopointjj}, and~\eqref{DUtaudef} amount to the statement of~$F$-maximization, which can be used to solve for the superconformal value~$t = t_*$ of the mixing parameter. Once this value has been found, we can use~\eqref{DUtwopointjj} to compute the value of~$\tau$ in the SCFT. Similarly, we can slightly squash the sphere away from the round point~$b = 1$ to extract the positive coefficient~$C_T >0$ that appears in the stress-tensor two-point function at separated points~\cite{DUClosset:2012ru},
\begin{equation}
\label{DUsmallsquash}
C_T \sim \partial_b^2 \, \text{Re} F \big|_{b = 1}~.
\end{equation}
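As a simple illustration of this procedure, we quote the standard example of~\cite{DUJafferis:2010un}: a free chiral multiplet of trial~$R$-charge~$\Delta$, for which~$Z_{S^3} = e^{\ell(1-\Delta)}$, where the function~$\ell(z)$ satisfies~$\partial_z \ell(z) = -\pi z \cot(\pi z)$ and~$\ell(\frac{1}{2}) = -\frac{1}{2} \log 2$. Maximizing~$F(\Delta) = -\ell(1-\Delta)$ then gives
\begin{equation}
\partial_\Delta F \big|_{\Delta = \frac{1}{2}} = 0~, \qquad F\Big(\frac{1}{2}\Big) = \frac{1}{2} \log 2~,
\end{equation}
reproducing the expected free-field value~$\Delta_* = \frac{1}{2}$.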
\section*{Acknowledgments}
I am grateful to C.~Closset, G.~Festuccia, Z.~Komargodski, and N.~Seiberg for collaboration on some of the work reviewed here. My work is supported by the Fundamental Laws Initiative at Harvard University, as well as DOE grant DE-SC0007870 and NSF grant PHY-1067976.
\section{Introduction}
The existence of black hole singularities is one of the most fundamental questions in physics. The Penrose cosmic censorship hypothesis asserts that spacetime singularities need to be hidden from an observer at infinity by an event horizon, which blocks all of the information within it \citep{Hawking:1969sw, Hawking:1973uf}. Generally, all the electrovacuum solutions of classical general relativity are consistent with this conjecture. However, the conjecture does not prevent us from considering black hole spacetimes which are free from singularities within classical general relativity. In this context, the recently proposed theory of four-dimensional Gauss-Bonnet gravity is quite an interesting one \citep{Glavan:2019inb}. It was demonstrated that for a positive Gauss-Bonnet coupling parameter $\alpha$, the static and spherically symmetric solution of the theory is free from the much debated singularity problem. The theory is captivating for other reasons too; for example, the obtained black hole solution appears in the setting of gravity with a conformal anomaly \citep{Cai:2009ua, Cai:2014jea}, and also in the context of quantum corrections \citep{Tomozawa:2011gp, Cognola:2013fva}. Moreover, the black hole solution in four-dimensional Gauss-Bonnet theory is attractive because it arises from a modified theory of classical gravity, and hence stands on an equal footing with the solutions of general relativity. These captivating features of this novel theory resulted in a surge of investigations, covering its theoretical aspects, the viability of the solution and its physical properties \cite{Konoplya:2020bxa, Guo:2020zmf, Casalino:2020kbt, Konoplya:2020qqh, Fernandes:2020rpa, Lu:2020iav, Konoplya:2020ibi, Ghosh:2020syx, Konoplya:2020juj, Kobayashi:2020wqy, Zhang:2020qam, HosseiniMansoori:2020yfj, Kumar:2020uyz, Wei:2020poh, Churilova:2020aca, Islam:2020xmy, Liu:2020vkh, Konoplya:2020cbv, Jin:2020emq, Ai:2020peo, Heydari-Fard:2020sib, Li:2020tlo, Wei:2020ght, Kumar:2020owy, Hennigar:2020lsl, Mahapatra:2020rds, Shu:2020cjw, Gurses:2020ofy, NaveenaKumara:2020rmi}.
It is well known that black holes are not merely strong gravity systems, but also thermal systems. In particular, the establishment of the laws of black hole thermodynamics has made the phase transitions of these compact objects appealing in every sense \citep{Bekenstein1973, Bardeen:1973gs}. In recent times, anti-de Sitter black hole thermodynamics has gained more interest, as the identification of the cosmological constant with the thermodynamic pressure leads to a modification of the first law, which then contains a conventional $VdP$ term \citep{Kastor:2009wy, Dolan:2011xt}. In this extended phase space, AdS black holes exhibit a variety of phase transition features, of which the van der Waals like transition is of great interest \citep{Kubiznak2012, Gunasekaran2012, Kubiznak:2016qmn}. As in the case of a conventional van der Waals fluid, the black hole shows a first order phase transition between two phases, namely, the large black hole phase and the small black hole phase. The authors have studied the thermodynamics of the four dimensional Gauss-Bonnet AdS black hole for both the charged and uncharged cases \citep{Hegde:2020xlv}, and it was observed that a vdW like phase transition exists. That said, as the black hole is both a gravitational and a thermal system, it is quite natural to seek a connection between the effects of strong gravity and its phase transitions.
It is customary to seek the details of a gravitating object, especially a compact object with strong gravity, by observing the characteristic features of a test particle moving along geodesics around it. For a particle moving in the vicinity of a black hole, the black hole features are expected to be encoded in the behaviour of the particle motion. These notions are exploited in connecting the unseen attributes of a black hole to observational aspects, for example, the black hole shadow and quasinormal modes \citep{Cardoso:2008bp, Stefanov:2010xz}. Moreover, the phase transition signature of a charged AdS black hole can be obtained using quasinormal mode (QNM) studies \citep{Liu:2014gvf}. It was reported that during the van der Waals like phase transition of the black hole, the slope of the quasinormal mode frequency changes drastically, which is an observable phenomenon.
These initial findings motivated investigations of a more concrete relationship between the gravitational and phase transition features of AdS black holes using null geodesics \citep{Wei:2017mwc, Wei:2018aqm}. By studying the photon orbits in the background of a charged AdS black hole, the phase transition properties can be observed from the behaviour of the radius $r_{ps}$ and the minimum impact parameter $u_{ps}$ of the circular orbit. The behaviour of $r_{ps}$ and $u_{ps}$ with the Hawking temperature $T$ and pressure $P$ mimics the isobars and isotherms found in the thermodynamic counterpart. Below the critical values, the first order phase transition is reflected by these orbital parameters. During the phase transition, these two parameters change by a finite amount, and so they serve as order parameters to characterise the black hole phase transition, with a critical exponent $1/2$. Although originally observed in charged AdS black holes \citep{Wei:2017mwc}, this correlation between gravity and thermodynamics, via photon orbits, can be seen in different black hole spacetimes, namely, Kerr-AdS \citep{Wei:2018aqm}, Born-Infeld AdS background \citep{Xu:2019yub}, regular AdS black holes \citep{A.:2019mqv}, massive gravity \citep{Chabab:2019kfs}, Born-Infeld-dilaton black holes \citep{Li:2019dai}, five-dimensional Gauss-Bonnet black holes \citep{Han:2018ooi}, etc. Related studies in other contexts have also appeared in subsequent works \citep{ Zhang:2019tzi, Bhamidipati:2018yqy, Wei:2019jve}. In this article we seek a similar correlation for the novel four-dimensional Gauss-Bonnet AdS black hole.
The article is organised as follows. In the next section (\ref{secTD}) we briefly present the 4D Gauss-Bonnet AdS black hole solution and its thermodynamics. In section \ref{secPT} we investigate the phase transition features of the black hole, wherein the phase structure is probed using the coexistence and metastable curves. This is followed by section \ref{secgeo}, where we consider the null geodesics on the equatorial plane, and hence obtain the photon orbit radius $r_{ps}$ and minimum impact parameter $u_{ps}$. In section \ref{secphotoncritical}, we study the critical behaviour of $r_{ps}$ and $u_{ps}$, where the order parameters are presented. Finally, we conclude the paper in section \ref{conclusion}.
\section{4D Gauss-Bonnet AdS Black Hole: Metric and Thermodynamics}
\label{secTD}
In this section we briefly present the black hole metric and its thermodynamics. The $D$-dimensional Einstein-Maxwell-Gauss-Bonnet theory with a negative cosmological constant $\Lambda$ is described by the action \citep{Fernandes:2020rpa},
\begin{equation}
\label{action}
\mathcal{I}=\frac{1}{16\pi} \int d^Dx\sqrt{-g}\left[ R+2\Lambda +\alpha \mathcal{G} -F^{ab}F_{ab}\right],
\end{equation}
where $g$ is the determinant of the metric $g_{ab}$, $F_{ab}=\partial _a A_b -\partial _b A_a$ is the Maxwell field tensor and $\alpha$ is the Gauss-Bonnet coupling coefficient. The Gauss-Bonnet term is given by,
\begin{equation}
\mathcal{G}=R^2-4R_{ab}R^{ab}+R_{abcd}R^{abcd},
\end{equation}
where $R$ is the Ricci scalar, $R_{ab}$ is the Ricci tensor and $R_{abcd}$ is the Riemann tensor. The cosmological constant is related to the AdS radius $l$ as,
\begin{equation}
\Lambda = -\frac{(D-1)(D-2)}{2l^2}.
\end{equation}
In four dimensions the Gauss-Bonnet term does not contribute to the dynamics of the system, as the integral over that term is a topological invariant. However, recently a genuine four dimensional Einstein-Gauss-Bonnet gravity was obtained by scaling $\alpha$ as \citep{Glavan:2019inb},
\begin{equation}
\alpha \rightarrow \frac{\alpha }{D-4},
\end{equation}
and then taking the limit $D\rightarrow 4$. The spherically symmetric solution for the action (\ref{action}) is,
\begin{equation}
\label{gbsolution}
ds^2=-f(r)dt^2+\frac{1}{f(r)}dr^2+r^2d\Omega ^2_{D-2}.
\end{equation}
In the limit $D\rightarrow 4$ the metric function has the form,
\begin{equation}
\label{metricfun}
f(r)=1+\frac{r^2}{2\alpha} \left(1-\sqrt{1+4 \alpha \left(-\frac{1}{l^2}+\frac{2 M}{r^3}-\frac{Q^2}{r^4}\right)}\right),
\end{equation}
where $M$ is the ADM mass and $Q$ is the total charge of the black hole. The validity of the theory from which we obtained the above static spherically symmetric solution has been scrutinised in detail in several propositions \citep{Ai:2020peo, Gurses:2020ofy, Shu:2020cjw, Mahapatra:2020rds, Tian:2020nzb, Bonifacio:2020vbk, Arrechea:2020evj}. However, these does not rule out the possibility of having a spherically symmetric solution as it can be obtained from consistent formulations \citep{Lu:2020iav, Kobayashi:2020wqy, Fernandes:2020nbq, Hennigar:2020lsl, Aoki:2020lig}. Therefore we approach the solution (\ref{gbsolution}) as a self-reliant one. Interestingly, this solution also appears in the context of a conformal anomaly gravity \citep{Cai:2009ua, Cai:2014jea}.
The horizon of the black hole ($r_+$) is defined by the condition $f(r_+)=0$. Using this condition we obtain the mass of the black hole to be,
\begin{equation}
M=\frac{r_+^3}{2 l^2}+\frac{Q^2}{2 r_+}+\frac{\alpha }{2 r_+}+\frac{r_+}{2}.
\end{equation}
We present the thermodynamics of the black hole in an extended phase space, where the cosmological constant $(\Lambda)$ is treated as the thermodynamic pressure $(P)$, and they are related as $P=-\frac{\Lambda}{8\pi}$. The Hawking temperature of the black hole is associated with the surface gravity $\kappa$, which is,
\begin{equation}
T=\frac{\kappa}{2\pi}=\left. \frac{f'(r)}{4\pi} \right|_{r=r_+}=-\frac{\alpha -8 \pi P r_+^4+Q^2-r_+^2}{4 \pi r_+^3+8 \pi \alpha r_+}.
\label{Hawking}
\end{equation}
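These expressions can be checked symbolically. The following minimal sketch (using the Python library \texttt{sympy}; the variable names are ours) verifies that the mass formula solves the horizon condition and that $f'(r_+)/4\pi$ reproduces eq. (\ref{Hawking}):
\begin{verbatim}
import sympy as sp

r, rp, Q, a, l = sp.symbols('r r_+ Q alpha l', positive=True)
P = 3/(8*sp.pi*l**2)                # pressure in terms of the AdS radius l

# Mass obtained from the horizon condition f(r_+) = 0
M = rp**3/(2*l**2) + Q**2/(2*rp) + a/(2*rp) + rp/2

# Metric function and Hawking temperature T = f'(r_+)/(4 pi)
f = 1 + r**2/(2*a)*(1 - sp.sqrt(1 + 4*a*(-1/l**2 + 2*M/r**3 - Q**2/r**4)))
print(sp.simplify(f.subs(r, rp)))   # expect 0: r_+ is indeed a horizon
T = sp.diff(f, r).subs(r, rp)/(4*sp.pi)
T_text = -(a - 8*sp.pi*P*rp**4 + Q**2 - rp**2)/(4*sp.pi*rp**3 + 8*sp.pi*a*rp)
print(sp.simplify(T - T_text))      # expect 0: agrees with eq. (\ref{Hawking})
\end{verbatim}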
The first law of black hole thermodynamics can be written, considering the GB coupling parameter $\alpha$ to be a thermodynamic variable \citep{Cai:2013qga, Wei:2014hba}, as,
\begin{equation}
\label{firstlaw}
dM=TdS+VdP+\Phi dQ+ \mathcal{A}d\alpha
\end{equation}
where the potentials $\Phi$ and $\mathcal{A}$ are conjugate to $Q$ and $\alpha$, respectively. Likewise, the thermodynamic volume $V$ is conjugate to the pressure $P$,
\begin{equation}
V=\left( \frac{\partial M}{\partial P}\right) _{S,Q,\alpha}=\frac{4}{3} \pi r_+^3.
\end{equation}
The entropy of the black hole can be obtained as follows,
\begin{equation}
S=\int _0^{r_+} \frac{1}{T}dM=\frac{A}{4}+2\pi \alpha \ln \left( \frac{A}{A_0} \right),
\end{equation}
where $A=4\pi r_+^2$ is the horizon area and $A_0$ is an integration constant, which has the dimension of $[length]^2$. It is clear that the Gauss-Bonnet coupling parameter $\alpha$ modifies the Bekenstein-Hawking entropy-area law. In general, the black hole entropy is independent of the charge $Q$ and the cosmological constant $\Lambda$; therefore, the integration constant can be set as $A_0=4\pi | \alpha |$ \citep{Wei:2020poh}. With this identification, the entropy reads,
\begin{equation}
S=\pi r_+^2+4\pi \alpha \ln \left( \frac{r_+}{\sqrt{|\alpha|}}\right).
\end{equation}
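The logarithmically corrected entropy can also be checked against the first law: at fixed $(P,Q,\alpha)$ one should have $T=(\partial M/\partial r_+)/(\partial S/\partial r_+)$. A minimal symbolic sketch (again with \texttt{sympy}):
\begin{verbatim}
import sympy as sp

rp, Q, a, P = sp.symbols('r_+ Q alpha P', positive=True)
l2 = 3/(8*sp.pi*P)                       # l^2 in terms of the pressure

M = rp**3/(2*l2) + Q**2/(2*rp) + a/(2*rp) + rp/2
S = sp.pi*rp**2 + 4*sp.pi*a*sp.log(rp/sp.sqrt(a))

T = sp.diff(M, rp)/sp.diff(S, rp)        # T = (dM/dr_+)/(dS/dr_+)
T_text = -(a - 8*sp.pi*P*rp**4 + Q**2 - rp**2)/(4*sp.pi*rp**3 + 8*sp.pi*a*rp)
print(sp.simplify(T - T_text))           # expect 0
\end{verbatim}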
We emphasise that the black hole entropy has a logarithmic correction, whereas the thermodynamic volume remains the same as the geometric volume. Before concluding the thermodynamics of the black hole, we also mention that the variables presented above satisfy the Smarr relation in addition to the first law,
\begin{equation}
M=2TS+\Phi Q-2PV+2\alpha \mathcal{A}.
\end{equation}
\section{Phase Transition of 4D Gauss Bonnet AdS Black Hole}
\label{secPT}
The phase transition of the 4D Gauss-Bonnet black hole has been well studied by the authors \citep{Hegde:2020xlv}. Here we recall the main results in order to analyse the phase structure using the coexistence and spinodal curves. The equation of state of the system is obtained by inverting the expression for the Hawking temperature,
\begin{equation}
P=\frac{Q^2}{8 \pi r_+^4}+\frac{\alpha }{8 \pi r_+^4}+\frac{\alpha T}{r_+^3}-\frac{1}{8 \pi r_+^2}+\frac{T}{2 r_+}.
\end{equation}
In terms of volume we have,
\begin{equation}
P=\frac{(6 \pi )^{2/3} (\alpha + Q^2)}{18 \pi ^{1/3} V^{4/3}}+\frac{4 \pi \alpha T}{3 V}+\frac{\pi ^{1/3} T}{6V^{1/3}}-\frac{1}{2\times 6^{2/3} \pi ^{1/3} V^{2/3}}.
\end{equation}
The critical behaviour of the black hole can be easily seen in the $P-V$ isotherms, where a first order phase transition exists between a small black hole phase (SBH) and a large black hole phase (LBH). This phase transition property is exhibited by both the charged and neutral black holes. The critical point of the phase transitions is determined by using the condition,
\begin{equation}
\left( \frac{\partial P}{\partial V}\right)_{T,Q,\alpha} =\left( \frac{\partial ^2P}{\partial V^2}\right) _{T,Q,\alpha}=0.
\end{equation}
The critical values of the thermodynamic variables are \citep{Hegde:2020xlv},
\begin{equation}
T_c=\frac{\left(8 \alpha +3 Q^2-\rho \right) \sqrt{6 \alpha +3 Q^2+\rho }}{48 \pi \alpha ^2};
\end{equation}
\begin{equation}
P_c=\frac{9 \alpha +6 Q^2+\rho }{24 \pi \left(6 \alpha +3 Q^2+\rho \right)^2};
\end{equation}
\begin{equation}
V_c=\frac{4}{3} \pi \left(6 \alpha +3 Q^2+\rho \right)^{3/2};
\end{equation}
where $\rho =\sqrt{48 \alpha ^2+9 Q^4+48 \alpha Q^2}$. Making use of these quantities we define the reduced thermodynamic variables,
\begin{equation}
\tilde{T}=\frac{T}{T_c} \qquad \tilde{P}=\frac{P}{P_c} \qquad \tilde{V}=\frac{V}{V_c}.
\end{equation}
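These closed-form critical values can be verified numerically. The sketch below (using \texttt{scipy}, with the illustrative choice $Q=1$ and $\alpha=0.5$ used in our figures) solves $\partial_{r_+} P = \partial^2_{r_+} P = 0$ for $(r_c, T_c)$, equivalently to the conditions in terms of $V$ since $V$ is monotonic in $r_+$, and then evaluates $P_c$:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

Q, a = 1.0, 0.5

def pressure(r, T):
    return ((Q**2 + a)/(8*np.pi*r**4) + a*T/r**3
            - 1/(8*np.pi*r**2) + T/(2*r))

def conditions(x):            # first and second r-derivatives of P
    r, T = x
    d1 = (-(Q**2 + a)/(2*np.pi*r**5) - 3*a*T/r**4
          + 1/(4*np.pi*r**3) - T/(2*r**2))
    d2 = (5*(Q**2 + a)/(2*np.pi*r**6) + 12*a*T/r**5
          - 3/(4*np.pi*r**4) + T/r**3)
    return [d1, d2]

rc, Tc = fsolve(conditions, x0=[3.0, 0.03])
print(rc, Tc, pressure(rc, Tc))
# expect r_c ~ 3.565 = sqrt(6a + 3Q^2 + rho), T_c ~ 0.0276, P_c ~ 0.00141
\end{verbatim}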
Observing the phase structure gives us a better understanding of the phase transition. In the extended phase space, the black hole mass plays the role of enthalpy, which is evident from the first law (\ref{firstlaw}). With this understanding, the Gibbs free energy of the black hole is calculated as $G=M-TS$, which reads,
\begin{eqnarray}
G=\frac{4}{3} \pi P r_+^3+\frac{Q^2}{2 r_+}-T \left[\pi r_+^2+4 \pi \alpha \log \left(\frac{r_+}{\sqrt{\alpha }}\right)\right]+\frac{\alpha }{2 r_+}+\frac{r_+}{2}.
\end{eqnarray}
Here $r_+$ is regarded as a function of $(P,T)$ obtained from the equation of state. We obtain the coexistence curve in the $\tilde{P}-\tilde{T}$ plane by using the swallowtail behaviour of the Gibbs free energy. The coexistence expression is also translated into the $\tilde{T}-\tilde{V}$ plane. The results are shown in fig. \ref{GBPTTV}. In the $\tilde{P}-\tilde{T}$ plane, the coexistence line (red solid line) partitions the SBH and LBH phases below the critical point. It terminates at the second order phase transition point, above which the phase is supercritical. The figures also display the metastable curves (blue dashed lines), which satisfy,
\begin{equation}
( \partial _V P)_T=0, \qquad (\partial _V T)_P=0.
\end{equation}
The regions between the coexistence curve and the metastable curve correspond to the metastable phases, namely, the superheated SBH and supercooled LBH phases. In the $\tilde{T}-\tilde{V}$ plane, the region under the metastable curve corresponds to the coexistence phase of the SBH and LBH.
\begin{figure}[t]
\centering
\subfigure[][]{\includegraphics[scale=0.85]{GBPT.eps}\label{GBPT}}
\qquad
\subfigure[][]{\includegraphics[scale=0.85]{GBTV.eps}\label{GBTV}}
\caption{The coexistence curve (red solid line) and spinodal curve (blue dashed line) in the $\tilde{P}-\tilde{T}$ and $\tilde{T}-\tilde{V}$ planes. The black dot at $(1,1)$ denotes the critical point.}
\label{GBPTTV}
\end{figure}
\section{Geodesic equations of motion}
\label{secgeo}
In this section we establish the relationship between the thermodynamics and the null geodesics. Consider a photon orbiting the black hole freely on the equatorial plane, described by $\theta =\pi /2$. The Lagrangian which characterises this motion can be written directly from the metric (\ref{gbsolution}),
\begin{equation}
\label{lagrangian}
2 \mathcal{L}=-f(r)\dot{t}^2+\frac{\dot{r}^2}{f(r)}+r^2\dot{\phi}^2.
\end{equation}
Here, the dots represent differentiation with respect to an affine parameter. The 4D Gauss-Bonnet AdS black hole spacetime has two Killing fields, $\partial _t$ and $\partial _\phi$, which lead to two constants of motion: the conserved energy $E$ and the orbital angular momentum $L$ of the photon. The generalised momenta corresponding to the Lagrangian (\ref{lagrangian}) can be obtained using $p_a =g_{ab} \dot{x}^b$ as,
\begin{eqnarray}
-p_t=f(r)\dot{t}\equiv E\\
p_\phi= r^2\dot{\phi} \equiv L\\
p_r=\dot{r}/f(r).
\end{eqnarray}
The $t$ and $\phi$ motion of the photon can now be described as,
\begin{equation}
\dot{t}=\frac{E}{f(r)}
\end{equation}
\begin{equation}
\dot{\phi}=\frac{L}{r^2}.
\end{equation}
The Hamiltonian for the system is obtained from the standard definition, and it vanishes,
\begin{equation}
2\mathcal{H}=-E\dot{t}+L\dot{\phi}+\dot{r}^2/f(r)=0.
\end{equation}
Employing the $t$ and $\phi$ motion, we can rewrite the expression for the radial motion as,
\begin{equation}
\dot{r}^2+V_{eff}=0,
\end{equation}
with the effective potential given by,
\begin{equation}
V_{eff}=\frac{L^2}{r^2}f(r)-E^2.
\end{equation}
The photon can only move in the region where $V_{eff}\leq 0$, since $\dot{r}^2\geq 0$. A photon approaching the black hole will be absorbed if it has a sufficiently small angular momentum $L$, and will be scattered if the angular momentum is large enough. The absorption and scattering regimes are separated by a critical angular momentum, which defines an unstable circular photon orbit. The expressions governing this orbit are,
\begin{equation}
\label{orbit}
V_{eff}=0\quad , \quad V'_{eff}=0 \quad , \quad V''_{eff}<0,
\end{equation}
where the prime denotes differentiation with respect to $r$. The radial velocity $\dot{r}$ of the photon vanishes on this unstable circular orbit, and the corresponding value of $r$ is the radius of the photon orbit. Expanding the second equation in (\ref{orbit}) we have,
\begin{equation}
2f(r_{ps})-r_{ps}\partial _r f(r_{ps})=0.
\label{aneqn}
\end{equation}
Substituting the metric function $(\ref{metricfun})$ into this equation and solving it, we obtain the radius of the photon sphere $r_{ps}$, which is a complicated expression involving the black hole parameters $(M,Q,P,\alpha)$. Solving the first equation in (\ref{orbit}), $(V_{eff}=0)$, we obtain the minimum impact parameter of the photon as,
\begin{equation}
u_{ps}=\frac{L_c}{E}=\left. \frac{r}{\sqrt{f(r)}} \right| _{r_{ps}}.
\label{upsequation}
\end{equation}
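For illustration, the following sketch computes $r_{ps}$ and $u_{ps}$ numerically for a representative parameter point ($M=2$, $Q=1$, $\alpha=0.5$ and a small pressure; these values are ours and are not tied to a particular figure), solving eq. (\ref{aneqn}) by bracketing and then using eq. (\ref{upsequation}):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

M, Q, a, P = 2.0, 1.0, 0.5, 1.0e-4
l2 = 3/(8*np.pi*P)

def f(r):                 # metric function f(r)
    return 1 + r**2/(2*a)*(1 - np.sqrt(1 + 4*a*(-1/l2 + 2*M/r**3
                                                - Q**2/r**4)))

def g(r):                 # photon sphere condition 2 f(r) - r f'(r)
    h = 1e-6
    return 2*f(r) - r*(f(r + h) - f(r - h))/(2*h)

rps = brentq(g, 4.5, 7.0)       # bracket chosen by inspection
ups = rps/np.sqrt(f(rps))
print(rps, ups)                 # roughly r_ps ~ 5.5 and u_ps ~ 9.5
\end{verbatim}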
To investigate the correlation between the photon sphere and the black hole phase transition, we observe the behaviour of the radius $r_{ps}$ and the minimum impact parameter $u_{ps}$ with respect to the Hawking temperature and pressure, in the reduced parameter space. This investigation is partly motivated by observations in the phenomenon of black hole lensing, where the impact parameter $u$ has a close connection with the deflection angle. The deflection angle is small for a large impact parameter, yet in the limit $u\rightarrow u_{ps}$ the deflection angle diverges \cite{Bozza:2002zj}.
In fig. \ref{TruGB}, the Hawking temperature $T$ is shown as a function of the photon orbit radius $r_{ps}$ and of the minimum impact parameter $u_{ps}$, separately, at fixed pressures. The isobars in this figure display the typical van der Waals like phase transition. For pressures below the critical value, the isobars first increase, then decrease, and finally increase with respect to the photon sphere radius $r_{ps}$ and the minimum impact parameter $u_{ps}$. In fig. \ref{PruGB} the pressure $P$ is shown as a function of $r_{ps}$ and of $u_{ps}$, keeping the temperature constant. The behaviour of the pressure here (fig. \ref{PruGB}) is opposite to that of the temperature (fig. \ref{TruGB}): where the temperature $\tilde{T}$ increases with $r_{ps}$ or $u_{ps}$, the pressure $\tilde{P}$ decreases. In summary, from the behaviour of the photon orbit radius and the minimum impact parameter along the isothermal and isobaric curves of the 4D Gauss-Bonnet AdS black hole, the van der Waals like phase transition can be clearly identified. This affirms that there exists a correlation between null geodesics and the phase transition of the black hole.
\begin{figure}[t]
\centering
\subfigure[][]{\includegraphics[scale=0.85]{TrGB.eps}\label{TrGB}}
\qquad
\subfigure[][]{\includegraphics[scale=0.85]{TuGB.eps}\label{TuGB}}
\caption{The behaviour of the photon sphere radius and the minimum impact parameter of the unstable null geodesic with the Hawking temperature in the reduced parameter space. We take $Q=1$ and $\alpha=0.5$.}
\label{TruGB}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[][]{\includegraphics[scale=0.85]{PrGB.eps}\label{PrGB}}
\qquad
\subfigure[][]{\includegraphics[scale=0.85]{PuGB.eps}\label{PuGB}}
\caption{The behaviour of the photon sphere radius and the minimum impact parameter of the unstable null geodesic with the pressure in the reduced parameter space. We take $Q=1$ and $\alpha=0.5$.}
\label{PruGB}
\end{figure}
\section{Critical behaviour of the photon sphere}
\label{secphotoncritical}
\begin{figure}[t]
\centering
\subfigure[][]{\includegraphics[scale=0.8]{rGB.eps}\label{rGB}}
\qquad
\subfigure[][]{\includegraphics[scale=0.8]{uGB.eps}\label{uGB}}
\caption{The variation of the photon sphere radius and the minimum impact parameter of the unstable null geodesic with respect to the Hawking temperature (in the reduced parameter space). The SBH (blue dashed line) and LBH (red solid line) branches meet at the critical point $(\tilde{T}=1)$.}
\label{coex}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[][]{\includegraphics[scale=0.8]{drGB.eps}\label{drGB}}
\qquad
\subfigure[][]{\includegraphics[scale=0.8]{duGB.eps}\label{duGB}}
\caption{The change in the photon sphere radius and the minimum impact parameter of the unstable null geodesic during the phase transition of the black hole. The concavity of the curves changes near the critical point, as shown in enlarged form in the insets.}
\label{diffcoex}
\end{figure}
The black hole exhibits a first order vdW like phase transition which terminates at the critical point, corresponding to a second order phase transition. Since there is a connection between the photon sphere and the phase transition, it is worth examining the behaviour of the changes in the photon orbit radius and the minimum impact parameter during the phase transition. We construct the equal area law for the $\tilde{T}-\tilde{r}_{ps}$ and $\tilde{T}-\tilde{u}_{ps}$ isobars, similar to the isobars in the $\tilde{T}-\tilde{S}$ plane of the black hole. From the result, we study the behaviour of the photon orbit radius $r_{ps}$ along the coexistence curve (Fig. \ref{coex}). As the temperature increases, the radius $r_{ps}$ of the coexistence LBH phase decreases, whereas for the coexistence SBH phase it increases. The $r_{ps}$ of both coexistence phases attain the same value at the critical point $\tilde{T}=1$. The same behaviour is observed for the minimum impact parameter $u_{ps}$. In fig. \ref{diffcoex} we display the differences of the quantities $r_{ps}$ and $u_{ps}$ between the two phases as functions of the phase transition temperature. Both $\Delta r_{ps}$ and $\Delta u_{ps}$ behave precisely like order parameters: they are non-zero along the first-order phase transition line and vanish at the second-order phase transition point. The behaviour in the neighbourhood of the critical point is shown in the insets, where a change in concavity is observed. We numerically obtain the critical behaviour of these differences near the critical point to be,
\begin{equation}
\Delta \tilde{r}_{ps} =3.57249(1-\tilde{T})^{0.510839}
\end{equation}
and
\begin{equation}
\Delta \tilde{u}_{ps} = 2.54786 (1-\tilde{T})^{0.506096}.
\end{equation}
This behaviour, i.e. $\Delta \tilde{r}_{ps} \sim (1-\tilde{T})^{1/2}$ and $\Delta \tilde{u}_{ps} \sim (1-\tilde{T})^{1/2}$, shows that $\Delta \tilde{r}_{ps}$ and $\Delta \tilde{u}_{ps}$ can serve as order parameters to characterise the black hole phase transition. These results strongly confirm our previous assertion that photon orbits and thermodynamic phase transitions are related to each other.
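The fits above are obtained by linear regression of $\log \Delta\tilde{r}_{ps}$ (respectively $\log \Delta\tilde{u}_{ps}$) against $\log(1-\tilde{T})$ near the critical point. The sketch below illustrates the procedure on synthetic placeholder data generated from the fitted relation itself; in the actual computation the $(\tilde{T},\Delta\tilde{r}_{ps})$ pairs come from the equal area construction:
\begin{verbatim}
import numpy as np

np.random.seed(0)
T = np.linspace(0.95, 0.999, 30)          # placeholder coexistence data
dr = 3.57249*(1 - T)**0.510839*(1 + 1e-3*np.random.randn(30))

slope, intercept = np.polyfit(np.log(1 - T), np.log(dr), 1)
print(slope, np.exp(intercept))           # exponent ~ 0.51, amplitude ~ 3.57
\end{verbatim}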
\section{Concluding Remarks}
\label{conclusion}
In this article we have shown that the unstable circular photon orbit around the four-dimensional Gauss-Bonnet AdS black hole reflects the phase transition information of the black hole. The radius of the photon orbit $r_{ps}$ and the minimum impact parameter $u_{ps}$ are studied in detail. The study establishes a link between gravity and thermodynamics in the strong-gravity background of the Gauss-Bonnet AdS black hole.
In the first part of the article we presented the thermodynamics and phase transition of the black hole. The phase structure of the black hole is analysed using the coexistence curve and the metastable curves. These curves are the boundaries that separate the different stable and metastable phases of the black hole, from which a clear understanding of the phase transition features is obtained. The first-order and second-order phase transition details, which are influenced by the Gauss-Bonnet coupling parameter $\alpha$, are examined in this study. Throughout our study we keep in mind that the extended phase space thermodynamic features are the same for both the charged and neutral Gauss-Bonnet AdS black holes, as reported in our previous work \citep{Hegde:2020xlv}.
In the second part of the article, using the Lagrangian of a photon moving freely in the equatorial plane of the black hole, we investigated the null geodesics. Using the effective potential, we solve for the photon orbit radius $r_{ps}$ and the minimum impact parameter $u_{ps}$ of the 4D Gauss-Bonnet AdS black hole. These two key quantities depend on the black hole parameters, especially the charge $Q$ and the Gauss-Bonnet coupling parameter $\alpha$. To establish the relationship between the photon sphere and the black hole phase transition, we study the behaviour of $r_{ps}$ and $u_{ps}$ along the isobars and isotherms of the system. The first order phase transition is revealed in these plots. When the pressure or temperature is below its critical value, there exist two extremal values of $r_{ps}$ and $u_{ps}$, which coincide into a single extremal point at the critical value of the pressure or temperature. Above the critical value of the pressure or temperature, $r_{ps}$ and $u_{ps}$ do not exhibit any extremum and increase monotonically. This behaviour of the photon orbit isobars and isotherms is consistent with that of the black hole thermodynamics. Finally, we probe the behaviour of $r_{ps}$ and $u_{ps}$ along the coexistence curve. The two coexistence branches, namely, the small black hole and the large black hole, have different $r_{ps}$ and $u_{ps}$ values. Their differences $\Delta r_{ps}$ and $\Delta u_{ps}$ serve as order parameters for the black hole phase transition. They vanish at the critical point, which corresponds to the second order phase transition. In the neighbourhood of this critical point, $\Delta r_{ps}$ and $\Delta u_{ps}$ have a critical exponent of $1/2$, which is obtained numerically. Our results show that in the background of Einstein-Maxwell-Gauss-Bonnet AdS spacetime, the black hole thermodynamics can be probed via strong gravity effects and vice versa.
\acknowledgments
K.H., N.K.A. and A.R.C.L. would like to thank U.G.C. Govt. of India for financial assistance under the UGC-NET-SRF scheme.
\section{Introduction}
\label{intro}
The Bureau of Transportation Statistics reports that between October $2018$ and October $2019$, delays caused by late aircraft arrivals amounted to 40+ million minutes, which is $39.47\%$ of the total delays experienced by the flights of reporting carriers \cite{bts}. This highlights that operational delays are a significant problem on both an absolute and a relative basis even today, with propagated delays being the biggest offender. Propagated delays occur when the arriving flight for a connection is delayed and causes a departure delay for the onward flight, kicking off a chain reaction of delays on the aircraft's route. Such propagation is primarily due to the creation of ``tight'' schedules with very limited buffers for connection times. Such schedules are created to maximize utilization of assets such as equipment and crew \cite{klabjan2001robust}. This leaves no room for the schedule to absorb fluctuations in flight arrivals and departures, resulting in significant delays and costs.
The idea of making an airline schedule robust seeks to counteract this problem by adjusting the schedule to better absorb time fluctuations in aircraft arrivals and departures during operations. As robustness-based decisions need to be made much earlier than actual operational delays are known, it is necessary to consider the stochasticity of such delays. The downside of this approach is a reduction in resource utilization and an increase in planned operational costs. This creates the need for solution strategies that can balance planning and operational costs. Optimization-based approaches, which are inherently equipped with mechanisms for such balancing acts, are therefore a great fit for this problem.
Schedule robustness has been tackled in the literature from several perspectives. A two-stage stochastic programming model is proposed in \cite{yen2006stochastic}, where crew assignments are made in the first-stage and swap opportunities are anticipated in the second-stage. Another two-stage stochastic programming model is presented in \cite{froyland2013recoverable}, where the first-stage is a tail assignment problem and the second-stage is a schedule recovery problem. This model uses penalties to minimize changes between the planning and recovery solutions. A mixed integer program (MIP) with stochastic data to minimize expected propagated delays is presented in \cite{lan2006planning}. The study in \cite{marla2018robust} compares the performance of chance-constrained programming, robust optimization, and stochastic optimization approaches using a solution space similar to the one in the model presented in \cite{lan2006planning}. Methodologies to solve integrated aircraft routing and crew pairing problems to reduce uncertain propagated delays are considered in \cite{yen2006stochastic,dunbar2012robust,dunbar2014integrated}. More recently, the robust optimization approach presented in \cite{yan2016robust} uses column and row generation to solve a routing problem with delays coming from a bounded uncertainty set by minimizing worst-case propagated delay costs. An alternate perspective in \cite{ahmadbeygi2010decreasing,chiraphadhanakul2013robust} retains a given planned routing but re-times flights in order to add time buffers or ``connection slacks'' to flight connections that are likely to be missed. Other related work can also be found in \cite{arikan2013building,kang2004degradable,rosenberger2004robust,shebalov2006robust,talluri1996swapping,weide2010iterative}.
\update{To motivate our research, we present some concerns we observed with the schedule robustness models proposed so far. First, there is no clear differentiation between the cost of rescheduling flights a few weeks before the day of operations and the cost of delaying them a few hours before departure. This difference can be significant in practice. Second, the stochastic programming approaches proposed in the literature use very complex first-stage models with a wide variety of first-stage decisions. This may be undesirable, as each adjustment of a schedule can affect other operational considerations such as staff scheduling, maintenance scheduling, and crew and passenger connectivity, among others. Also, there is no clarity on how to reduce the scope of such models while still generating useful results for scheduling practitioners. Computationally, the size and complexity of the first-stage models proposed in the literature make it difficult to scale them and use them for real-world airline schedules.}
\update{In this research, we seek to fill the aforementioned gaps in the literature. Our main contributions are (i) a two-stage stochastic programming model that re-times flights of a given schedule in a controlled manner while minimizing the sum of first-stage rescheduling costs and the expected cost of propagated delays on the day of operations; (ii) a parallel decomposition framework based on the L-shaped method \cite{van1969shaped} that uses column generation to solve the recourse second-stage models; (iii) an extensive computational study using disruptions caused by randomly generated flight delays, which shows a significant reduction in propagated delays in schedules adjusted by the proposed model; and (iv) recommendations and insights that can boost the performance of decomposition techniques like Benders decomposition and column generation for flight schedule models. The proposed model and solution framework allow us to solve much larger instances than those solved so far in the literature. For example, one of the networks we consider has $324$ flights and $71$ aircraft, much larger than the networks used in recent works such as \cite{froyland2013recoverable,yan2016robust}. Furthermore, we use a dynamic delay approach similar to \cite{yan2016robust} to solve our recourse problems. This approach uses the least required delay on each flight while building paths. It eliminates the need for discrete delay copies, which can generate unnecessary flight delays due to discretization and cause significant run time increases (see Figure 7 in \cite{froyland2013recoverable}). The path-based recourse formulation in our model can easily be extended to incorporate requirements from other operational domains of airlines. This includes hard constraints like minimum crew/passenger connection times and soft requirements like the loss of passenger goodwill, which can be incorporated into path costs.}
The remainder of this paper is organized as follows. In Section \ref{sec:models}, we present a two-stage stochastic programming formulation to minimize the expected value of propagated delays, along with a simpler mixed integer programming formulation based on sample mean values of primary delays. In Section \ref{sec:sol}, we describe a column-generation procedure for recourse problems and the L-shaped algorithm for the complete two-stage problem. In Section \ref{sec:comp}, we report the results of extensive computational studies that highlight the qualitative and quantitative benefits of our approach. In Section \ref{sec:conclusions}, we conclude the article with a summary and discussion of future research directions.
\section{Stochastic Delay Models}\label{sec:models}
In this section, we present our two-stage stochastic programming formulation of the delay mitigation problem. We also present an alternate approach that we use to benchmark our computational results. The latter approach is based on an MIP model that uses the mean values of individual flight delays. We begin by introducing the required notation.
Given a valid flight schedule, we model it as a connection network on a directed acyclic graph $G = (F, A)$, in which the nodes in $F$ represent flights and the arcs in $A$ represent flight connections. A connection $(i,j)$ is valid if and only if (i) the arrival airport of the incoming flight matches the departure airport of the outgoing flight, and (ii) the connection slack $s_{ij}$, defined as the difference between the departure time of the outgoing flight $j$ and the arrival time plus the turnaround time of the incoming flight $i$, is non-negative. The set $A$ contains only valid connections.
Our modeling of uncertain flight delays is similar to that in \cite{lan2006planning,dunbar2012robust,yan2016robust}. A flight can experience \textit{primary delays} that are independent of routing and rescheduling, and \textit{propagated delays} that are caused by upstream flights on its route. Let $\omega$ be a random variable representing a delay scenario, and let $\Omega$ be a finite set of delay scenarios. Let $pd_f^\omega$ be the realized non-negative integer-valued primary delay in minutes experienced by flight $f \in F$ in scenario $\omega \in \Omega$. Let $R^\omega$ be the set of possible routes in scenario $\omega$. For any route $r \in R^\omega$ and connection $(i,j)$ in $r$, the delay $d_{rj}$ propagated to the outgoing flight $j$ is defined recursively, with $d_{rf} = 0$ for the first flight $f$ of $r$, as:
\begin{align}
d_{rj} = \max\left(0,\, d_{ri} + pd_i^\omega - s_{ij}\right). \label{eq:propagatedDelay}
\end{align}
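To make the recursion in \eqref{eq:propagatedDelay} concrete, the following minimal Java sketch computes the propagated delays along a single route; the names are illustrative and not taken from our implementation.
\begin{verbatim}
// Sketch of the propagated-delay recursion along one route. Flights are
// given in route order; slack[k] is the connection slack s_{ij} between
// route flights k and k+1, and primaryDelay[k] is the realized primary
// delay pd of flight k in the scenario under consideration. The first
// flight of the route receives no propagated delay.
public final class PropagatedDelay {
    public static int[] alongRoute(int[] primaryDelay, int[] slack) {
        int n = primaryDelay.length;
        int[] propagated = new int[n]; // propagated[0] = 0 by convention
        for (int k = 0; k + 1 < n; k++) {
            propagated[k + 1] =
                Math.max(0, propagated[k] + primaryDelay[k] - slack[k]);
        }
        return propagated;
    }
}
\end{verbatim}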
\subsection{Two-stage model}\label{subsec:two-stage}
Let $x_f \geq 0$ be an integer decision variable representing the number of minutes by which flight $f \in F$ needs to be rescheduled, and let $c_f, f \in F$, be the per-minute reschedule cost. The formulation of the two-stage model (TSM) can then be stated as:
\begin{align}
\nonumber (TSM) \quad & \text{Minimize } && \sum_{f \in F} c_f x_f + \mathbb{E}_\Omega[\phi(x, \tilde{\omega})] && \\
\label{eq:origRoute} & \text{s.t. } && x_i \leq s_{ij} + x_j, && (i,j) \in A^{orig}, \\
\label{eq:rescheduleBudget} & && \sum_{f \in F} x_f \leq B, && \\
\label{eq:firstvars} & && x_f \in \mathbb{Z} \cap [0,l], && f \in F.
\end{align}
\noindent The objective of this model is to minimize the sum of the total reschedule cost and the expected flight delay costs. Constraints \eqref{eq:origRoute} protect the time connectivity for all connections in the original routing $A^{orig} \subseteq A$. Constraints \eqref{eq:rescheduleBudget} provide a control factor in the form of a time budget $B$ that limits the total reschedule time. We also limit the $x_f$ values with a fixed bound $l$ to prevent exorbitant reschedules of individual flights. Given a reschedule $x$ and the scenario probabilities $p_\omega$, $\omega \in \Omega$, the expected value $\mathbb{E}_\Omega[\phi(x, \omega)] = \sum_{\omega \in \Omega} p_\omega \phi(x, \omega)$ can be computed by solving the following set partitioning model for each scenario $\omega \in \Omega$, which is the second-stage formulation for a given $x$ and scenario $\omega$:
\begin{align}
\phi(x, \omega) = \nonumber& \text{ Min } && \sum_{f \in F} e_f z_f^\omega && && \\
\label{eq:onePerTail} & \text{ s.t. } && \sum_{r \in R^\omega} a_{rt} y_r = 1, && t \in T,\\
\label{eq:onePerFlight} & && \sum_{r \in R^\omega} b_{rf} y_r = 1, && f \in F,\\
\label{eq:delayLink} & && \sum_{r \in R^\omega} b_{rf} d_{rf} y_r - x_f \leq z_f^\omega, && f \in F,\\
\label{eq:ssvars}& && z_f^\omega \geq 0, \text{ } f \in F, y_r \in \{0, 1\}, \ r \in R^\omega. &&
\end{align}
\noindent The second-stage model minimizes the propagated delay costs incurred in scenario $\omega \in \Omega$ computed as per-minute costs $e_f$ for each flight $f$. It uses two sets of decision variables: continuous variables $z_f^\omega$ that represent the excess delay propagated to each flight $f \in F$ and binary variables $y_r$ that take the value $1$ to indicate the selection of the route $r \in R^\omega$. The parameters $a_{rt}$ and $b_{rf}$ are binary and respectively indicate whether route $r$ is for the tail $t$ and whether it contains flight $f$. Constraints \eqref{eq:onePerTail} and \eqref{eq:onePerFlight} enforce the assignment of one route per aircraft and one route per flight. Constraints \eqref{eq:delayLink} are linking constraints that capture the excess propagated delay that has not been accounted for by the first-stage rescheduling.
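For concreteness, the sketch below shows how the LP relaxation of this second-stage model could be assembled over an explicitly given set of candidate routes with the CPLEX Concert Java API that we use in our experiments. The data structures and names here are illustrative assumptions on our part and do not reproduce our actual implementation.
\begin{verbatim}
import ilog.concert.IloException;
import ilog.concert.IloLinearNumExpr;
import ilog.concert.IloNumVar;
import ilog.cplex.IloCplex;

// Sketch: LP relaxation of the second-stage model for one scenario over
// an explicit route set. a[r] is the aircraft of route r, flightsOf[r]
// lists its flights, propDelay[r][k] is d_{rf} for the k-th flight of
// route r, e[f] is the per-minute delay cost, and xBar[f] is the fixed
// first-stage reschedule of flight f.
public final class RecourseLp {
    public static double solve(int numTails, int numFlights, int[] a,
                               int[][] flightsOf, int[][] propDelay,
                               double[] e, double[] xBar)
            throws IloException {
        IloCplex cplex = new IloCplex();
        int numRoutes = a.length;
        IloNumVar[] y = cplex.numVarArray(numRoutes, 0.0, 1.0); // relaxed
        IloNumVar[] z = cplex.numVarArray(numFlights, 0.0,
                                          Double.MAX_VALUE);
        cplex.addMinimize(cplex.scalProd(e, z));
        IloLinearNumExpr[] perTail = new IloLinearNumExpr[numTails];
        IloLinearNumExpr[] perFlight = new IloLinearNumExpr[numFlights];
        IloLinearNumExpr[] link = new IloLinearNumExpr[numFlights];
        for (int t = 0; t < numTails; t++)
            perTail[t] = cplex.linearNumExpr();
        for (int f = 0; f < numFlights; f++) {
            perFlight[f] = cplex.linearNumExpr();
            link[f] = cplex.linearNumExpr();
        }
        for (int r = 0; r < numRoutes; r++) {
            perTail[a[r]].addTerm(1.0, y[r]);
            for (int k = 0; k < flightsOf[r].length; k++) {
                int f = flightsOf[r][k];
                perFlight[f].addTerm(1.0, y[r]);
                link[f].addTerm(propDelay[r][k], y[r]);
            }
        }
        for (int t = 0; t < numTails; t++)
            cplex.addEq(perTail[t], 1.0);   // one route per aircraft
        for (int f = 0; f < numFlights; f++) {
            cplex.addEq(perFlight[f], 1.0); // one route per flight
            link[f].addTerm(-1.0, z[f]);    // excess-delay linking
            cplex.addLe(link[f], xBar[f]);
        }
        double obj = cplex.solve() ? cplex.getObjValue()
                                   : Double.POSITIVE_INFINITY;
        cplex.end();
        return obj;
    }
}
\end{verbatim}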
Next, we present an MIP formulation that reschedules flights based on the average values of the primary delays. This model is used in the comparative studies presented in the computational results section.
\subsection{Mean delay model}
Let $\bar{\omega}$ be the scenario in which each flight experiences its mean primary delay across all scenarios in $\Omega$, i.e., $pd_f^{\bar{\omega}} = \sum_{\omega \in \Omega} p_\omega\, pd_f^\omega$ for $f \in F$. The mean delay model (MDM) aims to reschedule flights to accommodate the average delay scenario $\bar{\omega}$ without changing the original routing. To simplify the notation, we let $d_f^{\bar{\omega}}$ denote the delay propagated to flight $f$ in scenario $\bar{\omega}$ along the original routing. The mean delay model can be stated as follows:
\begin{align*}
& \text{Minimize } && \sum_{f \in F} \left( c_f x_f + e_f z_f^{\bar{\omega}} \right) && && \\
& \text{s.t. } && x_i \leq s_{ij} + x_j, && (i,j) \in A^{orig},\\
& && \sum_{f \in F} x_f \leq B, && \\
& && d_f^{\bar{\omega}} - x_f \leq z_f^{\bar{\omega}}, && f \in F,\\
& && z_f^{\bar{\omega}} \geq 0, x_f \in \mathbb{Z} \cap [0,l], && f \in F.
\end{align*}
\noindent The objective function minimizes the total reschedule and delay costs, with the latter carrying a higher penalty. The first two sets of constraints are the first-stage constraints \eqref{eq:origRoute} and \eqref{eq:rescheduleBudget}. The third set of constraints is obtained from \eqref{eq:delayLink} by selecting only the original route for each aircraft.
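Computing the mean scenario itself is a simple probability-weighted average; a minimal sketch with illustrative names is:
\begin{verbatim}
// Sketch: mean-delay scenario construction. p[w] is the probability of
// scenario w and pd[w][f] is the primary delay of flight f in scenario
// w; the result is the weighted average primary delay of each flight.
public final class MeanScenario {
    public static double[] meanPrimaryDelays(double[] p, int[][] pd) {
        int numFlights = pd[0].length;
        double[] mean = new double[numFlights];
        for (int w = 0; w < p.length; w++)
            for (int f = 0; f < numFlights; f++)
                mean[f] += p[w] * pd[w][f];
        return mean;
    }
}
\end{verbatim}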
\section{Solution approach}\label{sec:sol}
In this section, we present our solution framework that uses the $L$-shaped method in \cite{van1969shaped} to solve the TSM. We first present details about how we solve the recourse problems of the TSM.
\subsection{Column-generation framework}
Solving the TSM using the $L$-shaped method requires computing $\phi_{LP}(\bar{x}, \omega)$, the optimal values of the linear programming (LP) relaxations of the recourse models for any fixed first-stage solution $\bar{x}$. For a given \update{scenario} $\omega$, we use a column-generation approach to generate the required routes. We iterate between solving a version of the recourse problem restricted to a subset of routes $\tilde{R} \subseteq R^\omega$ and solving a pricing problem to find new routes that can improve the solution. \update{Optimality of the linear program} can be declared when no such route can be found. For ease of exposition, we state here the dual formulation of the recourse problem in full. Let $\mu_t$ and $\nu_f$ be free dual variables for the coverage constraints \eqref{eq:onePerTail} and \eqref{eq:onePerFlight} for a scenario $\omega$. Given a first-stage solution $\bar{x}$, we write the constraints \eqref{eq:delayLink} as
\[ z_f^\omega - \sum_{r \in R^\omega} b_{rf} d_{rf} y_r \geq -\bar{x}_f,\ f \in F, \]
\noindent and we let $\pi_f$ be the non-negative dual variables for these constraints. Let $a(r) \in T$ be the aircraft for which the route $r \in R^\omega$ was generated. Using this notation, the dual formulation can be written as:
\begin{align}
\nonumber
& \text{ Maximize } && \sum_{t \in T} \mu_t + \sum_{f \in F} \left( \nu_f - \bar{x}_f \pi_f \right) && \\
\label{eq:dualRoute}
& \text{ s.t. } && \mu_{a(r)} + \sum_{f \in F} b_{rf} \left(\nu_f - d_{rf} \pi_f \right) \leq 0, && r \in R^\omega,\\
\nonumber
& && \mu_t \text{ free}, && t \in T,\\
\nonumber
& && \nu_f \text{ free, } 0 \leq \pi_f \leq e_f, && f \in F.
\end{align}
Our column-generation procedure begins by solving the LP relaxation of the recourse problem with a subset $\tilde{R}$ of routes. One way to initialize $\tilde{R}$ is with the routes of the original schedule, with delays propagated along them just enough to protect minimum turnaround times. Using the dual solution of this restricted problem, a pricing problem is solved to find columns with the least reduced cost $rc_r$, where
\begin{equation}
rc_r = \sum_{f \in F} b_{rf} \left( d_{rf} \pi_f - \nu_f \right) - \mu_{a(r)}.
\label{eq:reducedCost}
\end{equation}
\noindent The dual formulation provides some intuition for $rc_r$; we want routes that violate the constraints \eqref{eq:dualRoute}. Once such a route is found, it is added to $\tilde{R}$ and we repeat the above steps. If no such route can be found, optimality can be declared. As there are potentially a large number of pricing problems to be solved, it is critical to determine the useful routes quickly. Next, we present our version of the labeling algorithm, an extension of the algorithm presented in \cite{dunbar2012robust,yan2016robust}, which we use to solve this problem.
\subsection{Pricing problem} \label{subsec:pricing-problem}
We solve the pricing problem by searching for routes in the graph $G$ with negative values for the reduced cost as defined in \eqref{eq:reducedCost}. As we assume that the original schedule is already available, the airports from which each aircraft should depart at the beginning of the schedule and at which it should arrive at the end of the schedule are fixed. To reflect this, we introduce separate source and sink nodes for each aircraft and separately search for candidate routes for each aircraft. This approach is quite practical, as it can easily be extended to consider aircraft-specific business constraints during route generation. Each aircraft's source node connects only to flights departing from the aircraft's initial departure airport. Similar restrictions apply to sink nodes based on final arrival airports.
To search for candidate routes, we use a label-setting algorithm similar to the one proposed in \cite{dunbar2012robust,yan2016robust}. This algorithm relies on building \textit{labels} that represent partial routes and extending them along valid flight connections given by $A$ to generate full routes from the source to the sink. The combinatorial explosion in the number of routes is controlled using the notion of \textit{dominance} between labels. More formally, each label $l$ denotes a partial path stored in a tuple $(f_l, pred_l, red_l, prop_l)$, where $f_l \in F$ is the last flight on the path, $pred_l$ is the label from which $l$ was extended, $red_l$ is the reduced cost accumulated so far, and $prop_l$ is the delay propagated to $f_l$ on the partial route corresponding to $l$. Note that $pred_l$ is empty for labels at source nodes. When a label $u$ is extended with a connection $(f_u, f') \in A$, the algorithm generates a new label $v = (f',u, red_v,prop_v)$ in which $red_v$ and $prop_v$ are updated using \eqref{eq:propagatedDelay} and \eqref{eq:reducedCost}, respectively. Once a label is extended to the sink node, the route that it corresponds to becomes a full route and can be obtained by traversing backward along the chain of predecessors.
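A minimal Java sketch of such labels and of the extension step is given below; it mirrors \eqref{eq:propagatedDelay} and \eqref{eq:reducedCost}, and all names are illustrative rather than taken from our implementation.
\begin{verbatim}
// Sketch of a label and of its extension along a connection (flight, j).
public final class Label {
    final int flight;        // last flight on the partial route
    final Label pred;        // predecessor label (null at the source)
    final double reducedCost;
    final int propagated;    // delay propagated to `flight`

    Label(int flight, Label pred, double reducedCost, int propagated) {
        this.flight = flight;
        this.pred = pred;
        this.reducedCost = reducedCost;
        this.propagated = propagated;
    }

    // pd: primary delay of `flight` in the scenario; slack: the slack of
    // the connection (flight, j); pi and nu: duals of the delay-linking
    // and flight-coverage constraints of flight j.
    Label extend(int j, int pd, int slack, double pi, double nu) {
        int prop = Math.max(0, propagated + pd - slack);
        return new Label(j, this, reducedCost + prop * pi - nu, prop);
    }
}
\end{verbatim}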
\begin{defn} \label{def:dominance}
(Label dominance condition) Let $u$ and $v$ be two labels with $f_u=f_v$. The label $u$ dominates $v$ if (i) $red_u \leq red_v$, (ii) $prop_u \leq prop_v$, and at least one of the inequalities is strict.
\end{defn}
Given two labels $u$ and $v$ such that any feasible extension of $v$ is also feasible for $u$, any route that can be generated by successively extending $v$ to the sink can also be generated from $u$, meaning that we can safely discard $v$. This was proved in Lemma $1$ of \cite{yan2016robust}. For clarity, we restate the lemma here using the notation of the present article:
\begin{lemma} \label{lem:extention}
Let $u$ and $v$ be labels such that $u$ dominates $v$. If $u'$ and $v'$ are labels obtained by extending $u$ and $v$ with a connection $(f_u,f') \in A$, then $u'$ dominates $v'$.
\end{lemma}
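Reusing the \texttt{Label} sketch above, the dominance test of Definition \ref{def:dominance} can be written as follows (again purely illustrative):
\begin{verbatim}
// Sketch of the dominance test: u dominates v when both labels end at
// the same flight, u is no worse in reduced cost and propagated delay,
// and u is strictly better in at least one of the two.
public final class Dominance {
    public static boolean dominates(Label u, Label v) {
        if (u.flight != v.flight) return false;
        boolean noWorse = u.reducedCost <= v.reducedCost
                       && u.propagated <= v.propagated;
        boolean strict = u.reducedCost < v.reducedCost
                      || u.propagated < v.propagated;
        return noWorse && strict;
    }
}
\end{verbatim}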
Lemma \ref{lem:extention} allows us to store and extend only non-dominated labels at each node and thus implicitly remove large numbers of candidate paths from consideration. We have observed that the label-setting algorithm in \cite{yan2016robust} provides at most one negative reduced-cost route in each iteration. As any route with a negative reduced cost is likely to improve the recourse solution, we enhance the algorithm by considering three possible alternatives for generating multiple negative reduced-cost columns:
\begin{enumerate}[(i)]
\item \textit{All paths}: Store and return all negative reduced-cost paths.
\item \textit{Best paths}: Store all negative reduced-cost paths, but return only the $N$ most negative reduced-cost paths.
\item \textit{First paths}: Stop the search as soon as $N$ negative reduced-cost paths are found, and return them.
\end{enumerate}
We found that all three strategies produce a significant speedup over generating a single path per pricing problem. Among the three, the ``first paths'' strategy gave us the best runtime with $N = 10$. We present a more detailed comparative study of these strategies in the computational results section. We present the label-setting algorithm of \cite{dunbar2012robust,yan2016robust} with our enhancements below, in Algorithm \ref{algo:pricing}. As the original initial-departure and final-arrival airports can be different for each aircraft, the algorithm is used to separately generate routes for each aircraft. The input includes augmented sets of nodes $F'$ and arcs $A'$; $F' = F \cup \{so,si\}$, where $so$ and $si$ are dummy source and sink nodes, respectively, and $A'$ contains all eligible connections in $A$, connections from $so$ to every valid first flight in $F$, and connections from every valid last flight to $si$ for the selected aircraft. The output of the algorithm is a set of negative reduced-cost columns for the selected aircraft.
\begin{algorithm}[!htbp]
\caption{Label-setting algorithm}\label{algo:pricing}
\begin{algorithmic}
\Function{GenerateColumns}{$F'$, $A'$, $so$, $si$}
\State $M_f \gets \emptyset$, $f \in F'$. \Comment{Processed labels container}
\State $I_f \gets \emptyset$, $f \in F'$. \Comment{All labels container}
\State $I_{so} \gets \{(so, \varnothing, -\mu_{t}, 0)\}$. \Comment{Source label for the selected aircraft $t$}
\While{$\bigcup_{i \in F'}(I_i\setminus M_i) \neq \emptyset$ and \Call{ShouldStop}{$I_{si}$} $\neq true$}
\State Choose $i \in F'$ and a label $l \in I_i \setminus M_i$ with a minimal reduced cost.
\For{$(i,j) \in A'$}
\State $l' \gets$ \Call{Extend}{l, j}.
\If{$l'$ is not dominated by any label in $I_j$}
\State $I_j \gets I_j \cup \{l'\}$.
\EndIf
\If{$j = si$ and \Call{ShouldStop}{$I_{si}$} $= true$}
\State \textbf{break}. \Comment{Stop processing labels}
\EndIf
\EndFor
\State $M_i \gets M_i \cup \{l\}$.
\EndWhile
\State \Return \Call{BuildColumnsFromLabels}{$I_{si}$}
\EndFunction
\end{algorithmic}
\end{algorithm}
Algorithm \ref{algo:pricing} initializes a single label at the source node as $(so, \varnothing, -\mu_{t}, 0)$ for the selected aircraft $t$, without a predecessor. Given a label $l = (i, pred_l, red_l, prop_l)$ and a connection $(i,j)$, the \textsc{Extend} procedure creates a new label $l'$ at node $j$ by updating $prop_{l'}$ using \eqref{eq:propagatedDelay} and the reduced cost $red_{l'} = red_l + prop_{l'} \pi_j - \nu_j$, as obtained from \eqref{eq:reducedCost}. Labels become complete when they are extended to $si$. The implementation of \textsc{ShouldStop} depends on the column-generation strategy in use. It always returns $false$ for the all-paths and best-paths strategies. For the first-paths strategy, it returns $true$ if the number of negative reduced-cost labels at $si$ has reached $N$, and $false$ otherwise. When the while loop ends, the \textsc{BuildColumnsFromLabels} procedure builds columns from the negative reduced-cost labels at $si$. It returns all such columns for the all-paths strategy, and the $N$ most negative reduced-cost columns for the other two strategies. The LP solution to the recourse problem is optimal if Algorithm \ref{algo:pricing} returns an empty set.
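As an illustration, the stopping rule of the first-paths strategy could be realized as in the sketch below, in which a small tolerance guards against numerical noise in the reduced costs; this sketch is an assumption of ours, not a transcription of the implementation.
\begin{verbatim}
import java.util.List;

// Sketch of ShouldStop for the ``first paths'' strategy: halt once N
// labels with negative reduced cost have reached the sink.
public final class FirstPathsStop {
    public static boolean shouldStop(List<Label> sinkLabels, int n) {
        long negative = sinkLabels.stream()
                                  .filter(l -> l.reducedCost < -1e-6)
                                  .count();
        return negative >= n;
    }
}
\end{verbatim}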
\subsection{Solution framework for the TSM}\label{subsec:solution}
Now that we have established the machinery to solve recourse models, we are ready to present the $L$-shaped method to solve the TSM. The method has two variants: a single-cut and a multi-cut version. We present the multi-cut method here and show later in this section how it can be modified to obtain the single-cut method. The multi-cut $L$-shaped method works with the following approximation of the TSM:
\begin{align*}
(MP)\quad & \text{Minimize } && \sum_{f \in F} c_f x_f + \sum_{\omega \in \Omega} \eta^\omega && \\
& \text{s.t. } && \eqref{eq:origRoute} - \eqref{eq:firstvars}, \\
& && \eta^\omega \text{ free, } \omega \in \Omega.
\end{align*}
\noindent We refer to this version of the formulation as the ``master problem'' (MP). Our solution procedure iterates between solving the MP and the recourse LP problems. Solutions to the latter can provide \textit{optimality cuts} that bound the $\eta^\omega$ variables from below or \textit{feasibility cuts} generated from infeasible recourse problems. As we can always get a feasible solution for any delay scenario by propagating delays along the original routing, our recourse problems are always feasible. So we only need to consider optimality cuts. To describe them, we introduce the following additional notation for each scenario $\omega \in \Omega$, where $\mu^\omega$, $\nu^\omega$, and $\pi^\omega$ denote the optimal dual values of the recourse LP for that scenario:
\begin{align*}
\alpha^\omega &= p_\omega \left( \sum_{t \in T} \mu_t^\omega + \sum_{f \in F} \nu_f^\omega \right), \text{ and } \beta_f^\omega = p_\omega \pi_f^\omega,\ f \in F.
\end{align*}
\begin{algorithm}[!htbp]
\caption{Multi-cut $L$-shaped method for the TSM}
\label{algo:l-shaped}
\begin{algorithmic}
\State Solve the MP without $\eta^\omega$ variables to get an initial solution $x^0$.
\State Add $\eta^\omega$ variables to the MP.
\State Set $UB \gets$ $\infty$, $LB \gets -\infty$, $k \gets 0$, $x^* \gets x^0$.
\While{$UB - LB > \epsilon$ and $k \leq$ \texttt{MaxNumIterations}}
\For{each scenario $\omega \in \Omega$}
\State Find $\phi_{LP}(x^k,\omega)$ using column generation.
\State Compute $\beta^\omega,\alpha^\omega$ using optimal dual values.
\State Add cut $\eta^\omega \geq \alpha^\omega - \sum_{f \in F} \beta^\omega_f x_f$ to the MP.
\EndFor
\State Set $UB \gets \min \left( UB, \sum_{f \in F} c_f x^k_f + \sum_{\omega \in \Omega} p_\omega \phi_{LP}(x^k,\omega) \right)$.
\If{$UB$ changed}
\State Update incumbent solution $x^* \gets x^k$.
\EndIf
\State Solve the updated MP to get the objective value $obj_k$.
\State Set $LB \gets \max(LB, obj_k)$, $k \gets k+1$.
\EndWhile
\Return $x^*$.
\end{algorithmic}
\end{algorithm}
Using this notation, the multi-cut procedure is presented in Algorithm \ref{algo:l-shaped}. We found that $x^0_f = 0$, $f \in F$, is a reasonable starting solution. The parameter \texttt{MaxNumIterations} provides a practical way to limit the algorithm's runtime. To convert the algorithm into the single-cut $L$-shaped method, we use a single variable $\eta$ in the MP and, in each iteration, add only the single cut \eqref{eq:singleCut}, computed using the optimal dual values of all recourse problems:
\begin{equation}
\eta \geq \sum_{\omega \in \Omega} \alpha^\omega - \sum_{f \in F} \left( \sum_{\omega \in \Omega} \beta_f^\omega \right) x_f.
\label{eq:singleCut}
\end{equation}
We note here that the Benders cuts are valid only when the binary restrictions of the second-stage problems are relaxed. Making our approach exact requires embedding Algorithm \ref{algo:l-shaped} in a branch-and-bound scheme that finds integer solutions to all second-stage $y_r$ variables. However, as we found that most of the optimality gap was closed in the root node, we did not explore branching. As we shall see in Section \ref{sec:comp}, even these solutions can provide rescheduling values that significantly improve the preparedness of a schedule for uncertain delays.
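A small sketch of the cut-coefficient computation, assuming the optimal duals of one scenario's recourse LP are available as plain arrays (illustrative names), is given below; the single-cut variant simply sums these coefficients over all scenarios.
\begin{verbatim}
// Sketch: optimality-cut coefficients for one scenario with probability
// p and optimal duals mu (per aircraft), nu (per flight), and pi (per
// flight). The multi-cut method adds
//   eta_w >= alpha - sum_f beta[f] * x_f
// to the master problem for each scenario w.
public final class BendersCut {
    final double alpha;
    final double[] beta;

    BendersCut(double p, double[] mu, double[] nu, double[] pi) {
        double sum = 0.0;
        for (double m : mu) sum += m;
        for (double v : nu) sum += v;
        alpha = p * sum;
        beta = new double[pi.length];
        for (int f = 0; f < pi.length; f++) beta[f] = p * pi[f];
    }
}
\end{verbatim}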
\section{Computational experiments}\label{sec:comp}
In this section, we demonstrate the efficacy of our proposed formulation and solution approach using real-world data for six flight networks. We used Java for the implementation, with CPLEX $12.9$ as the solver. The experiments were conducted on an Intel(R) Xeon(R) CPU E5-$2640$ computer with $16$ logical cores and $80$ GB RAM. We implemented parallel processing using the thread-safe Java ``actors'' provided by the Akka actor library (available at \url{https://akka.io}). \update{All code and data used for our experiments are publicly available at \url{https://github.com/sujeevraja/stochastic-flight-scheduler}.}
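Our implementation performs this parallel fan-out with Akka actors; the sketch below illustrates the same pattern with a plain fixed-size thread pool from \texttt{java.util.concurrent}, with the per-scenario solve left as a placeholder.
\begin{verbatim}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: solve the per-scenario recourse problems in parallel and
// collect their objective values. solveScenario stands in for the
// column-generation solve of one scenario.
public final class ParallelRecourse {
    public static double[] solveAll(int numScenarios, int numThreads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        List<Future<Double>> futures = new ArrayList<>();
        for (int w = 0; w < numScenarios; w++) {
            final int scenario = w;
            Callable<Double> task = () -> solveScenario(scenario);
            futures.add(pool.submit(task));
        }
        double[] objectives = new double[numScenarios];
        for (int w = 0; w < numScenarios; w++)
            objectives[w] = futures.get(w).get();
        pool.shutdown();
        return objectives;
    }

    private static double solveScenario(int w) {
        return 0.0; // placeholder for the column-generation solve
    }
}
\end{verbatim}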
\begin{table}
\begin{center}
\caption{Instance details}
\label{tab:data}
\begin{tabular}{c c c c}
\hline
\textit{Instance} & \textit{Number of flights} & \textit{Number of aircraft} & \textit{Number of paths} \\
\hline
s1 & 210 & 41 & 48,674 \\
s2 & 248 & 67 & 20,908 \\
s3 & 112 & 17 & 39,242 \\
s4 & 110 & 17 & 56,175 \\
s5 & 80 & 13 & 190,540 \\
s6 & 324 & 71 & 113,892 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Network data and experiment setup}\label{subsec:network}
Table \ref{tab:data} presents details about the flight networks we used. \update{Each network is based on daily schedules of two different airlines on different days in early $2017$, and is the planned schedule for a single equipment type. We avoid solving multiple equipment types together as such swaps can cause operational issues like unfilled seats or passenger spillage.} Each flight in our data has a minimum turnaround time that applies to connecting flights departing after the arrival of the flight. As the costing data for our networks is quite complex, we simplify the calculations with a first-stage reschedule cost of one per minute and a recourse delay cost of $10$ per minute for each flight. This costing serves to encode the significant increase of costs incurred by operational delays as opposed to planned reschedules. \update{The ``Number of paths'' values are the maximum number of paths that can be built during column generation. To calculate them, we build a flight network and add a dummy source and dummy sink node for each aircraft based on its original first-departure and last-arrival stations. We then add dummy source arcs to flights departing from the source node station and dummy sink arcs from flights arriving at the sink node station. The number of paths for each aircraft is recursively computed as the number of paths from the aircraft's dummy source to the aircraft's dummy sink. The total number of paths is the sum of paths of all aircraft.}
We simulate primary delays by constructing $30$ randomly generated delay scenarios for each run. The scenarios are generated by varying two parameters: the distribution used for delay generation and the flights that experience primary delays. We follow the recommendation of \cite{yan2016robust} in using truncated normal, gamma, and log normal distributions for primary delays, with log normal being the default.
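For illustration, the sketch below samples integer primary delays from a log normal or truncated normal distribution. The log normal is parameterized here by the desired mean and standard deviation of the delay itself, which is one plausible reading of $LogNormal(15,15)$ and an assumption on our part.
\begin{verbatim}
import java.util.Random;

// Sketch of primary-delay sampling; delays are rounded to whole minutes.
public final class DelaySampler {
    private final Random rng = new Random();

    int logNormal(double mean, double sd) {
        double sigma2 = Math.log(1.0 + (sd * sd) / (mean * mean));
        double mu = Math.log(mean) - 0.5 * sigma2;
        double x = Math.exp(mu + Math.sqrt(sigma2) * rng.nextGaussian());
        return (int) Math.round(x);
    }

    int truncatedNormal(double mean, double sd) {
        double x;
        do { // reject negative draws
            x = mean + sd * rng.nextGaussian();
        } while (x < 0.0);
        return (int) Math.round(x);
    }
}
\end{verbatim}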
We select flights that experience primary delays using two strategies, which we call ``hub'' and ``rush''. The hub strategy selects flights from a hub, which we define as the airport with the most \update{departures} in a given schedule. The rush strategy calculates the duration between the earliest departure and the latest arrival for a schedule and selects flights departing during the first quarter of the window. This idea stems from the morning runway congestion that frequently occurs in most airports.
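A sketch of the two selection strategies is shown below; the data layout is an illustrative assumption and does not reproduce our implementation.
\begin{verbatim}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: ``hub'' selects flights departing from the airport with the
// most departures; ``rush'' selects flights departing in the first
// quarter of the schedule's time window. depAirport[f], depTime[f], and
// arrTime[f] (minutes) describe flight f.
public final class FlightSelection {
    public static List<Integer> hub(String[] depAirport) {
        Map<String, Integer> counts = new HashMap<>();
        for (String ap : depAirport) counts.merge(ap, 1, Integer::sum);
        String hub = null;
        for (Map.Entry<String, Integer> e : counts.entrySet())
            if (hub == null || e.getValue() > counts.get(hub))
                hub = e.getKey();
        List<Integer> selected = new ArrayList<>();
        for (int f = 0; f < depAirport.length; f++)
            if (depAirport[f].equals(hub)) selected.add(f);
        return selected;
    }

    public static List<Integer> rush(int[] depTime, int[] arrTime) {
        int earliest = Integer.MAX_VALUE, latest = Integer.MIN_VALUE;
        for (int t : depTime) earliest = Math.min(earliest, t);
        for (int t : arrTime) latest = Math.max(latest, t);
        double cutoff = earliest + 0.25 * (latest - earliest);
        List<Integer> selected = new ArrayList<>();
        for (int f = 0; f < depTime.length; f++)
            if (depTime[f] <= cutoff) selected.add(f);
        return selected;
    }
}
\end{verbatim}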
Our model limits first-stage rescheduling with two control factors, an individual limit of $l$ minutes for each flight and a limit of $B$ minutes on the total reschedule time. We fix $l$ to $30$ minutes in all of our runs. We make $B$ adaptive to the problem data by computing the total primary flight delay for each recourse scenario, taking the average of these values, and allowing $B$ to be a fraction of the average total primary delay. \update{Unless specified otherwise, we default to a budget fraction of $0.5$ for $B$, $LogNormal(15,15)$ as the delay distribution, ``hub'' as the flight selection strategy, the multi-cut $L$-shaped method, and the first-paths column generation strategy outlined in Section \ref{subsec:pricing-problem}, and we use $30$ threads to solve the $30$ second-stage problems in parallel. Solution times in all tables are reported in seconds.}
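The adaptive budget computation described above amounts to the following small sketch (illustrative names):
\begin{verbatim}
// Sketch: adaptive reschedule budget B. pd[w][f] is the primary delay
// of flight f in training scenario w; the budget fraction is applied to
// the average total primary delay over all training scenarios.
public final class Budget {
    public static int compute(int[][] pd, double fraction) {
        double avgTotal = 0.0;
        for (int[] scenario : pd) {
            int total = 0;
            for (int delay : scenario) total += delay;
            avgTotal += total;
        }
        avgTotal /= pd.length;
        return (int) Math.round(fraction * avgTotal);
    }
}
\end{verbatim}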
\subsection{Results and insights}\label{subsec:result}
Our computational study contains three sets of results. The first set presents the performance metrics of our algorithm, as shown in Table \ref{tab:quality}. The \textit{Strategy} column shows the strategy we use to select flights, as explained above. We report two gaps: the percentage gap computed as $100 \times (UB-LB)/UB$ from Algorithm \ref{algo:l-shaped} in the \textit{Gap} column, and the optimality gap of the solution in the \textit{Opt Gap} column. To compute the latter, we first find an upper bound \textbf{ub} by fixing the first-stage reschedule values to the solution found by Algorithm \ref{algo:l-shaped}, solving all second-stage problems without relaxing the binary restrictions, and computing the objective value as the sum of the fixed reschedule cost and the mean value of the second-stage delay costs. As the objective value of the solution found by Algorithm \ref{algo:l-shaped} is a lower bound (denoted by \textbf{lb}) for the optimal solution, we report the optimality gap as $100 \times$(\textbf{ub}-\textbf{lb})/\textbf{ub}. The columns \textit{Cuts} and \textit{Iter} report the number of Benders cuts added and the number of iterations, respectively. The main takeaways from Table \ref{tab:quality} are that the Benders gap is almost completely closed for most instances and that the root node closes more than $90\%$ of the optimality gap. \update{We believe that the low optimality gap is due to the set partitioning structure of the second-stage model in the TSM. As set partitioning models are known to have a property called \textit{quasi-integrality} \cite{balas1975set,balas1972set,tahir2019integral}, their linear relaxations typically yield integer solutions.}
\begin{table}
\begin{center}
\caption{Solution quality and performance}
\label{tab:quality}
\begin{tabular}{c c c c c c c}
\hline
\textit{Strategy} & \textit{Instance} & \textit{Time} & \textit{Gap (\%)} & \textit{Opt gap (\%)} & \textit{Cuts} & \textit{Iter} \\
\hline
Hub & s1 & 78.42 & 0.35 & 3.42 & 886 & 30 \\
& s2 & 53.94 & 2 & 3.87 & 900 & 30 \\
& s3 & 15.94 & 0 & 0 & 93 & 6 \\
& s4 & 14.04 & 0.05 & 7.61 & 304 & 15 \\
& s5 & 73.16 & 0 & 6.18 & 352 & 16 \\
& s6 & 377.3 & 3.54 & 11.85 & 900 & 30 \\
Rush & s1 & 90.64 & 0.09 & 7.52 & 861 & 30 \\
& s2 & 71.07 & 0.5 & 7.94 & 888 & 30 \\
& s3 & 11.73 & 0.03 & 8.75 & 79 & 4 \\
& s4 & 6.37 & 0 & 0.41 & 115 & 6 \\
& s5 & 47.92 & 0 & 0.09 & 188 & 8 \\
& s6 & 144.34 & 0.04 & 1.82 & 302 & 13 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=.75]{table8.pdf}
\caption{Illustration of performance of TSM by budget}
\label{fig:budget}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.75]{table7.pdf}
\caption{Illustration of performance of TSM by distribution}
\label{fig:dist}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.75]{table9.pdf}
\caption{Illustration of performance of TSM by distribution mean}
\label{fig:mean}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.75]{table10.pdf}
\caption{Illustration of performance of TSM by scenarios}
\label{fig:scenarios}
\end{figure}
\update{For the second set of experiments, we report solution quality results in Figures \ref{fig:budget}, \ref{fig:dist}, \ref{fig:mean}, and \ref{fig:scenarios}. Numbers used for these figures are available in the Appendix in Tables \ref{tab:budget}, \ref{tab:distribution}, \ref{tab:mean} and \ref{tab:scenario} respectively.}
\update{In these experiments}, we first randomly generate $30$ delay scenarios and use this data to solve the two-stage and mean delay models. The same scenarios are used for both models for a fair comparison. Next, we generate a new set of $100$ random delay scenarios different from those used for solving. For each new scenario, we compute the total propagated delay incurred by three variants of the original schedule: (i) no adjustments, (ii) adjustments based on the reschedule solution of the \update{mean delay model}, and (iii) adjustments based on \update{the reschedule solution of the TSM}. By ``adjustment'', we mean that the departure time of a flight is changed based on its corresponding reschedule value. The propagated delay for any scenario\update{, measured in minutes,} is found by solving the integer-valued recourse model to optimality. We then take the average value of the total propagated delay over the $100$ scenarios as a comparison metric for the three approaches. \update{Figures \ref{fig:budget}, \ref{fig:dist}, \ref{fig:mean}, and \ref{fig:scenarios} each contain two charts that measure the relative reduction in the average total propagated delay achieved by the TSM, compared with the original schedule and with the MDM solution, respectively.}
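The evaluation loop just described can be summarized by the sketch below, in which the exact integer recourse solve is a placeholder; the structure is illustrative rather than a transcription of our code.
\begin{verbatim}
// Sketch: out-of-sample comparison of the three schedule variants. For
// each fresh test scenario the integer recourse model is solved once
// per variant and the propagated-delay totals are averaged.
public final class OutOfSample {
    public static double[] averages(int[][] testScenarios,
                                    int[] zeroReschedule,
                                    int[] mdmReschedule,
                                    int[] tsmReschedule) {
        double[] totals = new double[3];
        for (int[] scenario : testScenarios) {
            totals[0] += solveRecourse(scenario, zeroReschedule);
            totals[1] += solveRecourse(scenario, mdmReschedule);
            totals[2] += solveRecourse(scenario, tsmReschedule);
        }
        for (int k = 0; k < 3; k++) totals[k] /= testScenarios.length;
        return totals; // averages: { original, MDM, TSM }
    }

    private static double solveRecourse(int[] scenario, int[] resched) {
        return 0.0; // placeholder for the exact recourse solve
    }
}
\end{verbatim}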
To study the quality of the solution over the entire parameter space, we vary one parameter in each \update{figure} that reports propagated delay comparisons. \update{Figure \ref{fig:budget}} reports a comparison for \update{reschedule budget fractions in \{$0.25, 0.5, 0.75, 1, 2$\}}. Given a budget fraction, the corresponding reschedule budget is computed by multiplying the average value of the total primary flight delay over the $30$ recourse scenarios by the budget fraction. \update{Figure \ref{fig:dist}} reports comparisons for distributions in \update{\{$Exponential(30)$, $LogNormal(30,15)$, $TruncatedNormal(30,15)$\}.} \update{Figure \ref{fig:mean}} fixes the distribution as exponential and reports comparisons for mean values of $\{15,30,45,60\}$ minutes. \update{Figure \ref{fig:scenarios} reports comparisons for the number of training scenarios in \{$10, 20, 30, 40, 50$\}. These figures show that the reduction in propagated delay achieved with the TSM is significantly better than with the original schedule and the mean delay model, and that this reduction persists across the underlying delay distributions and parameters.}
\begin{table}
\begin{center}
\caption{Runtime comparison for column-generation strategies}
\label{tab:columngen}
\begin{tabular}{c c c c c}
\hline
\textit{Instance} & \textit{Enumeration} & \textit{All paths} & \textit{Best paths} & \textit{First paths} \\
\hline
s1 & 958.58 & 112.33 & 75.61 & 77.45 \\
s2 & 161.19 & 63.45 & 47.46 & 49.87 \\
s3 & 170.61 & 19.64 & 9.87 & 9.49 \\
s4 & 417.46 & 28.32 & 15.2 & 14.28 \\
s5 & 3086.92 & 121.61 & 65.81 & 69.34 \\
s6 & 1860.32 & 1854.57 & 461.2 & 524.48 \\
\hline
\end{tabular}
\end{center}
\end{table}
In addition to the data-related parameters discussed so far, our approach also has several technical parameters, such as the type of column-generation strategy and the use of single versus multiple cuts for the $L$-shaped method. We use our final set of experiments to empirically select a set of these parameters that give the best runtime performance. The results are reported in Tables \ref{tab:columngen}, \ref{tab:threads}, \ref{tab:cut}, and \ref{tab:caching}.
We obtain the values for each row in these tables as follows. First, we generate $30$ random delay scenarios using the default parameters specified in Section \ref{subsec:network}. Then we run Algorithm \ref{algo:l-shaped} for each value of the tested parameter and collect the solution time. We smooth out aberrations by repeating this $5$ times and reporting the average of these values as the time. The same procedure applies for values other than the solution time reported in Table \ref{tab:cut}.
Table \ref{tab:columngen} reports a comparison between the different column-generation strategies presented in Section \ref{subsec:pricing-problem}. In this test, the first-paths and best-paths strategies are run with $N = 10$, i.e., by selecting the first $10$ and the $10$ most negative reduced-cost columns, respectively. The results reported in this table are in line with the intuition that enumerating all columns should take much longer than using a delayed column-generation procedure with pricing. Among the pricing strategies, the best-paths and first-paths strategies are both clearly better than the all-paths strategy, which adds all negative reduced-cost columns to the restricted recourse problems.
Table \ref{tab:threads} reports a runtime comparison as the number of threads increases. While parallel solving is expected to be faster, the extent of the gains in practice is not obvious. Specifically, we expected performance to stagnate or worsen once the number of threads exceeds the number of logical cores, but Table \ref{tab:threads} shows that this is not the case. Though the gain in performance declines with increasing threads, increasing the number of threads up to $30$ improves the overall runtime on an absolute basis. Increases beyond this are not helpful, as the maximum number of problems that can be solved in parallel is the number of recourse problems, which is $30$. Table \ref{tab:cut} reports a runtime comparison between the single- and multi-cut versions of Algorithm \ref{algo:l-shaped}. Clearly, the multi-cut version is better than the single-cut version in terms of the solution time, the Benders percentage gap (reported in the \textit{Gap} column), and the number of iterations. As the memory used to store and add cuts is minuscule in comparison to the rest of the data, the greater number of cuts in the multi-cut version does not noticeably affect performance. In Table \ref{tab:caching}, we present the results of caching columns between iterations of Algorithm \ref{algo:l-shaped}. We noticed that the columns generated in an iteration of the $L$-shaped method require only flight data and propagated delay data, and are unaffected by changes in the first-stage reschedule solution. This allows them to be cached and reused in future iterations, which in turn allows pricing problems to be warm-started with promising columns. As Table \ref{tab:caching} indicates, we were not able to find a clear advantage for this approach. While we certainly do not discard the idea, we recommend against using it, purely from an ease-of-implementation perspective.
\begin{table}
\begin{center}
\caption{Runtime comparison for multiple threads}
\label{tab:threads}
\begin{tabular}{c c c c c}
\hline & \multicolumn{4}{c}{\textit{Number of parallel solvers}}\\
\cline{2-5}
\textit{Instance} & \textit{1} & \textit{10} & \textit{20} & \textit{30} \\
\hline
s1 & 692.71 & 123.49 & 98.63 & 77.88 \\
s2 & 402.31 & 74.53 & 60.54 & 48.52 \\
s3 & 64.64 & 12.58 & 10.16 & 8 \\
s4 & 117.12 & 22.5 & 18.53 & 14.4 \\
s5 & 607.55 & 104.76 & 88.55 & 74.04 \\
s6 & 6490.08 & 986.12 & 736.96 & 519.8 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Comparison of single- vs multi-cut L-shaped method}
\label{tab:cut}
\begin{tabular}{c c c c c | c c c c}
\hline & \multicolumn{4}{c|}{\textit{Multi-cut}} & \multicolumn{4}{c}{\textit{Single-cut}}\\
\cline{2-9}
\textit{Instance} & \textit{Time} & \textit{Gap} & \textit{Cuts} & \textit{Iter} & \textit{Time} & \textit{Gap} & \textit{Cuts} & \textit{Iter} \\
\hline
s1 & 686.81 & 0.4 & 883.4 & 30 & 708.91 & 24.28 & 30 & 30 \\
s2 & 406.39 & 2.51 & 899.8 & 30 & 455.63 & 33.08 & 30 & 30 \\
s3 & 58.16 & 0 & 85 & 4.8 & 214.2 & 0 & 19.4 & 19.6 \\
s4 & 105.99 & 0.01 & 304.6 & 14 & 223.77 & 11.88 & 30 & 30 \\
s5 & 579.02 & 0.03 & 340 & 14.4 & 1,172.27 & 12.36 & 30 & 30 \\
s6 & 6,488.77 & 3.87 & 900 & 30 & 6,724.13 & 19.02 & 30 & 30 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Runtime comparison for caching columns between iterations}
\label{tab:caching}
\begin{tabular}{c c c}
\hline
\textit{Instance} & \textit{Caching} & \textit{No Caching} \\
\hline
s1 & 686.78 & 715.77 \\
s2 & 399.28 & 422.96 \\
s3 & 62.31 & 61.52 \\
s4 & 112.8 & 105.6 \\
s5 & 615.87 & 585.76 \\
s6 & 6,372.9 & 6,198.92 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusions and future research}\label{sec:conclusions}
In this research, we present a two-stage stochastic programming model that adds time buffers to flight connections in order to make a schedule more robust to uncertain delays. By ``robust'', we mean that the schedule is more accommodating of fluctuations in scheduled times and propagates fewer delays to downstream flights. To solve the two-stage model, we present a solution framework that combines an outer approximation method with a delayed column-generation routine. We conduct a thorough qualitative and quantitative analysis of the proposed framework and report extensive computational results. To efficiently solve large-scale instances of the model, we adopt various software engineering techniques such as caching and parallelism. Our results highlight that, compared to a deterministic approach, the proposed methodology can significantly reduce operational delays.
There are several interesting directions for extending this work, and we highlight a few \update{here}. First, the model can be made a closer approximation of reality by considering more business constraints such as maintenance events and crew-friendliness. Another direction would be to study the scalability of our approach when more complex modifications such as cancellations, diversions, and overbooking are allowed in the first stage. We have observed that, in practice, strategies to minimize delays can be quite diverse. While some airlines want to spread delays over several flights to prevent exorbitant delays for any single flight, other airlines want almost the exact opposite, with the idea of minimizing the number of delayed flights. Making our model flexible enough to allow such variety in rescheduling and delay strategies is a worthwhile idea to pursue in the future. Also, from a modeling perspective, appropriate risk-averse objectives other than the risk-neutral expectation function can be evaluated in the second stage.
\section*{Acknowledgements}
The authors would like to thank Sabre for providing the anonymized flight network data that we used for the computational studies in this article.
\section*{Appendix}
In this section, we report numbers used for the charts in Figures \ref{fig:budget}, \ref{fig:dist}, \ref{fig:mean} and \ref{fig:scenarios}.
\update{Nomenclature common to all the following tables is listed below. All total propagated delay values are reported in minutes.
\begin{itemize}
\setlength\itemsep{1pt}
\item \textit{Instance}: name of instance.
\item \textit{Original}: average total propagated delay for the original schedules.
\item \textit{MDM}: average total propagated delay with the schedule adjusted by the mean delay model solution.
\item \textit{TSM}: average total propagated delay with the schedule adjusted by the TSM.
\item \textit{RR over Original (\%)}: relative reduction achieved by the TSM solution over the original schedule ($100 \times (Original-TSM)/Original$).
\item \textit{RR over MDM (\%)}: relative reduction achieved by the TSM solution over the MDM solution ($100 \times (MDM-TSM)/MDM$).
\end{itemize}
}
\begin{table}
\begin{center}
\caption{Total propagated delay improvements for different budgets}
\label{tab:budget}
\begin{tabular}{c c c c c c c}
\hline & & \multicolumn{3}{c}{\textit{Average total propagated delay}} & & \\
\cline{3-5}
\textit{Budget fraction} & \textit{Instance} & \textit{Original} & \textit{MDM} & \textit{TSM} & \textit{RR over Original (\%)} & \textit{RR over MDM (\%)} \\
\hline
0.25 & s1 & 845 & 628.06 & 562.57 & 33.42 & 10.43 \\
& s2 & 850.82 & 611.65 & 520.17 & 38.86 & 14.96 \\
& s3 & 50.24 & 26.88 & 15.68 & 68.79 & 41.67 \\
& s4 & 219.37 & 145.93 & 135.86 & 38.07 & 6.9 \\
& s5 & 254.29 & 215.02 & 160.18 & 37.01 & 25.5 \\
& s6 & 1221.54 & 993.74 & 844.16 & 30.89 & 15.05 \\
0.5 & s1 & 836.37 & 474.79 & 406.51 & 51.4 & 14.38 \\
& s2 & 844.62 & 416.29 & 363.95 & 56.91 & 12.57 \\
& s3 & 42.45 & 19.89 & 8.6 & 79.74 & 56.76 \\
& s4 & 232.55 & 150.1 & 117.32 & 49.55 & 21.84 \\
& s5 & 250.1 & 123.74 & 115.61 & 53.77 & 6.57 \\
& s6 & 1231.86 & 799.37 & 672.06 & 45.44 & 15.93 \\
0.75 & s1 & 861.65 & 373.57 & 365.71 & 57.56 & 2.1 \\
& s2 & 868.94 & 345.26 & 303.68 & 65.05 & 12.04 \\
& s3 & 46.81 & 25.88 & 11.76 & 74.88 & 54.56 \\
& s4 & 218.15 & 132.55 & 87.93 & 59.69 & 33.66 \\
& s5 & 242.06 & 116.37 & 102.03 & 57.85 & 12.32 \\
& s6 & 1244.04 & 648.78 & 566.47 & 54.47 & 12.69 \\
1 & s1 & 832.36 & 349.93 & 272.63 & 67.25 & 22.09 \\
& s2 & 829.33 & 316.21 & 209.45 & 74.74 & 33.76 \\
& s3 & 49.48 & 29.62 & 19.71 & 60.17 & 33.46 \\
& s4 & 233.37 & 155.23 & 106.54 & 54.35 & 31.37 \\
& s5 & 246.86 & 123.38 & 89.9 & 63.58 & 27.14 \\
& s6 & 1197.72 & 505.05 & 502.65 & 58.03 & 0.48 \\
2 & s1 & 849.18 & 351.68 & 238.15 & 71.96 & 32.28 \\
& s2 & 851.63 & 344.38 & 222.88 & 73.83 & 35.28 \\
& s3 & 49.12 & 28.81 & 16.94 & 65.51 & 41.2 \\
& s4 & 222.53 & 144.08 & 95.3 & 57.17 & 33.86 \\
& s5 & 243.47 & 116.92 & 79.63 & 67.29 & 31.89 \\
& s6 & 1237.37 & 538.89 & 434.22 & 64.91 & 19.42 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Total propagated delay improvements for different distributions}
\label{tab:distribution}
\begin{tabular}{c c c c c c c}
\hline & & \multicolumn{3}{c}{\textit{Average total propagated delay}} & & \\
\cline{3-5}
\textit{Distribution} & \textit{Instance} & \textit{Original} & \textit{MDM} & \textit{TSM} & \textit{RR over Original (\%)} & \textit{RR over MDM (\%)} \\
\hline
Exp(30) & s1 & 2,050.08 & 1,562.11 & 1,230.08 & 40 & 21.26 \\
& s2 & 1,993.59 & 1336.85 & 1,107.84 & 44.43 & 17.13 \\
& s3 & 141.43 & 87.52 & 55.89 & 60.48 & 36.14 \\
& s4 & 701.25 & 434.87 & 391.68 & 44.15 & 9.93 \\
& s5 & 599.99 & 411.45 & 330.68 & 44.89 & 19.63 \\
& s6 & 3,281.99 & 2,542.58 & 2,113.81 & 35.59 & 16.86 \\
LogNormal(30,15) & s1 & 1,966.24 & 1,233.31 & 867.31 & 55.89 & 29.68 \\
& s2 & 1849.07 & 999.45 & 663.77 & 64.1 & 33.59 \\
& s3 & 116.12 & 46.47 & 24.7 & 78.73 & 46.85 \\
& s4 & 575.49 & 223.43 & 203.98 & 64.56 & 8.71 \\
& s5 & 580.18 & 378.54 & 210.44 & 63.73 & 44.41 \\
& s6 & 3,187.22 & 2,251.7 & 1619.3 & 49.19 & 28.09 \\
TruncNormal(30,15) & s1 & 2,008.96 & 1,204.15 & 903.91 & 55.01 & 24.93 \\
& s2 & 1919.41 & 900.75 & 693.00 & 63.84 & 22.95 \\
& s3 & 115.87 & 39.72 & 18.16 & 84.33 & 54.28 \\
& s4 & 615.21 & 238.11 & 207.77 & 66.23 & 16.26 \\
& s5 & 580.18 & 378.54 & 210.44 & 63.73 & 44.41 \\
& s6 & 3,187.22 & 2,251.7 & 1619.3 & 49.19 & 28.09 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Total propagated delay improvements for different distribution means}
\label{tab:mean}
\begin{tabular}{c c c c c c c}
\hline & & \multicolumn{3}{c}{\textit{Average total propagated delay}} & & \\
\cline{3-5}
\textit{Distribution} & \textit{Instance} & \textit{Original} & \textit{MDM} & \textit{TSM} & \textit{RR over Original (\%)} & \textit{RR over MDM (\%)} \\
\hline
Exp(15) & s1 & 860.09 & 521.5 & 472.28 & 45.1 & 9.44 \\
& s2 & 853.49 & 453.26 & 395.58 & 53.65 & 12.73 \\
& s3 & 42.41 & 23.41 & 9.34 & 77.98 & 60.10 \\
& s4 & 235.08 & 155.06 & 122.52 & 47.88 & 20.99 \\
& s5 & 252.87 & 149.5 & 122.54 & 51.54 & 18.03 \\
& s6 & 1,233.16 & 849.91 & 705.01 & 42.83 & 17.05 \\
Exp(30) & s1 & 2,050.08 & 1,562.11 & 1,230.08 & 40 & 21.26 \\
& s2 & 1,993.59 & 1,336.85 & 1,107.84 & 44.43 & 17.13 \\
& s3 & 141.43 & 87.52 & 55.89 & 60.48 & 36.14 \\
& s4 & 701.25 & 434.87 & 391.68 & 44.15 & 9.93 \\
& s5 & 599.99 & 411.45 & 330.68 & 44.89 & 19.63 \\
& s6 & 3,242.44 & 2,538.83 & 2,125.24 & 34.46 & 16.29 \\
Exp(45) & s1 & 3,504.48 & 2,554.76 & 2,286.38 & 34.76 & 10.51 \\
& s2 & 3,079.29 & 1,930.02 & 1,818.79 & 40.93 & 5.76 \\
& s3 & 267.3 & 166.12 & 142.39 & 46.73 & 14.28 \\
& s4 & 1,199.09 & 762.59 & 703.14 & 41.36 & 7.80 \\
& s5 & 1,042.92 & 723.06 & 653.13 & 37.37 & 9.67 \\
& s6 & 5,715.73 & 4,359.76 & 3846.91 & 32.7 & 11.76 \\
Exp(60) & s1 & 5247.03 & 3922.66 & 3715.71 & 29.18 & 5.28 \\
& s2 & 4674.16 & 3045.3 & 2938.05 & 37.14 & 3.52 \\
& s3 & 412.07 & 280.51 & 257.87 & 37.42 & 8.07 \\
& s4 & 1,825.04 & 1,322.64 & 1,168.95 & 35.95 & 11.62 \\
& s5 & 1,437.66 & 1,138.58 & 958.5 & 33.33 & 15.82 \\
& s6 & 8,822.83 & 6,750.83 & 6198.65 & 29.74 & 8.18 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Total propagated delay improvements for different numbers of training scenarios}
\label{tab:scenario}
\begin{tabular}{c c c c c c c}
\hline & & \multicolumn{3}{c}{\textit{Average total propagated delay}} & & \\
\cline{3-5}
\textit{Scenarios} & \textit{Instance} & \textit{Original} & \textit{MDM} & \textit{TSM} & \textit{RR over Original (\%)} & \textit{RR over MDM (\%)} \\
\hline
10 & s1 & 1,960.17 & 1,115.96 & 846.1 & 56.84 & 24.18 \\
20 & & 1,945.01 & 1,146.31 & 830.28 & 57.31 & 27.57 \\
30 & & 1,941.91 & 1,112.12 & 831.43 & 57.18 & 25.24 \\
40 & & 1,943.8 & 1,123.35 & 812.97 & 58.18 & 27.63 \\
50 & & 1,949.21 & 1,117.64 & 806.86 & 58.61 & 27.81 \\
10 & s2 & 1,857.88 & 811.84 & 614.34 & 66.93 & 24.33 \\
20 & & 1,867.09 & 789.97 & 624.08 & 66.57 & 21 \\
30 & & 1,851.5 & 929.35 & 605.88 & 67.28 & 34.81 \\
40 & & 1,856.5 & 970.19 & 601.81 & 67.58 & 37.97 \\
50 & & 1,845.01 & 938.06 & 609.64 & 66.96 & 35.01 \\
10 & s3 & 108.81 & 39.26 & 21.23 & 80.49 & 45.92 \\
20 & & 110.76 & 43.02 & 23.96 & 78.37 & 44.3 \\
30 & & 112.25 & 38.89 & 20.83 & 81.44 & 46.44 \\
40 & & 118.27 & 47.47 & 26.97 & 77.2 & 43.19 \\
50 & & 114.05 & 42.11 & 19.25 & 83.12 & 54.29 \\
10 & s4 & 587.27 & 212.83 & 183.25 & 68.8 & 13.9 \\
20 & & 583.26 & 189.41 & 161.75 & 72.27 & 14.6 \\
30 & & 610.97 & 202.41 & 184.06 & 69.87 & 9.07 \\
40 & & 580.37 & 194.53 & 163.15 & 71.89 & 16.13 \\
50 & & 586.65 & 188.69 & 168.23 & 71.32 & 10.84 \\
10 & s5 & 562.62 & 330.84 & 229.6 & 59.19 & 30.6 \\
20 & & 560.79 & 303.95 & 213.6 & 61.91 & 29.73 \\
30 & & 554.21 & 306.81 & 200.55 & 63.81 & 34.63 \\
40 & & 544.98 & 218.56 & 206.14 & 62.17 & 5.68 \\
50 & & 548.48 & 304.2 & 203.92 & 62.82 & 32.97 \\
10 & s6 & 3,053.12 & 2,143.16 & 1,508.29 & 50.6 & 29.62 \\
20 & & 3,082.73 & 2,221.58 & 1,560.49 & 49.38 & 29.76 \\
30 & & 3,062.17 & 2,137.81 & 1,527.92 & 50.1 & 28.53 \\
40 & & 3,099.17 & 2,169.34 & 1,518.15 & 51.01 & 30.02 \\
50 & & 3,078.29 & 2,086.7 & 1,487.11 & 51.69 & 28.73 \\
\hline
\end{tabular}
\end{center}
\end{table}
\bibliographystyle{spmpsci}
\section{Reviewer 1}
\noindent The authors present a two-stage stochastic programming model (TSM) to make a given flight schedule more robust against flight delays by determining a scheme of slack times used to re-time (some) flights. The model optimizes the sum of first-stage rescheduling costs and second-stage costs of propagated delays. The authors propose a heuristic solution framework based on the parallel multi-cut L-shaped algorithm, where the (relaxed) recourse problems are solved by a column-generation technique. A comprehensive numerical evaluation based on real-world data shows that good-quality solutions of the TSM can be obtained quickly and that these solutions perform significantly better than mean-data solutions of a deterministic model in an out-of-sample simulation study. \newline
\noindent This interesting, solid and well-written research article fits well into the scope of ANOR. The authors, who are obviously experts in this field, present a well-motivated planning approach, which balances aspects of existing approaches in a new way to make it more suitable and valuable for practical deployment (see page 3). The solution framework combines standard methods in a carefully calibrated way with the goal of keeping implementation effort at bay while addressing and adding detail where it pays off (e.g., the way, columns are generated in Algorithm 1). In fact, I do not have major issues with this manuscript and recommend publication after the consideration of the few minor remarks below.\newline
\begin{enumerate}[\Romannum{1}.1]
\item \textbf{p.12, Table 3: You could add a column for the relative difference between MDM and TSM. Some kind of visualization of Tables 3 to 5 could also be helpful.} \newline
\textit{Response:} Thank you for the comment. As suggested, we added a new column to showcase the difference between the MDM and the TSM, and added graphs for Tables 3, 4, and 5 and the new Table 6. To clarify the presentation of the results, we have moved the tables to the appendix.
\end{enumerate}
\section{Reviewer 2}
\noindent The paper was well written and the rigour of the optimisation model was good. However, it's difficult for the reviewer to identify the contribution of this paper for the following reasons:\newline
\begin{enumerate}[\Romannum{2}.1]
\item \textbf{The literature review on p2 failed to highlight the contribution of this paper, although the authors went through a number of past papers.} \newline
\textit{Response:} Thanks for the comment. We acknowledge the concern raised by the reviewer. To address it, we have reworked the presentation of the main contributions of this paper. We present below the updated part of the manuscript. \newline
\papertext{To motivate our research, we present some concerns we observed with the schedule robustness models proposed so far. First, there is no clear differentiation between the cost of rescheduling flights a few weeks before the day of operations and the cost of delaying them a few hours before departure. This difference can be significant in practice. Second, the stochastic programming approaches proposed in the literature use very complex first-stage models with a wide variety of first-stage decisions. This may be undesirable, as each adjustment of a schedule can affect other operational considerations such as staff scheduling, maintenance scheduling, and crew and passenger connectivity, among others. Also, there is no clarity on how to reduce the scope of such models while still generating useful results for scheduling practitioners. Computationally, the size and complexity of the first-stage models proposed in the literature make it difficult to scale them and use them for real-world airline schedules.} \newline
\papertext{In this research, we seek to fill the aforementioned gaps in the literature. Our main contributions are (i) a two-stage stochastic programming model that re-times flights of a given schedule in a controlled manner while minimizing the sum of first-stage rescheduling costs and the expected cost of propagated delays on the day of operations; (ii) a parallel decomposition framework based on the L-shaped method \cite{van1969shaped} that uses column generation to solve the recourse second-stage models; (iii) an extensive computational study using disruptions caused by randomly generated flight delays, which shows a significant reduction in propagated delays in schedules adjusted by the proposed model; and (iv) recommendations and insights that can boost the performance of decomposition techniques like Benders decomposition and column generation for flight schedule models.} \newline
\item \textbf{The authors highlighted on p2 that: ``Our main contribution is a two-stage stochastic programming model that uses the first-stage to re-time flights in a controlled manner in order to minimize the sum of the first-stage rescheduling costs and the expected propagated delay costs of the second-stage". The reviewer can't see why this two-stage model is significantly contributing to the literature, given all that have been gone through on p2 regarding those two-stage models?} \newline
\textit{Response:} We kindly refer the reviewer to our response to the previous comment, in which we present a reworked explanation of our contribution highlighting the importance of the addition of our model and solution framework to stochastic flight scheduling literature. \newline
\item \textbf{On page 3 of the paper, the authors used one paragraph to highlight their 'difference' to some key papers they cited. However, that's not convincing to the reviewer. For instance, compared with [6], this paper didn't allow swaps and cancellations. Then, why is this better than [6]? The authors also highlighted similarities to [1, 6 and 18]. In particular, the paper used the method from [18] to compute delays and claimed that it's better than [6]. I suppose [18] has already done that in its paper (that's why [18] was published?)? Then, what's the real contribution of this paper?} \newline
\textit{Response:} Thanks for the comment. We agree with the reviewer that the differences from \cite{froyland2013recoverable} and \cite{yan2016robust} should have been described better. We have addressed this in the reviewer's first comment. Also, we conducted additional computational experiments and updated the manuscript. For clarity, we describe a specific set of differences as follows:
\begin{itemize}
\item We solve much larger problems than both \cite{froyland2013recoverable} and \cite{yan2016robust}. The largest network in our data set contains $248$ flights and $67$ aircraft. These numbers are $53$ flights and $10$ aircraft in \cite{froyland2013recoverable}, and $117$ flights and $23$ aircraft in \cite{yan2016robust}. The ability of our framework to solve such large schedules in reasonable time is due to: (i) computational strategies that were refined through multiple experiments; (ii) solving sub-problems in parallel; and (iii) deliberate restriction of decisions in the planning stage. To further substantiate the performance of our model, we performed additional computational experiments with a larger network having $324$ legs and $71$ aircraft, and the results are added to all tables in the computational results section of the manuscript.\newline
\item Delays are not allowed in the planning stage in \cite{froyland2013recoverable}, but swaps and cancellations are. From our experience working with the operations departments of multiple airlines, we observed that in many cases, almost the exact opposite is preferred. Mostly, operations teams do not wish to change the tail or equipment of a flight, to minimize operational hassles, but may accept rescheduling flights by small amounts as long as it is done two weeks before departure. The rationale behind this is that the major bottleneck while rescheduling is making sure that passengers are informed well ahead of time. \newline
\item The difference in the cost of rescheduling flights a week ahead of time versus delaying them a few minutes before departure is not considered in \cite{froyland2013recoverable} or \cite{yan2016robust}. To the best of our knowledge, this difference has not been considered in a stochastic programming context so far. Our work presents a clear way to accommodate this difference and related trade-offs. \newline
\item Keeping the planning stage in our work simpler than that of \cite{froyland2013recoverable} is a deliberate choice, as it allows us to solve large-scale instances.\newline
\item The reviewer is correct in pointing out that we use the delay generation method from \cite{yan2016robust}. However, the authors in \cite{yan2016robust} adopt it in a robust optimization model, and not in a stochastic programming context. Our research builds on their work to show that their approach suits the solution of large scale instances in a stochastic programming setting as well. \newline
\end{itemize}
\item \textbf{On the same page, the authors stated that: ``Our modeling of uncertain flight delays is similar to that in [9, 4, 18]." Then, what's special about this paper? The reviewer could see some minor tweaks in the paper, e.g. the collection of columns in the label-setting algorithm. But, again, the label setting algorithm was based on [4, 18] and nothing special for the version in this paper.} \newline
\textit{Response:} We kindly refer the reviewer to our responses for the previous comments, in which we provide a detailed explanation of our research contributions. \newline
\item \textbf{Result discussion was disappointing in Section 4. The authors did provide a number of tables but failed to discuss how the results (in terms of flight scheduling) could look like when compared with the original schedule. I'd see this paper as a very good exercise or a case study for robust airline scheduling but can't find the contribution it could provide for publication.} \newline
\textit{Response:} Thanks for the comment. We believe we have addressed the reviewer's concerns about contributions in the previous responses. To address the reviewer's comment about results, we note that Tables $3$, $4$ and $5$ in the original manuscript described exactly this: the reductions in average propagated delay that can be obtained with respect to the average propagated delay in the original schedule (reported in the \textit{original} column). Also, we have completely reworked our presentation of the computational results to better highlight the improvements generated by our solution. The comparison is presented in the form of relative reduction in Figures $1$, $2$, $3$ and $4$. The corresponding tables have been moved to the appendix for clarity of presentation, and are now numbered $7$, $8$, $9$ and $10$.\newline
\end{enumerate}
\section{Reviewer 3}
\noindent This paper studies the construction of robust schedules that are robust to future uncertainty. The authors adopt a stochastic programming approach that focuses on minimizing the sum of rescheduling costs at the planning stage and propagated delay costs at the operational/recovery stage. The problem is solved using the L-shaped method, with a relaxed version of the second-stage stochastic programming model solved using column generation. \newline
\noindent The paper is mostly well written. I have some comments, primarily regarding the model and experiments. \newline
\noindent \textbf{Major Comments:}
\newline
\begin{enumerate}[\Romannum{3}.1]
\item \textbf{Questions about the model:}
\begin{enumerate}
\item Given that this area of work on modeling propagated delays is quite mature, I would like to see some clearer justification of the model and its contributions. This model purely captures rescheduling of flights at the first-stage and minimizing propagated delays at the second-stage. There have been many other more complex models that capture crew or passenger connectivity. Hence, the relevance of this model must be more clearly articulated. \newline
\textit{Response:} Thanks for the comment. We acknowledge the reviewer's concern that our contributions should have been presented better. To address it, we have re-written the main contributions of our paper to better highlight its differentiation from literature. The manuscript is updated as follows: \newline
\papertext{To motivate our research, we present some concerns we observed with scheduled robustness models proposed so far. First, there is no clear differentiation between the cost of rescheduling flights a few weeks before the day-of-operations versus delaying them a few hours before departure. This difference can be significant in practice. Second, the stochastic programming approaches proposed in literature use very complex first-stage models with a wide variety of first-stage decisions. This may be undesirable, as each adjustment of a schedule can affect other operational considerations such as staff scheduling, maintenance scheduling, crew and passenger connectivity, among others. Also, there is no clarity on how to reduce the scope of such models while still generating useful results for scheduling practitioners. Computationally, the size and complexity of first-stage models proposed in literature makes it difficult to scale them and use them for real-world airline schedules.} \newline
\papertext{In this research, we seek to fill the aforementioned gaps in literature. Our main contributions are (i) a two-stage stochastic programming model that re-times flights of a given schedule in a controlled manner while minimizing the sum of first-stage rescheduling costs and expected cost of propagated delays on the day of operations; (ii) a parallel decomposition framework based on the L-shaped method \cite{van1969shaped} that uses column generation to solve recourse second-stage models; (iii) extensive computational study using disruptions caused by randomly generated flight delays that show a significant reduction in propagated delays in schedules adjusted by the proposed model; and (iv) recommendations and insights that can boost the performance of decomposition techniques like Benders decomposition and column generation for flight schedule models.} \newline
\item In particular, the first paragraph on Page 3 (lines 4-16) should also discuss why the ability to go beyond the discrete-copy approach is useful – this seems to be the major benefit of the approach in this paper. Similarly the experiments should confirm that larger sized problems compared to [6] can be solved using this approach. \newline
\textit{Response:} Thanks for the suggestion. We expanded our discussion on the demerits of discrete delay copies and problem size comparisons in the paragraph pointed out by the reviewer. The manuscript is updated as follows: \newline
\papertext{The proposed model and solution framework allow us to solve much larger instances than those solved so far in the literature. For example, one of the networks we consider has $324$ flights and $71$ aircraft, much larger in size than the networks used in recent works such as \cite{froyland2013recoverable} and \cite{yan2016robust}. Furthermore, we use a dynamic delay approach similar to \cite{yan2016robust} to solve our recourse problems. This approach uses the least required delay on each flight while building paths. This eliminates the need for discrete delay copies, which can generate unnecessary flight delays due to discretization and cause significant run time increases (see Figure 7 in \cite{froyland2013recoverable}).} \newline
\item Also, for example, the authors should describe or demonstrate how additional features that capture connections for crew or passengers can be modeled, or other features can be incorporated.\newline
\textit{Response:} Thanks for the comment. We concur with the reviewer about presenting some directions on how the scope of our model can be increased. We added the following to the manuscript. \newline
\papertext{The path-based recourse formulation of our model can be easily extended to incorporate requirements from other operational domains of airlines. This includes hard constraints like minimum crew/passenger connection times and soft requirements like the loss of passenger goodwill that can be incorporated into path costs.} \newline
\item It is not clear how the second-stage model (5) – (8) explicitly captures the swaps, because swaps are not penalized although alternative routes are generated. Should swap costs not be penalized as well (compared to the original aircraft routes), along with delay-costs, at the second stage? \newline
\textit{Response:} Thanks for the comment. The reviewer is correct in pointing out that swap costs are not captured in the second-stage model. This was a deliberate choice motivated by the following reasons: (i) from our experience working with multiple airline operations control centers, swap costs are usually much smaller than delay costs; (ii) we will only use the first-stage reschedule solutions, and not the second-stage re-routing solutions; (iii) it is consistent with the theme of keeping models as simple as possible to enable solving large-scale instances and making them practically usable. In fact, in many cases, we observed that swap penalties are artificial (just to guide the model), while there are actual costs (e.g., additional meals served) and metrics (e.g., reduction in on-time performance) related to delays. \newline
\end{enumerate}
\item \textbf{Literature review: Some key papers are not cited} \newline
Arikan, M., Deshpande, V., \& Sohoni, M. (2013). Building Reliable Air-Travel Infrastructure Using Empirical Data and Stochastic Models of Airline Networks [Journal Articles]. Operations Research, 61(1), 45–64. \newline
\textit{Response:} Thanks for the suggestion. We have included this citation in the literature review section of the manuscript.\newline
\item \textbf{Use of the MDM for comparison: The baseline MPM method is not convincing. Given that propagated delays are not linear, the use of the average primary delay for every single flight is not the right comparison. Can the authors compare this with Lan et al.'s methods of rescheduling, such as minimize expected propagated delay? Alternatively, the authors could use any other implemented model from the literature.}\newline
\textit{Response:} We respectfully disagree with the reviewer about these concerns and present our rationale for this difference below.
\begin{itemize}
\item \textbf{Use of MDM for comparison}: For our delay generation, we follow the delay generation approach used in \cite{dunbar2014integrated} to draw the delay of each flight independently from a probability distribution. Because of this independence and the fact that each scenario is defined by primary delays alone, the mean-value scenario becomes the scenario with mean delays for each flight, and is not affected by the non-linearity of propagated delays. Comparing the solution of the two-stage stochastic optimization model with the solution of a mean-value model is a classical and recommended approach (please refer to Chapter $4$ of \cite{birge2011introduction}), which is why we used it in our research. Furthermore, the significant improvements of the stochastic solution over both the do-nothing approach and the mean-value model substantiate its effectiveness in making schedules robust to uncertain delays. \newline
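In the terminology of \cite{birge2011introduction}, this comparison measures the value of the stochastic solution. As a sketch in our own notation (not the manuscript's), writing $z^{*}_{\mathrm{TSM}}$ for the optimal value of the two-stage model and $\mathrm{EEV}$ for the expected cost of implementing the mean-value (MDM) solution,
\[
\mathrm{VSS} \;=\; \mathrm{EEV} - z^{*}_{\mathrm{TSM}} \;\ge\; 0,
\]
so the reported improvements of TSM over MDM are precisely estimates of this nonnegative quantity. \newline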
\item \textbf{Comparing with other works}: One of the primary features of the proposed model is the differentiation between rescheduling flights well before departure and delaying flights on the day of departure. To the best of our knowledge, this feature has not been considered in a stochastic programming framework in the literature so far. Comparing our work with any of the proposed approaches is beyond the scope of this research, as it would require modifying that approach to incorporate this differentiation, in which case it would fail to be a true comparison with published results. \newline
\end{itemize}
\item \textbf{Page 7, labeling algorithm:} \newline
\begin{enumerate}
\item Are there other aspects, such as time to maintenance, elapsed time and other factors included in the labels? As these are typical features used in the design of aircraft routes, can you discuss how these are modeled, approximately or exactly? \newline
\textit{Response:} As the focus of our research is on route \textit{adjustment} and not route planning, we did not incorporate these features into the labels used in our implementation. As the solution proposed by our model is a set of flight reschedules, it does not affect flying time or time to maintenance of any aircraft. However, we do wish to clarify that incorporation of any additive rule into the labels is straightforward. Such rules include limits like maximum flying time before maintenance, maximum time before maintenance and maximum landing times at airports. \newline
\item Page 7, line 44: The aspect of adding the starting and ending airports at the end could change the reduced costs, is it not? \newline
\textit{Response:} Use of the dummy source and dummy sink nodes only serves as an implicit validation of whether a route is valid for an aircraft or not. By valid, we mean that the route has to begin at the aircraft's station at the start of the original schedule, and end at the aircraft's station at the end of the original schedule. This serves to retain connectivity with the prior and next days of the schedule. As there are no costs on any of these nodes, it does not affect reduced costs. \newline
\end{enumerate}
\item \textbf{Methodologically, it would be useful to understand why the Benders optimality gap is small, particularly at the root node. Are there some structural properties of the network that make such gaps small in practice? Also please add some discussion by citing other papers in which similar gaps are seen, and papers in which gaps are large.} \newline
\textit{Response:} Thanks for the suggestion. We have accommodated the reviewer's feedback by updating the discussion about the results of Table 2 and citing some relevant research. The manuscript is updated as follows: \newline
\papertext{We believe that the low optimality gap occurs because of the set partitioning structure in the second-stage model in TSM. As set partitioning models are known to have a property called \textit{quasi-integrality} \cite{balas1975set,balas1972set,tahir2019integral}, their linear relaxations yield integer solutions in most cases.} \newline
\item \textbf{Experiments:} \newline
\begin{enumerate}
\item Please provide further data about the timeframe in real operations from which the schedule is drawn, and if the schedule used is a daily/weekly schedule. \newline
\textit{Response:} Thanks for the suggestion. Each of the networks used in our experiments represents a day's schedule of an airline. The schedules were taken from airline schedule data in early $2017$. For completeness, we present the relevant text updated in the manuscript below. \newline
\papertext{Each network is based on daily schedules of two different airlines on different days in early $2017$, and is the planned schedule for a single equipment type.} \newline
\item Is each of these networks related to a single fleet type? The readers should be informed why these problems are solvable by fleet type and that it is meaningful to solve these by separating by fleet type. \newline
\textit{Response:} We thank the reviewer for suggesting this improvement and have incorporated it into our description of Table $1$. As noted in the beginning of Section $4.1$, all our data sets have schedules of aircraft belonging to a single fleet type. We have added the following clarification to the manuscript to motivate this restriction. \newline
\papertext{We avoid solving multiple equipment types together, as swaps across equipment types can cause operational issues like unfilled seats or passenger spillage.} \newline
\item The instances used seem cherry-picked, I am surprised that s1 with a larger number of flights has a fewer number of paths compared to s4 and s5. Also, 5 instances is too small a number to generalize and gather the benefits of the approach. \newline
\textit{Response:} Due to the detailed nature of our response, we break it down into multiple parts below. \newline
\begin{itemize}
\item \textbf{Understanding path counts:} We have added an explanation about how the path counts are calculated. We present the added explanation here for completeness. \newline
\papertext{The ``Number of paths'' values are the maximum number of paths that can be built during column generation. To calculate them, we build a flight network and add a dummy source and dummy sink node for each aircraft based on its original first-departure and last-arrival stations. We then add dummy source arcs to flights departing from the source node station and dummy sink arcs from flights arriving at the sink node station. The number of paths for each aircraft is recursively computed as the number of paths from the aircraft's dummy source to the aircraft's dummy sink. The total number of paths is the sum of paths of all aircraft.} \newline
We would also like to explain how a network with fewer flights can have more paths using an example. Consider a network $N_1$ with $4$ flights $f_1$ from airport A to B, $f_2$ from B to C, $f_3$ from C to D, and $f_4$ from D to A. If flight connections are $f_1 \rightarrow f_2$, $f_2 \rightarrow f_3$, and $f_3 \rightarrow f_4$, there is only one possible path for an aircraft $t$ required to depart from and arrive at A: $(f_1,f_2,f_3,f_4)$. Alternatively, consider a network $N_2$ with flights $f_5$ from A to B, $f_6$ from B to A, $f_7$ from C to A, and connections $f_5 \rightarrow f_6$, $f_5 \rightarrow f_7$. Though $N_2$ has fewer flights than $N_1$, aircraft $t$ has two possible paths in it: $(f_5,f_6)$ and $(f_5,f_7)$. \newline
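For concreteness, the recursion described above can be stated as follows; the notation $P(v)$ for the number of sink-reaching paths from node $v$ and $E$ for the set of connection arcs is ours, introduced only for illustration:
\[
P(v) \;=\;
\begin{cases}
1, & v \text{ is the aircraft's dummy sink},\\[2pt]
\sum_{w\,:\,(v,w)\in E} P(w), & \text{otherwise},
\end{cases}
\]
evaluated from the aircraft's dummy source. Since the flight network is acyclic (connections only go forward in time), memoizing $P$ computes the count in time linear in the number of arcs. \newline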
\item \textbf{Number of instances}: For the comment about the number of instances being too small, we respectfully differ from the reviewer for the following reasons:
\begin{itemize}
\item The number of instances we use is still more than the number used in recent works like \cite{yan2016robust}, \cite{froyland2013recoverable} which use at most two instances.
\item Due to the numerous dimensions in which we explore the behavior of our framework, presenting and discussing the results becomes difficult as the number of instances increases.
\item Nevertheless, to alleviate the reviewer's concerns, we have included one more instance larger in size than those currently considered, and report results for a total of $6$ instances in all tables in the updated manuscript. \newline
\end{itemize}
\item \textbf{Cherry-picked instances}: We addressed this concern about paths in the example described above. To further address concerns about cherry picking and in the interest of complete transparency, we have open-sourced the code and data (after changing airport names to avoid proprietary issues) used for the experiments. We have added this sentence in the introduction to Section $4$ in the manuscript: \papertext{All code and data used for our experiments is publicly available at \url{https://github.com/sujeevraja/stochastic-flight-scheduler}.} \newline
\end{itemize}
\item Do the ``hub'' instances only include delays at one hub? That may be limiting \newline
\textit{Response:} While we agree with the reviewer that consideration of one hub may be limiting in general, all the airlines used in our experiments operate a single hub. Although we only show one-hub cases in our experiments, our model and approach are flexible enough to handle multiple hubs, as they can accommodate different delay distribution parameters for different airports. \newline
\item Is the value of l=30 minutes reasonable, to maintain the competitive advantage of the airline compared to other airlines? \newline
\textit{Response:} The limit $l$ serves as a way for schedulers to control the maximum number of minutes by which each flight can be rescheduled. We picked a value of $30$ minutes as we believe it is reasonable, but users of the model are free to choose any value for $l$. \newline
\item Are the 30 threads for 30 problems or 30 scenarios $\omega \in \Omega$? Can you show experiments that discuss sensitivity to the size of $\Omega$ ? \newline
\textit{Response:} The $30$ threads are to solve the $30$ recourse scenarios $\omega \in \Omega$ in parallel in each iteration of the $L$-shaped method. We have incorporated the reviewer's suggestion by adding a new table and a discussion on how solutions change with changing sizes of $\Omega$. These results are available in Figure $4$ and Table $10$.\newline
\item Can connectivity for crews or passengers be evaluated for the solutions obtained from each approach? \newline
\textit{Response:} As we use route-based recourse formulations, our framework is flexible enough to incorporate features from other airline domains. This includes hard constraints like minimum crew/passenger connection times that can be checked while building routes, and soft constraints like goodwill loss per passenger that can be added as penalties to route costs. While all of this is indeed possible, it makes the model impractical due to high problem complexity and the need to consume data from disparate airline systems. \newline
\item Page 12: I did not understand the reason for the budget B to be increased from 0.25 to 2, especially having the budget greater than 1. It seems like a very large budget to allow flights to be delayed as much as twice the original primary flight delay. Please explain this further. \newline
\textit{Response:} We chose the values of all parameters explored in the computational results section, including the budget fraction parameter, purely to show the consistency of quality and performance of our framework over a wide variety of use cases. We do agree with the reviewer that use of a value of $2$ for B would not be practical, as it allows the model to suggest large reschedules. \newline
\item Tables 4 and 5 – the `distributions' refer to primary delays, correct? \newline
\textit{Response:} Yes, the distributions refer to primary delays of each flight. \newline
\item Tables 7 – what are the units for runtimes? \newline
\textit{Response:} All solution times are reported in seconds; this is mentioned in the first part of the computational results section. \newline
\end{enumerate}
\end{enumerate}
\noindent \textbf{Minor edits:}
\newline
\begin{enumerate}
\item Page 5, line 40: spelling of ``scenario''. \newline
\textit{Response:} This typo has been fixed in the updated version of the manuscript. \newline
\item Page 5, line 44: ``optimality of the linear program''. \newline
\textit{Response:} We have fixed this in the updated version of the manuscript. \newline
\end{enumerate}
\bibliographystyle{spmpsci}
\section*{Introduction}
The study of \ensuremath{e^{+}e^{-}\,} annihilation has always been one of the most useful means
of exploring QCD and determining a value for $\ensuremath{\alpha_s}(M_z)$, the strong coupling
constant. Multi-jet rates\footnote{The $n$-jet rate is defined as $R_{n}(Q) =
\sigma_{n\textrm{-}jet}/\sigma_{hadrons}$} in particular enable us to
examine the perturbative nature of QCD with long distance effects kept
comparatively low. It is also important to make sure that the shape variable
being investigated is \textit{both} collinear \textit{and} infra-red safe.
Jet rates can satisfy all of these criteria as long as care is taken in the
choice of the jet clustering algorithm. In this paper we calculate the
leading and next-to-leading logarithmic contribution to the four-jet rate for
\ensuremath{e^{+}e^{-}\,} annihilation using the Durham algorithm. (For an explanation of various
algorithms and the reasoning behind choosing the Durham one see
refs.~\cite{durham,brown,jetalg}). We then obtain an expression for the jet
rate in terms of a dimensionless jet resolution parameter, \ensuremath{y_{cut}}, which can
be considered as a measure of how well we are able to resolve two
approximately collinear partons. According to the Durham algorithm we define
$\ensuremath{y_{cut}}=Q_0^2/Q^2$, where $Q \sim \sqrt{s}$ is the scale of the jet-production
process and hence the cut-off energy scale $Q_0$ can be considered to be the
energy threshold below which the process starts to become non-perturbative.
In the region of small \ensuremath{y_{cut}} ($\ll1$) the emitted gluons are predominantly
soft and collinear, resulting in the logarithmic enhancement of higher orders
\cite{muel,catani}. It is therefore necessary to resum them to all orders in
\ensuremath{\alpha_s}\mbox{} to obtain a reliable prediction for the four-jet rate.
\subsection*{Leading Logarithms and Exponentiation}
First it is important to stress what we are actually calculating in the
resummation procedure. Using the coherent branching formalism
\cite{catani,pQCD,fadin}, we are able to resum \textit{exactly} all
contributions to the shape variable at leading logarithms, LL ($\ensuremath{\alpha_s}^n
L^{2n}$) and next-to-leading logarithms, NLL ($\ensuremath{\alpha_s}^n L^{2n-1}$) in the
perturbative expansion, where $L=-\ln(\ensuremath{y_{cut}})$. This means that
sub-leading terms are not completely reproduced, and they are therefore
dropped in our calculation.
The idea behind exponentiation is to increase the domain of applicability of
the shape variable such that it extends into the region of $\ensuremath{\alpha_s} L \leq 1$.
The result of this procedure is to obtain a closed function of the form
$\mathcal{F}\left(L g_1(\ensuremath{\alpha_s} L) + g_2(\ensuremath{\alpha_s} L)\right)$, where $g_1(\ensuremath{\alpha_s} L)$
resums all leading-logarithmic contributions and $g_2(\ensuremath{\alpha_s} L)$ resums the
next-to-leading ones such that when expanded the whole perturbation series is
reproduced down to terms of the form $\ensuremath{\alpha_s}^n L^m $, where $n\le m\le 2n$. For
the jet fractions being studied, a simple exponentiation does not arise. It
therefore only makes sense to calculate the LL and NLL contributions of the
perturbation series.
\section*{Calculation}
\indent The primary aim of this paper is to find an analytic expression
for the $4$-jet rate, \(R_{4}(y_{cut}) \). The simplest way to do this
is to work in terms of a generating function defined by
\begin{equation}
\phi^{p}(Q,Q_{0};u)= \sum_{n=0}^{\infty}u^n R^p_n(Q,Q_0)
\end{equation}
where $R^p_n(Q,Q_0)$ is the probability of finding $n$-partons of a
particular type in the final state of a process, $p$, and $u$ is a jet label
to distinguish each of the probabilities. In this case we are dealing with
\epem~$ \rightarrow\!$~ hadrons, therefore $\phi^{(\ensuremath{e^{+}e^{-}\,})} = [\phi_q]^2$, where
$\phi_q$ is the generating function for a single quark to branch,
\begin{equation}
\phi_q(Q,Q_0;u)=u+\int\limits^{\scriptscriptstyle Q}_{\scriptscriptstyle Q_{0}} \,
\frac{\mathrm{d} \tilde{q}}{\tilde{q}} \, \int\limits^{\scriptscriptstyle
1}_{\scriptscriptstyle {Q_0/\tilde{q}}}\mathrm{d} z \,
\ensuremath{\alpha_s} (z\tilde{q})\,\frac{C{\!_F}}{\pi} \left( \frac{2}{z}-\frac{3}{2}
\right)\,[\phi_g(z\tilde{q},Q_0;u)-1]\, .
\end{equation}
The $n$-jet rate, $R_n$, is obtained simply by differentiating the
generating function $n$ times at $u=0$:
\begin{equation}
R_{n}(y_{cut}) = \frac{1}{n!}\left(\frac{\partial}{\partial
u}\right)^{\!n}\! \left[ \phi_q(Q,Q_{0};u) \right]^2 \biggr|_{u=0}.
\end{equation}
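In practice this differentiation is conveniently organised with the Leibniz rule; writing $\phi_q^{(k)}$ for the $k$th $u$-derivative of $\phi_q$ at $u=0$, the product structure gives
\begin{equation*}
R_{n}(y_{cut}) = \frac{1}{n!}\sum_{k=0}^{n}\binom{n}{k}\,\phi_q^{(k)}\,\phi_q^{(n-k)}
= \sum_{k=0}^{n}\frac{\phi_q^{(k)}}{k!}\,\frac{\phi_q^{(n-k)}}{(n-k)!}\,,
\end{equation*}
i.e.\ the $n$-jet rate is the convolution of the multiplicity distributions of the two quark jets.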
We find, from the application of the coherent branching formalism, that the
generating functions obey the following implicit coupled equations \cite{durham}:
\begin{equation}
\phi_{q}(Q,Q_{0};u) = u^{2}\exp\left(2\int_{Q_{0}}^{Q} \mathrm{d} q\,
\Gamma_{q}(Q,q)[\phi_{g}(q,Q_{0};u) -1]\right)
\end{equation}
and
\begin{eqnarray}
\phi_{g}(Q,Q_{0};u) =
u\,\exp\biggr(\int_{Q_{0}}^{Q}\mathrm{d} q
\{\Gamma_{g}(Q,q)[\phi_{g}(q,Q_{0};u)-1] -\Gamma_{f}(q)\} \biggr) \quad \quad
\qquad \qquad \qquad \qquad\nonumber \\
\times \biggr(1+u\int_{Q_{0}}^{Q}\mathrm{d} q\,\Gamma_{f}(q)
\exp\biggr( \int_{Q_{0}}^{q}\mathrm{d} q'\{[2\Gamma_{q}(q,q')-\Gamma_{g}(q,q')]
[\phi_{g}(q',Q_{0};u)-1]+\Gamma_{f}(q')\}\biggr)\biggr) .
\end{eqnarray}
where the emission probabilities are defined as
\begin{eqnarray}
\Gq{Q}{q} &=& \frac{2C{\!_F}}{\pi} \frac{\ensuremath{\alpha_s}(q)}{q}
\left(\ln\frac{Q}{q}-\frac{3}{4} \right) , \\
\Gg{Q}{q} &=& \frac{2C{\!_{\!A}}}{\pi} \frac{\ensuremath{\alpha_s}(q)}{q}
\left(\ln\frac{Q}{q}-\frac{11}{12} \right) , \\
\Gf{q}\quad \, &=& \frac{N_{f}}{3\pi} \frac{\ensuremath{\alpha_s}(q)}{q}.
\end{eqnarray}
The 2-jet limit is important as the jet rate becomes semi-inclusive
and exponentiation holds exactly. This gives
\begin{eqnarray}
R_2(y_{cut}) = \exp\left(\frac{C{\!_F} a L}{2} (3-L)-\ensuremath{\beta_0}\frac{C{\!_F} a^2 L^3}{12}\right) ,
\end{eqnarray}
where $L=\ln(1/y_{cut})$, $a=\ensuremath{\alpha_s}(Q)/\pi$ and we have used $\ensuremath{\beta_0}=(11C{\!_{\!A}}-2N{\!_f})/3$. The 3-jet case was evaluated
in \cite{charles} and we proceed in a similar way.
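As a quick consistency check on conventions, expanding (9) in powers of $a$ exhibits the expected LL/NLL pattern,
\begin{equation*}
R_2(y_{cut}) = 1 + \frac{C{\!_F} a}{2}\left(3L-L^2\right) + \frac{C{\!_F}^2 a^2}{8}\,L^4
- \left(\frac{3\,C{\!_F}^2}{4}+\frac{\ensuremath{\beta_0} C{\!_F}}{12}\right) a^2 L^3 + \mathcal{O}(a^2L^2),
\end{equation*}
from which the LL ($a^nL^{2n}$) and NLL ($a^nL^{2n-1}$) coefficients can be read off order by order.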
Firstly, we find that (3) gives, in the $n=4$ case \cite{durham},
\begin{eqnarray}
R_4(y_{cut}) &=& 2 R_2(y_{cut}) \left( \int^{Q}_{Q_0} \mathrm{d} q \, \Gq{Q}{q}\Dg{q}
\int^{q}_{Q_0} \mathrm{d} q' \, \Gg{q}{q'} \Dg{q'} \right) \nonumber \\
&&+ 2R_2(y_{cut}) \left(\int^{Q}_{Q_0} \mathrm{d} q \, \Gq{Q}{q}\Dg{q}
\int^{q}_{Q_0} \mathrm{d} q' \, \Gf{q'} \Df{q'} \right) \nonumber \\
&&+ R_3(y_{cut}) \left( \int^Q_{Q_0}\mathrm{d} q \Gq{Q}{q} \Dg{q} \right) ,
\end{eqnarray}
where we have introduced the Sudakov form factors
\begin{eqnarray}
\Dq{Q} &=& \exp\left( -\int^{Q}_{Q_0}\mathrm{d} q \,\Gq{Q}{q} \right), \\
\Dg{Q} &=& \exp\left( -\int^{Q}_{Q_0}\mathrm{d} q \,[\,\Gg{Q}{q} +\Gf{q}] \right), \\
\Df{Q} &=& \exp\left( -\int^{Q}_{Q_0}\mathrm{d} q
\,[\,2\Gq{Q}{q} - \Gg{Q}{q} -\Gf{q}] \right).
\end{eqnarray}
We need only work with the one-loop definition of the strong coupling
constant,
\begin{equation}
\ensuremath{\alpha_s}(Q) = \frac{\ensuremath{\alpha_s}(\mu)}{1+\frac{\ensuremath{\beta_0}
\ensuremath{\alpha_s}(\mu)}{2\pi}\ln\big({\frac{Q}{\mu}}\big)},
\end{equation}
as higher order corrections will be sub-leading. Even at this order we are
still faced with an extremely complicated set of nested integrals.
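To see why a single power of \ensuremath{\beta_0} suffices at this accuracy, note that (14) with $\mu=Q$ may be expanded under the integrals as
\begin{equation*}
\ensuremath{\alpha_s}(q) = \ensuremath{\alpha_s}(Q)\left[1 - \frac{\ensuremath{\beta_0}\,\ensuremath{\alpha_s}(Q)}{2\pi}\ln\frac{q}{Q}
+ \mathcal{O}\!\left(\ensuremath{\alpha_s}^2L^2\right)\right] ,
\end{equation*}
so a contribution carrying $k$ powers of \ensuremath{\beta_0} enters at order $a^NL^{2N-k}$, and only $k\le1$ survives at NLL accuracy.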
Therefore, as in \cite{charles} we proceed by expressing $R_4(y_{cut})$ as
\begin{equation}
R_4(y_{cut}) = R_4\biggr|_{\ensuremath{\beta_0}=0} + \ensuremath{\beta_0} \frac{\partial
R_4}{\partial \ensuremath{\beta_0}}\biggr|_{\ensuremath{\beta_0}=0}.
\end{equation}
This is permissible for any jet multiplicity evaluated at next-to-leading
logarithmic order because in general we will have
\begin{eqnarray}
R_n &=& \fanC{12} a L^2 + \fanC{11} a L + \cdots \nonumber \\
&& + \fanC{24} a^2 L^4 + \fanC{23} a^2 L^3 +\cdots \nonumber \\
&& \vdots \nonumber \\
&&+ \fanC{n\, 2n}a^n L^{2n} + \fanC{n\, 2n-1}a^n L^{2n-1} +\cdots,
\end{eqnarray}
where the coefficients \fanC{p\, q} are either \ensuremath{\beta_0} independent (\fanC{p\,
2p}) or contain a single \ensuremath{\beta_0} (\fanC{p\, 2p-1}). All other \ensuremath{\beta_0} dependence is
contained in the strong coupling constant.
We note that
\begin{equation}
\frac{\partial [\ensuremath{\alpha_s}{\scriptstyle(Q)}]^m}{\partial \ensuremath{\beta_0}}\Biggr|_{\ensuremath{\beta_0}=0} =
-m \frac{[\ensuremath{\alpha_s}{\scriptstyle(Q)}]^{m+1}}{2\pi}\ln \!
{\textstyle\left(\frac{Q}{Q_0}\right)} \sim a^{m+1}L .
\end{equation}
It is now apparent that beyond the first derivative there are only terms
of the form $a^{n}L^{2n-2}$, which in the NLL approximation can be dropped. The
expansion (15) is therefore valid. In fact, this expansion
greatly simplifies the calculation by enabling us to work with terms
evaluated with \ensuremath{\beta_0} equal to zero. In doing so, the coupling \ensuremath{\alpha_s}\, can
be treated as a constant and hence no longer depends on the integration
variable. Proceeding in this way, we calculate the four-jet rate to be
\begin{eqnarray*}
R_4&=&\lf{{C{\!_F}^2}}{{C{\!_{\!A}}^2}}\Big({e^{-2 (A+F)}} \big({{({e^A}-1)}^2} (2+3
C{\!_F} a L) \\ & &
\hspace{1cm} -{\sqrt{C{\!_{\!A}} a}}\, {e^A} ({e^A}-1) (-3+12 F+2 L) {\sqrt{\pi }}\, \mbox{erf}\big({\sqrt{A}}\big)\big)\Big) \\ & & +
\lf\CfC{\!_{\!A}}\Big(\lf{1}{24} {e^{-2 F}} \big(-24 {e^{-2 A}} ({e^A}-1) (2+3 a C{\!_F} L) \\ & &
\hspace{1cm} -4 {\sqrt{C{\!_{\!A}} a}}\, {e^{-A}} (2+3 {e^A} (-3+12 F+2 L)) {\sqrt{\pi }}\, \mbox{erf}\big({\sqrt{A}}\big) \\ & &
\hspace{1cm} -(12+a L (11 C{\!_{\!A}} -6 C{\!_F} (-9+12 F+2 L))) \pi\, {{\mbox{erf}\big({\sqrt{A}}\big)}^2} \\ & &
\hspace{1cm} +2 {\sqrt{C{\!_{\!A}} a}}
(-7+72 F+12 L) {\sqrt{2 \pi }}\,
\mbox{erf}\big({\sqrt{2A}}\big)\big)\Big)- \lf{11}{3}\,
\mbox{{\Large$\varphi$}} \\ & &
+\lf{\beta_0 }C{\!_{\!A}} \Bigg(\lf{{C{\!_F}^2}}{{C{\!_{\!A}}^2}}\lf{1}{12} \Big({e^{-2 (A+F)}} \big(-2 {\sqrt{C{\!_{\!A}} a}} {e^A} (3+2 A (-3+4 F)) {\sqrt{\pi }}\, \mbox{erf}\big({\sqrt{A}}\big)\big) \\ & &
\hspace{1.5cm}\big(1+{e^A} \big(-1+{\sqrt{A}} {\sqrt{\pi }}\, \mbox{erf}\big ({\sqrt{A}}\big)\big)\big)\Big) \\ & &
\hspace{1cm}+\lf\CfC{\!_{\!A}}\lf{1}{12}\Big( {e^{-2 (A+F)}}2 C{\!_F} a L (1+2 {e^A}+4 ({e^A}-1)F) \\ & &
\hspace{1.5cm} \big(1+{e^A} \big(-1+{\sqrt{A}} {\sqrt{\pi }}\,
\mbox{erf}\big({\sqrt{A}}\big)\big)\big) \Big) \\ & &
\hspace{1cm}+{\sqrt{\lf\CfC{\!_{\!A}}}}\lf{1}{24} \Big({\sqrt{C{\!_F} a}}\, {e^{-2
(A+F)}} \big(2 {e^A} (5+{e^A} \\ & &
\hspace{1.5cm}+ A (-4-2{e^A}(3-8F))) {\sqrt{\pi }}\, \mbox{erf}\big({\sqrt{A}}\big) \\ & &
\hspace{1.5cm}- {e^{2 A}}(9-4A(3-8F)) {\sqrt{2 \pi }}\,
\mbox{erf}\big({\sqrt{2A}}\big)\big) \Big) \\ & &
\hspace{1cm}+\lf{1}{12} C{\!_F} aL\, {e^{-2 (A+F)}} (1-{e^A}(1-3{e^A})-8F(1-{e^A})\\&&
\hspace{1cm}+2F{e^{2A}} \pi {\mbox{erf\,}^2}\big({\sqrt{A}}\big)) + \mbox{{\Large$\varphi$}} \Bigg), \\
\end{eqnarray*}
Here erf$(x)$ is the error function, defined as $\frac{2}{\sqrt{\pi}}\int^x_0
e^{-y^2} \mathrm{d} y$, and erf\mbox{}i$(x) =$ erf$(i x)/i$. We have also defined
$A=C{\!_{\!A}} aL^2/4,\ F=C{\!_F} aL^2/4$ and {\large$\mathcal{G}$}$(x,z)= x\int
_{0}^{z}{e^{x {y^2}}} \mbox{erf}(y)\mathrm{d} y$. Attempts were made to solve
{\large$\mathcal{G}$} exactly, but no closed form was found. The integral
appears to be a generalisation of the error function and hence cannot be
evaluated in closed form, except in certain cases. Various properties
of this function are given in \cite{rosser}.
\newpage
\subsection*{Properties of the Four-Jet Rate}
With the complete result calculated, we are able to reproduce the exact LL and
NLL coefficients of \ensuremath{\alpha_s}\mbox{} at any order. The first three orders in
\ensuremath{\alpha_s}/$\pi$ are given below, i.e.,
$R_4(y_{cut})=a^2(B_4L^4+B_3L^3+\mathcal{O}(L^2))+
a^3(C_6L^6+C_5L^5+\mathcal{O}(L^4))+
a^4(D_8L^8+D_7L^7 +\mathcal{O}(L^6))+
\cdots$, with
\begin{eqnarray*}
B_4&=& \lf{1}{8} C{\!_F}^2+\lf{1}{48}\CfC{\!_{\!A}} . \\
B_3&=& \lf{-3}{4}C{\!_F}^2-\lf{5}{18}\CfC{\!_{\!A}}+\lf{1}{36}C{\!_F} N_f. \\
C_6&=& \lf{-1}{16}C{\!_F}^3 -\lf{1}{48}C{\!_F}^2C{\!_{\!A}} -\lf{7}{2880}\CfC{\!_{\!A}}\!^2. \\
C_5&=& \lf{9}{16}C{\!_F}^3+\lf{71}{144}C{\!_F}^2C{\!_{\!A}} +\lf{217}{2880}\CfC{\!_{\!A}}\!^2
-\lf{41}{720}C{\!_F}^2N_f-\lf{1}{120} C{\!_{\!A}}C{\!_F} N_f . \\
D_8&=& \lf{1}{64}C{\!_F}^4 +\lf{1}{128}C{\!_F}^3C{\!_{\!A}} +\lf{1}{512}C{\!_F}^2C{\!_{\!A}}\!^2
+\lf{1}{5120}\CfC{\!_{\!A}}\!^3. \\
D_7&=& \lf{-3}{16}C{\!_F}^4 -\lf{17}{64}C{\!_F}^3C{\!_{\!A}} -\lf{1439}{17280}C{\!_F}^2C{\!_{\!A}}\!^2
-\lf{2371}{241920}\CfC{\!_{\!A}}\!^3 +\lf{323}{10080}C{\!_F}^3 N_f
+\lf{31}{3024}C{\!_F}^2C{\!_{\!A}} N_f \\
&& +\lf{1}{840}\CfC{\!_{\!A}}\!^2 N_f. \\
\end{eqnarray*}
This is in agreement with \cite{durham} which gives the $B_{4,3}$
coefficients. The $C_{6,5}$ coefficients were in addition calculated by
expanding out the integral equation (10) as a function of \ensuremath{\alpha_s}. Another test
was to calculate $R_4$ in the large $N_c$ limit ($N_c$ is the number of
colours) to the order of leading logarithms. This greatly simplifies the
equations as $C{\!_{\!A}} \rightarrow N_c$, $C{\!_F} \rightarrow N_c/2$ and $N_f$ can be disregarded. Eq.~(4) now collapses down to
\begin{equation}
\phi(Q,Q_{0};u) = u^{2}\exp\left(2\int_{Q_{0}}^{Q} \mathrm{d} q\,
\Gamma_{q}(Q,q)\left[\frac{1}{u} \phi(q,Q_{0};u) -1\right]\right).
\end{equation}
Noting also that at leading logarithmic order $R_4$ is independent of
$\beta_0$, we can safely set it to zero. We then get
\begin{eqnarray}
R_4^{N_c}= \frac{1}{4}e^{-3A} \left(6\, -\, 8e^A+\, 4\sqrt{A}\,
e^A(1-2e^A)\sqrt{\pi}\, \mbox{erf}\sqrt{A}\, -\, (1-2A)\, e^{2A}\, \mbox{erf}\,
^2\sqrt{A}\, \right. \nonumber \\
\left. +2e^{2A}(1+2\sqrt{A}\, \sqrt{2\pi}\, \mbox{erf}\sqrt{2A}) \right).
\end{eqnarray}
This is in agreement with the full NLL result in the appropriate limit.
We also note that in the pseudo-abelian limit, in which $C{\!_{\!A}}$ and $N_f$ simply go
to zero, exact exponentiation holds. We find that this gives a
reasonably good approximation to the full non-abelian case, to within about
15--20\%.
\newpage
\section*{Conclusion}
To conclude, we have found an analytic expression for $R_4(y_{cut})$ which exactly
resums the leading and next-to-leading logarithmic contributions to all
orders in \ensuremath{\alpha_s}. Previous work on including the leading and next-to-leading
logarithms to all orders \cite{signer} was performed by solving (10)
numerically. We propose that it is beneficial to have an explicit result
since it is then possible to extract the purely LL and NLL contributions and
drop the incomplete sub-leading terms. Most importantly, we will be able to
utilise the result to address the so-called `renormalisation scale ambiguity'
in combining the resummed result with the fixed order one. We will be
implementing a novel technique and hope to report on this in the future.
\subsection*{Acknowledgements}
I would firstly like to thank C.J.~Maxwell for suggesting the problem,
reading the manuscript and providing helpful comments. I am also grateful to
the U.K. Particle Physics and Astronomy Research Council, \mbox{(PPARC)}, for
a research~studentship.
Finally I would like to thank L.J.~Dixon for pointing out an error in the
original calculation and for useful subsequent conversations.
\begin{document}
\title{On some Hermite series identities and their applications to
Gabor analysis}
\date{\today}
\author{Jakob Lemvig\footnote{Technical University of Denmark, Department of Applied Mathematics and Computer Science, Matematiktorvet 303B, 2800 Kgs.\ Lyngby, Denmark, E-mail: \protect\url{[email protected]}}\phantom{$\ast$}}
\xdef\@thefnmark{}\@footnotetext{2010 {\it Mathematics Subject Classification.} Primary
42C15. Secondary: 42C05, 33C45}
\xdef\@thefnmark{}\@footnotetext{{\it Key words and phrases.} Hermite functions, frame, frame set, Gabor
system, Zak transform, Zibulski-Zeevi matrix}
\maketitle
\thispagestyle{plain}
\begin{abstract}
We prove some infinite series identities for the Hermite functions. From these identities we disprove the Gabor frame set conjecture for Hermite functions of order $4m+2$ and $4m+3$ for $m\in \{0\} \cup \numbersys{N}$. The results hold not only for Hermite functions, but for two large classes of eigenfunctions of the Fourier transform associated with the eigenvalues $-1$ and $i$, and the results indicate that the Gabor frame set of all such functions must have a rather complicated structure.
\end{abstract}
\section{Introduction}
\label{sec:non-frame-property}
Since John von Neumann's claim of completeness of the
coherent state subsystems generated by the
Gaussian in his work on
quantum mechanics \cite{MR0223138}, it has been of interest in mathematical physics and
analysis to determine when the set of coherent
states $\gaborG{g}:= \set{\myexp{2\pi i bm \cdot}g(\cdot-ak)}_{k,m\in
\numbersys{Z}}$ is complete in various function spaces, e.g., $L^2(\numbersys{R})$. In
engineering, $\gaborG{g}$ is the so-called Gabor system
generated by the window function $g \in L^2(\numbersys{R})$ with time-frequency
shifts along the lattice $a\numbersys{Z} \times b\numbersys{Z}$ in phase space.
For most
applications in signal processing and functional analysis,
completeness of $\gaborG{g}$ is nowadays not considered to be
sufficient; for instance, to guarantee unconditionally $L^2$-convergent and
stable expansions of functions in $L^2(\numbersys{R})$ and to provide
characterizations of classical function spaces, one needs a stronger
property of $\gaborG{g}$, namely that the Gabor system constitutes a
frame for $L^2(\numbersys{R})$, i.e, existence of constants $A,B>0$, termed frame
bounds, such that
\begin{equation}
A \norm{f}^2 \le \sum_{k,m \in \numbersys{Z}} \abs{\innerprod{f}{\myexp{2\pi i bm
\cdot}g(\cdot-ak)}}^2 \le B \norm{f}^2 \quad \text{for all } f
\in L^2(\numbersys{R}).\label{eq:frame-def}
\end{equation}
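For orientation, we recall a standard example: for the window $g=\chi_{\itvco{0}{1}}$ and $a=b=1$, the Gabor system $\gaborG[1][1]{g}$ is an orthonormal basis for $L^2(\numbersys{R})$, so that \eqref{eq:frame-def} holds with $A=B=1$.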
In this work we are interested in the frame properties of Gabor
systems generated by Hermite functions.
We define the $n$th Hermite function $h_n$ by
\[
h_n(x) = (c_n)^{-1/2} \myexp{\pi x^2} \left(\frac{d^n}{dx^n} \myexp{-2\pi x^2}\right) ,
\]
where $c_n = (2\pi)^n 2^{n-1/2} n!$
for $n \in \numbersys{N} \cup \{0\}$. The class of Hermite functions forms a natural
continuation of the study of von Neumann~\cite{MR0223138} and
Gabor~\cite{GaborTheory1946} as it contains the Gaussian
as a special case, $n=0$.
The \emph{frame set} of a window function $g \in L^2(\numbersys{R})$, denoted by
$\mathscr{F}{(g)}$, is the parameter
values $(a,b)\in\mathbb{R}_+^2$ for which the associated Gabor system
$\gaborG{g}$ is a frame for $L^2(\numbersys{R})$. Hence, we will study the set $\mathscr{F}{(h_n)}$, or to be
more precise, properties of its compliment. That is, following \cite{MR3232589}, we will ask what
prevents $\gaborG{g}$ from generating a frame? Our answers will show that the
Gabor frame set of Hermite functions must have a rather complicated
structure. Indeed, we will derive new obstructions
of the frame property for two classes of eigenfunctions of the Fourier
transform associated with the eigenvalue $-1$ and $i$, respectively,
which, in particular, disproves a conjecture on Hermite functions by Gr\"ochenig~\cite{MR3232589}.
To understand Gr\"ochenig's conjecture, let us recall what is known
about $\mathscr{F}(h_n)$. Since Hermite functions have exponential decay
in time and frequency domain, it is known, see e.g., \cite{MR3232589},
that the upper frame bound holds, that the set $\mathscr{F}(h_n)$ is
open in $\numbersys{R}^2$ and that $\mathscr{F}(h_n) \subset \setprop{(a,b)\in \mathbb{R}^2_+
}{ ab < 1}$. For the Gaussian $h_0$, the necessary condition $ab<1$ for the
frame property is also
sufficient. This important result was
conjectured by Daubechies and Grossmann~\cite{MR924682} and proved by
Lyubarskii~\cite{MR1188007} and by Seip and Wallst\'en \cite{MR1173117,MR1173118}.
The proof relies on analytic properties of the short-time Fourier
transform of the Gaussian and the fact that
the Bargmann transform of an $L^2$-function is analytic.
In \cite{MR2529475,MR2292280} Gr\"ochenig and Lyubarskii obtained the
following generalization: for any pair $(a,b)$ in $\mathbb{R}^2_+$ with $ab < \frac{1}{n+1}$,
the Gabor family $\gaborG{h_n}$ is a frame. Finally, Lyubarskii and
Nes~\cite{MR3027914} proved that the frame set of any sufficiently nice, \emph{odd} window
function, in particular, $h_{2m+1}$, $m \in \numbersys{N} \cup \{0\}$, cannot contain the hyperbolas
$ab= \tfrac{p}{p+1}$ for any $p \in \numbersys{N}$.
As no other obstructions for the frame property of $h_n$ were known,
this led Gr\"ochenig~\cite{MR3232589} to conjecture that
the frame set for the even Hermite
functions is the largest possible set $\mathscr{F}{(h_{2m})}=\setprop{(a,b)\in
\numbersys{R}^2_+}{ab<1}$, and that the frame set for the odd Hermite functions
is $\mathscr{F}{(h_{2m+1})}=\setpropsmall{(a,b)\in
\numbersys{R}^2_+}{ab<1, ab \neq \tfrac{p}{p+1}, p \in \numbersys{N}}$, $m \in \numbersys{N} \cup \{0\}$.
The conjecture is true for $h_0$ by the above mentioned results. The
conjecture for $h_1$ is due to
Lyubarskii and Nes~\cite{MR3027914}, and this paper will not shed new
light on this case. However, our results show that the conjecture is false for
$h_n$ with $n=4m+2$ and $n=4m+3$, $m \in \numbersys{N}\cup\{0\}$. We also give numerical
evidence in Section~\ref{sec:numer-exper} that it is false for $n=4$ and $n=5$, which leads us to
believe that the conjecture is also false for $n=4m$ and $n=4m+1$
whenever $m>0$.
Our proofs are based on Zak transform methods and certain infinite
series identities which are of independent interest. As an example, we
will show that $h_{4m+2}$, $m \in \numbersys{N}\cup\{0\}$, satisfies
\begin{equation}
\sum_{k\in \numbersys{Z}} (-1)^k h_{4m+2}(\sqrt{2}(k+\tfrac{p}{4}))= 0 \quad \text{for $p \in \set{1,3}$}.
\label{eq:h2-identity}
\end{equation}
For $m=0$ the identity concerns $h_2$, and it reads, for $p=1$,
\begin{equation}
\sum_{k\in \numbersys{Z}} (-1)^k (8\pi (k+\tfrac{1}{4})^2-1)
\myexp{-2\pi\bigl(k+\tfrac{1}{4}\bigr)^2}= 0 ,
\label{eq:h2-identity-expl}
\end{equation}
which is illustrated in Figure~\ref{fig:hermite}. As we shall see in
Section~\ref{sec:some-infinite-series}, the identities in
\eqref{eq:h2-identity} are even true for any sufficiently nice
function that is an eigenfunction of the Fourier transform with
eigenvalue $-1$.
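Indeed, the passage from \eqref{eq:h2-identity} to \eqref{eq:h2-identity-expl} is a direct computation: from the definition of the Hermite functions,
\[
h_2(x) = (c_2)^{-1/2} \myexp{\pi x^2} \frac{d^2}{dx^2} \myexp{-2\pi x^2}
= (c_2)^{-1/2}\, 4\pi \left(4\pi x^2-1\right) \myexp{-\pi x^2},
\]
so, up to the positive constant $(c_2)^{-1/2}4\pi$, the substitution $x=\sqrt{2}(k+\tfrac{1}{4})$ in \eqref{eq:h2-identity} with $m=0$ and $p=1$ yields \eqref{eq:h2-identity-expl}.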
\begin{figure}
\centering
\includegraphics{hermite-figure0.pdf}
\caption{The graph of $h_2$ and an illustration of the identity
\eqref{eq:h2-identity-expl}, where the samples for even and odd $k \in \numbersys{Z}$ are marked with
blue circles and red squares, respectively. Note that the sampling has no
simple symmetries, e.g., $h_2(-\sqrt{2}\,3/4)\neq h_2(\sqrt{2}/4)$.}
\label{fig:hermite}
\end{figure}
From the identity \eqref{eq:h2-identity} it follows that the Zak
transform $Z_{\sqrt{2}}$ of $h_{4m+2}$ has two zeros in $\itvco{0}{1}^2$, located one-half
apart on a horizontal line. By standard Zak transform methods in Gabor analysis,
detailed in Section~\ref{sec:obstruc},
it follows that
$\gaborG[1/\sqrt{2}][1/\sqrt{2}]{h_{4m+2}}$ is not a
frame. Note that it is not our focus to give a detailed analysis of
the frame set of specific Hermite functions, e.g.,
$\mathscr{F}{(h_2)}$. Instead, we are interested in determining values of $a$ and
$b$ for which $\gaborG{g}$ fails to be a frame for every nice window
$g$ in, e.g., the class of eigenfunctions of the Fourier transform
associated with the eigenvalue $-1$ to which all Hermite
functions of the
form $h_{4m+2}$, $m\in \numbersys{N} \cup\{0\}$, belong. Previously, not a single
obstruction for the frame property was known for any of the functions in this class.
\section{Preliminaries}
\label{sec:preliminaries}
We begin by recalling some properties of
the Hermite functions and the Zak transform.
\subsection{Hermite functions}
\label{sec:hermite-functions}
Hermite functions arise in many different
contexts, e.g., as eigenfunctions of the Hermite operator
$H=-\frac{d^2}{dx^2}+(2\pi x)^2$.
What is more important for us is that the Hermite functions are also
eigenfunctions for the Fourier transform:
\[
\hat{h}_n(\gamma) = (-i)^n h_n(\gamma) \quad a.e.\ \gamma \in \numbersys{R}.
\]
Here, the Fourier transform is defined for $f \in L^1(\numbersys{R})$ by
\[
\ft
f(\gamma)=\hat f(\gamma) = \int_{\numbersys{R}} f(x)\myexp{-2 \pi i
\gamma x} \mathrm{d}x
\]
with the usual extension to $L^2(\numbersys{R})$. We let $H_j$, $j=0,1,2,3$,
denote the eigenspace of the Fourier transform corresponding to the
eigenvalue $(-i)^j$. More specifically, since $\set{h_n}_{n=0}^\infty$
is an orthonormal basis for $L^2(\numbersys{R})$,
\[
H_j = \ker (\ft-(-i)^jI) = \overline{\Span{\setprop{h_{4m+j}}{m \in \numbersys{N} \cup \{0\}}}} =
\setprop{\sum_{m \in \numbersys{N} \cup \{0\}} c_m h_{4m+j}}{(c_m)\in \ell^2(\numbersys{N} \cup \{0\})}.
\]
By $\mathcal{F}^2\{f(x)\}=f(-x)$, it follows that any function in $H_j$, $j=0,2$, is
even and that any function in $H_j$, $j=1,3$, is odd.
Since the Fourier transform is a unitary operator, it preserves the
frame property, that is, the system $\gaborG{g}$ is a frame if and
only if the Fourier transform of the system $\gaborG[b][a]{\hat{g}}$
is a frame. Since the eigenvalue of the Hermite functions is of
modulus one, we immediately have the following simple result. It
implies that the frame set of Hermite functions is symmetric about the line $a=b$,
i.e., $(a,b)\in \mathscr{F}(h_n)$ if and only if $(b,a)\in \mathscr{F}(h_n) $.
\begin{lemma}
Let $a,b>0$, $A,B>0$, and let $g \in H_j$ for some $j=0,1,2,3$. Then the
following are equivalent:
\begin{enumerate}[(i)]
\item $\gaborG{g}$ is a frame with bounds $A$ and $B$,
\item $\gaborG[b][a]{g}$ is a frame with bounds $A$ and $B$.
\end{enumerate}
\end{lemma}
\subsection{The Zak transform}
\label{sec:zak-transform}
For any $\lambda>0$, the Zak transform of a function $f \in L^2(\numbersys{R})$ is defined as
\begin{equation}\label{eq:zakTransform}
\left(Z_{\lambda}f\right)(x,\gamma)
= \sqrt{\lambda}\sum_{k\in\mathbb{Z}} f(\lambda(x+
k))\myexp{-2\pi i k \gamma}, \quad a.e.\ x, \gamma \in \mathbb{R},
\end{equation}
with convergence in $L^2_\mathrm{loc}(\numbersys{R})$. The Zak transform
$Z_\lambda$ is a unitary map of $L^2(\numbersys{R})$ onto $L^2(\itvco{0}{1}^2)$, and
it has the following quasi-periodicity:
\[
Z_\lambda f(x+1,\gamma)= \myexp{2\pi i\gamma} Z_\lambda f(x,
\gamma), \quad Z_\lambda f(x, \gamma +1) = Z_\lambda f(x, \gamma) \quad \text{for
a.e. } x,\gamma \in \numbersys{R}.
\]
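The first relation follows from a simple index shift in \eqref{eq:zakTransform}:
\[
Z_\lambda f(x+1,\gamma) = \sqrt{\lambda}\sum_{k\in\numbersys{Z}} f(\lambda(x+1+k))\myexp{-2\pi i k\gamma}
= \sqrt{\lambda}\sum_{j\in\numbersys{Z}} f(\lambda(x+j))\myexp{-2\pi i (j-1)\gamma}
= \myexp{2\pi i\gamma} Z_\lambda f(x,\gamma),
\]
with $j=k+1$; the second relation is immediate since $\myexp{-2\pi i k(\gamma+1)}=\myexp{-2\pi i k\gamma}$ for all $k\in\numbersys{Z}$.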
The Zak transform has been used by Weil~\cite{MR0165033} in harmonic
analysis on locally compact abelian groups, by
Gel'fand~\cite{MR0073136} in the study of Schr\"odinger's equation,
and by Zak~\cite{MR1478343} in solid state physics. For a systematic
treatment of the Zak transform and its use in applied mathematics, we
refer to the paper by Janssen~\cite{MR947891}. Recent
applications in Gabor analysis include
\cite{GrochenigCompleteness2015,MR3218799,MR3393698}.
The Zak transform inherits symmetries of the function $f$. The
following basic lemma will be used several times in the later
sections. The Wiener space $W(\numbersys{R})$ consists of functions $g \in L^\infty(\numbersys{R})$
for which $\sum_{k \in \numbersys{Z}} \esssup_{x \in
\itvcc{0}{1}}\abs{g(x+k)}<\infty$. The assumption that $f$ belongs
to $W(\numbersys{R})$ and is continuous in
Lemma~\ref{lem:Zak-symm} implies that $Z_\lambda f$ is continuous,
which guarantees that the identities in the lemma hold pointwise.
\begin{lemma}
\label{lem:Zak-symm}
Let $m \in \numbersys{Z}$ and $\lambda>0$. Assume that $f \in W(\numbersys{R})$ is continuous.
\begin{enumerate}[(i)]
\item
If $f$ is an even function, then
\[ Z_\lambda f(x,\gamma) = Z_\lambda f(-x,-\gamma) \qquad \text{for
all } x,\gamma \in \numbersys{R} .\] In particular, $Z_\lambda f(x,\tfrac{m}{2}) = (-1)^m Z_\lambda
f(1-x,\tfrac{m}{2})$ and
\[ Z_\lambda f(x,\gamma) = 0 \qquad (x,\gamma) \in \numbersys{Z}^2 + (\tfrac12,\tfrac12).\]
\item If $f$ is an odd function, then
\[ Z_\lambda f(x,\gamma) = - Z_\lambda
f(-x,-\gamma) \qquad \text{for
all } x,\gamma \in \numbersys{R} . \] In particular, $Z_\lambda f(x,\tfrac{m}{2}) = (-1)^{m+1} Z_\lambda
f(1-x,\tfrac{m}{2})$ and
\[ Z_\lambda f(x,\gamma) = 0 \qquad (x,\gamma) \in \tfrac12 \numbersys{Z}^2
\setminus \left(\numbersys{Z}^2 + (\tfrac12,\tfrac12)\right).\]
\end{enumerate}
\end{lemma}
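For instance, the zero claimed in part (i) follows by combining the symmetry with quasi-periodicity: for even $f$,
\[
Z_\lambda f(\tfrac12,\tfrac12) = Z_\lambda f(-\tfrac12,-\tfrac12)
= -Z_\lambda f(\tfrac12,-\tfrac12) = -Z_\lambda f(\tfrac12,\tfrac12),
\]
where the middle equality uses $Z_\lambda f(x+1,\gamma)=\myexp{2\pi i\gamma}Z_\lambda f(x,\gamma)$ at $\gamma=-\tfrac12$ and the last equality uses periodicity in $\gamma$; hence $Z_\lambda f(\tfrac12,\tfrac12)=0$, and the remaining zeros follow by quasi-periodicity. The proof of part (ii) is analogous.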
By quasi-periodicity, the function $Z_\lambda f$ on $\numbersys{R}^2$ is
determined by its values on $\itvco{0}{1}^2$. Hence, if
$Z_\lambda f(x_0,\gamma_0)=0$ for some $(x_0,\gamma_0) \in \numbersys{R}^2$, then
$Z_\lambda f(x,\gamma)=0$ for all $(x,\gamma)\in \numbersys{Z}^2 +
(x_0,\gamma_0)$. For this reason we will often only explicitly mention the
zeros of $Z_\lambda f$ on $\itvco{0}{1}^2$.
If $f \in W(\numbersys{R})$ and $\hat{f} \in W(\numbersys{R})$, it follows by an application
of Poisson summation formula, see e.g., \cite{MR947891} or \cite[Proposition 8.2.2]{MR1843717}, that
\begin{equation}
\label{eq:F-of-Zak}
Z_\lambda f(x,\gamma) = \myexp{2\pi i x \gamma}
Z_{1/\lambda}\hat{f}(\gamma,-x) \quad \text{for all } x,\gamma \in \numbersys{R},
\end{equation}
with absolute convergence of the series.
In particular, this relation holds for any function $f$ in $H_j \cap
W(\numbersys{R})$ for $j=0,1,2,3$. Note that any
function $f$ in $H_j \cap
W(\numbersys{R})$ is continuous since $\hat{f} \in W(\numbersys{R}) \subset L^1(\numbersys{R})$.
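In particular, if $f \in H_j \cap W(\numbersys{R})$, so that $\hat{f}=(-i)^jf$, then \eqref{eq:F-of-Zak} specializes to
\[
Z_\lambda f(x,\gamma) = (-i)^j \myexp{2\pi i x \gamma}\, Z_{1/\lambda} f(\gamma,-x) \quad \text{for all } x,\gamma \in \numbersys{R},
\]
which is the form in which the relation will be applied below.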
\section{Some infinite series identities}
\label{sec:some-infinite-series}
The infinite series identities for Hermite functions derived in this
section will play a crucial role in the counterexamples in Section~\ref{sec:obstruc}. The
identities are of independent interest and can be formulated as
multiple zeros of the Zak transform. We remark that it is not
difficult to find a single zero of the Zak transform of Hermite functions, see, e.g.,
Lemma~\ref{lem:Zak-symm}. We will find $k$ zeros of $Z_\lambda h_n(x,\gamma)$
for a fixed value of $\gamma$, each $1/(k+1)$ apart with respect to
the $x$ variable, which is a much harder task that depends delicately
on the parameter $\lambda$.
\begin{lemma}
\label{lem:h2-identities}
Let $n=4m+2$ for some $m \in \numbersys{N} \cup \{0\}$. Then
\begin{equation}
Z_{\sqrt{2}}h_n(\tfrac{p}{4},\tfrac12) \stackrel{\mathrm{def}}{=} 2^{1/4} \sum_{k\in \numbersys{Z}} (-1)^k h_{n}(\sqrt{2}(k+\tfrac{p}{4}))=0 \quad \text{for $p \in \set{1,3}$},
\label{eq:h-even-sqrt-2}
\end{equation}
and
\begin{equation}
Z_{\sqrt{3}}h_n(\tfrac{p}{6},\tfrac12) \stackrel{\mathrm{def}}{=} 3^{1/4} \sum_{k\in \numbersys{Z}}
(-1)^k h_{n}(\sqrt{3}(k+\tfrac{p}{6}))=0 \quad \text{for $p \in
\set{1,5}$}. \label{eq:h-even-sqrt-3}
\end{equation}
\end{lemma}
\begin{proof}
We first prove the assertions in \eqref{eq:h-even-sqrt-2}.
Let
$p=1$. Since the sum in \eqref{eq:h-even-sqrt-2} converges absolutely,
we can split the sum in even and odd
indices $k \in \numbersys{Z}$. Hence, proving \eqref{eq:h-even-sqrt-2} is
equivalent to proving:
\[
\sum_{k\in \numbersys{Z}} h_{n}(\sqrt{2}(2k+\tfrac{1}{4}))= \sum_{k\in \numbersys{Z}} h_{n}(\sqrt{2}(2k+\tfrac{5}{4})).
\]
In terms of the Zak transform, we need to prove that
\begin{equation}
Z_{2^{3/2}}h_n (\tfrac18,0) = Z_{2^{3/2}}h_n (\tfrac58,0). \label{eq:Zak-sqrt-8}
\end{equation}
We first consider the left hand side. By \eqref{eq:F-of-Zak} and the
fact that $\hat{h}_n= -h_n$, we obtain
\[
Z_{2^{3/2}}h_n (\tfrac18,0) = Z_{2^{-3/2}}\hat{h}_n (0,-\tfrac18) \stackrel{\mathrm{def}}{=} -2^{-3/4} \sum_{k \in \numbersys{Z}} h_n(2^{-3/2}k)
\myexp{2\pi i k/8}.
\]
Writing $k = 8m+\ell$, where $m\in \numbersys{Z}$ and $\ell
=0,1,\dots, 7$, we find that
\begin{align}
Z_{2^{3/2}}h_n (\tfrac18,0)
&= -2^{-3/4}
\sum_{\ell=0}^7 \sum_{m \in \numbersys{Z}} h_n(2^{3/2}(m+\frac{\ell}{8}))
\myexp{2\pi i \ell/8} \nonumber \\
&= -2^{-3/2}\sum_{\ell=0}^7 Z_{2^{3/2}}h_n (\tfrac{\ell}{8},0)
\myexp{2\pi i \ell/8} \label{eq:2}
\end{align}
The odd terms over $\ell$ sum to:
\begin{align*}
\sum_{\ell\in\{1,3,5,7\}} Z_{2^{3/2}}h_n
(\tfrac{\ell}{8},0) \myexp{2\pi i \ell/8} =& Z_{2^{3/2}}h_n
(\tfrac{1}{8},0) (\myexp{2\pi i /8}+\myexp{2\pi i 7/8}) +
Z_{2^{3/2}}h_n (\tfrac{5}{8},0) (\myexp{2\pi i 3/8}+\myexp{2\pi i
5/8})\\ =& \sqrt{2} Z_{2^{3/2}}h_n (\tfrac{1}{8},0) - \sqrt{2}
Z_{2^{3/2}}h_n (\tfrac{5}{8},0),
\end{align*}
where we have used Lemma~\ref{lem:Zak-symm}. Similarly, we find that
\begin{equation}
Z_{2^{3/2}}h_n (\tfrac58,0)
= -2^{-3/2}
\sum_{\ell=0}^7 Z_{2^{3/2}}h_n
(\tfrac{\ell}{8},0)
\myexp{2\pi i 5\ell/8} \label{eq:3}
\end{equation}
where the odd terms over $\ell$ sum to:
\begin{align*}
\sum_{\ell\in\{1,3,5,7\}} Z_{2^{3/2}}h_n
(\tfrac{\ell}{8},0) \myexp{2\pi i 5\ell /8} =& Z_{2^{3/2}}h_n
(\tfrac{5}{8},0) (\myexp{2\pi i /8}+\myexp{2\pi i 7/8}) +
Z_{2^{3/2}}h_n (\tfrac{1}{8},0) (\myexp{2\pi i 3/8}+\myexp{2\pi i
5/8})\\ =& \sqrt{2} Z_{2^{3/2}}h_n (\tfrac{5}{8},0) - \sqrt{2}
Z_{2^{3/2}}h_n (\tfrac{1}{8},0).
\end{align*}
Note that $\ell \equiv 5\ell \pmod 8$ for even $\ell \in 2\numbersys{Z}$.
Thus, if we subtract the two right hand sides of \eqref{eq:2} and
\eqref{eq:3},
the
even terms over $\ell=0,2,4,6$ cancel out. Hence,
\[
Z_{2^{3/2}}h_n (\tfrac18,0) - Z_{2^{3/2}}h_n (\tfrac58,0) = -(Z_{2^{3/2}}h_n (\tfrac18,0) - Z_{2^{3/2}}h_n (\tfrac58,0)) .
\]
However, this is only possible if \eqref{eq:Zak-sqrt-8} holds, which is what we had to prove. This
completes the proof of the case $p=1$.
For the case $p=3$, note that, by Lemma~\ref{lem:Zak-symm},
\[
Z_{\sqrt{2}}h_n(\tfrac{1}{4},\tfrac12) =-Z_{\sqrt{2}}h_n(\tfrac{3}{4},\tfrac12),
\]
hence the identity follows from the case $p=1$.
The proof of \eqref{eq:h-even-sqrt-3} goes along the same lines as
the proof of \eqref{eq:h-even-sqrt-2}; the details are left for the reader.
\end{proof}
\begin{lemma}
\label{lem:h3-identities}
Let $n=4m+3$ for some $m \in \numbersys{N} \cup \{0\}$ and let $s \in \{2,3,4\}$. Then
\[
Z_{\sqrt{s}}h_n(\tfrac{p}{s},0) \stackrel{\mathrm{def}}{=} s^{1/4} \sum_{k\in \numbersys{Z}} h_{n}(\sqrt{s} (k+\tfrac{p}{s}))=0
\quad \text{for $p \in \set{0,1,\dots, s-1}$.}
\]
\end{lemma}
\begin{proof}
We will only prove the case $s=3$ as the other cases are similar.
For $p=0$ the identity follows from the fact that $h_n$ is an odd
function. For $p=1$ we have, using \eqref{eq:F-of-Zak} and
$\hat{h}_n= i h_n$,
\begin{multline}
3^{-1/4} Z_{\sqrt{3}}h_n(\tfrac{1}{3},0) =
3^{-1/4} Z_{\tfrac{1}{\sqrt{3}}}\hat{h}_n(0,-\tfrac{1}{3}) = 3^{-1/2} \sum_{k\in \numbersys{Z}}
i h_{n}(\tfrac{1}{\sqrt{3}}k) \myexp{2\pi i k/3} \\
= 3^{-1/2} i \left( \sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}m) + \sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}(m+\tfrac{1}{3})) \myexp{2\pi i /3} + \sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}(m-\tfrac{1}{3})) \myexp{-2\pi i /3} \right),
\label{eq:5}
\end{multline}
where we have written $k = 3m+\ell$ with
$m \in \numbersys{Z}$ and $\ell \in \set{-1,0,1}$.
Since $h_n$ is odd, it follows directly that $\sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}m)=0$. By yet another symmetry argument (e.g.,
Lemma~\ref{lem:Zak-symm}), we also see that
\[ \sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}(m-\tfrac{1}{3})) = \sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}(m+\tfrac{2}{3})) = -\sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}(m+\tfrac{1}{3})).
\]
Continuing the computation in \eqref{eq:5} yields
\begin{align*}
\sum_{k\in \numbersys{Z}} h_{n}(\sqrt{3} (k+\tfrac{1}{3})) &\stackrel{\mathrm{def}}{=} 3^{-1/4}
Z_{\sqrt{3}}h_n(\tfrac{1}{3},0) = 3^{-1/2} i (\myexp{2\pi i /3}-\myexp{-2\pi i /3}) \sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}(m+\tfrac{1}{3}))\\ &= - \sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}(m+\tfrac{1}{3})),
\end{align*}
where we use that $\myexp{2\pi i /3}-\myexp{-2\pi i /3}=i \sqrt{3}$. Thus
$\sum_{m\in \numbersys{Z}}
h_{n}(\sqrt{3}(m+\tfrac{1}{3})) =0$ which completes the case
$p=1$.
Consider now $p=2$. By Lemma~\ref{lem:Zak-symm} we have
\[
Z_{\sqrt{3}}h_n(\tfrac{1}{3},0) =-Z_{\sqrt{3}}h_n(\tfrac{2}{3},0),
\]
hence the assertion for $p=2$ follows from the case $p=1$.
\end{proof}
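Both series identities are straightforward to check numerically. The
following sketch (our addition; it assumes the Fourier normalization
$\hat{f}(\gamma)=\int f(x)\myexp{-2\pi i x\gamma}\,dx$, under which the
Hermite functions are $h_n(x)\propto H_n(\sqrt{2\pi}\,x)\myexp{-\pi x^2}$,
the overall normalization being irrelevant when testing for a zero)
evaluates the truncated sums of Lemmas~\ref{lem:h2-identities}
and \ref{lem:h3-identities}:
\begin{verbatim}
import numpy as np
from scipy.special import eval_hermite

def h(n, x):                     # Hermite function, up to normalization
    return eval_hermite(n, np.sqrt(2 * np.pi) * x) * np.exp(-np.pi * x**2)

K = np.arange(-40, 41)           # truncation of the sum over k

for p in (1, 3):                 # eq. (h-even-sqrt-2) with n = 2
    print(np.sum((-1.0)**(K % 2) * h(2, np.sqrt(2) * (K + p / 4))))   # ~0

for p in (0, 1, 2):              # Lemma (h3-identities), s = 3, n = 3
    print(np.sum(h(3, np.sqrt(3) * (K + p / 3))))                     # ~0
\end{verbatim}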
Note that the only properties of $h_n$ used in the proof of the above two
lemmas are that $h_n$ is an eigenfunction of the Fourier transform
associated with the eigenvalues $-1$ and $i$, respectively, and that the
Poisson summation formula~\eqref{eq:F-of-Zak} holds pointwise with absolute convergence. Recall that
functions in $H_2 \cap W(\numbersys{R})$ are even and continuous, while
functions in $H_3 \cap W(\numbersys{R})$ are odd and continuous. Therefore, we
can formulate the following extension of the results in this section using Lemma~\ref{lem:Zak-symm}.
\begin{lemma}
\label{lem:extension-identities}
\begin{enumerate}[(i)]
\item For $g \in H_2 \cap W(\numbersys{R})$, we have:
\[
Z_{\sqrt{2}} g (x,\gamma) = 0 \quad \text{for } (x,\gamma) \in
(\tfrac14 \numbersys{Z} \setminus \numbersys{Z}) \times (\numbersys{Z} +\tfrac12),
\]
and
\[
Z_{\sqrt{3}} g (x,\gamma) = 0 \quad \text{for } (x,\gamma) \in
(\tfrac13 \numbersys{Z} + \tfrac16) \times (\numbersys{Z} +\tfrac12).
\]
\item For $g \in H_3 \cap W(\numbersys{R})$ and $s\in \{2,3,4\}$, we have:
\[
Z_{\sqrt{s}} g (x,\gamma) = 0 \quad \text{for } (x,\gamma) \in
\tfrac1s \numbersys{Z} \times \numbersys{Z} .
\]
\end{enumerate}
\end{lemma}
\section{New obstructions of the frame property}
\label{sec:obstruc}
For rationally oversampled Gabor systems, i.e., $\mathcal{G}(g,a,b)$ with
\[
ab \in \numbersys{Q}, \quad ab=\frac{p}{q}, \quad \gcd(p,q)=1,
\]
we define column vectors $\phi^g_\ell(x,\gamma) \in \numbersys{C}^p$ for $\ell
\in \set{0,1, \dots, q-1}$ by
\[
\phi^g_\ell(x,\gamma) = \left(p^{-\frac{1}{2}} (Z_{\frac{1}{b}}g)(x-\ell
\frac{p}{q},\gamma+\frac{k}{p})\right)_{k=0}^{p-1} \quad \text{for a.e. } x,\gamma \in \numbersys{R}.
\]
The following characterization of rationally oversampled Gabor frames
is due to Zibulski and Zeevi~\cite{MR1448221}.
\begin{theorem}
\label{thm:ZZ_singular_values}
Let $A,B>0$, and let $g \in L^2(\numbersys{R})$. Suppose $\mathcal{G}(g,a,b)$
is a rationally oversampled Gabor system. Then the following
assertions are equivalent:
\begin{enumerate}[(i)]
\item $\mathcal{G}(g,a,b)$ is a Gabor frame for $L^2(\numbersys{R})$ with bounds $A$ and $B$,
\item $\set{\phi^g_\ell(x,\gamma)}_{\ell=0}^{q-1}$ is a frame for $\numbersys{C}^p$ with
uniform bounds $A$ and $B$ for a.e. $(x,\gamma) \in
\itvcos{0}{1}^2$.
\end{enumerate}
\end{theorem}
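In matrix form, assertion (ii) states that the singular values of the
$p\times q$ matrix with columns $\phi^g_\ell(x,\gamma)$ must lie in
$[\sqrt{A},\sqrt{B}]$ for a.e.\ $(x,\gamma)$. The following sketch (our
addition; \texttt{Zg} stands for any callable approximating
$Z_{1/b}g$) computes the local bounds as extreme squared singular
values:
\begin{verbatim}
import numpy as np

def zz_matrix(Zg, x, gamma, p, q):
    # p x q Zibulski-Zeevi matrix whose columns are phi_l(x, gamma)
    return np.array([[Zg(x - l * p / q, gamma + k / p) / np.sqrt(p)
                      for l in range(q)] for k in range(p)])

def local_bounds(Zg, x, gamma, p, q):
    s = np.linalg.svd(zz_matrix(Zg, x, gamma, p, q), compute_uv=False)
    return s.min() ** 2, s.max() ** 2
\end{verbatim}
For $p=1$ this reduces to the sum appearing in \eqref{eq:int-oversampl} below.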
If $p=1$, i.e., $ab=1/q$, the Gabor system $\mathcal{G}(g,a,b)$ is
said to be integer oversampled. By
Theorem~\ref{thm:ZZ_singular_values} it is a frame with bounds $A$ and
$B$ if and only if
\begin{align}
\label{eq:int-oversampl}
\sqrt{A} \le \left(\sum_{\ell =0}^{q-1}
\absbig{Z_{\tfrac{1}{b}}g(x-\ell/q,\gamma)}^2\right)^{1/2} \le \sqrt{B}
\quad \text{for a.e. $(x,\gamma) \in \itvco{0}{1}^2$.}
\end{align}
If $g\in W(\numbersys{R})$ is odd and continuous, then, by
Lemma~\ref{lem:Zak-symm}(ii),
$Z_{1/b}g(0,0)=Z_{1/b}g(\tfrac12,0)=0$ for any $b>0$, which by
\eqref{eq:int-oversampl} immediately implies
that $\gaborG{g}$ is not a
frame along the hyperbola $ab=\tfrac12$. Lyubarskii and
Nes~\cite{MR3027914} showed that this assertion extends to any of the
hyperbolas $ab= \tfrac{p}{p+1}$ for $p \in \numbersys{N}$ for any such odd window
function. The results in the remainder of this section show that
the frame property also must fail for certain $(a,b)$-values for
window functions with other symmetries formulated in terms of the
Fourier transform. We denote the new ``failure'' points in $\setprop{(a,b)\in
\numbersys{R}^2_+}{ab<1}$ by $(a_i,b_i)$, $i=0,1,2,3,4$, where
\begin{equation}
a_i=b_i=\frac{1}{\sqrt{i+2}} \; (i=0,1,2), \quad a_3=\frac{2}{\sqrt{3}},\ b_3=\frac{1}{\sqrt{3}}, \quad a_4=\frac{1}{\sqrt{3}},\ b_4=\frac{2}{\sqrt{3}}. \label{eq:non-frame-points}
\end{equation}
\begin{theorem}
\label{thm:4mp2}
Let $g \in H_2\cap W(\numbersys{R})$. For any point $(a_i,b_i)$, $i \in
\set{0,1,3,4}$, as defined in \eqref{eq:non-frame-points}, the Gabor
system $\gaborG[a_i][b_i]{g}$ is not
a frame for $L^2(\numbersys{R})$, in particular, $\gaborG[a_i][b_i]{h_n}$ is not
a frame for $n=4m+2$, $m \in \numbersys{N} \cup
\{0\}$.
\end{theorem}
\begin{proof}
We consider first the assertion for $i=0$. Note that $a_0 b_0=1/2$, hence $\gaborG[a_0][b_0]{g}$ is an integer
oversampled Gabor system with $p=1$ and $q=2$. By
Lemma~\ref{lem:extension-identities}(i), which extends \eqref{eq:h-even-sqrt-2},
it follows that
$Z_{1/b_0}g(x-\ell/q,\gamma)=0$ for $\ell=0,1$ at
$(x,\gamma)=(\tfrac{3}{4},\tfrac{1}{2})$. Since the Zak transform is
continuous for $g \in H_2\cap W(\numbersys{R})$, we see that the lower bound
in \eqref{eq:int-oversampl} cannot hold. Thus,
$\gaborG[a_0][b_0]{g}$ is not a frame.
For the case $i=1$, we have $a_1 b_1=1/3$, hence $p=1$ and $q=3$. By
Lemma~\ref{lem:extension-identities}(i), which extends \eqref{eq:h-even-sqrt-3},
it follows that
$Z_{1/b_1}g(x-\ell/q,\gamma)=0$ for $\ell=0,1,2$ at
$(x,\gamma)=(\tfrac{5}{6},\tfrac{1}{2})$. As before, this violates
the frame property of $\gaborG[a_1][b_1]{g}$.
For the case $i=3$, we have $a_3b_3=2/3$, hence $p=2$ and $q=3$. From case
$i=1$, we see that the matrix
$\Phi^g=\set{\phi^g_\ell(x,\gamma)}_{\ell=0}^{q-1}$ has a row of zeros. It
follows from Theorem~\ref{thm:ZZ_singular_values} that
$\gaborG[a_3][b_3]{g}$ is not a frame.
The assertion for $i=4$ follows from case $i=3$ by symmetry using Lemma~\ref{lem:Zak-symm}.
\end{proof}
\begin{theorem}
\label{thm:4mp3}
Let $g \in H_3\cap W(\numbersys{R})$. For any point $(a_i,b_i)$, $i \in
\set{1,2}$, as defined in \eqref{eq:non-frame-points}, the Gabor
system $\gaborG[a_i][b_i]{g}$ is not
a frame for $L^2(\numbersys{R})$, in particular, $\gaborG[a_i][b_i]{h_n}$ is not
a frame for $n=4m+3$, $m \in \numbersys{N} \cup
\{0\}$.
\end{theorem}
\begin{proof}
We consider first the assertion for $i=1$. In this case $a_1b_1=1/3$ and
$\gaborG[a_1][b_1]{g}$ is an integer oversampled Gabor system with
$p=1$ and $q=3$. By Lemma~\ref{lem:extension-identities}(ii),
it follows that
$Z_{1/b_1}g(x-\ell/q,\gamma)=0$ for $\ell=0,1,2$ for
$(x,\gamma)=(\tfrac{2}{3},0)$. As in the proof of
Theorem~\ref{thm:4mp2}, this shows that $\gaborG[a_1][b_1]{g}$
cannot be a frame. For the case $i=2$, where $a_2b_2=1/4$, $p=1$ and $q=4$, we note that $Z_{1/b_2}g(x-\ell/q,\gamma)=0$ for $\ell=0,1,2,3$ at
$(x,\gamma)=(\tfrac{3}{4},0)$, and the conclusion follows as before.
\end{proof}
Note that it also follows from
Lemma~\ref{lem:extension-identities}(ii) that
$(a_0,b_0), (a_3,b_3)$ and $(a_4,b_4)$ fall outside $\mathscr{F}{(g)}$
for $g \in H_3\cap W(\numbersys{R})$. However, these obstructions are already
known by the results in \cite{MR3027914} since functions in $H_3\cap
W(\numbersys{R})$ are odd.
From Theorem~\ref{thm:4mp2} we have four obstruction points for the
window class $H_2\cap W(\numbersys{R})$. Theorem~\ref{thm:4mp3} provides us with
two new obstruction points for the window class $H_3\cap W(\numbersys{R})$, not
already covered by the hyperbolic obstructions $ab=p/(p+1)$, $p \in
\numbersys{N}$. On the other hand, in general, no obstruction points can exist
for the class $H_0\cap W(\numbersys{R})$ since it contains the Gaussian $h_0$. If
the conjecture by Lyubarskii and Nes~\cite{MR3027914} holds true, then
there are no general obstructions for the class $H_1\cap W(\numbersys{R})$ in addition
to $ab=p/(p+1)$, $p \in \numbersys{N}$.
One might ask how badly the Gabor system fails to be a frame at the obstruction points. From the proofs above, it is clear that it is the lower frame bound that fails. In fact, any window in $W(\numbersys{R})$ satisfies the upper frame bound. The lower frame bound is a strong condition that is equivalent to injectivity and closedness of the range of the analysis operator $C_{g,a,b}: L^2(\numbersys{R}) \to \ell^2(\numbersys{Z}^2)$ defined by $C_{g,a,b} f=\set{\innerprod{f}{\myexp{2\pi i bm \cdot}g(\cdot-ak)}}_{k,m\in \numbersys{Z}}$. Note that injectivity of $C_{g,a,b}$ is equivalent to the Gabor system $\gaborG{g}$ being complete in $L^2(\numbersys{R})$. For Hermite windows, Gr\"ochenig, Haimi, and Romero~\cite{GrochenigCompleteness2015} recently showed that, at least, completeness is guaranteed. To be precise, they proved as part of a more general result that, for any $n \in \numbersys{N}$, the system $\gaborG{h_n}$ is complete in $L^2(\numbersys{R})$ for any rational $ab\le 1$. Hence, for each $(a_i,b_i)$, $i\in \set{0,1,2,3,4}$, given in \eqref{eq:non-frame-points}, the
Gabor system $\gaborG[a_i][b_i]{h_n}$, for $n=4m+2$ or $n=4m+3$, is a
complete Bessel system for which the lower frame bound is not
satisfied because the range of $C_{h_n,a_i,b_i}$ fails to be closed.
Even though both $(1/\sqrt{2},1/\sqrt{2}) \notin \mathscr{F}{(g)}$ and
$(1/\sqrt{3},1/\sqrt{3}) \notin \mathscr{F}{(g)}$ for $g \in H_2\cap
W(\numbersys{R})$, no other points of the form $(1/\sqrt{k},1/\sqrt{k})$ can be
obstruction points for the frame property for the window class
$H_2\cap W(\numbersys{R})$. In fact, by \cite{MR2292280,MR2529475} we know that
$ab<1/(n+1)$ is sufficient for the frame property of
$\gaborG{h_n}$, hence, in particular, that
$\gaborG{h_2}$ is a frame for $ab<1/3$. Moreover, the obstruction
point $(a_1,b_1)=(1/\sqrt{3},1/\sqrt{3})$ shows that the region $ab<1/3$ is
sharp for $h_2$ in the sense that the smallest constant $c$ such that
$\setprop{(a,b)}{ab<c} \subset \mathscr{F}{(h_2)}$ is $c=1/3$. A similar
observation holds for $H_3\cap W(\numbersys{R})$. In this case, the obstruction
point $(a_2,b_2)=(1/2,1/2)$ shows that the region $ab<1/4$ is sharp
for $h_3$. Since $ab<1$ and $ab<1/2$ are sharp for $h_0$ and $h_1$,
respectively, it is natural to ask if $ab<\tfrac{1}{n+1}$ is sharp for
$h_n$ for all $n\in \numbersys{N}$.
We have here focused on finding
$(a,b)$-values that serve as obstructions of the frame property
simultaneously for an
entire class of window functions. For a specific choice of a Hermite
function $h_n$, $n \ge 2$, one can most likely find many more new
obstructions; this is indeed indicated by the
numerical experiments in the next section.
\section{Numerical experiments}
\label{sec:numer-exper}
The numerical experiments in Matlab below use double precision
floating-point numbers. We
truncate the Hermite functions to compactly supported functions,
setting them to zero once their values drop below a fixed small threshold.
This way a close
approximation to the Zak transform
$Z_{1/b}h_n$ can be computed as a finite sum. We then discretize the
Zak transform domain
on a
uniform sampling grid, e.g., $51 \times 51$.
As we only consider integer oversampled Gabor systems, close
approximations to the frame bounds
are easily computed for given values of $a$ and $b$ using the
formula~\eqref{eq:int-oversampl}. The approximated bounds
$A_{\mathrm{apx}}$ and $B_{\mathrm{apx}}$ will (up to machine
precision) be
larger and smaller, respectively, than the true optimal frame bounds from \eqref{eq:int-oversampl}, i.e.,
$A_{\mathrm{opt}} \le A_{\mathrm{apx}} \le B_{\mathrm{apx}} \le B_{\mathrm{opt}}$.
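For reference, a minimal Python analogue of this procedure reads as
follows (our sketch, not the authors' Matlab code; the truncation range
$|k|\le 40$, the grid size, and the use of unnormalized Hermite
functions are arbitrary choices, so the bounds are only defined up to
the normalization of $h_n$):
\begin{verbatim}
import numpy as np
from scipy.special import eval_hermite

def h(n, x):                                   # unnormalized Hermite function
    return eval_hermite(n, np.sqrt(2 * np.pi) * x) * np.exp(-np.pi * x**2)

def zak(n, lam, x, g, K=40):                   # truncated Z_lam h_n(x, g)
    k = np.arange(-K, K + 1)
    return np.sqrt(lam) * np.sum(h(n, lam * (x + k)) * np.exp(-2j * np.pi * k * g))

def frame_bounds(n, b, q, grid=51):
    # square roots of the approximate optimal bounds of G(h_n, 1/(q b), b),
    # obtained by sampling eq. (int-oversampl) on a uniform grid
    xs = np.linspace(0, 1, grid, endpoint=False)
    gs = np.linspace(0, 1, grid, endpoint=False)
    S = np.array([[sum(abs(zak(n, 1 / b, x - l / q, g))**2 for l in range(q))
                   for g in gs] for x in xs])
    return np.sqrt(S.min()), np.sqrt(S.max())

# the lower bound drops sharply near (a_0, b_0) = (1/sqrt(2), 1/sqrt(2)):
print(frame_bounds(2, 1 / np.sqrt(2), 2))
\end{verbatim}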
\begin{example}
\label{exa:h2}
Let us first illustrate Theorem~\ref{thm:4mp2} for $h_2$.
Figure~\ref{fig:plots} shows the upper and lower frame bounds of
$\mathcal{G}(h_2,a,b)$ along $ab=1/2$ for $b \in
\itvcc{\tfrac18}{4}$.
\begin{figure}[!h]
\centering \input{figH2}
\caption{Numerical approximations of the upper (red)
and lower (blue) frame bound for
$\mathcal{G}(h_2,a,b)$ along $ab=1/2$. At the point
$(a_0,b_0)=(1/\sqrt{2},1/\sqrt{2})$ the estimate of $\sqrt{A}$
essentially drops to machine precision $\approx 7 \cdot 10^{-16}$ (not
shown).
}
\label{fig:plots}
\end{figure}
We first remark that $A_{\mathrm{apx}}^{1/2}$
drops to machine precision at
$(a_0,b_0)=(1/\sqrt{2},1/\sqrt{2})$. Note also that the frame bounds
are symmetric about $b=1/\sqrt{2}$ according to
Lemma~\ref{lem:Zak-symm}, that is, $\mathcal{G}(h_2,1/(2b),b)$ and
$\mathcal{G}(h_2,b,1/(2b))$ have the same frame bounds.
The behavior of ${A_{\mathrm{apx}}^{1/2}}$ is rather
complicated. The drops of ${A_{\mathrm{apx}}^{1/2}}$ below, say,
$10^{-3}$, are very narrow and therefore difficult to resolve
due to the discretization of the $b$ range. Moreover, it is unclear if
$(a_0,b_0)=(1/\sqrt{2},1/\sqrt{2})$ is the only point along
$ab=1/2$ that does not belong to $\mathscr{F}{(h_2)}$. Around
$b=2.35$ and $b=2.82$ the values of $A_{\mathrm{apx}}^{1/2}$ are
in the order of $10^{-4}$ and $10^{-7}$, respectively. At
$b=3.5261848971734$ the value of $A_{\mathrm{apx}}^{1/2}$ even
drops to $1.6 \cdot 10^{-12}$, however, it does not drop below
this value even when the discretization is refined. There may
very well exist a $(a,b)$-pair near the point
$(1/(2b),b)$, where $b=3.5261848971734$, for which $\gaborG{h_2}$
is not a frame. In any event, since $A_{\mathrm{apx}}^{1/2}
\approx 10^{-12}$, i.e., $A_{\mathrm{apx}} \approx
10^{-24}$, such a Gabor system is
badly conditioned and should not be used for numerical purposes.
\end{example}
Let us end this paper with two examples not covered by the results
in Section~\ref{sec:obstruc}.
\begin{example}
\label{exa:h4}
In this example we consider Gabor systems generated by $h_4$ and
$h_5$. Note that these functions belong to $H_0$ and $H_1$,
respectively. Recall that no obstructions of the frame property are
known for $h_4$, while the hyperbolas $ab=\tfrac{p}{p+1}$, $p \in \numbersys{N}$,
are the only known obstructions for $h_5$. Figure~\ref{fig:plot-H4-H5}
shows the approximated frame bounds of $\mathcal{G}(h_4,a,b)$ along
$ab=1/2$ and of $\mathcal{G}(h_5,a,b)$ along $ab=1/3$. The general
behavior is similar to that of $h_2$ in Figure~\ref{fig:plots}. For
$\mathcal{G}(h_4,a,b)$ the lower frame bound $A_{\mathrm{apx}}^{1/2}$
drops to machine precision four times in the considered $b$
range. This behavior can be explained as follows. In Maple one can verify with
arbitrary precision that
\begin{equation}
Z_{\sqrt[4]{3}} h_4(0,\tfrac{1}{2}) \stackrel{\mathrm{def}}{=} 3^{1/8} \sum_{k \in \numbersys{Z}}
(-1)^k h_4(3^{1/4}k) = 0
\label{eq:id-h4}
\end{equation}
holds. Recall that the Zak transform also has a zero at
$(\tfrac12,\tfrac12)$ since $h_4$ is even. Hence,
equation~\eqref{eq:id-h4} implies that the lower bound in
\eqref{eq:int-oversampl} is violated for
$(x,\gamma)=(\tfrac12,\tfrac12)$. Therefore, $\gaborG{h_4}$ is not a
frame for $(a,b)=(3^{1/4}/2,3^{-1/4})$, and by
symmetry using Lemma~\ref{lem:Zak-symm}, also not for
$(a,b)=(3^{-1/4},3^{1/4}/2)$.
Similarly, one can verify in Maple with
arbitrary precision that
\begin{equation}
Z_{\tfrac{1}{\sqrt[4]{3}}} h_4(0,\tfrac{1}{2}) \stackrel{\mathrm{def}}{=} 3^{-1/8} \sum_{k \in \numbersys{Z}}
(-1)^k h_4(3^{-1/4}k) = 0
\label{eq:id-h4-2}
\end{equation}
holds. Equation~\eqref{eq:id-h4-2} implies that $\gaborG{h_4}$ is not a
frame for $(a,b)=(3^{-1/4}/2,3^{1/4})$, and by
symmetry, also not for
$(a,b)=(3^{1/4},3^{-1/4}/2)$. A proof of the
identities~\eqref{eq:id-h4} and \eqref{eq:id-h4-2} must rely on methods
other than those used in Section~\ref{sec:some-infinite-series} since the
Gaussian $h_0$ does not satisfy the identities and since both $h_0$ and
$h_4$ belong to $H_0$.
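For the record, \eqref{eq:id-h4} is also easy to reproduce numerically
outside Maple; the following high-precision check (a numerical
verification only, not a proof, using the same Hermite-function
convention as in Section~\ref{sec:some-infinite-series} and dropping
the irrelevant prefactor $3^{1/8}$) uses the mpmath library:
\begin{verbatim}
from mpmath import mp, mpf, exp, pi, sqrt, hermite

mp.dps = 50                              # work with 50 significant digits
lam = mpf(3) ** (mpf(1) / 4)             # 3^{1/4}; use 3^{-1/4} for eq. (id-h4-2)

def h4(x):                               # 4th Hermite function, unnormalized
    return hermite(4, sqrt(2 * pi) * x) * exp(-pi * x ** 2)

s = sum((-1) ** k * h4(lam * k) for k in range(-30, 31))
print(s)                                 # zero to working precision
\end{verbatim}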
\begin{figure}[!h]
\definecolor{mycolor1}{rgb}{0.00000,0.44700,0.74100}%
\definecolor{mycolor2}{rgb}{0.85000,0.32500,0.09800}%
\begin{minipage}[t]{.5\textwidth}
\centering
\input{figH4}
\end{minipage}
\hspace{.2em}
\phantom{e}
\begin{minipage}[t]{0.45\textwidth}
\centering
\input{figH5}
\end{minipage}
\vspace*{-.5em}
\caption{Numerical approximations of the square root of
the upper (red)
and lower (blue) frame bound for
$\mathcal{G}(h_4,a,b)$ along $ab=1/2$ (left) and
$\mathcal{G}(h_5,a,b)$ along $ab=1/3$ (right). In all
instances,
where $A_{\mathrm{apx}}^{1/2}$ drops below $10^{-5}$, it drops
to a value in the order of machine precision $\approx 10^{-16}$
(not shown). }
\label{fig:plot-H4-H5}
\end{figure}
For the $5$th Hermite function $h_5$ the lower frame bound
$A_{\mathrm{apx}}^{1/2}$ drops to machine precision seven times along
$ab=1/3$ in the considered range in
Figure~\ref{fig:plot-H4-H5}. Here, similar arguments as for $h_4$ can be used
to show that
$(a,b)=(\sqrt[4]{27}/3,1/\sqrt[4]{27})$ and $(a,b)=(1/\sqrt[4]{27},\sqrt[4]{27}/3)$
do not belong to $\mathscr{F}{(h_5)}$.
Indeed, one can verify in Maple with
arbitrary precision that
\[
Z_{\sqrt[4]{27}}h_5(\tfrac{p}{3},\tfrac12) = 0 \quad \text{for } p\in \set{0,1,2}.
\]
Similar identities that explain the other
five drops of $A_{\mathrm{apx}}^{1/2}$ for
$\mathcal{G}(h_5,a,b)$ most likely exist.
\end{example}
In Example~\ref{exa:h4} above we briefly considered obstructions of
the frame property for Hermite functions
outside the two classes $H_2$ and $H_3$. The
methods developed in this paper for Wiener space functions in the
eigenspaces $H_2$ and $H_3$
rely only on the corresponding eigenvalue of the Fourier transform.
However, since
both $h_0 \in H_0$ and $h_4 \in H_0$ have the same eigenvalue, namely
$1$, it is obvious that other methods are needed if one attempts to
disprove Gr\"ochenig's conjecture for, say, all functions in $\setprop{h_{4m}}{m
\in \numbersys{N}}$.
\section*{Introduction}
New and proposed experiments to study the cosmic-ray spectrum up to
$10^{20}$~eV and beyond
\cite{Abu-Zayyad97a,Boratav92a,Teshima97a,Ormes96a,Linsley97a,Linsley97b}
will depend for their interpretation on extrapolations of models of
hadronic interactions more than two orders of magnitude in center
of mass energy beyond what is accessible with present colliders.
The interaction lengths of hadrons in the atmosphere, and hence
their cross sections, are the most obvious determining factor
in the rate at which the showers develop. An extra source of model
dependence is the relation between hadron cross sections
in air and the more basic
hadron--hadron cross sections.
Cosmic-ray measurements have been used in the past to
determine $\sigma_{p-air}^{\rm inel}$ and, with the
help of Glauber multiple scattering theory \cite{Glauber70a},
to estimate $\sigma_{pp}^{\rm tot}$. Frequently quoted examples
are the Fly's Eye experiment \cite{Baltrusaitis84,Baltrusaitis85} and the
Akeno experiment \cite{Honda93}. Both experiments find
rather large central values of $\sigma_{p-air}^{\rm inel}$
($\approx 540$~mb \cite{Baltrusaitis84} and $\approx 570$~mb
\cite{Honda93} at lab energy $E_0\, \sim$ 4$\times$10$^8$ GeV).
In both experiments, the proton--air cross section
has to be inferred from some measure of the attenuation of
the rate of showers deep in the atmosphere. The measured
attenuation depends on the cross section which determines
the depth at which showers are initiated, but it also depends
very significantly
on the rate at which energy is dissipated in the subsequent atmospheric
cascades. For this reason, a simulation which includes a full
representation of the hadronic interactions in the cascade is needed.
Because these two experiments measure the attenuation in quite different
ways, the fact that their inferred values of $\sigma_{p-air}^{\rm inel}$
agree is a non-trivial result.
Having determined $\sigma_{p-air}^{\rm inel}$, the experimental
groups go on to derive corresponding values
for $\sigma_{pp}^{\rm tot}$ of $120$~mb \cite{Baltrusaitis84,Baltrusaitis85}
and $125$~mb \cite{Honda93}
at $\sqrt{s}$ about 30 TeV. As noted in the Review of Particle Physics
\cite{PDG96}, $\sigma_{pp}^{\rm tot}\sim 120$~mb is in good agreement
with extrapolation of the parameterization of Donnachie and
Landshoff (DL) \cite{Donnachie92b}. As we discuss in the next section,
however, the cosmic--ray values of $\sigma_{pp}^{\rm tot}$ are
based on a parameterization of the nucleon--nucleon scattering amplitude
that is in disagreement with high energy collider data. Therefore, the
quoted values cannot be used
to pin down a high energy extrapolation of the {\it pp} cross section.
Indeed, it has been pointed out in the past \cite{Gaisser87a,Nikolaev93b}
that such large values of $\sigma_{p-air}^{\rm inel}$ ($\sim 550$~mb)
would require significantly larger values of $\sigma_{pp}^{\rm tot}$
than that predicted by the parameterization of Ref. \cite{Donnachie92b}.
Conversely, if that predicted behavior of the hadronic cross
section is correct, then the hadron--air cross sections should
be smaller, and this could have important consequences for development
of high energy cascades.
The plan of the paper is as follows. In the next section we discuss
the relation between the nucleon-nucleon cross section and
the nucleon-nucleus cross section, in particular, how it depends
on the slope of the elastic $pp$ cross section. Next we review
how the hadron--air cross sections are inferred from air shower
experiments and discuss the resulting uncertainties in
$\sigma_{p-air}^{\rm inel}$
and their implications for $\sigma_{pp}^{\rm tot}$.
\section*{Proton-proton vs. proton--air cross section}
The relation
between the hadron-nucleon cross section and the corresponding
hadron-nucleus cross section depends significantly
on the elastic slope parameter $B(s)$
\begin{equation}
B(s) = {d\over dt}\left[\ln\left({d\sigma_{pp}^{\rm el}\over dt}
\right)\right]_{t=0}\ .
\label{slope-def}
\end{equation}
This relation is discussed in the context of cosmic--ray cascades in
detail in Ref.~\cite{Gaisser87a}. Qualitatively, the relation is such that for
a given value of $\sigma_{pp}^{\rm tot}$, a larger value of the slope
parameter corresponds to a larger proton--air cross section.
Conversely for a given value of
$\sigma_{p-air}^{\rm inel}$, a larger value of $B(s)$ leads to a smaller
value of $\sigma_{pp}^{\rm tot}$. In addition, the smaller the slope
parameter, the larger is the uncertainty in the derived proton--proton cross
section.
{}For example, the Fly's Eye value of $\sigma_{pp}^{\rm tot}=122\pm 11$~mb
at $\sqrt{s}=30$~TeV \cite{Baltrusaitis84,Baltrusaitis85} is obtained
using an outdated geometrical scaling fit \cite{DiasdeDeus78a,Buras74a}
to extrapolate the
slope parameter to this energy. This results in a large value
of $B>30$~GeV$^{-2}$ and hence (for a measured value of
$\sigma_{p-air}^{\rm inel}\approx 540 \pm 50$~mb) a small value of
$\sigma_{pp}^{\rm tot}$.
Using a different model for the
slope parameter \cite{Chou68,Bourrely84a}, for example, as advocated in
the review article of Block and Cahn \cite{Block85}, leads to a
slower increase in $B(s)$ and to a considerably larger value of
$\sigma_{pp}^{\rm tot}\approx 175_{-30}^{+40}$~mb \cite{Gaisser87a}.
The same applies to the Akeno analysis
and numbers~\cite{Honda93}.
Before discussing the slope parameter further, it is useful to review
briefly the basis of the very successful DL fits of cross sections,
which are based on a one-pomeron exchange model (e.g.\
\cite{Donnachie83} and Refs. therein). In such a model,
the energy dependence of the
total cross section for $AB$ scattering is given by \cite{Donnachie92b}
\begin{equation}
\sigma^{\rm tot}_{AB}(s) = X_{AB} \left(\frac{s}{s_0}\right)^\Delta
+ Y_{AB} \left(\frac{s}{s_0}\right)^{-\epsilon} \ .
\label{DL-par}
\end{equation}
The constants $X_{AB}$ and $Y_{AB}$ are target and projectile specific
whereas the effective powers
$\Delta\approx 0.08$ and $\epsilon\approx 0.45$
are independent of the considered particles $A$ and $B$.
Within the uncertainties of the measurements, this parameterization
is in agreement with almost all currently available data on $pp$, $p\bar p$,
$\pi p$, $\gamma p$, and $\gamma\gamma$ total cross sections.
It should be noted that the high energy $p\bar p$ data are not fully
self-consistent. There is some disagreement between measurements
of the total cross section at
$\sqrt{s} = 1800$ GeV. Whereas the E710 \cite{Amos90a}
and the preliminary E811 \cite{Avila97a} data
are in perfect agreement with the DL prediction
\cite{Donnachie92b}, the CDF measurement \cite{Abe94d}
shows a steeper rise of the total $p\bar p$ cross section.
New data from HERA ($\sigma^{\rm tot}_{\gamma p}$, \cite{Aid95b})
and LEP2 ($\sigma^{\rm tot}_{\gamma \gamma}$, \cite{Acciarri97a}),
although being compatible with an energy dependence of $\Delta \approx
0.08$, indicate that the cross section may rise
faster with energy than assumed in the DL fit.
{}Furthermore, in a recent fit to $pp$ and $p\bar p$ data \cite{Cudell97a}
a slightly higher value of $\Delta = 0.096_{-0.009}^{+0.012}$ was found.
Given the success of the one-pomeron exchange model in predicting the
total cross section, one might apply it
to derive further predictions. The one-pomeron amplitude can be
written as
\begin{equation}
{\cal A}(s,t) = g_{AB}(t) \left(\frac{s}{s_0}\right)^{\alpha(t)}
\label{ampl-elast}
\end{equation}
with $\alpha(t=0) = 1+\Delta$. Collider data
on elastic scattering suggest for small $|t|$ the
functional dependence $g_{AB}(t) = X_{AB} \exp\{\frac{1}{2} B_0 t\}$.
{}Following the predictions of Regge theory, $B_0$ is an energy-independent
constant. Consequently, the elastic slope $B(s)$ is given by
\begin{equation}
B(s) = B_0 + 2 \alpha^\prime(0) \ln \left(\frac{s}{s_0}\right)
\label{slope}
\end{equation}
where the parameter $\alpha^\prime(0)$ is a constant and has to be
determined from data \cite{Donnachie83}.
The elastic cross section follows from
\begin{equation}
\sigma^{\rm el}_{AB} = (1+\rho^2) \frac{(\sigma^{\rm tot}_{AB})^2}{16
\pi B(s)}\ .
\label{sig-el}
\end{equation}
At high energies the ratio $\rho$ between the real and the imaginary part of
the forward scattering amplitude is small and $\rho^2$ can be neglected.
In a model with geometrical scaling it is assumed that the increase of
the total cross section stems entirely from an increase of the
transverse size of the scattering particles.
The opacity of the particles is considered as constant. A direct
consequence of this assumption is the energy independence of the ratio
$R\,=\,\sigma^{\rm el}_{pp} (s) / \sigma^{\rm tot}_{pp} (s)$, which, in
combination with Eq.~(\ref{sig-el}), leads to the relation
\begin{equation}
B(s) = (1+\rho^2) \frac{\sigma_{pp}^{\rm tot}(s)}{16 \pi\; R}\ .
\label{geom-def}
\end{equation}
Over the ISR energy range $R\approx 0.17$, which was the value
used in (\ref{geom-def}) in Refs.~\cite{Baltrusaitis84,Honda93}.
\begin{figure}[!hbt]
\centerline{\psfig{figure=fig1.ps,width=12cm}}
\vspace*{5mm}
\caption{\em
Data on $pp$ and $p\bar p$ interactions~\protect\cite{PDG96}
are compared with the
DL parameterization \protect\cite{Donnachie92b} (lower curve) and the fit
of Ref.~\protect\cite{Cudell97a} (upper curve).
The predictions for the elastic cross section from
Eq.~(\protect\ref{sig-el})
and in the case of geometrical scaling (dotted curve) are also shown.
The data point at $\sqrt{s} = 30$
TeV is the original Fly's Eye estimate \protect\cite{Baltrusaitis84}.
\label{pptot}}
\end{figure}
In Fig.~\ref{pptot} the parameterizations of Refs.~\cite{Donnachie92b} and
\cite{Cudell97a} are compared to data. The data point at $\sqrt{s} = 30$
TeV is the original Fly's Eye estimate \cite{Baltrusaitis84}.
The prediction for geometrical scaling has been calculated using the
DL model for the total cross section.
Whereas both Regge
parameterizations are in agreement with data on total as well as elastic
cross sections, the geometrical scaling
model fails to describe the elastic scattering data.
This becomes even more obvious if one considers the predictions for the
energy dependence of the elastic slope parameter as shown in Fig.~\ref{ppsl}.
In contrast, the single pomeron exchange model
is in very good agreement with collider data. Such an
$a+b\ln(s)$ extrapolation of the slope parameter is often used to fit
data (for example, \cite{Goulianos83}) and also to estimate
cross sections and interaction lengths for cascade
calculations \cite{Kalmykov92e,Ranft95a}.
Remarkably, the minijet calculation of Block, Halzen and Margolis
\cite{Block92} (BHM) predicts a slope parameter that almost coincides with the
one-pomeron model extrapolation using $\alpha^\prime(0) = 0.3$
GeV$^{-2}$.
\begin{figure}[!hbt]
\centerline{\psfig{figure=fig2.ps,width=12cm}}
\vspace*{5mm}
\caption{\em
Elastic slope parameter for $pp$ and $p\bar p$ interactions.
The solid lines are the predictions of the one-pomeron exchange model with
$\alpha^\prime(0) = 0.25$ and $0.3$ GeV$^{-2}$. The dotted line
corresponds to geometrical scaling. The data are taken from
Refs.~\protect\cite{Castaldi85,Amos90b,Abe94b}.
\label{ppsl}}
\end{figure}
As recognized by DL, the single pomeron exchange model is not
consistent with unitarity. One way to see this is to note
from Eqs.~(\ref{DL-par},\ref{slope},\ref{sig-el}) that at asymptotically
high energy the unitarity requirement
\begin{equation}
{\sigma_{AB}^{\rm el}\over\sigma_{AB}^{\rm tot}}\;<\;{1\over 2}
\label{unitarity}
\end{equation}
is violated. We point out, however, that the model of BHM~\cite{Block92}
does satisfy unitarity and it gives a similar prediction to the
single pomeron fit over the energy range shown in Fig.~\ref{ppsl}.
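To make these numbers concrete, the following sketch (our illustration;
the $pp$ parameters $X=21.70$~mb, $Y=56.08$~mb, $\Delta=0.0808$,
$\epsilon=0.4525$ with $s_0=1$~GeV$^2$ are the values usually quoted
for the DL fit, while the slope normalization $B_0=8$~GeV$^{-2}$ is an
assumed value tuned to give $B\approx 17$~GeV$^{-2}$ at
$\sqrt{s}=1.8$~TeV) evaluates Eqs.~(\ref{DL-par}), (\ref{slope}) and
(\ref{sig-el}) together with the unitarity ratio of
Eq.~(\ref{unitarity}) at $\sqrt{s}=30$~TeV:
\begin{verbatim}
import numpy as np

MB_PER_GEV2 = 0.3894                  # (hbar c)^2: 1 GeV^-2 = 0.3894 mb

def sigma_tot(s):                     # eq. (DL-par) for pp, in mb; s in GeV^2
    return 21.70 * s**0.0808 + 56.08 * s**(-0.4525)

def slope(s, B0=8.0, alpha_p=0.3):    # eq. (slope), in GeV^-2; B0 is assumed
    return B0 + 2.0 * alpha_p * np.log(s)

def sigma_el(s, rho=0.0):             # eq. (sig-el), in mb
    st = sigma_tot(s) / MB_PER_GEV2   # convert mb -> GeV^-2
    return (1 + rho**2) * st**2 / (16 * np.pi * slope(s)) * MB_PER_GEV2

s = (30.0e3)**2                       # sqrt(s) = 30 TeV
print(sigma_tot(s))                   # ~115 mb
print(slope(s))                       # ~20 GeV^-2
print(sigma_el(s) / sigma_tot(s))     # ~0.29, still below the bound 1/2
\end{verbatim}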
We summarize some of the results of this section
in Fig.~\ref{PLfig}, by displaying them in
the ($\sigma_{pp}^{\rm tot}$--$B$) plane.
The shaded region corresponds to the region excluded by the unitarity
constraint of Eq.~(\ref{unitarity}). The points represent experimental
measurements at ISR (triangles) and ${\bar{p}p}$~collider (squares).
The dotted line indicates the relation between $B$ and
$\sigma_{pp}^{\rm tot}$ predicted by geometrical scaling with $R=0.17$ in
Eq.~(\ref{geom-def}). This line fails to describe the highest energy
measurements. The dashed line corresponds to the DL fit
to $\sigma_{pp}^{\rm tot}$ together with
equation (\ref{slope}) for the energy
dependence of the slope (with $\alpha^\prime(0) = 0.3$~GeV$^{-2}$).
Each point on the dashed line corresponds to a value of the center of mass
energy of the $pp$ (or $p\bar{p}$)
reaction. We have indicated with a circle
the point for $\sqrt{s} = 30$~TeV.
\begin{figure}[!hbt]
\centerline{\psfig{figure=fig3.ps,width=12cm}}
\caption{\em
$B$ dependence on $\sigma_{pp}^{\rm tot}$ and the values of
$\sigma_{pp}^{\rm tot}$ allowed by the Fly's Eye measurement. The shaded
area is excluded by the unitarity constraint. Solid symbols give
experimental data points. Dashed line shows $B$ as in DL fit; dotted line
shows geometrical scaling. The open point indicates
$\sigma_{pp}^{\rm tot}$ at $\protect\sqrt{s}$ = 30 TeV from the DL fit.
The five curved lines show the region allowed by $\sigma_{p-air}^{\rm prod}$
= 540 mb $\pm$1$\sigma$ and $\pm$2$\sigma$ (see text).
\label{PLfig}}
\end{figure}
Using the Glauber formalism a fixed value of the $p$--air cross
section can be represented as a curve in the ($\sigma_{pp}^{\rm tot}$--$B$)
plane. The five curved lines in Fig.~\ref{PLfig} indicate the set of
values of $\sigma_{pp}^{\rm tot}$ and $B$ that result in a
proton--air cross section of $\sigma_{p-air}^{\rm inel}$ of 540, $540\pm 50$
and $540 \pm 100$ mb, that is the central value and $\pm 1, 2$~standard
deviations of the Fly's Eye measurement at $\sqrt{s} = 30$~TeV.
The intersections of the curves corresponding to 590 and 490 mb
with the dotted line that describes geometrical scaling give the
(one standard deviation) allowed interval for $\sigma_{pp}^{\rm tot}$,
as estimated in the original Fly's Eye publication.
However, it is clear that any reasonable extrapolation of the
collider data (for the $B$--$\sigma_{pp}^{\rm tot}$ relation) will result in the
estimate of a higher central value for the $pp$ cross section and in a larger
uncertainty. Nominally the prediction of Donnachie and Landshoff for
$\sigma_{pp}^{\rm tot}$ at $\sqrt{s} = 30$~TeV is one standard
deviation below the Fly's Eye measurement.
It is important to notice that the
experimentally measured and published inelastic $p$--air cross
section is only that part of the total cross section which corresponds to
particle production.
Following \cite{Gaisser87a} we write this cross section as
\begin{equation}
\sigma_{p-air}^{\rm prod} = \sigma_{p-air}^{\rm tot} -
\sigma_{p-air}^{\rm el} - \sigma_{p-air}^{\rm q-el}\ ,
\label{sig-inel}
\end{equation}
where $\sigma_{p-air}^{\rm q-el}$ is the quasielastic
$p$--air cross section corresponding to scattering
processes where the nucleus gets excited without direct
particle production. The Glauber formalism~\cite{Glauber70a}
gives explicit expressions for all terms in Eq.~(\ref{sig-inel}).
Unfortunately, there is ambiguity in the literature about
the designation of the production cross section. It has also
been called $\sigma_{p-air}^{\rm inel}$
in experimental~\cite{Baltrusaitis84,Baltrusaitis85,Honda93} and
theoretical~\cite{Gaisser87a} papers and it
is also often referred to as absorptive
cross section \cite{Nikolaev86a,Durand88a,Kopeliovich89a,Nikolaev93b}.
In the hope of removing this confusion, we introduce the notation
`prod' to represent the inelastic cross section in which at least
one new hadron is produced in addition to nuclear fragments.
\section*{Uncertainties in the $p$--air cross section measurement}
In addition to uncertainties in converting from $\sigma_{p-air}^{\rm prod}$
to $\sigma_{pp}^{\rm tot}$, there are significant uncertainties in
the determination of $\sigma_{p-air}^{\rm prod}$ itself. Both at
Fly's Eye~\cite{Baltrusaitis84} and at Akeno~\cite{Honda93}, the
approach is to look at the frequency of deeply penetrating showers
and to assign a corresponding attenuation length ($\Lambda$) on
the assumption that, for a given energy, the most deeply
penetrating showers are initiated by protons.
The Fly's Eye group measures the depth of maximum development ($X_{\rm max}$)
distribution for air showers in a relatively narrow interval of
$S_{\rm max}$, where $S_{\rm max}\,\propto\,E_0$ is the shower size at maximum.
The tail of that distribution, well after its peak, is a measure of the
depth of the first interaction convoluted with the intrinsic fluctuations
in the shower development.
The Akeno group selects deeply penetrating showers by cutting on
showers with the highest size, $S$, at the observation level in narrow
bins of the shower muon size $S_\mu$. The reason for this procedure is
that $S_\mu$ is nearly proportional to the primary energy $E_0$.
$\Lambda$ is then derived from the frequency of such showers at
different zenith angles, i.e. from the decrease of the frequency
with atmospheric depth, which is a different measure of
attenuation from that used in the Fly's Eye approach.
The model dependence is then compressed into a single parameter $a>1$
in the relation
\begin{equation}
\Lambda\;=\;a\times\lambda_{p-air}\;=\;a\times
{14.5\,m_p\over\sigma_{p-air}^{\rm prod}}.
\end{equation}
Here $\lambda_{p-air}$ is the interaction length of protons in air, which
has a mean atomic mass of 14.5. The effective value of $a$ for proton
initiated showers depends on the pion inelastic cross section in air and
on the inclusive cross sections in the proton and pion inelastic
interactions~\cite{Gaisser82,Bellandi95a}.
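In practice this conversion is a one-liner; the sketch below (our
addition) assumes a mean target mass of 14.5 atomic mass units and
reproduces, to within a few mb of rounding, the cross sections listed
in Table~\ref{a-tab} for $\Lambda=70$~g/cm$^2$:
\begin{verbatim}
AMU_G = 1.66054e-24                 # atomic mass unit in grams
MB_CM2 = 1.0e-27                    # cm^2 per mb

def sigma_p_air(Lam, a):            # invert Lambda = a * 14.5 m / sigma
    return a * 14.5 * AMU_G / (Lam * MB_CM2)       # result in mb

for a in (1.47, 1.20, 1.12):        # values of a from Table 1
    print(a, round(sigma_p_air(70.0, a)))          # ~506, 413, 385 mb
\end{verbatim}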
The Fly's Eye proton air cross section value of 540$\pm$50 mb is
derived by fitting the tail of the $X_{\rm max}$ distribution to an
exponential with a slope of $\Lambda$ = 70$\pm$6 g/cm$^2$ and then using
$a \, \simeq \,$1.60, which is similar to the value calculated in
Ref.~\cite{Ellsworth82a}. The $\Lambda$ values in Ref.~\cite{Ellsworth82a}
are calculated by simulating air showers assuming different energy
dependences of $\sigma_{p-air}^{\rm prod}$ and fitting the tails of the
resulting $X_{\rm max}$ distributions. $\Lambda$ values are then compared
to $\sigma_{p-air}^{\rm prod}$ at primary energy $E_0>$3$\times$10$^{17}$~eV.
The calculation was performed with an essentially $pp$ scaling interaction
model~\cite{Hillas81a}.
Models with even very modest scaling violation, that also account for
the nuclear target effect yield smaller values of $a$. It is not possible
to separate the effects of the energy dependence of the inelastic cross
section from those of the scaling violation in the `one parameter'
approach. The relevant parameter in $p$--air interactions is the rate
of energy dissipation by the primary proton
$K^{\rm inel}_{p-air}/\lambda_{p-air}$. The inelasticity coefficient is
$K_{p-air}^{\rm inel} \, = \, {{E_0 - \langle E_L \rangle}
\over {E_0}}$, where $E_0$ is the primary proton energy in the lab
system and
$\langle E_L \rangle$ is the average lab system energy of the leading
nucleon. The equally strong contribution of $\pi$--air collisions is
even more difficult to quantify in simple terms. On the other hand $a$
tends to saturate for very strong scaling violation models, because the
nuclear target effects in such models are small.
The Akeno experiment uses
calculations~\cite{Kasahara79}, also made with a model implementing
radial scaling. In Ref.~\cite{Bellandi95a} the results of Akeno have been
reanalyzed making use of an interaction model with scaling violations,
resulting in the derivation of lower values for $a$ than used by
the Akeno experiment.
We have updated the calculations of Ref.~\cite{Ellsworth82a} to
illustrate how different values of $\sigma_{p-air}^{\rm prod}$ can
be extracted from the same measured value of $\Lambda$ depending
on the inclusive cross sections of the interaction model.
The calculations were performed with three interaction models
characterized by $K_{p-air}^{\rm inel}$: the scaling model of
Hillas~\cite{Hillas81a}, SIBYLL~\cite{Fletcher94} and a SIBYLL--based
model with significantly stronger scaling violation in $pp$ interactions
(High--K). All three calculations use the same
input $\sigma_{p-air}^{\rm prod}=520$~mb
($\lambda_{p-air} \, = \, 46 \, {\rm g/cm}^2 $) at
$\sqrt{s}$ = 30 TeV. The resulting values of
$a$ are given in Table~\ref{a-tab}. The last column of the table
gives the values of $\sigma_{p-air}^{\rm prod}$ that would be inferred
from the Fly's Eye measurements if the corresponding value of
$a$ had been used.
The effects of scaling violation on the shower attenuation rate
used by the Akeno experiment~\cite{Honda93} are similar, although
the numerical values of $a$ are somewhat different.
\begin{table}[htb]
\caption{\em Cross section values that can be extracted from the
measured $\Lambda$ = 70$\pm$6 g/cm$^2$ with different interaction
models.
\label{a-tab}
}
\medskip
\begin{tabular}{lcccc}
Model & $\langle K_{pp}^{\rm inel} \rangle$
& $\langle K_{p-air}^{\rm inel} \rangle$
& $a(\sqrt{s}$ = 30 TeV) & $\sigma_{p-air}^{\rm prod}$, mb\\
\tableline
Hillas & --- & 0.50 & 1.47$\pm$0.05 & 504 \\
SIBYLL & 0.57 & 0.67 & 1.20$\pm$0.05 & 411 \\
High--K & 0.64 & 0.74 & 1.12$\pm$0.05 & 384 \\
\end{tabular}
\end{table}
\section*{Discussion}
Cosmic--ray experiments detect air showers that result from
interactions of particles with energy up to and exceeding
10$^{11}$ GeV. Such observations have the potential to provide
information about the growth of $\sigma_{p-air}^{\rm prod}$
up to $\sqrt{s}\, \simeq \, 10^5$ GeV. The long lever
arm would be helpful for discriminating among
models that give nearly identical results
at lower energy. Here we attempt to summarize the problems and
complications involved in the measurement and interpretation
of $\sigma_{p-air}^{\rm prod}$ in cosmic ray experiments.
The experimental shower sets are inevitably contaminated by
showers initiated by heavier nuclei. Neglecting this
contamination would result in an overestimate of
$\sigma_{p-air}^{\rm prod}$. To minimize this contamination,
the Fly's Eye cross section was estimated by analyzing
only the most penetrating showers, that is, a
subset of 20\% of the entire data sample, strongly enriched
in protons. A subsequent analysis
\cite{Gaisser93} found that the composition of primary cosmic rays
may be very heavy in the energy region considered. If so, the
contamination
of heavy primaries could be larger than what was
estimated in the original work leading to
an overestimate of $\sigma_{p-air}^{\rm prod}$.
The cross section estimates
in Ref.~\cite{Baltrusaitis84,Baltrusaitis85,Honda93} were
based on interaction models with scaling particle
momentum distributions.
Models with scaling violations predict faster shower development
(e.g. smaller values of $a$). If such models were used they
would imply a smaller $p$--air cross section (as illustrated
in Table~\ref{a-tab}). In addition, such models could also be
consistent with a smaller fraction of heavy nuclei.
If the shower development is described with a single parameter,
as done in the first generation cross section estimates,
it is impossible to distinguish between the effects of the
proton and pion cross sections and the inclusive distributions
of the secondary particles.
Once $\sigma_{p-air}^{\rm prod}$ is determined, the Glauber formalism
can be used to infer $\sigma_{pp}^{\rm tot}$ with extrapolations
for $B(s)$ based on all available collider data.
Previous analyses~\cite{Baltrusaitis84,Baltrusaitis85,Honda93}
used a parameterization based on data up through ISR energies,
which fails to describe recent high energy measurements and leads to an
underestimation of $\sigma_{pp}^{\rm tot}$.
Our basic conclusion is that cosmic-ray values of $\sigma_{pp}^{\rm
tot}$
do not at present strongly constrain extrapolations of fits
of this cross section up to collider energies. With the prospect
of much more precise experimental measurements forthcoming from the
high--resolution Fly's Eye and other proposed
experiments~\cite{NewExp} there is the potential for
much better estimates of the proton--proton cross section.
Realizing this potential will depend also on the use of
a new generation \cite{Knapp96a} of shower simulations based on
interaction models that incorporate all the physics of minimum
bias interactions up to collider energies and a correspondingly
detailed treatment of nuclear effects. The corresponding analysis
should involve a full Monte Carlo simulation of each experimental
data set rather than characterizing the simulation with a single
parameter.
\noindent
{\bf Acknowledgements.} \\
One of the authors (PL) wishes to thank the Bartol
Research Institute for hospitality during the time this work was completed.
One of the authors (R.E.) is grateful to J.\
Ranft and S.\ Roesler for many discussions.
\section{Introduction}
\label{}
T CrB is one of the few known Galactic recurrent novae (Warner 1995,
Schaefer 2010), with outbursts recorded in 1866 and 1946. They have been
spectacular events, peaking around 2.0 mag (Pettit 1946), displaying nearly
identical lightcurves characterized by an extremely fast rise to maximum and
a rapid decline, taking $t_2$=3.8 days to decline by 2 mag (Payne-Gaposchkin
1957). The spectroscopic evolution (Sanford 1946, 1949) was - in modern
terms - that of an He/N nova (Williams 1992), reaching very high
ionization conditions as indicated by the presence of strong [FeX] and
[FeXIV] coronal lines (Sanford 1947). Peculiar to T CrB is the presence,
on both outbursts, of a secondary, fainter and broader maximum $\sim$110 days
past the primary maximum, whose physical nature is still debated (e.g.
Webbink 1976, Cannizzo \& Kenyon 1992, Selvelli, Cassatella, \& Gilmozzi
1992, Ruffert, Cannizzo, \& Kenyon 1993).
The donor star in T CrB is an M3III, filling its Roche lobe (Bailey 1975,
Yudin \& Munari 1993), on a 227.55 day orbit (Kenyon \& Garcia 1986, Fekel
et al. 2000) around a WD companion (Selvelli, Cassatella, \& Gilmozzi 1992,
Belczynski \& Mikolajewska 1998). The presence of a cool giant makes T CrB
also a member of the class of symbiotic binaries (Allen 1984, Kenyon 1986),
similarly to the other symbiotic recurrent novae RS Oph, V745 Sco and V3890
Sgr (Munari 1997).
The brightness in quiescence ($V$$\sim$10 mag) and the favourable position
on the sky (absence of seasonal gaps in the observability from the northern
hemisphere) have fostered continued interest and study of T CrB. In
quiescence, typical symbiotic binaries display a rich and high ionization
emission line spectrum (comprising [NeV], [CaV], [FeVII], HeII, and Raman
scattered OVI), superimposed on the absorption spectrum of the cool giant,
with the nebular continuum veiling its molecular absorption bands in the
yellow and overwhelming them in the blue (Allen 1984, Munari \& Zwitter 2002,
Skopal 2005). As a symbiotic binary, the optical quiescence spectrum of T
CrB is atypical in showing very little else than the M3III absorption
spectrum. Most of the time, only a weak emission in H$\alpha$ is noticeable
on low resolution spectra (Kenyon 1986). On rare occasions, a surge in
activity causes the optical spectrum of T CrB to show something more typical
of symbiotic binaries, e.g. Balmer lines and continuum in emission, and
sometimes even the appearance of a weak HeII 4686 in emission (Iijima 1990,
Anupama \& Prabhu 1991).
In this paper we report on the super-active conditions displayed by T CrB
during 2015 (in the following SACT-2015 for short), conditions never seen
before. SACT-2015 appears to be much stronger, both photometrically and
spectroscopically, than previous periods of enhanced activity recorded after
the 1946 nova outburst.
\begin{table*}
\caption{Our $B$$V$$R_{\rm C}$$I_{\rm C}$ photometric observations of T CrB.
The full table is available electronically via CDS, a small portion is
shown here for guidance on its form and content.}
\centering
\includegraphics[width=120mm]{Table_1_as_published.ps}
\label{tab1}
\end{table*}
\section{Observations}
$B$$V$$R_{\rm C}$$I_{\rm C}$ optical photometry of T CrB is regularly obtained since
2006 with ANS Collaboration telescopes N. 11 and 36, located in Italy in
Trieste and Cembra, respectively. The star has been observed on 205 nights,
from May 11, 2006 to Dec 20, 2015. The operation of ANS Collaboration
telescopes is described in detail by Munari et al. (2012) and Munari \&
Moretti (2012). The same local photometric sequence, calibrated by Henden
\& Munari (2006) against Landolt equatorial standards, was used at both
telescopes on all observing epochs, ensuring a high consistency of the data.
The $B$$V$$R_{\rm C}$$I_{\rm C}$ photometry of T CrB
is given in Table~1, where the quoted uncertainties are the total error
budget, which quadratically combines the measurement error on the variable
with the error associated to the transformation from the local to the
standard photometric system (as defined by the photometric comparison
sequence). All measurements were carried out with aperture photometry, the
long focal length of the telescopes and the absence of nearby contaminating
stars making it unnecessary to resort to PSF-fitting.
Low resolution spectra of T CrB were obtained with the 1.22m
telescope + B\&C spectrograph operated in Asiago by the Department of
Physics and Astronomy of the University of Padova. The CCD camera is a
ANDOR iDus DU440A with a back-illuminated E2V 42-10 sensor, 2048$\times$512
array of 13.5 $\mu$m pixels. It is highly efficient in the blue down to the
atmospheric cut-off around 3250~\AA. The spectral dispersion is 2.31 Ang/pix
and the spectral resolution is constant at $\sim$2.2 pix, with the spectra
extending from $\sim$3300 to $\sim$8050~\AA. The slit width has been kept
fixed at 2 arcsec, and the slit always aligned with the parallactic angle
for optimal absolute flux calibration.
High resolution spectra were obtained with the Echelle spectrograph mounted
on the 1.82m Asiago telescope. It is equipped with an EEV~CCD47-10 CCD,
1024$\times$1024 array, 13 $\mu$m pixel, covering the interval
$\lambda\lambda$~3600$-$7300~\AA\ in 32 orders, at a resolving power of
20\,000 and without inter-order wavelength gaps.
\begin{figure*}[!ht]
\centering
\includegraphics[width=16cm]{Figure_1.ps}
\caption{{\em Left}: the 2006-2015 $B$$V$$R_{\rm C}$$I_{\rm C}$
lightcurves of T CrB based on our data in Table~1 (solid and open
circles mark observations obtained with ANS Collaboration telescopes 11
and 36, respectively). The surge in brightness during 2015 is
prominent. {\em Right}: the quiescence part of the data at left is here
phase plotted against the P=227.55 days orbital period. The curves are
low order Legendre polynomials, symmetric with respect to the WD
transit at lower conjunction (phase 0.5), to provide a simple fit to
guide the eye to the ellipsoidal modulation.}
\label{fig1}
\end{figure*}
\section{Photometric evolution during 2006-2015}
The 2006-2015 $B$$V$$R_{\rm C}$$I_{\rm C}$ lightcurves of T CrB from our CCD
observations of Table~1 are plotted on the left panel of Figure~1. The much
brighter state of T CrB in 2015 is evident, and it becomes increasingly
pronounced toward shorter wavelengths. On the right panel of Figure~1, the 2006-2014 data
(preceding SACT-2015) are phase plotted against the orbital ephemeris
\begin{equation}
{\rm Min} I = 2431933.83 + 227.55 \times E
\end{equation}
which gives the epochs of primary minima (passages of the M3III companion at
inferior conjunction) for the orbital period derived by Kenyon \& Garcia
(1986) and Fekel et al. (2000). The resulting phased lightcurve is
dominated by the well known ellipsoidal distortion of the M3III giant, first
reported by Bailey (1975) at optical wavelengths and by Yudin \& Munari
(1993) in the infrared. The over-plotted curves are simple fits to guide the
eye, in particular to demonstrate that: ($a$) the amplitude of
the ellipsoidal modulation, as given by the fitting curves, is $\Delta
B$=0.63, $\Delta V$=0.49, $\Delta R_{\rm C}$=0.42, and $\Delta I_{\rm
C}$=0.34 mag; ($b$) the secondary minimum (phase 0.5, WD passing at inferior
conjunction) is shallower at shorter wavelengths, a fact due to
the irradiation of the cool giant by the hot WD companion; and ($c$)
the dispersion of points around the fitting curves increases toward shorter
wavelengths and is well in excess of the small observational errors (cf
Table~1). The reason for that is associated with the erratic
behaviour of accretion phenomena in the system. In fact, a long record of
observations document a large amplitude {\em flickering} affecting time series
observations of T CrB (eg. Zamanov \& Bruch 1998, Zamanov et al. 2004,
Gromadzki et al. 2006, Dobrotka et al. 2010).
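For reproducibility, the phase folding of Eq.~(1) and a guide-the-eye
fit of the kind over-plotted in Figure~1 can be sketched as follows
(this is our illustration, not the pipeline actually used; a cosine
series in the orbital phase is symmetric about phases 0.0 and 0.5 by
construction, and the data below are synthetic):
\begin{verbatim}
import numpy as np

T0, P = 2431933.83, 227.55          # ephemeris of Eq. (1)

def phase(jd):
    return ((jd - T0) / P) % 1.0    # phase 0 = M3III at inferior conjunction

def fit_ellipsoidal(jd, mag, order=2):
    ph = phase(jd)
    A = np.column_stack([np.cos(2 * np.pi * k * ph) for k in range(order + 1)])
    coeff, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return coeff                    # c0 + c1 cos(2 pi phi) + c2 cos(4 pi phi)

# synthetic data mimicking a 0.5 mag peak-to-peak ellipsoidal modulation:
jd = 2453867.0 + np.linspace(0.0, 3000.0, 200)
mag = 10.0 + 0.25 * np.cos(4 * np.pi * phase(jd)) + 0.02 * np.random.randn(200)
print(fit_ellipsoidal(jd, mag))     # recovers c2 ~ 0.25
\end{verbatim}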
\begin{figure}[!ht]
\centering
\includegraphics[width=7.8cm]{Figure_2.ps}
\caption{The photometric observations of T CrB during the 2015 super-active
period are phase-plotted against the orbital ephemeris (Eq. 1), with
mean curves for quiescence imported from Figure~2.}
\label{fig2}
\end{figure}
The photometric data on T CrB secured during SACT-2015 are not inserted in
the right panel of Figure~1, which only deals with the preceding quiescence.
The SACT-2015 data are instead plotted in Figure~2, against the same
ephemeris as used for Figure~1, from which the polynomial fits are also
copied. From Figure~2 we infer that: (i) the shorter the wavelength, the
larger the increase in brightness during SACT-2015: compared with preceding
quiescence (as given by the fitting curves in Figures~1 and 2), the increase
is $\Delta$$B$=0.72, $\Delta$$V$=0.28, $\Delta$$R_{\rm C}$=0.21, and
$\Delta$$I_{\rm C}$=0.09 mag; (ii) during SACT-2015, the orbital modulation
disappears from the $B$ lightcurve, and the depth of secondary minimum
(orbital phase 0.5) is reduced in the $V$ and $R_{\rm C}$ lightcurves; and
(iii) contrary to previous high states that did not influence T CrB
brightness at longer wavelengths, the effect of SACT-2015 extends well into
the far red, with all $I_{\rm C}$ measurements lying above the polynomial
fit to quiescence data. Anticipating the spectroscopic results of the
following sections, these photometric signatures are due to: (1) nebular
emission from a much larger fraction of the M3III wind now ionized by the
hot source, whose visibility is not affected by orbital motion, and (2)
increased irradiation and therefore higher re-emission from the side of the
M3III facing the hot source. In this respect it is interesting to note that
the dispersion of the observations is similar during SACT-2015 and
quiescence, with $\sigma$($B$)=0.115 and 0.120 mag, respectively. This
suggests that the increased output from the hot source is powered by the
same accretion processes and associated instabilities that dominate during
quiescence. In addition, the
large amplitude and short time scale (of the order of a day or a few days at
most) of these erratic fluctuations, suggests that the electron density in
the ionized gas (dominating the system brightness in $B$) is high enough to
drive a short recombination time scale, so that the short time scale and
large amplitude of the variations in the photo-ionization input are not
washed out by reprocessing from the recombining gas.
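As an order-of-magnitude check of the last statement, assume case-B
recombination at $\sim$10$^4$~K, with recombination coefficient
$\alpha_{\rm B}$$\approx$2.6$\times$10$^{-13}$ cm$^3$~sec$^{-1}$ (a standard
textbook value, quoted here only for illustration). A recombination time
scale $t_{\rm rec}=(n_{\rm e}\,\alpha_{\rm B})^{-1}$ not exceeding one day
then requires
\[
n_{\rm e} \;\gtrsim\; \frac{1}{\alpha_{\rm B}\,t_{\rm rec}} \;\approx\;
\frac{1}{2.6\times10^{-13}\,\times\,8.6\times10^{4}} \;\approx\;
4\times10^{7}\ {\rm cm^{-3}}.
\]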
\section{Long term 1947-2015 evolution and previous active states}
To place SACT-2015 into a broader perspective, we investigated the long term
brightness evolution of T CrB following the 1946 nova outburst. The
outburst was over by the summer of 1947 when the star had returned to
quiescence brightness, $V$$\sim$10 mag. This value is similar to what
F.~W.~A. Argelander measured in 1855 - before the 1866 nova outburst - for
his Bonner Durchmusterung (BD) star atlas, and to the brightness that
characterized the star in between the two nova eruptions of 1866 and 1946
(Barnard 1907, Campbell \& Shapley 1923).
To reconstruct the 1947-2015 lightcurve of T CrB, we chose to use the
120,000 visual estimates collected by the AAVSO (privately communicated by
Stella Kafka, Director). This choice is based on two primary reasons.
The first is that the visual estimates collected by the AAVSO constitute an
uninterrupted record of T CrB brightness during the last 70 years, whereas
other sources of information (measurements in any photometric band) are too
sparse in time and scattered among so many different observers,
instrumental combinations and photometric systems to be of use for our
goal.
The second argument is based on the fact that during the last 70 years, each
epoch has been characterized by a different type of instrument used to record
stellar brightness: initially unfiltered blue-sensitive photographic
emulsions, then filtered panchromatic photographic emulsions, followed by
photoelectric photometers, and finally by CCD devices. The very red color
of T CrB (much redder than most of the suitable comparison stars around the
variable) has impacted in different ways and by different amounts the
photometry collected with such a broad assortment of instruments. In
addition, only rarely have the data been properly transformed to standard
systems, most of the measurements being just differential with respect to a
single field star (usually of unmatching colors). On the contrary, visual
estimates seem to be far more stable over different epochs and subsequent
generations of observers: the comparison sequence has not changed much and
the many different observers participating in the 70 years of AAVSO
monitoring have used the same measuring device, their unfiltered eyes.
\begin{table}[!Ht]
\caption{Integrated fluxes (in units of 10$^{-13}$ erg cm$^{-2}$
sec$^{-1}$) of emission lines in the spectrum of
T CrB for 2015-10-16 from Figure~4.}
\centering
\includegraphics[height=165mm]{Table_2.ps}
\label{tab2}
\end{table}
\subsection{Mean magnitudes}
In tracking the secular evolution of T CrB following the 1946 outburst, we
are looking for subtle effects (of the order of hundredths of a magnitude),
much less than the scatter intrinsic to visual estimates. We have to filter
out the noise.
\begin{figure}[!Ht]
\centering
\includegraphics[width=7.5cm]{Figure_3.ps}
\caption{Long term evolution of the brightness of T CrB in quiescence,
from 120,000 AAVSO visual estimates collected after the 1946 nova
outburst. Each point represents the mean value of the visual estimates
covering an orbital cycle (227.55 days).}
\label{fig3}
\end{figure}
The simplest way could be to average all data within a given time step. Such
a straight average would however most likely generate spurious signals. In
fact, the majority of the visual estimates are concentrated during the
summer and autumn months, when T CrB is best located in the evening sky.
Given the long orbital period of the system (about 62\% of 1 year), this
would mean that in different years the system is on average observed at
different orbital phases. Given the large amplitude of the orbital
modulation, this would cause a spurious beating signal.
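The period of the spurious beat that such uneven sampling would imprint on
straight seasonal averages is, for reference,
\[
\frac{1}{P_{\rm beat}} \;=\; \frac{1}{227.55\ {\rm d}} - \frac{1}{365.25\ {\rm d}}
\qquad\Longrightarrow\qquad P_{\rm beat} \approx 604\ {\rm d} \approx 1.65\ {\rm yr}.
\]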
To overcome the problem, we have divided the AAVSO data into contiguous
orbital cycles (110 cycles from 1947 to the current time), and $\chi^2$ fitted
to them the average phased lightcurve for quiescence (the continuous curve
for $V$ band in Figure~1). In this way, no matter how unevenly distributed
in orbital phase the observations could be, a correct estimate for the
magnitude averaged along a whole orbital cycle is obtained. The values so
derived are plotted in Figure~3.
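The following minimal Python sketch illustrates the procedure (hypothetical
code, not our actual pipeline; it assumes unit weights and that the
quiescence curve of Figure~1 is available as a callable
\texttt{template(phase)} returning magnitudes):
\begin{verbatim}
import numpy as np

T0, P = 2431933.83, 227.55        # ephemeris of Eq. (1)

def cycle_mean(jd, vis, template):
    # Chi^2 fit (unit weights) of the fixed-shape quiescence template
    # to the visual estimates of one orbital cycle; the only free
    # parameter is the magnitude zero point.  Returns the magnitude
    # averaged over the whole orbital cycle.
    phase = ((np.asarray(jd) - T0) / P) % 1.0
    offset = np.mean(np.asarray(vis) - template(phase))  # LSQ zero point
    grid = np.linspace(0.0, 1.0, 1001)
    return offset + np.mean(template(grid))              # cycle average
\end{verbatim}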
\subsection{The secular trend and past active phases}
\begin{figure*}[!Ht]
\centering
\includegraphics[width=16cm]{Figure_4.ps}
\caption{{\em Top:} spectra of T CrB for the quiescence (2012-09-03) and
super-active (2015-10-16) states are compared at exactly
the same orbital phase (0.53), to cancel out any dependence on orbital
aspect. {\em Bottom:} their subtraction results in the nebular
spectrum plotted here. The principal emission lines are identified and
their integrated fluxes are listed in Table~2.}
\label{fig4}
\end{figure*}
\begin{figure*}[!Ht]
\centering
\includegraphics[width=16cm]{Figure_5.ps}
\caption{Spectra for M2III and M3III templates from the
atlas of Fluks et al. (1994), and their subtraction, showing the
largest difference in the red.}
\label{fig5}
\end{figure*}
The secular trend of T CrB in quiescence, shown in Figure~3, is
characterized by three basic features: (i) an initial faster decline, at a
mean rate of 0.0178 mag per orbital period, lasting for the first $\sim$12
years (1947-1959), (ii) a slower decline, at a mean rate of 0.0025 mag per
orbital period, characterizing the following $\sim$55 years until the present
time, and (iii) a few episodes of enhanced brightness occurring after 1975.
Four such episodes are clearly present in Figure~3: the first three occurred
in 1975-1985, 1996-1997 and 2001-2004, and were of decreasing peak
brightness; the fourth and last one, SACT-2015, is characterized by the
largest amplitude with respect to the underlying secular trend.
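Expressed as yearly rates (one orbital period being 227.55/365.25 of a
year), the two mean declines correspond to
\[
0.0178 \times \frac{365.25}{227.55} \approx 0.029
\qquad{\rm and}\qquad
0.0025 \times \frac{365.25}{227.55} \approx 0.004\ {\rm mag\,yr^{-1}},
\]
respectively.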
\begin{table}[!Ht]
\caption{Integrated fluxes (in units of 10$^{-13}$ erg cm$^{-2}$
sec$^{-1}$) from Asiago 1.22m+B\&C spectra for H$\alpha$, H$\beta$,
HeI 5876~\AA, and HeII 4686~\AA\ emission lines during the 2015
super-active state of T CrB.}
\centering
\includegraphics[width=62mm]{Table_3.ps}
\label{tab3}
\end{table}
\begin{figure*}[!Ht]
\centering
\includegraphics[width=16cm]{Figure_6.ps}
\caption{Sample of low resolution spectra of T CrB obtained during the
2015 super-active phase, arranged in order of increasing HeII
4686~\AA\ flux.}
\label{fig6}
\end{figure*}
\begin{figure*}[!Ht]
\centering
\includegraphics[angle=270,width=16.5cm]{Figure_7.ps}
\caption{Zooming on different portions of the spectra of T CrB for the
2015 super-active state, to document the presence of OIV and [NeV] in
emission and the main OIII and NIII lines produced by the Bowen
fluorescence mechanism.}
\label{fig7}
\end{figure*}
\begin{figure*}[!Ht]
\centering
\includegraphics[width=16.5cm]{Figure_8.ps}
\caption{Comparison of the spectral appearance of T CrB around
H$\alpha$ and HeII 4686~\AA\ during quiescence and the 2015 super-active
state. Asiago 1.82m+Echelle spectra are compared at exactly the same
orbital phase, to cancel out any dependence on orbital aspect.}
\label{fig8}
\end{figure*}
\section{Spectroscopy of the 2015 super-active state}
In addition to the largest increase in brightness, SACT-2015 has also seen
the greatest spectroscopic changes displayed by T CrB since the 1946 nova
outburst. The most obvious changes are the unprecedented intensity attained
by HeII 4686 (in excess of H$\gamma$), the large intensity of OIII and NIII
lines involved in the Bowen fluorescence mechanism, and the appearance of
high ionization lines like [NeV] 3427, all these on top of strong nebular
and Balmer continua.
The spectroscopic changes are best illustrated by Figure~4, where two
spectra for exactly the same orbital phase ($\theta$=0.53, WD passing at
inferior conjunction) are compared, one from SACT-2015 and the other from
the preceding quiescence. The 2015-10-16 spectrum corresponds to the
strongest recorded intensity of HeII during SACT-2015, while the quiescence
2012-09-03 spectrum is well representative of the usual (and dull) appearance
of T CrB during quiescence, when only some weak emission in H$\alpha$ is
visible over an otherwise normal M3III absorption spectrum. The result of
the subtraction of the quiescence (2012-09-03) from the SACT-2015 spectrum
(2015-10-16) is plotted on the lower panel of Figure~4. It is a fine
example of emission from ionized gas of high density (plots built in the same
way from other SACT-2015 spectra provide similar results). Integrated fluxes
of the emission lines identified in this nebular spectrum are provided in
Table~2.
At the reddest wavelengths, weak residual features from the TiO molecular
bands are left by the subtraction of the two spectra of Figure~4. To
evaluate them, in Figure~5 we plot template spectra for M2III and M3III
giants taken from the atlas of Fluks et al. (1994), after scaling them to
the $V$ and $V-I_{\rm C}$ mean values of T CrB in quiescence at the same
orbital phase ($\theta$=0.53). In the same Figure~5 we plot their
difference, which is also over-plotted on the nebular spectrum of Figure~4,
where it provides a perfect match to the weak features left over at the
reddest wavelengths by the subtraction of the SACT-2015 and quiescence
spectra. This indicates that the irradiation by the hot source has raised
the temperature of the facing side of the giant companion, from that of an
M3III during quiescence to that of an M2III during SACT-2015. This change
corresponds to an increase in the effective temperature of
$\Delta$$T_{\rm eff}$$\sim$80~K, averaging between $\Delta$$T_{\rm eff}$=90~K
and $\Delta$$T_{\rm eff}$=70~K reported by Ridgway et al. (1980) and Fluks
et al. (1994), respectively, as the difference in temperature between M2III
and M3III giants.
The intensity of HeII has varied considerably during SACT-2015, as
illustrated by the sample of spectra presented in Figure~6. In this figure
we also plot for comparison the spectrum for 2014-11-02, which caught T CrB
during the transition from quiescence to SACT-2015 and shows increased
nebular emission (both continuum and lines) and the first appearance of HeII.
Table~3 lists the integrated flux for a few representative lines as measured
on our SACT-2015 low resolution spectra.
\section{The nebular spectrum}
Some comments are in order here concerning the nebular spectrum of T CrB
during SACT-2015, a detailed photo-ionization modeling being pursued
elsewhere.
The fluxes of emission lines listed in Tables~2 and 3 are little affected by
the low extinction experienced by T CrB. The 3D Galactic dust model by
Munari et al. (2014) indicates a total interstellar extinction
$E_{B-V}$=0.048 along the line of sight to T CrB, and similarly very low
values of $E_{B-V}$=0.058 and $E_{B-V}$=0.067 are derived from the 3D
Galactic dust distributions of Schlegel, Finkbeiner, \& Davis (1998) and Schlafly
\& Finkbeiner (2011), respectively. Cassatella et al. (1982), from analysis
of IUE ultraviolet spectra, derived a slightly larger reddening,
$E_{B-V}$=0.15, which could indicate some contribution by local
circumstellar matter around T CrB. After dereddening, the ratio of Balmer
line fluxes listed in Table~2 suggests a negligible optical depth in
H$\alpha$, following the analysis of Barker (1978) and Feibelman (1983).
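To see how small the correction is, adopt $E_{B-V}$=0.048 and a standard
$R_V$=3.1 extinction law, with the approximate ratio
$A_{{\rm H}\beta}/A_V\approx1.16$ (the exact coefficient depends on the
adopted law; the numbers here are only illustrative); then
\[
A_{{\rm H}\beta} \approx 1.16 \times 3.1 \times 0.048 \approx 0.17\ {\rm mag},
\]
i.e. a flux correction of only $\sim$17\% at H$\beta$, decreasing toward
longer wavelengths.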
The spectra of T CrB in Figure~4 display a strong $\lambda$ 4640~\AA\ blend.
It is due to three NIII lines (multiplet N.2) pumped by the Bowen
fluorescence mechanism (BFM in the following; Bowen 1934, 1935). These
lines (4634, 4640, 4641~\AA) are resolved in the Echelle spectrum for
2015-11-02 presented in Figure~7 (right panel). Their production begins
with the emission of HeII Ly-$\alpha$ photons at 303.8~\AA, whose wavelength
corresponds to a transition of OIII from its ground state to an excited
level. The OIII downward transitions produce photons at 3415, 3428 and
3444~\AA\ in the proportions 1:8:54. The OIII 3444 line is observed in
strong emission in SACT-2015 spectra, as illustrated in the left panel of
Figure~7, while the other two OIII lines are blended with nearby lines. The
end product of the OIII downward transitions is the emission of photons at
374.4~\AA, which corresponds to the transition of NIII from its ground to
an excited state. The following de-excitation results in the emission of
the three lines constituting the above-mentioned $\lambda$ 4640~\AA\ blend,
and in a pair of lines at 4097 and 4103~\AA\ (NIII multiplet N.1), which
are also in strong emission in T CrB as illustrated by the central panel of
Figure~7. It is also worth noticing that in the SACT-2015 spectra of
Figure~6, the intensity of the OIII 3444 line and of the NIII 4640 blend
varies in parallel with that of HeII 4686, as expected when the BFM is at
work. The BFM pumping has been studied in detail in symbiotic stars by
Eriksson et al. (2005) and Selvelli et al. (2007). The efficiency of the
BFM in T CrB during SACT-2015 (defined as the fraction of HeII Ly-$\alpha$
photons that lead to OIII upward transitions) is $\sim$0.35, following the
Harrington (1972) formalism.
We have detected the [NeV] 3427 emission line in the optical spectra of T
CrB, to the best of our knowledge the first time this has occurred away
from nova eruptions, a further indication of the exceptional state T CrB
has undergone in 2015. The line is identified in the portion of the T CrB
spectrum highlighted in the left panel of Figure~7. The line is observed at
3428~\AA, at the mean position for [NeV] 3427 and OIII 3429, which
contribute equal amounts to the observed flux. In fact, from Table~2, the
observed flux for the 3428 blend is 1/3 of OIII 3444, while that expected
from OIII 3429 alone would be 1/6.5, according to both theoretical
transition probabilities and actual observations in symbiotic binaries
(Selvelli et al. 2007).
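Spelled out, the decomposition of the 3428 blend reads
\[
\frac{F([{\rm NeV}]\ 3427)}{F({\rm OIII}\ 3444)} \;\approx\;
\frac{1}{3}-\frac{1}{6.5} \;\approx\; 0.33-0.15 \;=\; 0.18 \;\approx\;
\frac{F({\rm OIII}\ 3429)}{F({\rm OIII}\ 3444)},
\]
so the two components indeed contribute in nearly equal parts.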
The simultaneous presence of both [NeV] 3427 and [NeIII] 3869 could suggest
that [NeIV] lines should be equally present in the nebular spectrum of T
CrB. The strongest optical [NeIV] lines are located at 4715-4727~\AA\
(Merrill 1956), and they never attain a significant intensity (when seen in
symbiotic binaries, they score only a few \% of the intensity of [NeV] and
[NeIII] lines; cf. Allen 1983, Munari \& Zwitter 2002). The non-detection
of [NeIV] in the spectra of T CrB is therefore not surprising. Similarly,
the strongest OIV line observed at optical wavelengths is that at 3411
\AA\ (Jaschek \& Jaschek 2009), and given its modest intensity in the
spectra of T CrB no further lines from this ion are expected to be visible.
The relative intensity of HeI and HeII emission lines varied greatly during
SACT-2015, as illustrated by the sample of spectra plotted in Figure~6 and
the line fluxes listed in Table~3. The HeII 4686 / (HeI 5876 + HeI 6678)
ratio is seen to vary from 0.17 on 2014-11-02, to 0.34 on 2015-08-30, to
0.88 on 2015-10-16. Between the latter two dates, the HeI lines augment
their intensity by just 20\% while HeII increases by a factor of three
(consistently, $0.34\times3/1.2\simeq0.85$, close to the observed 0.88).
This behaviour
indicates that during quiescence and the initial rise toward SACT-2015, the
nebula was {\em ionization bounded}, with properly nested Str\"omgren
spheres for different ions and neutral material further out, ready to be
ionized by an increase in the hot source output. During SACT-2015, the nebula
became {\em density bounded}, i.e. all the available gas was already ionized
and any increase in the hot source output could only raise the ionization
degree of the gas but not further expand the nebula into pre-existing
external neutral material. The ionization- to density-bounded transition is
nicely
confirmed by the evolution of the H$\alpha$ profiles shown in Figure~8,
where a quiescence (2013-04-23) and a SACT-2015 (2015-10-26) profile for
exactly the same orbital phase (0.56) are compared. The H$\alpha$ profile
for quiescence shows a weak emission with a narrow absorption superimposed
on it; the absorption is missing from the SACT-2015 profile, which displays
a vastly stronger emission. This narrow absorption is typical of symbiotic
stars, and originates in the outflowing wind of the cool giant, specifically
in the neutral portion external to the fraction ionized by the WD, as
demonstrated by Munari (1993), who followed for several cycles the orbital
motion of the cool giant, of the emission lines and of the narrow
absorptions in EG And, a symbiotic star with optical spectra closely similar
to those of T CrB in quiescence. The absence of the narrow central
absorption from the SACT-2015 profile indicates that, in the direction of
the observer, no neutral gas exists external to the ionized gas. The
velocity of the narrow absorption is $-$19 km~sec$^{-1}$ with respect to
that of the cool giant; this is therefore the terminal velocity of its
outflowing wind, a value typical of cool giants.
Finally, the values reported in Table~3 show that the intensity of the HeII
4686 emission line is more responsive to the varying activity of the hot
source than to the orbital aspect.
\section{Three levels of activity for T CrB in quiescence}
Iijima (1990) noted that during quiescence, i.e. away from the 1866 and 1946
nova outbursts, T CrB exhibits two states: a ``high'' one, when emission
lines (Balmer, HeI) and the nebular continuum are relatively strong, and a
``low'' state, when they essentially disappear (except for some weak residual
emission in H$\alpha$). The unique conditions experienced by T CrB during
SACT-2015 require the introduction of a new, third state that we term
``super-active'', which is characterized by (1) the presence of OIV and [NeV]
lines, a very strong HeII 4686, and a strong 4640 Bowen fluorescence blend,
(2) a large increase in mean brightness, and (3) the disappearance of the
orbital modulation from the $B$-band lightcurve.
We have searched the available literature (e.g. Kraft 1958, Gravina 1981,
Andrillat \& Houziaux 1982, Blair et al. 1983, Williams 1983, Kenyon \&
Garcia 1986, Iijima 1990, Anupama \& Prabhu 1991, Ivison et al. 1994,
Anupama 1997, Zamanov \& Marti 2001, Munari \& Zwitter 2002) in the attempt
to reconstruct the history of spectroscopic activity of T CrB during
quiescence. This has turned out to be a difficult task because integrated
absolute fluxes are rarely provided for the emission lines, few observations
ventured far enough into the blue to cover the Balmer continuum, and usually
only equivalent widths are given, if not just a mere description like 'weak'
or 'strong'. In addition, the observations reported in the literature were
obtained over different wavelength intervals and at different resolving
powers. We tried our best to homogenize the different sources, and in this
we took advantage of the many (unpublished) spectra of T CrB that we have
regularly obtained since 1987.
T CrB has always been in a 'low' state when observed during the first three decades
after the 1946 outburst. The last of these spectra, those of Blair et al.
(1983) for 1981-02-06, Williams (1983) for 1981-06-10, and Gravina (1981)
for 1981-07-15 and 1981-09-15 record only feeble emission in H$\alpha$,
H$\beta$ and H$\gamma$. Iijima (1990) reports that, in addition to Balmer
and HeI, a weak emission in HeII 4686~\AA\ was visible on his spectra on
several dates distributed between 1982 and 1987, but Kenyon and Garcia
(1986) saw no HeII in emission on their 1984 and 1985 spectra, and the
absolute flux they measured for Balmer lines was only twice as large as that
reported by Blair et al. (1983) for 1981. The extensive spectral
monitoring by Anupama \& Prabhu (1991) and Anupama (1997) shows that the
intensity of Balmer and HeI emission lines gradually increased starting with
June 1985, peaked during November 1986, and returned to the 'low' state by
October 1987, where T CrB has remained until 1996. Iijima (1990) confirms
that HeII was absent from his spectra for 1988, 1989 and 1990, the same
being reported by Ivison et al. (1994) for their 1989 spectra. Just a feeble
emission in the lower Balmer lines and no HeII were found by Munari \&
Zwitter (2002) on various dates of 1993 and 1995. Then a new 'high' state
was briefly observed in 1996-1997. On 1996-02-01 Zamanov \& Marti (2001)
found H$\alpha$ to be weak and this is confirmed by a 1996-02-08 spectrum
from Munari \& Zwitter (2002) that in addition reveals HeII to be absent.
Then, Mikolajewski et al. (1997) found H$\alpha$ to be in strong emission
during April, May and June of 1996. This is confirmed by a 1996-05-30
spectrum from Munari \& Zwitter (2002), that in addition shows how HeII was
still absent. Zamanov \& Marti (2001) report that by 1998 this second
'high' state of T CrB was over. Since then and to the best of our knowledge,
T CrB has never been observed again in a 'high' state until the 2015 episode
described in this paper. The long term photometric behaviour presented in
Figure~3 shows that T CrB rose significantly above the underlying secular
decline only in correspondence with the 'high' spectroscopic states.
To the best of our knowledge, T CrB has been caught in a super-active state
on only one other occasion, in the summer of 1938, by Hachenberg \& Wellmann
(1939). On their spectrum for 22 July 1938, HeII 4686 is half the intensity
of H$\gamma$ and the 4640~\AA\ blend stands in prominent emission (2/3 the
intensity of HeII). The Hachenberg \& Wellmann (1939) spectrum for August 28
confirms the super-active state, while that for September 22 indicates a
rapid return of T CrB toward lower excitation conditions.
There is an intriguing parallelism between SACT-2015 and what Hachenberg \&
Wellmann (1939) observed in 1938. The super-active state they caught
occurred $\sim$70 years past the 1866 nova outburst, and SACT-2015 is
occurring $\sim$70 years past the 1946 nova outburst. Is everything
therefore in place for a new nova outburst in 2026, again $\sim$80 years
past the last eruption?
\section{Introduction and preliminaries}
Let $\A$ denote the family of functions $f$ that are analytic in the unit disk $\D =\{z:\, |z|<1\}$ and normalized by $f(0)=f'(0)-1=0$, and let $\es\subset \A $ be the class of all univalent functions. In \cite{Duren-1983-book, A-W-Goodman-1983-book}, various subclasses of univalent functions are characterized by their geometric properties; among the most important are the convex, starlike and close-to-convex functions.\\
\bdefe
A function $f$ in $\es$ is a convex function in $\D$ if and only if
\begin{eqnarray*}
{ \rm Re }\left( 1 + \frac{zf^{\prime\prime}(z)}{f^\prime(z)} \right) > 0, \qquad z\in\D.
\end{eqnarray*} Let us denote by ${\CC}$ the class of all convex univalent functions in $\mathbb{D}$.\\
\edefe
\bdefe
A function $f$ in $\es$ is a starlike function with respect to the origin in $\D$ if and only if
\begin{eqnarray*}
{ \rm Re } \left( \frac{zf^{\prime}(z)}{f(z)} \right) > 0, \qquad z\in\D.
\end{eqnarray*}
\edefe
Let us denote by $\es^{\ast}$, the class of all starlike univalent functions in $\D$.
\bdefe
A function $f$ in $\es$ is close-to-convex in $\D$ if and only if
\begin{eqnarray*}
{ \rm Re } \left( \frac{f^{\prime}(z)}{g^{\prime}(z)} \right) > 0, \qquad z\in\D,
\end{eqnarray*}
for some $ g \in \CC $.
\edefe
The class of all close-to-convex univalent functions in $\D$ is denoted by $\K$. It is well known that the chain of inclusions $\CC \subset \es^{*} \subset \K \subset \es$ holds.\\
The family of functions in $\A$ that are close-to-convex with respect to $-\log (1-z)$ and also starlike in $\D$ is denoted by $KS^{*}$.\\
\bdefe
Let $ \displaystyle f(z)= z+\sum_{n=2}^{\infty}\, a_n\,z^n $ and $ \displaystyle g(z)= z+\sum_{n=2}^{\infty}\, b_n\,z^n $ be analytic in $\D$. Then the Hadamard product or convolution of $f(z)$ and $g(z)$ is defined by
\begin{eqnarray*}
\displaystyle f(z)*g(z)= z+\sum_{n=2}^{\infty} a_nb_n z^n,\, |z|<1.
\end{eqnarray*}
\edefe
\bdefe
The Alexander transform is defined as
\begin{eqnarray}\label{alexeq1}
\Lambda_f(z) = \int_{0}^{z}\frac{f(t)}{t}\ dt, \ f \in \es , z \in \D
\end{eqnarray}
\edefe
Using the convolution technique, the above transform can be written as $ \Lambda_f(z) = f(z) \ast h(z) $,
where $h(z) = - \log \left( 1-z \right)$. It is well known that $h(z)$ is a convex function of order $1/2$.\\
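Indeed, writing $f(z)=z+\sum_{n=2}^{\infty}a_nz^n$ and $-\log(1-z)=\sum_{n=1}^{\infty}z^n/n$, the convolution form of the transform (\ref{alexeq1}) reads
\begin{eqnarray*}
\Lambda_f(z) \;=\; f(z) \ast \left(-\log(1-z)\right) \;=\; z+\sum_{n=2}^{\infty}\frac{a_n}{n}\,z^n.
\end{eqnarray*}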
For a complex number $a$, the shifted factorial (or Pochhammer symbol) is defined as
$$(a)_0\,=\,1,\quad (a)_n\,=\,a(a+1)\cdots (a+n-1),\quad n\,=\,1,2,3,\ldots$$
In terms of the Euler gamma function, the Pochhammer symbol can also be written as $$(a)_n = \frac{\Gamma(n+a)}{\Gamma(a)},\quad n=0,1,2,\ldots,$$
provided $a$ is neither zero nor a negative integer.\\
\bdefe
The hypergeometric function $_3F_2(a,b,c;d,e;z)$ is defined by
\begin{eqnarray}\label{inteq5}
_3F_2(a,b,c;d,e;z)=\sum_{n=0}^{\infty}\frac{(a)_n(b)_n(c)_n}{(d)_n(e)_n(1)_n}\,z^n, \qquad a,b,c,d,e\in \IC,
\end{eqnarray}
\edefe
where $d,e \neq 0, -1, -2, -3, \ldots$; the series is analytic and convergent in the unit disc $\D$.\\
Using (\ref{inteq5}), the normalized hypergeometric function $z\, _3F_2(a,b,c;d,e;z)$ is defined by
\begin{eqnarray}\label{eq1}
f(z)=z\, _3F_2(a,b,c;d,e;z)=z+\sum_{n=2}^{\infty} A_n z^n,
\end{eqnarray}
where
\begin{eqnarray} \label{2f2eq1}
A_n = \frac{(a)_{n-1}(b)_{n-1}(c)_{n-1}}{(d)_{n-1}(e)_{n-1}(1)_{n-1}}, \qquad n\geq 2,
\end{eqnarray}
and $A_1=1$.
\blem \label{lemeq5} \cite{Fejer-1936-Trans-ams}
If $A_n \geq 0$ and both $\{ nA_n\}$ and $\{nA_n-(n+1)A_{n+1}\}$ are non-increasing, i.e., $\{nA_n\}$ is monotone of order 2, then $f$ defined by (\ref{eq1}) is in $\es^{*}.$
\elem
\blem \label{lemeq2} \cite{Ozaki-1935-Tokyo} Suppose that
\begin{eqnarray*} \label{lmeq1}
1\geq 2 A_2 \geq \cdots \geq nA_n \geq \cdots \geq 0
\end{eqnarray*}
or
\begin{eqnarray*} \label{lmeq2}
1\leq 2 A_2 \leq \cdots \leq nA_n \leq \cdots \leq 2
\end{eqnarray*}
Then $f(z)$ defined by (\ref{eq1}) is close-to-convex with respect to $-\log(1-z)$.
\elem
\blem \label{lemeq3} \cite{Ozaki-1935-Tokyo}
Suppose that $f$ is an odd function ( i.e., the value of $A_{2n}$ in $(\ref{eq1})$ is zero for each $n\geq1$ ) such that
\begin{eqnarray*} \label{lmeq3}
1\geq 3 A_3 \geq \cdots \geq (2n+1)A_{2n+1} \geq \cdots \geq 0
\end{eqnarray*}
or
\begin{eqnarray*} \label{lmeq4}
1\leq 3 A_3 \leq \cdots \leq (2n+1)A_{2n+1} \leq \cdots \leq 2
\end{eqnarray*}
Then $f\in \es$. In fact, $f(z)$ is close-to-convex with respect to the convex function $ \frac 1 2 \log((1+z)/(1-z))$.
\elem
In 1986, Ruscheweyh and Singh \cite{Ruscheweyh-and-Singh-1986} obtained sufficient conditions on the parameters $a,\,b\,$ and $c$ for $z\,_2F_1(a,b;c;z)$ to be starlike of order $\beta < 1$. Further, in 1995, Ponnusamy and Vuorinen \cite{Ponnusamy-Vuorinen-1995} established univalence and convexity properties of Gaussian hypergeometric functions. Subsequently, Ponnusamy derived conditions on the parameters $a,\,b$ and $c$ for the univalence and starlikeness of the Alexander transform, as well as for close-to-convexity of Gaussian hypergeometric functions \cite{Ponnusamy-1996,Ponnusamy-1998}.\\
Inspired by the above results, in this paper we find conditions on $ a, b, c, d$ and $e$ such that the function $ z\ _3F_2(a,b,c;d,e;z) $
is close-to-convex with respect to the functions $ -\log(1-z)$ and $\frac{1}{2}\log\left(\frac{1+z}{1-z}\right)$, and belongs to the class $ KS^{\ast}$. We also find similar conditions under which the Alexander transform is in the class $KS^{\ast}$.
\section{Main Results and Proofs}
\bthm\label{thm12f2}
If $a,b,c > 0$, $de \geq 2\,a\,b\,c\, $ and
\begin{eqnarray*}\label{thm12f2eq1}
d+e \geq Max\left\{ a+b+c,\, \frac{1}{2}(ab+bc+ac+2(a+b+c)-1-2abc),2[ab+bc+ac]-3abc \right\},
\end{eqnarray*} then $z\ _3F_2(a,b,c;d,e;z)$ is close-to-convex with respect to $-\log(1-z)$.
\ethm
\bpf Consider the function $f(z)$ defined in $(\ref{eq1})$. Replacing $n$ by $n+1$ in equation (\ref{2f2eq1}) and using the Pochhammer symbol, we have
\begin{eqnarray}\label{2f2eq2}
A_{n+1} &=& \frac{(a)_{n}(b)_{n}(c)_{n}}{(d)_{n}(e)_{n}(1)_{n}} = \frac{(a+n-1)(b+n-1)(c+n-1)}{(d+n-1)(e+n-1)n} A_n
\end{eqnarray}
and observe that $A_n >0$ for all $n\geq 1$. \\
To prove that $f$ is close-to-convex with respect to $-\log(1-z)$, it is enough to show that $\{nA_n\}$ is a non-increasing sequence.\\
From (\ref{2f2eq1}) and (\ref{2f2eq2}), we have the following after some manipulation
\begin{eqnarray*}
nA_n - (n+1)A_{n+1} &=& nA_n - \frac{(n+1)(a+n-1)(b+n-1)(c+n-1)}{(d+n-1)(e+n-1)n} A_n\\
&=& \frac{A_n U(n) }{(d+n-1)(e+n-1)n}
\end{eqnarray*}
where
\begin{eqnarray*}\label{2f2eq3}
U(n) &=&n^2(e+n-1)(d+n-1)-(n+1)(a+n-1)(b+n-1)(c+n-1)\nonumber\\
&=& {\left( e+d-c-b-a\right)} \,{n}^{3}+\left(d\,e-(e+d)-(a\,b+b\,c+a\,c)+c+b+a+1\right) \,{n}^{2}\nonumber\\ && -\left( a\,b\,c-c-b-a+2\right) \,n-\left( a-1\right) \,\left( b-1\right) \,\left( c-1\right)
\end{eqnarray*}
For every $n\geq 1$ we have $n^3 \geq 3\,n^2-3\,n+1$, so that $U(n)\geq V(n)$, where
\begin{eqnarray*}
V(n) &=& {\left(d\,e+2\,(e+d)-(a\,b+b\,c+a\,c)-2\,(a+b+c)+1\right)} \,{n}^{2}\\
&& \qquad +\left(-3\,(e+d)-a\,b\,c+4\,(c+b+a)-2\right) \,n\\
&& \qquad \qquad +e+d-a\,b\,c+b\,c+a\,c-2\,c+a\,b-2\,b-2\,a+1
\end{eqnarray*}
Using the fact that $n^2 \geq 2n-1$ for all $n\geq1$, we likewise get $V(n)\geq W(n)$, where
\begin{eqnarray*}
W(n) &=& {\left(e+d+2\,d\,e-a\,b\,c-2\,(a\,b+b\,c+a\,c)\right)} \,n\\
&& \qquad -\left( d\,e+e+d+a\,b\,c-2\,(a\,b+b\,c+\,a\,c)\right)
\end{eqnarray*}
Putting $n=1$ in the above equation, we find
\begin{eqnarray*}
W(1) &=&{ d\,e-2\,a\,b\,c} \geq 0
\end{eqnarray*}
Since $d\,e \geq 2\,a\,b\,c$, the above, together with the conditions on $d+e$, implies that
$$ U(n) \geq V(n) \geq W(n) \geq W(1) \geq 0.$$
Hence $\{n A_n\}$ is a non-increasing sequence and, by Lemma \ref{lemeq2}, the function $z\ _3F_2(a,b,c;d,e;z)$ is close-to-convex with respect to $-\log(1-z)$.
\epf
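As a quick numerical illustration of Theorem \ref{thm12f2} (a hypothetical sanity check, not part of the proof), one may verify the monotonicity of $\{nA_n\}$ for admissible parameters, e.g. $a=b=c=1/2$ and $d=e=2$, for which $de=4\geq 2abc=1/4$ and $d+e=4$ exceeds all three quantities inside the maximum:
\begin{verbatim}
from math import prod

def A(n, a, b, c, d, e):
    # coefficient A_n of z^n in z 3F2(a,b,c;d,e;z), n >= 1
    poch = lambda x, k: prod(x + j for j in range(k))
    k = n - 1
    return (poch(a, k) * poch(b, k) * poch(c, k)
            / (poch(d, k) * poch(e, k) * poch(1, k)))

a = b = c = 0.5
d = e = 2.0
seq = [n * A(n, a, b, c, d, e) for n in range(1, 60)]
assert all(x >= y for x, y in zip(seq, seq[1:]))  # {n A_n} non-increasing
\end{verbatim}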
\bthm\label{thm22f2}
If $a,b,c > 0$, $de \geq 3abc $ and
\begin{eqnarray}\label{thm22f2eq1}
d+e \geq Max\left\{a+b+c, \alpha(a,b,c),\, 3(a\,b+b\,c+a\,c)-7a\,b\,c\right\}
\end{eqnarray}
where $$\alpha(a,b,c)=\frac{1}{3}\left(2(a\,b+b\,c+a\,c)+3(a+b+c)-6\,a\,b\,c-1\right)$$
then $z\ _3F_2(a,b,c;d,e;z^2)$ is close-to-convex with respect to $\frac{1}{2}\log((1+z)/(1-z))$.
\ethm
\bpf Consider the function defined as follows, obtained by replacing $z$ by $z^2$ in equation (\ref{eq1}):
\begin{eqnarray*}
f(z)=z\ _3F_2(a,b,c;d,e;z^2)=z+\sum_{n=2}^{\infty} A_{2n-1} z^{2n-1},
\end{eqnarray*}
where
\begin{eqnarray}\label{2f2eq4}
A_{2n-1} = \frac{(a)_{n-1}(b)_{n-1}(c)_{n-1}}{(d)_{n-1}(e)_{n-1}(1)_{n-1}}, \qquad n\geq 2,
\end{eqnarray}
and $A_1=1$. Then, replacing $n$ by $n+1$ in equation (\ref{2f2eq4}), we have
\begin{eqnarray}\label{2f2eq6}
A_{2n+1} &=& \frac{(a+n-1)(b+n-1)(c+n-1)}{(d+n-1)(e+n-1)n} A_{2n-1}\nonumber
\end{eqnarray}
and therefore we obtain
\begin{eqnarray*}
(2n-1)A_{2n-1} - (2n+1)A_{2n+1} &=& \frac{A_{2n-1} X(n)}{(e+n-1)(d+n-1)n}
\end{eqnarray*}
where
\begin{eqnarray*}
X(n) &=&2{\left( e+\,d-\,c-\,b-\,a\right)} \,{n}^{3}\\
&& \qquad +\left(2\,d\,e-3\,(e+d)-2\,(a\,b+b\,c+\,a\,c)+3\,(c+b+a)+1\right) \,{n}^{2}\\
&& \qquad \qquad \qquad -\left( d\,e-(e+d)+2\,a\,b\,c-(a\,b+b\,c+a\,c)+2\right) \,n \\
&& \qquad \qquad \qquad \qquad \qquad -\left( a-1\right) \,\left( b-1\right) \,\left( c-1\right)
\end{eqnarray*}
Using $n^3 \geq 3\,n^2-3\,n+1$ for every $n\geq 1$, we get $X(n)\geq Y(n)$, where
\begin{eqnarray*}
Y(n) &=& {\left(2\,d\,e+3\,(e+d)-2\,(a\,b+b\,c+\,a\,c)-3\,(c+b+a)+1\right)}\,n^2\\
&&\qquad -\left( d\,e+5\,(e+d)+2\,a\,b\,c-(b\,c+a\,c+a\,b)-6\,(c+\,b+\,a)+2\right) \,n \\
&&\qquad \qquad\qquad + 2\,(e+d)-a\,b\,c+a\,b+b\,c+a\,c-3\,(c+b+a)+1
\end{eqnarray*}
Using $n^2 \geq 2n-1$ for all $n\geq1$, we further obtain $Y(n)\geq Z(n)$, where
\begin{eqnarray*}
Z(n) &=& {\left(3\,d\,e+e+d-2\,a\,b\,c-3(a\,b+\,b\,c+a\,c)\right)} \,n\\
&& \qquad -\left( 2\,d\,e+e+d+a\,b\,c-3(a\,b+\,b\,c+a\,c)\right)
\end{eqnarray*}
Putting $n=1$ in the above, we get
\begin{eqnarray*}
Z(1) &=& {d\,e-3\,a\,b\,c }\geq 0
\end{eqnarray*}
Since $d\,e \geq 3\,a\,b\,c$, the above, together with the condition on $d+e$ in (\ref{thm22f2eq1}), implies that
$$ X(n) \geq Y(n) \geq Z(n)\geq Z(1) \geq 0,$$
so $X(n)$ is non-negative for all $n\geq 1$. Thus $\{(2n-1)A_{2n-1}\}$ is a non-increasing sequence, and by Lemma \ref{lemeq3} the function $z\ _3F_2(a,b,c;d,e;z^2)$ is close-to-convex with respect to $\frac{1}{2}\log((1+z)/(1-z))$.
\epf
\bthm\label{thm32f2}
Let $a$, $b$ and $c > 0$, and let $$d+e \geq Max\left\{ T_1(a,b,c),\, T_2(a,b,c),\, T_3(a,b,c),\, T_4(a,b,c) \right\},$$
where
\begin{eqnarray*}
T_1(a,b,c) &=& \left( e+d-c-b-a\right) \,\left( e+d-c-b-a+1\right)\\
T_2(a,b,c) &=& \left( e+d-c-b-a+1\right) \,\left( 2\,d\,e+5\,(e+d)-2\,(a\,b+b\,c+a\,c)-5\,(c+b+a)+2\right)\\
T_3(a,b,c) &=& \left(\left( {d}^{2}+9\,d+9\right) \,{e}^{2}+\left( 9\,{d}^{2}+\left( \left( -2\,b-2\,a-8\right) \,c+\left( -2\,a-8\right) \,b-8\,a+29\right) \,d \right.\right.\\
&& \qquad \left.\left.+\left( \left( -2\,a-10\right) \,b-10\,a-16\right) \,c+\left( -10\,a-16\right) \,b-16\,a+15\right) \,e+9\,{d}^{2} \right.\\
&& \qquad \qquad \left.+\left( \left( \left( -2\,a-10\right) \,b-10\,a-16\right) \,c+\left( -10\,a-16\right) \,b-16\,a+15\right) \,d\right.\\
&& \qquad \qquad \qquad \left.+\left( {b}^{2}+\left( 4\,a+9\right) \,b+{a}^{2}+9\,a+7\right) \,{c}^{2}+\left( \left( 4\,a+9\right) \,{b}^{2}\right.\right.\\
&& \qquad \qquad \qquad \qquad \left.\left.+\left( 4\,{a}^{2}+24\,a+3\right) \,b+9\,{a}^{2}+3\,a-11\right) \,c+\left( {a}^{2}+9\,a+7\right) \,{b}^{2}\right.\\
&& \qquad \qquad \qquad \qquad \qquad \left.+\left( 9\,{a}^{2}+3\,a-11\right) \,b+7\,{a}^{2}-11\,a+4\right)\\
T_4(a,b,c) &=& \left(\left( 4\,{d}^{2}+14\,d+7\right) \,{e}^{2}+\left( 14\,{d}^{2}+\left( \left( \left( -2\,a-8\right) \,b-8\,a-8\right) \,c+\left( -8\,a-8\right) \,b\right.\right.\right.\\
&& \qquad \left.\left.\left.-8\,a+32\right) \,d+\left( \left( -10\,a-16\right) \,b-16\,a-8\right) \,c+\left( -16\,a-8\right) \,b-8\,a+11\right) \,e\right.\\
&& \qquad \qquad \left.+7\,{d}^{2}+\left( \left( \left( -10\,a-16\right) \,b-16\,a-8\right) \,c+\left( -16\,a-8\right) \,b-8\,a+11\right) \,d\right.\\
&& \qquad \qquad \qquad \left.+\left( \left( 2\,a+4\right) \,{b}^{2}+\left( 2\,{a}^{2}+16\,a+10\right) \,b+4\,{a}^{2}+10\,a+3\right) \,{c}^{2}\right.\\
&& \qquad \qquad \qquad \qquad \left.+\left( \left( 2\,{a}^{2}+16\,a+10\right) \,{b}^{2}+\left( 16\,{a}^{2}+16\,a-8\right) \,b+10\,{a}^{2}-8\,a\right.\right.\\
&& \qquad \qquad \qquad \qquad \qquad\left. \left.-5\right) \,c+\left( 4\,{a}^{2}+10\,a+3\right) \,{b}^{2}+\left( 10\,{a}^{2}-8\,a-5\right) \,b\right.\\ && \qquad \qquad \qquad \qquad \qquad \qquad \left.+3\,{a}^{2}-5\,a+2\right)
\end{eqnarray*}
and suppose that
\begin{eqnarray*}
T(a,b,c,d,e)&=&\left( 2\,{d}^{2}+2\,d\right) \,{e}^{2}+\left( 2\,{d}^{2}+\left( 2-8\,a\,b\,c\right) \,d-8\,a\,b\,c\right) \,e-8\,a\,b\,c\,d\\
&& \qquad +\left( \left( 3\,{a}^{2}+3\,a\right) \,{b}^{2}+\left( 3\,{a}^{2}+3\,a\right) \,b\right) \,{c}^{2}+\left( \left( 3\,{a}^{2}+3\,a\right) \,{b}^{2}+\left( 3\,{a}^{2}-5\,a\right) \,b\right) \,c \geq 0
\end{eqnarray*}
Then $z\, _3F_2(a,b,c;d,e;z)$ is in $KS^{*}$.
\ethm
\bpf The function $f(z)= z\, _3F_2(a,b,c;d,e;z)$ is defined by (\ref{eq1}), where $A_n$ is as in $(\ref{2f2eq1})$.\\
For $a,b,c > 0$, we first observe that $de \geq 2\,a\,b\,c$ and $$d+e \geq Max\left\{ a+b+c,\, \frac{1}{2}(ab+bc+ac+2(a+b+c)-1-2abc),2[ab+bc+ac]-3abc \right\}.$$ By Theorem \ref{thm12f2}, these conditions imply that the sequence $\{nA_n\}$ is non-increasing. To prove that $f$ is starlike, by Lemma \ref{lemeq5} we also need to show that the sequence $\{nA_n-(n+1)A_{n+1}\}$ is non-increasing. Let
\begin{eqnarray*}
B_n = nA_n-(n+1)A_{n+1} \quad\text{and}\quad B_{n+1} = (n+1)A_{n+1} - (n+2)A_{n+2}.
\end{eqnarray*}
Using $A_n$, we find that
\begin{eqnarray}\label{eqnP3}
B_{n}-B_{n+1}&=&nA_n -2(n+1)A_{n+1} + (n+2)A_{n+2}\\
&=& A_n \left[n -2(n+1)\left(\frac{A_{n+1}}{A_n}\right) + (n+2)\left(\frac{A_{n+2}}{A_n}\right)\right]\nonumber
\end{eqnarray}
where
\begin{eqnarray*}
\frac{A_{n+1}}{A_n} = \frac{(a+n-1)(b+n-1)(c+n-1)}{(d+n-1)(e+n-1)n}
\end{eqnarray*}
and
\begin{eqnarray*}
\frac{A_{n+2}}{A_n} = \frac{(a+n)(a+n-1)(b+n)(b+n-1)(c+n)(c+n-1)}{(d+n)(d+n-1)(e+n)(e+n-1)n(n+1)}
\end{eqnarray*}
After some simplification, equation (\ref{eqnP3}) implies
\begin{eqnarray}\label{eqnP4}
B_{n}-B_{n+1}&=&\frac{ A_n P(n)}{n(n+1)(d+n)(d+n-1)(e+n)(e+n-1)}
\end{eqnarray}
where
\begin{eqnarray*}
P(n)&=&n^2(n+1)(d+n)(d+n-1)(e+n)(e+n-1)\nonumber\\
&&\qquad -2(n+1)^2(d+n)(e+n)(a+n-1)(b+n-1)(c+n-1)\nonumber\\
&& \qquad \qquad +(n+2)(a+n)(a+n-1)(b+n)(b+n-1)(c+n)(c+n-1)\nonumber\\
&=& { \left( e+d-c-b-a\right) \,\left( e+d-c-b-a+1\right)}\,n^5 \nonumber\\
&& +{2\,\left( e+d-c-b-a+1\right) \,\left( d\,e-b\,c-a\,c-a\,b+1\right)}\, n^4\nonumber\\
&& + \left(\left( {d}^{2}+d-1\right) \,{e}^{2}+\left( {d}^{2}+\left( \left( -2\,b-2\,a\right) \,c-2\,a\,b+1\right) \,d\right.\right.\nonumber\\
&&\qquad \left.\left.+\left( \left( -2\,a-2\right) \,b-2\,a+4\right) \,c+\left( 4-2\,a\right) \,b+4\,a-3\right) \,e-{d}^{2}\right.\nonumber\\
&& \qquad \qquad \left.+\left( \left( \left( -2\,a-2\right) \,b-2\,a+4\right) \,c+\left( 4-2\,a\right) \,b+4\,a-3\right) \,d \right. \nonumber\\
&& \qquad \qquad \qquad \left.+\left( {b}^{2}+\left( 4\,a+1\right) \,b+{a}^{2}+a-3\right) \,{c}^{2}+\left( \left( 4\,a+1\right) \,{b}^{2}+\left( 4\,{a}^{2}-9\right) \,b\right.\right.\nonumber\\
&& \qquad \qquad \qquad \qquad \left.\left.+{a}^{2}-9\,a+7\right) \,c+\left( {a}^{2}+a-3\right) \,{b}^{2}+\left( {a}^{2}-9\,a+7\right) \,b-3\,{a}^{2}\right.\nonumber\\
&& \qquad \qquad \qquad \qquad \qquad \left.+7\,a-4\right)\,{n}^{3}\nonumber\\
&& + \left(\left( {d}^{2}-d\right) \,{e}^{2}+\left( -{d}^{2}+\left( \left( \left( -2\,a-2\right) \,b-2\,a+4\right) \,c+\left( 4-2\,a\right) \,b+4\,a-3\right) \,d\right.\right.\nonumber \\
&& \qquad \left.\left.+\left( \left( 2-4\,a\right) \,b+2\,a\right) \,c+2\,a\,b-2\right) \,e+\left( \left( \left( 2-4\,a\right) \,b+2\,a\right) \,c+2\,a\,b-2\right) \,d\right.\nonumber \\
&& \qquad \left.+\left( \left( 2\,a+1\right) \,{b}^{2}+\left( 2\,{a}^{2}+4\,a-5\right) \,b+{a}^{2}-5\,a+2\right) \,{c}^{2}+\left( \left( 2\,{a}^{2}+4\,a-5\right) \,{b}^{2}\right.\right.\nonumber \\
&& \qquad \qquad \left.\left.+\left( 4\,{a}^{2}-20\,a+11\right) \,b-5\,{a}^{2}+11\,a-4\right) \,c+\left( {a}^{2}-5\,a+2\right) \,{b}^{2}\right.\nonumber \\
&& \qquad \qquad \qquad \left.+\left( -5\,{a}^{2}+11\,a-4\right) \,b+2\,{a}^{2}-4\,a+2\right)\,{n}^{2}\nonumber\\
&&+\left(\left( \left( \left( \left( 2-4\,a\right) \,b+2\,a\right) \,c+2\,a\,b-2\right) \,d+\left( \left( 2-2\,a\right) \,b+2\,a-2\right) \,c+\left( 2\,a-2\right) \,b\right.\right.\nonumber\\
&& \qquad \left.\left.-2\,a+2\right) \,e+\left( \left( \left( 2-2\,a\right) \,b+2\,a-2\right) \,c+\left( 2\,a-2\right) \,b-2\,a+2\right) \,d\right.\nonumber\\
&& \qquad \qquad \left.+\left( \left( {a}^{2}+3\,a-2\right) \,{b}^{2}+\left( 3\,{a}^{2}-7\,a+2\right) \,b-2\,{a}^{2}+2\,a\right) \,{c}^{2}\right.\nonumber\\
&& \qquad \qquad \qquad \left.+\left( \left( 3\,{a}^{2}-7\,a+2\right) \,{b}^{2}+\left( -7\,{a}^{2}+11\,a-2\right) \,b+2\,{a}^{2}-2\,a\right) \,c\right.\nonumber\\
&& \qquad \qquad \qquad \qquad \left.+\left( 2\,a-2\,{a}^{2}\right) \,{b}^{2}+\left( 2\,{a}^{2}-2\,a\right) \,b\right)\,n \nonumber\\
&& {-2\,\left( a-1\right) \,\left( b-1\right) \,\left( c-1\right) \,\left( d\,e-a\,b\,c\right)}
\end{eqnarray*}
Our aim is to check that $P(n)$ is non-negative for all $n\geq1$. After a few steps of calculation, successively bounding the powers of $n$ from below as in the previous proofs (which produces intermediate minorants $Q(n)$, $R(n)$ and $S(n)$), we get
\begin{eqnarray*}
T(n)&\geq & \left(\left( 5\,{d}^{2}+9\,d+2\right) \,{e}^{2}+\left( 9\,{d}^{2}+\left( \left( \left( -8\,a-8\right) \,b-8\,a\right) \,c-8\,a\,b+13\right) \,d\right.\right.\\
&& \qquad \left.\left.+\left( \left( -16\,a-8\right) \,b-8\,a\right) \,c-8\,a\,b+2\right) \,e+2\,{d}^{2}+\left( \left( \left( -16\,a-8\right) \,b-8\,a\right) \,c\right.\right.\\
&& \qquad \qquad \left.\left.-8\,a\,b+2\right) \,d+\left( \left( {a}^{2}+7\,a+3\right) \,{b}^{2}+\left( 7\,{a}^{2}+13\,a+3\right) \,b+3\,{a}^{2}+3\,a\right) \,{c}^{2}\right.\\
&& \qquad \qquad \qquad \left.+\left( \left( 7\,{a}^{2}+13\,a+3\right) \,{b}^{2}+\left( 13\,{a}^{2}-5\,a-5\right) \,b+3\,{a}^{2}-5\,a\right) \,c\right.\\
&& \qquad \qquad \qquad \qquad \left.+\left( 3\,{a}^{2}+3\,a\right) \,{b}^{2}+\left( 3\,{a}^{2}-5\,a\right) \,b\right)\,n\\
&& +\left( -3\,{d}^{2}-7\,d-2\right) \,{e}^{2}+\left( -7\,{d}^{2}+\left( \left( 8\,b+8\,a\right) \,c+8\,a\,b-11\right) \,d+\left( \left( 8\,a+8\right) \,b+8\,a\right) \,c\right.\\
&& \qquad \left.+8\,a\,b-2\right) \,e-2\,{d}^{2}+\left( \left( \left( 8\,a+8\right) \,b+8\,a\right) \,c+8\,a\,b-2\right) \,d+\left( \left( 2\,{a}^{2}-4\,a-3\right) \,{b}^{2}\right.\\
&& \qquad \qquad \left.+\left( -4\,{a}^{2}-10\,a-3\right) \,b-3\,{a}^{2}-3\,a\right) \,{c}^{2}+\left( \left( -4\,{a}^{2}-10\,a-3\right) \,{b}^{2}\right.\\
&& \qquad \qquad \qquad \left.+\left( 5-10\,{a}^{2}\right) \,b-3\,{a}^{2}+5\,a\right) \,c+\left( -3\,{a}^{2}-3\,a\right) \,{b}^{2}+\left( 5\,a-3\,{a}^{2}\right) \,b
\end{eqnarray*}
Putting $n=1$, we have
\begin{eqnarray*}
T(1)&\geq&\left( 2\,{d}^{2}+2\,d\right) \,{e}^{2}+\left( 2\,{d}^{2}+\left( 2-8\,a\,b\,c\right) \,d-8\,a\,b\,c\right) \,e-8\,a\,b\,c\,d\\
&& \qquad +\left( \left( 3\,{a}^{2}+3\,a\right) \,{b}^{2}+\left( 3\,{a}^{2}+3\,a\right) \,b\right) \,{c}^{2}+\left( \left( 3\,{a}^{2}+3\,a\right) \,{b}^{2}+\left( 3\,{a}^{2}-5\,a\right) \,b\right) \,c \geq 0
\end{eqnarray*}
Hence, by hypothesis, the following inequalities hold true:
$$P(n) \geq Q(n)\geq R(n)\geq S(n)\geq T(n)\geq T(1)\geq 0.$$
Therefore the sequence $\{B_n\}=\{nA_n-(n+1)A_{n+1}\}$ is non-increasing, and we deduce from Lemma \ref{lemeq5} that $f$ is starlike. Moreover, since the conditions of Lemma \ref{lemeq2} are verified, the function $f$ is also close-to-convex with respect to $-\log(1-z)$; hence $f\in KS^{*}$.
\epf
\bthm\label{thm42f2}
Let $a$, $b$ and $c > 0$. Suppose that $$ d+e \geq Max\{T_1(a,b,c),\, T_2(a,b,c),\, T_3(a,b,c)\}$$ where
\begin{eqnarray*}
T_1(a,b,c) &=& \left( e+d-c-b-a+1\right) \,\left( e+d-c-b-a+2\right),\\
T_2(a,b,c) &=& 2\,\left( e+d-c-b-a+2\right) \,\left( d\,e+2\,e+2\,d-b\,c-a\,c-c-a\,b-b-a+1\right),\\
T_3(a,b,c)&=& \left(\left( {d}^{2}+7\,d+5\right) \,{e}^{2}+\left( 7\,{d}^{2}+\left( \left( -2\,b-2\,a-4\right) \,c+\left( -2\,a-4\right) \,b-4\,a+21\right) \,d\right.\right.\\
&& \qquad \left.\left.+\left( \left( -2\,a-6\right) \,b-6\,a-4\right) \,c+\left( -6\,a-4\right) \,b-4\,a+9\right) \,e+5\,{d}^{2}\right.\\
&& \qquad \qquad\left.+\left( \left( \left( -2\,a-6\right) \,b-6\,a-4\right) \,c+\left( -6\,a-4\right) \,b-4\,a+9\right) \,d\right.\\
&&\qquad \qquad \qquad \left.+\left( {b}^{2}+\left( 4\,a+3\right) \,b+{a}^{2}+3\,a+1\right) \,{c}^{2}+\left( \left( 4\,a+3\right) \,{b}^{2}\right.\right.\\
&&\qquad \qquad \qquad \qquad \left.\left.+\left( 4\,{a}^{2}+4\,a-5\right) \,b+3\,{a}^{2}-5\,a-3\right) \,c+\left( {a}^{2}+3\,a+1\right) \,{b}^{2}\right.\\
&&\qquad \qquad \qquad \qquad \qquad \left.+\left( 3\,{a}^{2}-5\,a-3\right) \,b+{a}^{2}-3\,a+2\right)
\end{eqnarray*}
and suppose that
\begin{eqnarray*}
T(a,b,c,d,e)&=&\left( 2\,{d}^{2}+2\,d\right) \,{e}^{2}+\left( 2\,{d}^{2}+\left( 2-4\,a\,b\,c\right) \,d-4\,a\,b\,c\right) \,e-4\,a\,b\,c\,d\\
&& \qquad +\left( \left( {a}^{2}+a\right) \,{b}^{2}+\left( {a}^{2}+a\right) \,b\right) \,{c}^{2}+\left( \left( {a}^{2}+a\right) \,{b}^{2}+\left( {a}^{2}-3\,a\right) \,b\right) \,c\geq 0
\end{eqnarray*}
Then the Alexander transform $\Lambda_f$ defined by (\ref{alexeq1}) is in $KS^{*}$.
\ethm
\bpf
Let $f(z) = z\, _3F_2(a,b,c;d,e;z)$. Then, from the definition of the hypergeometric function $_3F_2$, we have
\begin{eqnarray*}
f(z)=\sum_{n=1}^{\infty} \frac{(a)_{n-1}(b)_{n-1}(c)_{n-1}}{(d)_{n-1}(e)_{n-1}(1)_{n-1}}z^n
\end{eqnarray*}
so that the corresponding Alexander transform defined by (\ref{alexeq1}) takes the form
\begin{eqnarray*}
\Lambda_f(z)=\sum_{n=1}^{\infty}A_nz^n,
\end{eqnarray*}
with\ $A_1 = 1$ and
\begin{eqnarray}
A_n = \frac{(a)_{n-1}(b)_{n-1}(c)_{n-1}}{n(d)_{n-1}(e)_{n-1}(1)_{n-1}}, \qquad n\geq 2,
\end{eqnarray}
Using the definition of the shifted factorial notation, we have
\begin{eqnarray*}
A_{n+1} &=& \frac{(a)_{n}(b)_{n}(c)_{n}}{(n+1)(d)_{n}(e)_{n}(1)_{n}}
\end{eqnarray*}
and
\begin{eqnarray*}
(n+1)A_{n+1}&=& \frac{(a+n-1)(b+n-1)(c+n-1)}{(d+n-1)(e+n-1)} A_n\\
\end{eqnarray*}
After some simplification,
\begin{eqnarray*}
nA_n - (n+1)A_{n+1} &=& nA_n - \frac{(a+n-1)(b+n-1)(c+n-1)}{(d+n-1)(e+n-1)} A_n\\
&=& A_n \left[\frac{n(d+n-1)(e+n-1)-(a+n-1)(b+n-1)(c+n-1)}{(d+n-1)(e+n-1)} \right]\\
&=& \frac{A_n U(n) }{(d+n-1)(e+n-1)}
\end{eqnarray*}
where
\begin{eqnarray*}
U(n) &=& n(d+n-1)(e+n-1)-(a+n-1)(b+n-1)(c+n-1)\\
&=& {\left( e+d-c-b-a+1\right)} \,{n}^{2}\\
&& \qquad +\left( \left( d-1\right) \,e-d+\left( -b-a+2\right) \,c+\left( 2-a\right) \,b+2\,a-2\right) \,n\\
&& \qquad \qquad +\left( \left( 1-a\right) \,b+a-1\right) \,c+\left( a-1\right) \,b-a+1\\
&\geq&{\left(d\,e+e+d-b\,c-a\,c-a\,b\right)} \,n-e-d+\left( \left( 1-a\right) \,b+a\right) \,c+a\,b
\end{eqnarray*}
where the last lower bound uses $n^2\geq 2n-1$. By hypothesis, $d+e\geq a\,b+b\,c+a\,c-a\,b\,c$ and $d\,e\geq a\,b\,c$, so for every $n\geq 1$, $$ U(n)\geq U(1)= d\,e-a\,b\,c \geq 0.$$ Thus the sequence $\{nA_n\}$ is non-increasing. Next, we prove that $\{nA_n-(n+1)A_{n+1}\}$ is also non-increasing.\\
Let $B_n = nA_n-(n+1)A_{n+1}$. Then after some manipulation, we get
\begin{eqnarray*}
B_n-B_{n+1}&=&\frac{A_n U(n)}{(d+n-1)(e+n-1)}-\frac{A_{n+1} U(n+1)}{(d+n)(e+n)}\\
&=&\frac{A_n U(n)}{(d+n-1)(e+n-1)}-\frac{A_{n} U(n+1)}{(d+n)(e+n)(n+1)}\\ && \qquad \qquad \times\left( \frac{(a+n-1)(b+n-1)(c+n-1)}{(d+n-1)(e+n-1)}\right)\\
&=&\frac{A_n C(n)}{(d+n-1)(e+n-1)(d+n)(e+n)(n+1)}
\end{eqnarray*}
where
\begin{eqnarray*}
C(n)&=&U(n)(d+n)(e+n)(n+1)-U(n+1)(a+n-1)(b+n-1)(c+n-1)\\
&=& {\left( e+d-c-b-a+1\right) \,\left( e+d-c-b-a+2\right)} \,{n}^{4}\\
&& +{\left( e+d-c-b-a+2\right) \,\left( d\,e-b\,c-a\,c+c-a\,b+b+a-1\right)}\,{n}^{3}\\
&& +{2\,\left( e+d-c-b-a+2\right) \,\left( d\,e-b\,c-a\,c+c-a\,b+b+a-1\right)}\,{n}^{2}\\
&& + \left(\left( {d}^{2}-d\right) \,{e}^{2}+\left( -{d}^{2}+\left( \left( 2-2\,a\,b\right) \,c+2\,b+2\,a-3\right) \,d\right.\right.\\
&&\qquad \left.\left.+\left( \left( 2-2\,a\right) \,b+2\,a-2\right) \,c+\left( 2\,a-2\right) \,b-2\,a+2\right) \,e\right.
\end{eqnarray*}
\begin{eqnarray*}
&& \qquad \qquad \left. +\left( \left( \left( 2-2\,a\right) \,b+2\,a-2\right) \,c+\left( 2\,a-2\right) \,b-2\,a+2\right) \,d\right.\\
&& \qquad \qquad \qquad\left.+\left( \left( 2\,a-1\right) \,{b}^{2}+\left( 2\,{a}^{2}-4\,a+1\right) \,b-{a}^{2}+a\right) \,{c}^{2}\right.\\
&& \qquad \qquad \qquad \qquad \left.+\left( \left( 2\,{a}^{2}-4\,a+1\right) \,{b}^{2}+\left( -4\,{a}^{2}+6\,a-1\right) \,b+{a}^{2}-a\right) \,c\right.\\
&& \qquad \qquad \qquad \qquad \qquad \left.+\left( a-{a}^{2}\right) \,{b}^{2}+\left( {a}^{2}-a\right) \,b\right)\,n\\
&& \qquad \qquad \qquad \qquad \qquad \qquad -{\left( a-1\right) \,\left( b-1\right) \,\left( c-1\right) \,\left( 2\,d\,e-a\,b\,c\right)}
\end{eqnarray*}
Our aim is to check that $C(n)$ is non-negative for all $n\geq1$. After manipulation, successively bounding the powers of $n$ from below as before (which produces intermediate minorants $D(n)$ and $E(n)$), we get
\begin{eqnarray*}
F(n)&\geq&\left(\left( 3\,{d}^{2}+7\,d+2\right) \,{e}^{2}+\left( 7\,{d}^{2}+\left( \left( \left( -2\,a-4\right) \,b-4\,a\right) \,c-4\,a\,b+11\right) \,d\right.\right.\\
&& \qquad \left. \left.+\left( \left( -6\,a-4\right) \,b-4\,a\right) \,c-4\,a\,b+2\right) \,e+2\,{d}^{2}+\left( \left( \left( -6\,a-4\right) \,b-4\,a\right) \,c \right.\right.\\
&& \qquad \qquad \left.\left.-4\,a\,b+2\right) \,d+\left( \left( 2\,a+1\right) \,{b}^{2}+\left( 2\,{a}^{2}+4\,a+1\right) \,b+{a}^{2}+a\right) \,{c}^{2}\right.\\
&& \qquad \qquad \qquad \left.+\left( \left( 2\,{a}^{2}+4\,a+1\right) \,{b}^{2}+\left( 4\,{a}^{2}-4\,a-3\right) \,b+{a}^{2}-3\,a\right) \,c\right.\\
&& \qquad \qquad \qquad \qquad \left.+\left( {a}^{2}+a\right) \,{b}^{2}+\left( {a}^{2}-3\,a\right) \,b\right)\,n\\
&& +\left( -{d}^{2}-5\,d-2\right) \,{e}^{2}+\left( -5\,{d}^{2}+\left( \left( \left( 4-2\,a\right) \,b+4\,a\right) \,c+4\,a\,b-9\right) \,d\right.\\
&& \qquad \left.+\left( \left( 2\,a+4\right) \,b+4\,a\right) \,c+4\,a\,b-2\right) \,e-2\,{d}^{2}+\left( \left( \left( 2\,a+4\right) \,b+4\,a\right) \,c\right.\\
&& \qquad \qquad \left.+4\,a\,b-2\right) \,d+\left( \left( {a}^{2}-a-1\right) \,{b}^{2}+\left( -{a}^{2}-3\,a-1\right) \,b-{a}^{2}-a\right) \,{c}^{2}\\
&& \qquad \qquad \qquad +\left( \left( -{a}^{2}-3\,a-1\right) \,{b}^{2}+\left( -3\,{a}^{2}+a+3\right) \,b-{a}^{2}+3\,a\right) \,c\\
&& \qquad \qquad \qquad \qquad +\left( -{a}^{2}-a\right) \,{b}^{2}+\left( 3\,a-{a}^{2}\right) \,b
\end{eqnarray*}
Putting $n=1$ in the above, we get
\begin{eqnarray*}
F(1)&=&\left( 2\,{d}^{2}+2\,d\right) \,{e}^{2}+\left( 2\,{d}^{2}+\left( 2-4\,a\,b\,c\right) \,d-4\,a\,b\,c\right) \,e-4\,a\,b\,c\,d\\
&& \qquad +\left( \left( {a}^{2}+a\right) \,{b}^{2}+\left( {a}^{2}+a\right) \,b\right) \,{c}^{2}+\left( \left( {a}^{2}+a\right) \,{b}^{2}+\left( {a}^{2}-3\,a\right) \,b\right) \,c\geq 0
\end{eqnarray*}
For every $n \geq 1$ we thus have $C(n)\geq D(n)\geq E(n) \geq F(n)\geq F(1)\geq 0$.\\
Therefore, the sequences $\{B_n\}$ and $\{nA_n - (n+1)A_{n+1}\}$ are non-increasing. By Lemma \ref{lemeq5}, the Alexander transform $\Lambda_f(z)$ is starlike in the unit disc $\D$; moreover, since we have already verified that $\{nA_n\}$ is non-increasing, the Alexander transform $\Lambda_f(z)$ is close-to-convex with respect to $-\log(1-z)$ by Lemma \ref{lemeq2}. Thus $\Lambda_f(z) \in KS^{*}$, which completes the proof of the Theorem.
\epf
\section*{Acknowledgements}
The authors thank P. von Brentano and H. Lenske as well as many participants of the Trento workshop 2012 on ``The Nuclear Dipole Polarizability and its Impact on Nuclear Structure and Astrophysics" for discussions. We further acknowledge the support of the accelerator staff at KVI and at HI$\vec{\gamma}$S during the beam times. This work was supported by the DFG (ZI 510/4-2 and SFB 634), by the European Commission within the Sixth Framework Programme through I3-EURONS (contract No. RII3-CT-2004-506065), by the Alliance Program of the Helmholtz Association (HA216/EMMI), by the Helmholtz International Center for FAIR (HIC for FAIR), by the U.S. Department of Energy Grant No. DE-FG02-97ER41033, and by the Rare Isotope Science Project of the Institute for Basic Science funded by the Ministry of Science, ICT and Future Planning and the National Research Foundation of Korea (2013M7A1A1075766). V.D. is supported by the Bonn-Cologne Graduate School of Physics and Astronomy.
\section*{Introduction and formulation of the problems}
\label{section0}
\setcounter{equation}{0}
Let $\cS\subset\bR^3$ be some closed orientable surface, bordering a compact inner domain $\Omega^+$ and an outer domain $\Omega^-:=\bR^3\setminus\ov{\Omega^+}$. By $\cC$ we denote a subsurface of $\cS$, which has two faces $\cC^-$ and $\cC^+$ and inherits the orientation from $\cS$: $\cC^+$ borders the inner domain $\Omega^+$ and $\cC^-$ borders the outer domain $\Omega^-$. $\cC$ has a smooth boundary $\Gamma:=\pa\cC$, which is decomposed into two closed parts $\Gamma=\Gamma_D\cup\Gamma_N$, each consisting of a finite number of smooth arcs having in common only endpoints.
Let $\nub (\omega)=\left(\nu_1(\omega),\nu_2(\omega), \nu_3(\omega)\right)^\top$, $\omega\in\ov\cC$, be the unit normal vector field on the surface $\cC$ and $\partial_\nub=\dst\sum_{j=1}^3\nu_j\pa_j$ the normal derivative. Let us consider the Laplace-Beltrami operator on $\mathcal{C}$, written in terms of G\"unter's tangent derivatives (see \cite{DMM06,Du09,DTT14} for more details):
\begin{eqnarray}\label{e0.0}
\Delta_\cC:=\cD^2_1+\cD^2_2+\cD^2_3,\qquad \cD_j:=\pa_j-\nu_j\pa_\nub,\quad j=1,2,3.
\end{eqnarray}
Let $\nub_\Gamma(t)=({\nu_{\Gamma,1}(t),\nu_{\Gamma,2}(t), \nu_{\Gamma,3}(t)})^\top$, $t\in\Gamma$, be the unit normal vector field on the boundary $\Gamma$, which is tangential to the surface $\cC$ and directed outside of the surface. And, finally, let $\partial_{\nub_\Gamma}:=\dst\sum_{j=1}^3\nu_{\Gamma,j}\cD_j$ be the normal derivative on the boundary of the surface, which is the outer tangential derivative on the surface.
We study the following mixed boundary value problem for the Laplace-Beltrami equation
\begin{eqnarray}\label{e0.1}
\left\{\begin{array}{ll}
\Delta_\cC u(t)=f(t),\qquad & t\in\cC, \\[0.2cm]
u^+(\tau)=g(\tau), \qquad & \tau\in\Gamma_D, \\[0.2cm]
(\partial_{\nub_\Gamma}u)^+(\tau)=h(\tau),\qquad & \tau\in\Gamma_N.
\end{array}\right.
\end{eqnarray}
where $u^+$ and $(\partial_{\nub_\Gamma}u)^+$ denote respectively the Dirichlet and the Neumann traces on the boundary.
We need the Bessel potential $\bH^s_p(\cS)$, $\bH^s_p(\cC)$, $\wt{\bH}^s_p(\cC)$ and Sobolev-Slobode\v{c}kii $\bW^r_p(\Gamma)$ spaces, where $\cS$ is a closed smooth surface (without boundary), which contains $\cC$ as a subsurface, $1<p<\infty,\quad \frac1p<s<1-\frac1p$. The Bessel potential space $\bH^s_p(\bR^n)$ is defined as a subset of the space of Schwartz distributions $\bS'(\bR^n)$ endowed with the norm (see \cite{Tr95})
\[
||u\big|\bH_p^s(\bR^n)||:=||\langle D\rangle^su\big| L_p(\bR^n)||,
\]
where $\langle D\rangle^s:=\cF^{-1}(1+|\xi|^2)^{\frac s2}\cF$ is the Bessel potential and $\cF$, $\cF^{-1}$ are the Fourier transformations. For the definition of the Sobolev-Slobode\v{c}kii space $\bW_p^s(\bR^n)=\bB_{p,p}^s(\bR^n)$ see \cite{Tr95}. The space $\bW_p^s(\cS)$ coincides with the trace space of $\bH_p^{s+\frac1p}(\bR^3)$ on $\cS$ and is known that $\bW^s(\cS)=\bH^s(\cS)$ for $s\geq0$, $1<p<\infty$ (see \cite{Tr95}).
As is common, we use the notation $\bH^s(\cS)$ and $\bW^s(\cS)$ for the spaces $\bH_2^s(\cS)$ and $\bW_2^s(\cS)$ (the case $p=2$).
The spaces $\bH_p^s(\cS)$ and $\bW_p^s(\cS)$ are defined by a partition of the unity $\{\psi_j\}_{j=1}^\ell$ subordinated to some covering $\{ Y_j\}_{j=1}^\ell$ of $\cS$ and local coordinate diffeomorphisms (see \cite{Tr95,HW08} for details)
\[
\varkappa_j : X_j\rightarrow Y_j , \qquad X_j\subset\bR^2\, ,\quad j=1,\ldots,\ell.
\]
The space $\wt {\bH}_p^s(\cC)$ is defined as the subspace of $\bH_p^s(\cS)$ of those functions $\vf\in \bH_p^s(\cS)$ which are supported in the closed sub-surface, $\supp\vf\subset\ov{\cC}$, whereas $\bH_p^s(\cC)$ denotes the quotient space $\bH_p^s(\cC):=\bH_p^s(\cS)\Big/\wt{\bH}_p^s(\cC^c)$, where $\cC^c:=\cS\setminus\ov{\cC}$ is the complemented sub-surface. For $s>1/p-1$ the space $\bH_p^s(\cC)$ can be identified with the space of those distributions $\vf$ on $\cC$ which admit extensions $\ell\vf\in\bH_p^s(\cS)$, i.e., $\bH_p^s(\cC)$ is identified with the space $r_\cC\bH_p^s(\cS)$, where $r_\cC$ denotes the restriction from $\cS$ to the sub-surface $\cC$.
It is worth noting that for an integer $m=1,2,\ldots$ the Sobolev spaces $\bH^m_p(\cS)$ and $\bW^m_p(\cS)$ coincide and the equivalent norm is defined with the help of G\"unter's derivatives (see \cite{Du01,Du09,DMM06}):
\[
||u\big|\bW_p^m(\cS)||:=\left[\sum_{|\alpha|\leqslant m}||\cD^\alpha u\big| L_p(\cS)||^p\right]^{\frac1p},
\quad\mbox{ where }\quad \cD^\alpha:=\cD^{\alpha_1}_1\cD^{\alpha_2}_2\cD^{\alpha_3}_3
\]
and the G\"unter's derivatives $\cD_1, \cD_2, \cD_3$ are defined in \eqref{e0.0}.
Let us also consider $\widetilde{\mathbb{H}}^{-1}_0(\mathcal{C})$, a subspace of $\widetilde{\mathbb{H}}^{-1}(\mathcal{C})$, orthogonal to
\[
\widetilde{\bH}^{-1}_\Gamma (\cC):=\left\{f \in\widetilde{\mathbb{H}}^{-1} (\cC)\;:\;\langle f,\varphi\rangle = 0 \;\text{for all}\;\varphi\in C^1_0(\cC)\right\}.
\]
$\widetilde{\mathbb{H}}^{-1}_\Gamma(\mathcal{C})$ consists of those distributions on $\cS$ belonging to $\widetilde{\mathbb{H}}^{-1}(\mathcal{C})$ which are supported on $\Gamma$ only, and $\widetilde{\mathbb{H}}^{-1}(\mathcal{C})$ can be decomposed into the direct sum of subspaces:
\[
\widetilde{\mathbb{H}}^{-1}(\mathcal{C})= \widetilde{\mathbb{H}}^{-1}_\Gamma(\mathcal{C})\oplus
\widetilde{\mathbb{H}}^{-1}_0(\mathcal{C}).
\]
The space $\widetilde{\bH}^{-1}_\Gamma (\cC)$ is non-trivial (see \cite[\S\, 5.1]{HW08}), and excluding it from $\widetilde{\mathbb{H}}^{-1}(\mathcal{C})$ is needed to make the BVP uniquely solvable (cf. \cite{HW08} and the next Theorem \ref{t0.1}).
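A standard example of a non-zero element of $\widetilde{\bH}^{-1}_\Gamma(\cC)$ is the following single layer distribution (we only sketch the argument): for $v\in C^\infty(\Gamma)$, $v\not\equiv0$, set
\[
\langle \delta_\Gamma\otimes v,\varphi\rangle:=\int_\Gamma v(\tau)\varphi(\tau)\,d\sigma,
\qquad \varphi\in C^\infty(\cS).
\]
By the trace theorem $|\langle \delta_\Gamma\otimes v,\varphi\rangle|\leqslant C\,||v\big|\bL_2(\Gamma)||\,||\varphi\big|\bH^1(\cS)||$, so $\delta_\Gamma\otimes v\in\widetilde{\bH}^{-1}(\cC)$; it is supported on $\Gamma$ and annihilates every $\varphi\in C^1_0(\cC)$.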
The Lax-Milgram Lemma applied to the BVP \eqref{e0.1} gives the following result.
\begin{theorem}[Theorem 14, \cite{DTT14} and \S\, 5.1, \cite{HW08}]\label{t0.1}
The BVP \eqref{e0.1} has a unique solution in the classical weak setting:
\begin{equation}\label{e0.2}
u\in\mathbb{H}^1(\mathcal{C}),\quad f\in\widetilde{\mathbb{H}}^{-1}_0(\mathcal{C}),
\quad g\in\mathbb{H}^{1/2}(\Gamma_D), \quad h\in\mathbb{H}^{-1/2}(\Gamma_N).
\end{equation}
\end{theorem}
From Theorem \ref{t0.1} we cannot even conclude that a solution is continuous. If we can prove that there is a solution $u\in\mathbb{H}^1_p(\mathcal{C})$ for some $2<p<\infty$, then $u$ is even H\"older continuous. Knowing the maximal smoothness of a solution is very important, for example, when designing approximation methods. To this end we will investigate the solvability properties of the BVP \eqref{e0.1} in the following non-classical setting
\begin{eqnarray}\label{e0.3}
u\in\mathbb{H}^s_p(\mathcal{C}),\quad
f\in\widetilde{\mathbb{H}}^{s-2}_p(\mathcal{C})\cap\widetilde{\mathbb{H}}^{-1}_0(\mathcal{C}),\quad g\in\mathbb{W}^{s-1/p}_p(\Gamma),\quad h\in\mathbb{W}^{s-1-1/p}_p(\Gamma),\\
1<p<\infty, \quad s>\frac1p\nonumber
\end{eqnarray}
and find necessary and sufficient conditions of solvability. Note that the constraint $s>\dst\frac1p$ is necessary to ensure the existence of the trace $u^+$ on the boundary.
To formulate the main theorem of the present work we need the following definition.
\begin{definition}\label{t0.2}
The BVP \eqref{e0.1}, \eqref{e0.3} is Fredholm if the homogeneous problem $f=g=h=0$ has only a finite number of linearly independent solutions, and a finite number of orthogonality conditions on the data $f,g,h$ ensures the solvability of the BVP.
\end{definition}
Below we prove the following theorem (see the concluding part of \S\,\ref{sect5}).
\begin{theorem}\label{t0.3}
Let $1<p<\infty$, $s>\dst\frac1p$.
The BVP \eqref{e0.1} is Fredholm in the non-classical setting \eqref{e0.3} if and only if:
\begin{eqnarray}\label{e0.4}
p\not=2\quad\text{or}\quad p=2\quad\text{and}\quad s\not=\frac12+k, \qquad \text{for}\quad k=0,1,2,\ldots.
\end{eqnarray}
In particular, the BVP \eqref{e0.1} has a unique solution $u$ in the non-classical setting \eqref{e0.3} if
\begin{eqnarray}\label{e0.5}
\dst\frac12<s<\dst\frac32,\qquad 1<p<\infty.
\end{eqnarray}
\end{theorem}
Note that the unique solvability condition \eqref{e0.5} is independent of the parameter $p$, while the Fredholm condition \eqref{e0.4} constrains the smoothness parameter $s$ only in the case $p=2$.
The proof of the foregoing Theorem \ref{t0.3} in \S\,\ref{sect5} is based on Theorem \ref{t0.3a} and Theorem \ref{t0.4}.
\begin{theorem}\label{t0.3a}
Let $1<p<\infty$, $s>\dst\frac1p$. Let $g_0\in\bW^{s-1/p}_p(\Gamma)$ and $h_0\in\bW^{s-1-1/p}_p(\Gamma)$ be some fixed extensions of the boundary data $g\in\bW^{s-1/p}_p(\Gamma_D)$ and $h\in\bW^{s-1-1/p}_p(\Gamma_N)$ (non-classical formulation), initially defined on the parts of the boundary $\Gamma=\Gamma_D\cup\Gamma_N$.
A solution to the BVP \eqref{e0.1} is represented by the formula
\begin{eqnarray}\label{e0.5a}
u(\cx)=\N_\cC f(\cx)+\W_\Gamma(g_0+\varphi_0)(\cx)-\V_\Gamma(h_0
+\psi_0)(\cx), \qquad \cx\in\cC.
\end{eqnarray}
Here $\N_\cC$, $\W_\Gamma$ and $\V_\Gamma$ are the Newton's, double and single layer potentials, defined below $($see \eqref{e1.5}$)$ and $\varphi_0$, $ \psi_0$ in \eqref{e0.5a} are solutions to the following system of pseudodifferential equations
\begin{eqnarray}\label{e0.6}
\begin{array}{l}
\left\{\begin{array}{ll}\dst\frac12\varphi_0-r_N\W_{\Gamma,0}\varphi_0
+r_N\V_{\Gamma,-1}\psi_0=G_0&\text{on}\quad\Gamma_N,\\[3mm]
\dst\frac12\psi_0+r_D\W^*_{\Gamma,0} \psi_0-r_D\V_{\Gamma,+1}\varphi_0=H_0\qquad
&\text{on}\quad\Gamma_D, \end{array}\right.
\end{array}\\[2mm]
\label{e0.7}
\begin{array}{c}
\varphi_0\in\wt{\bW}^{s-1/p}_p(\Gamma_N),\quad \psi_0\in\wt{\bW}^{s-1-1/p}_p(\Gamma_D),\\[3mm]
G_0\in\bW^{s-1/p}_p(\Gamma_N),\qquad H_0\in\bW^{s-1-1/p}_p(\Gamma_D),
\end{array}
\end{eqnarray}
where $G_0$ and $H_0$ are given functions and the participating pseudodifferential operators are defined in \eqref{e1.13} in \S\,\ref{sect1} below.
Vice versa: if $u$ is a solution to the BVP \eqref{e0.1}, $g:=r_Du^+$, $h:=r_N(\pa_\nub u)^+$ and
$g_0\in\bW^{s-1/p}_p(\Gamma)$, $h_0\in\bW^{s-1-1/p}_p(\Gamma)$ are some fixed extensions of $g$ and $h$ to $\Gamma$, then $\vf_0:=u^+ - g_0\in\wt{\bW}^{s-1/p}_p(\Gamma_N)$ and $\psi_0:=(\pa_\nub u)^+ - h_0\in\wt{\bW}^{s-1-1/p}_p(\Gamma_D)$
are solutions to the system \eqref{e0.6}.
The system of boundary pseudodifferential equations \eqref{e0.6} has a unique pair of solutions $\varphi_0\in\bW^{1/2}(\Gamma_N)$ and $\psi_0\in\bW^{-1/2}(\Gamma_D)$ in the classical setting $p=2$, $s=1$.
\end{theorem}
The proof of Theorem \ref{t0.3a} is given in \S\,\ref{sect1}.
For the system \eqref{e0.6} we can relax the constraint $s>\dst\frac1p$ and prove the following result for an arbitrary $r>-1$ (for the extension to all $r\in\bR$ see Remark \ref{r0.7}).
\begin{theorem}\label{t0.4}
Let $1<p<\infty$, $r>-1$.
The system of boundary pseudodifferential equations \eqref{e0.6} is Fredholm in the Sobolev-Slobode\v{c}kii space setting
\begin{subequations}
\begin{eqnarray}\label{e0.6a}
\begin{array}{c}
\varphi_0\in\wt{\bW}^r_p(\Gamma_N),\quad \psi_0\in\wt{\bW}^{r-1}_p(\Gamma_D),\\[3mm]
G_0\in\bW^r_p(\Gamma_N),\qquad H_0\in\bW^{r-1}_p(\Gamma_D)
\end{array}
\end{eqnarray}
and also in the Bessel potential space setting
\begin{eqnarray}\label{e0.6b}
\begin{array}{c}
\varphi_0\in\wt{\bH}^r_p(\Gamma_N),\quad \psi_0\in\wt{\bH}^{r-1}_p(\Gamma_D),\\[3mm]
G_0\in\bH^r_p(\Gamma_N),\qquad H_0\in\bH^{r-1}_p(\Gamma_D)
\end{array}
\end{eqnarray}
\end{subequations}
if the following condition holds:
\begin{eqnarray}\label{e0.8}
p\not=2\quad\text{or}\quad p=2\quad\text{and}\quad r\not=0,1,2,\ldots.
\end{eqnarray}
In particular, the system \eqref{e0.6} has a unique solution in both settings \eqref{e0.6a} and \eqref{e0.6b} if:
\begin{eqnarray}\label{e0.9}
1<p<\infty, \qquad -1<r<0.
\end{eqnarray}
\end{theorem}
The proof of the foregoing Theorem \ref{t0.4} in \S\,\ref{sect5} is based on the auxiliary Theorem \ref{t0.5}. To formulate that theorem, consider the following model system of boundary integral equations (BIEs)
\begin{eqnarray}\label{e0.10}
&&\hskip-8mm\left\{\begin{array}{ll}\vf(t) + \K^1_{-1}\psi(t)=G(t),\\[3mm]
\psi(t) + \K^1_{-1}\vf(t)=H(t),\qquad &t\in\bR^+,\end{array}\right.\\[2mm]
&& \varphi,\psi\in\wt{\bW}^{s-1-1/p}_p(\bR^+),\qquad G,\; H\in\bW^{s-1-1/p}_p(\bR^+), \nonumber
\end{eqnarray}
where
\begin{eqnarray}\label{e0.11}
\K^1_{-c}v(t):=\dst\frac1\pi\int_0^\infty\frac{v(\tau)d\tau}{t+c\tau},
\qquad-\pi<\arg\,c<\pi, \quad v\in\bL_p(\bR^+)
\end{eqnarray}
is a Mellin convolution operator with the kernel homogeneous of order $-1$ (see \cite{Du79,Du84b,Du86,Du82}).
\begin{theorem}\label{t0.5}
Let $1<p<\infty$, $r>-1$. \\
\indent
The system of boundary pseudodifferential equations \eqref{e0.6} is Fredholm in the Sobolev-Slobode\v{c}kii \eqref{e0.6a} and Bessel potential \eqref{e0.6b} space settings if the system of boundary integral equations \eqref{e0.10} is locally invertible at $0$ in the Sobolev-Slobode\v{c}kii
\begin{eqnarray}\label{e0.11a}
\varphi, \psi\in\wt{\bW}^{r-1}_p(\bR^+),\qquad G, H\in\bW^{r-1}_p(\bR^+)
\end{eqnarray}
and the Bessel potential space
\begin{eqnarray}\label{e0.11b}
\varphi, \psi\in\wt{\bH}^{r-1}_p(\bR^+),\qquad G, H\in\bH^{r-1}_p(\bR^+)
\end{eqnarray}
settings, respectively.
\end{theorem}
\begin{remark}\label{r0.7}
Theorem \ref{t0.5} is proved at the end of \S\,\ref{sect1}. For the proof we apply a quasi-localization of the BVP \eqref{e0.1} with some model BVPs on the half plane (see Lemma \ref{l1.4} and Lemma \ref{l1.5}). The constraint $r>-1$ is due to this approach, since boundary value problems are involved.
In a forthcoming paper we will prove directly the local quasi-equivalence of the system \eqref{e0.6} with the system \eqref{e0.10} at the points where the Dirichlet and Neumann boundary conditions collide, and with certain simpler, uniquely solvable equations at all other points. Then the constraint $r>-1$ can be dropped and replaced by $r\in\bR$.
Correspondingly, Theorem \ref{t0.4} is also valid for all $r\in\bR$ and the condition \eqref{e0.8} acquires the form
\[
p\not=2\quad\text{or}\quad p=2\quad\text{and}\quad r\not=0,\pm1,\pm2,\ldots.
\]
\end{remark}
A quasi-localization means ``freezing coefficients'' and ``rectifying'' the underlying contours and surfaces. For details of quasi-localization we refer the reader to the papers \cite{Si65} and \cite{CDS03}, where quasi-localization is well described for singular integral operators and for BVPs, respectively. We also refer to \cite[\S\, 3]{Du15}, where a short introduction to quasi-localization is given.
In the case under consideration we get three different model problems by localizing the mixed BVP \eqref{e0.1} to:
\begin{itemize}
\item[1]
Inner points of $\cC$.
\item[2]
Inner points of the boundary parts $\Gamma_D$ and $\Gamma_N$.
\item[3]
Points of the boundary $\Gamma$ where different boundary conditions collide (endpoints of $\Gamma_N$ and $\Gamma_D$).
\end{itemize}
The model BVPs obtained by quasi-localization are well investigated in the first two cases, and such model problems have unique solutions without additional constraints. In the third case we get a mixed BVP on the half plane for the Laplace equation (cf. \eqref{e0.6x} below). The system \eqref{e0.10} is related to this model mixed problem \eqref{e0.6x} just as the BVP \eqref{e0.1} is related to the system \eqref{e0.6} (cf. Lemma \ref{l1.5} below).
The investigation of the boundary integral equation system \eqref{e0.10} is based on recent results on Mellin convolution equations with meromorphic kernels in Bessel potential spaces (see R. Duduchava \cite{Du15}, R. Duduchava and V. Didenko \cite{DD14}).
The symbol $\cB_0^s(\omega)$ of the system \eqref{e0.10} is a continuous function on an infinite rectangle $\mathfrak{R}$ and is responsible for the Fredholm property and the index of the system. This provides necessary and sufficient conditions for the Fredholm property of \eqref{e0.10}, which are then used to prove the solvability of the original BVP in the non-classical setting.
A rigorous analysis of the solvability of the above and similar problems with Dirichlet, Neumann, mixed and impedance boundary conditions for the Helmholtz and other elliptic equations is very helpful for a general understanding of elliptic boundary value problems in conical domains (see \cite{KS03,KMR01,No58}).
In \cite{ENS11,ENS14} the authors suggest another approach to the investigation of the model mixed problem for the Helmholtz equation: they write explicit formulae for a solution by two different methods. But the setting is classical only (the case $p=2$) and the approach cannot be applied to the non-classical setting. Other known results are either limited to special situations such as the rectangular case \cite{CST04,CST06,MPST93}, or apply rather sophisticated analytical methods \cite{KMM05,ZM00}, or are missing a precise setting of appropriate functional spaces (see, e.g., \cite{Ma58,Uf03}). For a historical survey and for further references we recommend \cite{CK13,ZM00,Va00}.
Another approach which can be applied is the limiting absorption principle, based on the variational formulation, the Lax-Milgram Lemma and its generalizations. Such an approach is presented, e.g., in \cite{BT01,BCC12a,BCC12b}. But again, these results concern the classical setting only.
In the 1960s it was suggested to solve canonical diffraction problems in Sobolev spaces, based on the development of pseudodifferential equations in domains with corners and, more generally, with a Lipschitz boundary. This approach was popularized by E. Meister \cite{Me85,Me87}, E. Meister and F.-O. Speck \cite{MS79}, W.L. Wendland \cite{WSH79}, A. Ferreira dos Santos \cite{ST89} and their collaborators in the 1980's. See also the book of Vasil'ev \cite{Va00} with a considerable list of references. These results are also restricted to the classical setting.
\section{Potential operators and boundary integral equations}
\label{sect1}
\setcounter{equation}{0}
Let $\cS$ be a closed, sufficiently smooth orientable surface in $\mathbb{R}^n$. We use the notation $\mathbb{X}_p^s(\mathcal{S})$ for either the Bessel potential space $\mathbb{H}^s_p(\mathcal{S})$ or the Sobolev-Slobode\v{c}kii space $\mathbb{W}{}^s_p(\mathcal{S})$, for $\mathcal{S}$ closed or open, and the similar notation $\widetilde{\mathbb{X}}_p^s(\mathcal{S})$ for $\mathcal{S}$ open.
Consider the space
\begin{eqnarray}\label{e1.1}
\mathbb{X}^s_{p,\#}(\mathcal{S}):=\left\{{\varphi}\in\mathbb{X}^s_p
(\mathcal{S})\;:\;\mbox{\bf{(}}{\varphi},1\mbox{\bf{)}}=0\right\},
\end{eqnarray}
where $\mbox{\bf{(}}\cdot,\cdot\mbox{\bf{)}}$ denotes the duality pairing between the adjoint spaces. It is obvious that $\mathbb{X}^s_{p,\#}(\mathcal{S})$ does not contain constants: if $c_0={\rm const}\in\mathbb{X}^s_{p,\#}(\mathcal{S})$, then
\[
0=\mbox{\bf{(}} c_0,1\mbox{\bf{)}}=c_0\mbox{\bf{(}}1,1\mbox{\bf{)}}=c_0{\rm mes}\,\mathcal{S}
\]
and $c_0=0$. Moreover, $\mathbb{X}^s_p(\mathcal{S})$ decomposes into the direct sum
\begin{eqnarray}\label{e1.2}
\mathbb{X}^s_p(\mathcal{S})=\mathbb{X}^s_{p,\#}(\mathcal{S})+\{{\rm const}\}
\end{eqnarray}
and the dual (adjoint) space is
\begin{eqnarray}\label{e1.3}
(\mathbb{X}^s_{p,\#}(\mathcal{S}))^*=\mathbb{X}^{-s}_{p',\#}(\mathcal{S}), \qquad p':=\frac p{p-1}.
\end{eqnarray}
The following is a part of Theorem 10 proved in \cite{DTT14}.
\begin{theorem}\label{t1.1a}
Let $\mathcal{S}$ be $\ell$-smooth, $\ell=1,2,\ldots$, let $1<p<\infty$ and $|s|\leqslant\ell$. Let $\mathbb{X}^s_{p,\#}(\mathcal{S})$ be the same as in \eqref{e1.1}--\eqref{e1.3}.
The Laplace-Beltrami operator $\Delta_\cS:={\bf\rm div}_\mathcal{S}\nabla_\mathcal{S}$ is invertible between the spaces with detached constants
\begin{eqnarray}\label{e1.4}
\Delta_\cS\;:\;\mathbb{X}^{s+1}_{p,\#}(\mathcal{S})\to\mathbb{X}^{s-1}_{p,\#}(\mathcal{S}),
\end{eqnarray}
i.e., it has a fundamental solution $\cK_\cS$ in the setting \eqref{e1.4}.
\end{theorem}
Let $\cC\subset\cS$ be a subsurface with a smooth boundary $\Gamma:=\partial\cC$. With the fundamental solution $\cK_\cS$ of the Laplace-Beltrami operator at hand we can consider the standard Newton, single and double layer potentials on the surface $\cC$:
\begin{eqnarray}\label{e1.5}
\begin{array}{l}
\N_\cC v(x):=\dst\int_{\cC}\cK_\cS (x,y)v(y)\,d\sigma,\\[3mm]
\V_\Gamma v(x):=\dst\int_{\Gamma}\cK_\cS (x,\tau)v(\tau)d\tau,\\[3mm]
\W_\Gamma v(x):=\dst\int_{\Gamma}\pa_{\nub_\Gamma(\tau)}\cK_\cS (x,\tau)v(\tau)d\tau,
\qquad x\in\cC.
\end{array}
\end{eqnarray}
The potential operators, defined above, have standard boundedness properties
\begin{eqnarray*}
\N_\cC &:&\bH_{p,\#}^s(\cC)\longrightarrow\bH^{s+2}_{p,\#}(\cC)\, ,\\
\V_\Gamma &:&\bH_{p,\#}^s(\Gamma)\longrightarrow\bH^{s+1+\frac1p}_{p,\#}(\cC)\, ,\\
\W_\Gamma &:&\bH_{p,\#}^s(\Gamma)\longrightarrow\bH^{s+\frac1p}_{p,\#}(\cC)
\end{eqnarray*}
and any solution to the mixed BVP \eqref{e0.1} in the space $\bH^1_\#(\cC)$ is represented as follows:
\begin{eqnarray}\label{e1.6}
u(x)=\N_\cC f(x)+\W_\Gamma u^+(x)-\V_\Gamma[\pa_{\nub_\Gamma} u]^+(x) \qquad u\in\bH^1_\#(\cC),
\quad x\in\cC
\end{eqnarray}
(see \cite{DNS95,Du01}). The densities in \eqref{e1.6} are the Dirichlet trace $u^+$ and the Neumann trace $[\pa_{\nub_\Gamma} u]^+$ of the solution $u$ on the boundary.
Since $\bX_p^s=\bX_{p,\#}^s+\{{\rm const}\}$, we can extend layer potentials to the entire space as follows:
\begin{eqnarray}\label{e1.6b}
\begin{array}{c}
\text{for}\quad \varphi=\varphi_0+c, \qquad \varphi_0\in\bX_{p,\#}^s,
\quad c={\rm const},\\[3mm]
\text{we set}\quad \V_\Gamma\vf=\V_\Gamma\vf_0+c, \quad \W_\Gamma\vf=\W_\Gamma\vf_0+c, \quad \N_\cC\vf=\N_\cC\vf_0+c,
\end{array}
\end{eqnarray}
i.e., by setting $\V_\Gamma c=\W_\Gamma c=\N_\cC c=c$.
\begin{lemma}\label{l1.1}
The representation formula \eqref{e1.6} remains valid for a solution in the space $\bH^1(\cC)$, provided the potentials are extended as in \eqref{e1.6b}.
\end{lemma}
{\bf Proof:} Indeed, since $u=u_0+c$ with $u_0\in\bH_{\#}^1(\cC)$ and $u\in\bH^1(\cC)$, we apply the representation formula \eqref{e1.6} for a solution in the space $\bH^1_\#(\cC)$ together with formula \eqref{e1.6b} and get the representation formula \eqref{e1.6} for a solution in the space $\bH^1(\cC)$:
\begin{eqnarray}\label{e1.6c}
\begin{array}{rcl}
u(x)&\hskip-3mm=&\hskip-3mm u_0(x)+c=\N_\cC f(x)+\W_\Gamma u_0^+(x)-\V_\Gamma[\pa_{\nub_\Gamma} u_0]^+(x)+c\\[3mm]
&\hskip-3mm=&\hskip-3mm \N_\cC f(x)+\W_\Gamma (u-c)^+(x)-\V_\Gamma[\pa_{\nub_\Gamma} (u-c)]^+(x)+c\\[3mm]
&\hskip-3mm=&\hskip-3mm \N_\cC f(x)+\W_\Gamma u^+(x)-\V_\Gamma[\pa_{\nub_\Gamma} u]^+(x), \qquad u\in\bH^1(\cC),
\quad x\in\cC.
\end{array}
\end{eqnarray}
\vskip-9mm \QED\vskip7mm
\noindent
{\bf Proof of Theorem \ref{t0.3a}:} Let us recall the Plemelj formulae
\begin{eqnarray}\label{e1.7}
\begin{array}{l}
(\W_\Gamma v)^\pm(t)=\pm\dst\frac12v(t)+\W_{\Gamma,0}v(t),\quad
(\pa_{\nub_\Gamma}\W_\Gamma v)^\pm(t)=\V_{\Gamma,+1}v(t), \\[3mm]
(\pa_{\nub_\Gamma}\V_\Gamma v)^\pm(t)=\mp\dst\frac12v(t)+\W^*_{\Gamma,0}v(t),\quad
(\V_\Gamma v)^\pm(t)=\V_{\Gamma,-1}v(t),
\end{array}
\end{eqnarray}
where $t\in\Gamma$ and
\begin{eqnarray}\label{e1.7a}
\begin{array}{rcl}
\V_{\Gamma,-1} v(t) &\hskip-3mm:=&\hskip-3mm\dst\int_\Gamma\cK_\cS (t,\tau) v(\tau)d\tau,\\[3mm]
\W_{\Gamma,0} v(t)&\hskip-3mm:=&\hskip-3mm\dst\int_\Gamma(\pa_{\nub_\Gamma(\tau)}\cK_\cS )(t,\tau)
v(\tau)d\tau,\\
\W^*_{\Gamma,0}w(t)&\hskip-3mm:=&\hskip-3mm\dst\int_\Gamma(\pa_{\nub_\Gamma(t)}\cK_\cS )(t,\tau)
w(\tau)d\tau,\\
\V_{\Gamma,+1}w(t)&\hskip-3mm:=&\hskip-3mm\dst\int_\Gamma(\pa_{\nub_\Gamma(t)}\pa_{\nub_\Gamma(\tau)}
\cK_\cS )(t,\tau)w(\tau)d\tau,\qquad\qquad t\in\Gamma,
\end{array}
\end{eqnarray}
are pseudodifferential operators on $\Gamma$ of orders $-1$, $0$, $0$ and $+1$, respectively, and represent the direct values of the corresponding potentials $\V_\Gamma$, $\W_\Gamma$, $\pa_{\nub_\Gamma}\V_\Gamma$ and $\pa_{\nub_\Gamma}\W_\Gamma$.
Let $g_0\in\bW^{s-1/p}_p(\Gamma)$ and $h_0\in\bW^{s-1-1/p}_p(\Gamma)$ be some fixed extensions of the boundary data $g\in\bW^{s-1/p}_p(\Gamma_D)$ and $h\in\bW^{s-1-1/p}_p(\Gamma_N)$ (non-classical formulation), initially defined on the parts of the boundary $\Gamma=\Gamma_D\cup\Gamma_N$. Since the difference of two such extensions belongs to the spaces $\wt{\bW}^{s-1/p}_p(\Gamma_N)$ and $\wt{\bW}^{s-1-1/p}_p(\Gamma_D)$, respectively, let us look for two unknown functions $\varphi_0\in\wt{\bW}^{s-1/p}_p(\Gamma_N)$ and $\psi_0\in\wt{\bW}^{s-1-1/p}_p(\Gamma_D)$ such that for $g_0+\vf_0$ and $h_0+\psi_0$ the boundary conditions in \eqref{e0.1} hold on the entire boundary:
\begin{eqnarray}\label{e1.8}
\begin{array}{c}
u^+(t)=g_0(t)+\varphi_0(t)=\left\{\begin{array}{ll} g(t)\quad &{\rm if}\quad
t\in\Gamma_D,\\[3mm]
g_0(t)+\varphi_0(t)\quad &{\rm if}\quad
t\in\Gamma_N,
\end{array}\right.\\ \\
(\pa_{\nub_\Gamma} u)^+(t)=h_0(t)+\psi_0(t)=\left\{\begin{array}{ll} h_0(t)+\psi_0(t)
\quad &{\rm if}\quad t\in\Gamma_D,\\[3mm]
h(t)\quad &{\rm if}\quad t\in\Gamma_N,
\end{array}\right.
\end{array}
\end{eqnarray}
provided $u(x)$ is a solution to the BVP \eqref{e0.1}.
By introducing the boundary values of a solution \eqref{e1.8} to the BVP \eqref{e0.1} into the representation formula \eqref{e1.6c} (see Lemma \ref{l1.1}) we get the following representation of a solution:
\begin{eqnarray}\label{e1.9}
u(x)=\N_\cC f(x)+\W_\Gamma[g_0+\varphi_0](x)-\V_\Gamma[h_0+\psi_0](x),\qquad x\in\cC,
\end{eqnarray}
where
\[
\hskip-5mm g_0\in\bW^{s-1/p}_p(\Gamma), \; h_0\in\bW^{s-1-1/p}_p(\Gamma),\;
\varphi_0\in\wt{\bW}^{s-1/p}_p(\Gamma_N), \; \psi_0\in\wt{\bW}^{s-1-1/p}_p(\Gamma_D).
\]
By applying the Plemelji formulae \eqref{e1.7} to \eqref{e1.9} and taking into account \eqref{e1.8} we get the following:
\[
\begin{array}{r}
\left\{\begin{array}{l}
g_0(t)+\varphi_0(t)=u^+(t)=(\N_\cC f)^+(t)+\dst\frac12(g_0(t)+\varphi_0(t))\\
\hskip20mm+\W_{\Gamma,0}[g_0+\varphi_0](t)-\V_{\Gamma,-1}[h_0+\psi_0](t),\\[2mm]
h_0(t)+\psi_0(t)=(\pa_{\nub_\Gamma} u)^+(t)=(\pa_{\nub_\Gamma}\N_\cC f)^+(t)+\V_{\Gamma,+1}[g_0+\varphi_0](t)\\
\hskip20mm+\dst\frac12(h_0(t) +\psi_0(t))-\W^*_{\Gamma,0}[h_0+\psi_0](t),\qquad t\in\Gamma.
\end{array}\right.
\end{array}
\]
If we apply the restriction operator $r_N$ to $\Gamma_N$ to the first equation in the obtained system and the restriction operator $r_D$ to $\Gamma_D$ to the second one, we obtain the system \eqref{e0.6}, where
\begin{eqnarray}\label{e1.13}
\begin{array}{c}
G_0:=r_N\left[(\N_\cC f)^+-\dst\frac12g_0+\W_{\Gamma,0}g_0-\V_{\Gamma,-1}
h_0\right]\in\bW^{s-1/p}_p(\Gamma_N),\\[2mm]
H_0:=r_D\left[(\pa_{\nub_\Gamma}\N_\cC f)^+-\dst\frac12h_0+\V_{\Gamma,+1}
g_0-\W^*_{\Gamma,0}h_0\right]\in\bW^{s-1-1/p}_p(\Gamma_D).
\end{array}
\end{eqnarray}
Thus, we have proved the inverse assertion of Theorem \ref{t0.3a}: if $u$ is a solution to the BVP \eqref{e0.1}, the functions $\vf_0$ and $\psi_0$ are solutions to the system \eqref{e0.6}.
The direct assertion is easy to prove:
\begin{itemize}
\item
The function $u$ in \eqref{e1.6c}, represented by the potentials, satisfies the differential equation in \eqref{e0.1}.
\item
If $\vf_0$ and $\psi_0$ are solutions to the system \eqref{e0.6}, then using the Plemelj formulae \eqref{e1.7} it is easily verified that $u$ in \eqref{e1.6c} satisfies the boundary conditions in \eqref{e0.1}.
\end{itemize}
The existence and the uniqueness of a solution to the BVP \eqref{e0.1} in the classical setting \eqref{e0.2} is stated in Theorem \ref{t0.1}, while for the system \eqref{e0.6} it follows from the equivalence with the BVP \eqref{e0.1}. \QED
The remainder of the paper is devoted to the proof of solvability properties of the system \eqref{e0.6} in the non-classical setting \eqref{e0.3}.
Consider the following equation on the 2-dimensional Euclidean space
\begin{equation}\label{e0.6m}
\Delta u=f^0\qquad\text{on}\quad \bR^2, \qquad u\in\mathbb{H}^{s}_p(\bR^2),\quad f^0\in\mathbb{H}^{s-2}_p(\bR^2),
\end{equation}
also the model Dirichlet
\begin{eqnarray}\label{e0.4m}
\left\{\begin{array}{ll}
\Delta u(x)=f_0(x),\qquad & x\in\bR^2_+, \\[0.2cm]
u^+(t)=g_0(t), \qquad & t\in\partial\bR^2_+=\bR, \\[0.2cm]
\end{array}\right.
\end{eqnarray}
the model Neumann
\begin{eqnarray}\label{e0.5m}
\left\{\begin{array}{ll}
\Delta u(x)=f_0(x),\qquad & x\in\bR^2_+, \\[0.2cm]
-(\partial_2u)^+(t)=h_0(t), \qquad &t\in\partial\bR^2_+=\bR,
\end{array}\right.
\end{eqnarray}
and the model mixed
\begin{eqnarray}\label{e0.6x}
\left\{\begin{array}{ll}
\Delta u(x)=f_1(x),\qquad & x\in \bR^2_+, \\[0.2cm]
u^+(t)=g_1(t), \qquad & t\in\bR^-:=(-\infty,0), \\[0.2cm]
-(\partial_{2}u)^+(t)=h_1(t),\qquad
&t\in \bR^+:=(0,\infty),
\end{array}\right.
\end{eqnarray}
boundary value problems for the Laplace equation on the upper half plane $\bR^2_+:=\bR\times\bR^+$, where $\pa_{\nub_\Gamma}=-\pa_2$ is the normal derivative on the boundary of $\bR^2_+$.
The BVPs \eqref{e0.4m} and \eqref{e0.5m} will be treated in the non-classical setting:
\begin{equation}\label{e0.7m}
\begin{array}{r}
f_0\in\widetilde{\mathbb{H}}^{s-2}_p(\bR^2_+)\cap\widetilde{
\mathbb{H}}^{-1}_0(\bR^2_+),\quad g_0\in\mathbb{W}^{s-1/p}_p(\bR), \quad h_0\in\mathbb{W}^{s-1-1/p}_p(\bR),\\[3mm]
1<p<\infty, \qquad s>\dst\frac1p
\end{array}
\end{equation}
and the BVP \eqref{e0.6x} will be treated in the non-classical setting:
\begin{equation}\label{e0.7x}
\begin{array}{r}
f_1\in\widetilde{\mathbb{H}}^{s-2}_p(\bR^2_+)\cap\widetilde{\mathbb{H}}^{-1}_0(\bR^2_+),\quad
g_1\in\mathbb{W}^{s-1/p}_p(\bR^-),\quad h_1\in\mathbb{W}^{s-1-1/p}_p(\bR^+),\\[3mm]
1<p<\infty, \qquad s>\dst\frac1p.
\end{array}
\end{equation}
\begin{proposition}\label{p1.7}
The BVPs \eqref{e0.4m}, \eqref{e0.5m} have unique solutions in the setting \eqref{e0.7m} and the Laplace equation in the setting \eqref{e0.6m} has a unique solution as well.
\end{proposition}
{\bf Proof:} The assertion is a well-known classical result, available in many textbooks on partial differential equations (see, e.g., \cite{HW08}). \QED
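For instance, in the particular case $f_0=0$ a solution of the model Dirichlet BVP \eqref{e0.4m} is given by the classical Poisson integral of the boundary datum:
\[
u(x_1,x_2)=\frac1\pi\int_\bR\frac{x_2\,g_0(\tau)\,d\tau}{(x_1-\tau)^2+x_2^2},
\qquad (x_1,x_2)\in\bR^2_+,
\]
which in the notation \eqref{e1.20x} below reads $u=2\W_\bR g_0$.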
As a particular case of Theorem \ref{t0.1} (it can easily be proved with the Lax-Milgram Lemma) we have the following.
\begin{proposition}\label{p1.4}
The mixed BVP \eqref{e0.6x} has a unique solution $u$ in the classical weak setting
\begin{equation*}
u\in\mathbb{H}^1(\bR^2_+),\quad f_1\in\widetilde{\mathbb{H}}^{-1}_0(\bR^2_+),
\quad g_1\in\mathbb{H}^{1/2}(\bR^-), \quad h_1\in\mathbb{H}^{-1/2}(\bR^+).
\end{equation*}
\end{proposition}
\begin{lemma}\label{l1.4}
The BVP \eqref{e0.1} is Fredholm in the non-classical setting \eqref{e0.3} if the model mixed BVP \eqref{e0.6x} is locally Fredholm (i.e., locally invertible) at $0$ in the non-classical setting \eqref{e0.7x}.
\end{lemma}
{\bf Proof:} We apply quasi-localization of the boundary value problem \eqref{e0.1} in the more general non-classical setting \eqref{e0.3}, which includes the classical setting \eqref{e0.2} as a particular case (see \cite{CDS03,Du84a} for details of quasi-localization of boundary value problems and also \cite{DiS08,GK79,Si65} for general results on localization and quasi-localization).
By quasi-localization at the point $\omega\in\ov\cC$ we first localize to the tangential plane $\bR^2(\omega)$ (tangential half plane $\bR^2_+(\omega)$) to $\cC$ at $\omega\in\cC$ (at $\omega\in\Gamma=\partial\cC$, respectively). The differential operators remain the same
\begin{equation}\label{e0.x7}
\begin{array}{c}
\Delta_{\bR^2}:=\dst\sum_{j=1}^3\cD_j^2, \quad \cD_j=\pa_j-\nu_j\pa_{\nub},\\ \pa_{\nub}=\dst\sum_{j=1}^3\nu_j\cD_j, \quad \pa_{\nub_\Gamma}=\dst\sum_{j=1}^3\nu_{\Gamma,j}\cD_j,
\end{array}
\end{equation}
but the normal vector $\nub(\omega)$ to the tangent plane $\bR^2(\omega)$ and the normal vector $\nub_\Gamma(\omega)$ to the boundary of the tangent half plane $\bR(\omega)=\partial\bR^2_+(\omega)$ are now constant. Next we rotate the tangent planes $\bR^2(\omega)$ and $\bR^2_+(\omega)$ to match them with the planes $\bR^2$ and $\bR^2_+$. The normal vector fields transform into $\nub=(0,0,1)^\top$ and $\nub_\Gamma=(0,-1,0)^\top$. The rotation is an isomorphism of the spaces $\bW^r_p(\bR^2(\omega))\to\bW^r_p(\bR^2)$, $\bW^r_p(\bR^2_+(\omega))\to\bW^r_p(\bR^2_+)$, $\wt\bW^r_p(\bR^2_+(\omega))\to\wt\bW^r_p(\bR^2_+)$ etc. and transforms the operators in \eqref{e0.x7} into the operators
\begin{equation*}
\begin{array}{c}
\Delta_{\bR^2(\omega)}\;\to\;\Delta:=\dst\sum_{j=1}^2\pa_j^2, \quad \cD_j\;\to\;\pa_j,
\quad j=1,2,\quad \cD_3\;\to\;0,\\
\pa_{\nub(\omega)}\;\to\; \pa_3, \quad \pa_{\nub_\Gamma(\omega)}\;\to\;-\pa_2
\end{array}
\end{equation*}
and we get \eqref{e0.6m}, \eqref{e0.4m}, \eqref{e0.5m}, \eqref{e0.6x} as local representatives of the BVP \eqref{e0.1}.
For the BVP \eqref{e0.1} in the non-classical setting \eqref{e0.3} we get the following local quasi-equivalent equations and BVPs at different points of the surface $\omega\in\ov\cC$:
\begin{itemize}
\item[i.]
The equation \eqref{e0.6m} at $0$ if $\omega\in\cC$ is an inner point of the surface;
\item[ii.]
The Dirichlet BVP \eqref{e0.4m} in the non-classical setting \eqref{e0.7m} at $0$ if $\omega\in\Gamma_D$;
\item[iii.]
The Neumann BVP \eqref{e0.5m} in the non-classical setting \eqref{e0.7m} at $0$ if
$\omega\in\Gamma_N$;
\item[iv.]
The mixed BVP \eqref{e0.6x} in the non-classical setting \eqref{e0.7x} at $0$ if $\omega\in\ov{\Gamma_D}\cap\ov{\Gamma_N}$ is one of two points of collision of different boundary conditions.
\end{itemize}
The main conclusion of the present lemma on the Fredholm properties of the BVPs \eqref{e0.1} and \eqref{e0.6x} follows from Proposition \ref{p1.7} and the general theorem on quasi-localization (see \cite{CDS03,Du84a,DiS08,GK79,Si65}): {\em The BVP \eqref{e0.1}, \eqref{e0.3} is Fredholm if all local representatives \eqref{e0.6m}, \eqref{e0.4m}, \eqref{e0.5m} and \eqref{e0.6x} in the non-classical settings are locally Fredholm (i.e., are locally invertible)}. \QED
Now we concentrate on the model mixed BVP \eqref{e0.6x}. To this end let us recall that the function
\begin{eqnarray*}
\cK_\Delta(x):=\frac1{2\pi}\ln|x|
\end{eqnarray*}
is the fundamental solution of the Laplace equation in two variables
\begin{eqnarray}\label{e1.25a}
\begin{array}{cc}
\Delta\cK_\Delta(x) = \delta(x), \qquad x\in\bR^2,\\[3mm]
\Delta=\partial^2_1+\partial^2_2=\partial^2_\nub+\partial^2_\ell.
\end{array}
\end{eqnarray}
From \eqref{e1.25a} follows the equality
\[
\delta=\Delta\cK_\Delta=\partial^2_\nub\cK_\Delta +\partial^2_\ell\cK_\Delta,
\]
which we use to prove the following:
\begin{eqnarray}\label{e1.25b}
\partial_{\nub(x)}\partial_{\nub(y)}\cK_\Delta(x-y)=-\partial^2_{\nub(y)}\cK_\Delta(x-y)
=-\delta(x-y)+\partial^2_{\ell(y)}\cK_\Delta(x-y).
\end{eqnarray}
Applying the latter equality \eqref{e1.25b}, we represent the hypersingular operator $\V_{\bR,+1}$ as follows
\begin{eqnarray}\label{e1.25c}
&&\hskip-10mm\V_{\bR,+1}\vf(t):=\dst\int_{\bR}\pa_{\nub(t)}\pa_{\nub(\tau)}\cK_\Delta(t-\tau)
\vf(\tau)d\tau=-\vf(t)+\dst\int_{\bR}\pa^2_{\ell(\tau)}\cK_\Delta(t-\tau)\vf(\tau)d\tau\nonumber\\
&&\hskip-0mm=-\vf(t)-\dst\int_{\bR}\pa_\tau\cK_\Delta(t-\tau)\pa_\tau\vf(\tau)d\tau, \quad t\in\bR,
\end{eqnarray}
since $\pa_{\ell(\tau)}=\pa_\tau$ on $\bR$ and for the tangential differential operator $\pa_\ell$ on an arbitrary smooth contour $\Gamma$ the following ``partial integration'' formula is valid (see \cite{Du01,DMM06}):
\[
\dst\int_\Gamma\pa_{\ell(\tau)}\psi(\tau)\vf(\tau)d\sigma=-\dst\int_{\Gamma}
\psi(\tau)\pa_{\ell(\tau)}\vf(\tau)d\sigma.
\]
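Indeed, this formula is an immediate consequence of the Leibniz rule and of the identity
\[
\int_\Gamma\pa_{\ell(\tau)}\big(\psi(\tau)\vf(\tau)\big)\,d\sigma=0,
\]
which holds because the integral of the tangential derivative of a function over a closed contour vanishes.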
We can define the standard layer potential operators, namely the Newton, single and double layer potentials (cf. \eqref{e1.5}):
\begin{eqnarray}\label{e1.20x}
\N_{\bR^2_+}v(x)&\hskip-3mm:=&\hskip-3mm\dst\frac1{2\pi}\dst\int_{\bR^2_+}
\ln|x-y| v(y)\,dy,\nonumber\\[3mm]
\V_{\bR} v(x)&\hskip-3mm:=&\hskip-3mm\dst\frac1{2\pi}\dst\int_{\bR}\ln|x-\tau|
v(\tau)d\tau,\nonumber\\
\W_{\bR} v(x)&\hskip-3mm:=&\hskip-3mm-\dst\frac1{2\pi}\dst\int_{\bR}\pa_{2}
\ln|(x_1,x_2)-(\tau,y_2)|\Big|_{y_2=0} v(\tau)d\tau\\
&\hskip-3mm=&\hskip-3mm-\dst\frac1{2\pi}\dst\int_\bR\pa_{2}
\ln\sqrt{(x_1-\tau)^2+(x_2-y_2)^2}\Big|_{y_2=0} v(\tau)d\tau\nonumber\\[3mm]
&\hskip-3mm=&\hskip-3mm\dst\frac1{2\pi}\dst\int_\bR\frac{x_2 v(\tau)d\tau}{(x_1-\tau)^2
+x_2^2}, \qquad x=(x_1,x_2)^\top\in\bR^2_+.\nonumber
\end{eqnarray}
The pseudodifferential operators $\V_{\bR,-1}$, $\W_{\bR,0}$, $\W^*_{\bR,0}$ and $\V_{\bR,+1}$, associated with the layer potentials (see \eqref{e1.7a}), acquire the form
\begin{eqnarray}\label{e1.27}
\begin{array}{rcl}
\V_{\bR,-1}v(t)&\hskip-3mm:=&\hskip-3mm\dst\frac1{2\pi}\dst\int_\bR\ln|t-\tau|
v(\tau)d\tau,\\[3mm]
\W_{\bR,0} v(t)&\hskip-3mm:=&\hskip-3mm\dst\frac1{2\pi}\dst\int_\bR
\frac{x_2 v(\tau)d\tau}{(x_1-\tau)^2+x_2^2}\bigg|_{x_2=0}=0,\qquad \W^*_{\bR,0}v(t)=0,
\end{array}
\end{eqnarray}
since the direct values of the double layer kernels vanish on the boundary $x_2=0$.
By using the representation \eqref{e1.25c} we find the following:
\begin{eqnarray*}
\begin{array}{rcl}
\V_{\bR,+1} v(t)
&\hskip-3mm=&\hskip-3mm-v(t)-\dst\frac1{2\pi}\dst\int_{\bR}\pa_\tau\ln|t-\tau|\pa_\tau v(\tau)d\tau\\[3mm]
&\hskip-3mm=&\hskip-3mm-v(t)+\dst\frac1{2\pi}\dst\int_{\bR}\dst\frac{t-\tau}{(t-\tau)^2}\pa_\tau v(\tau)d\tau\\[3mm]
&\hskip-3mm=&\hskip-3mm-v(t)+\dst\frac1{2\pi}\dst\int_{\bR}\dst\frac{\pa_\tau v(\tau)d\tau}{t-\tau}, \quad t\in\bR
\end{array}
\end{eqnarray*}
and the Plemelj formulae \eqref{e1.7} acquire the form
\[
\begin{array}{c}
(\W_{\bR} v)^\pm(t)=\pm\dst\frac12 v(t),\qquad
-(\pa_{y_2}\V_\bR v)^\pm(t)=\mp\dst\frac12v(t),\\[3mm]
-(\pa_{y_2}\W_{\bR}v)^\pm(t)=\V_{\bR,+1}v(t),\qquad
(\V_{\bR} v)^\pm(t)=\V_{\bR,-1} v(t), \qquad t\in\bR.
\end{array}
\]
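For instance, the first of these jump relations can be checked directly from the Poisson kernel representation \eqref{e1.20x} of the double layer potential:
\[
(\W_{\bR} v)^\pm(t)=\lim_{x_2\to0\pm}\dst\frac1{2\pi}\dst\int_\bR\frac{x_2\,v(\tau)\,d\tau}{(t-\tau)^2+x_2^2}=\pm\dst\frac12v(t),
\]
since $\dst\frac1\pi\dst\int_\bR\dst\frac{|x_2|\,d\tau}{(t-\tau)^2+x_2^2}=1$ for all $x_2\not=0$ and the kernel concentrates at $\tau=t$ as $x_2\to0$.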
Now we prove the following.
\begin{lemma}\label{l1.5}
Let $1<p<\infty$, $s>\dst\frac1p$. Let $g^0_1\in\bW^{s-1/p}_p(\bR)$ and $h^0_1\in\bW^{s-1-1/p}_p(\bR)$ be some fixed extensions of the boundary conditions $g_1\in\bW^{s-1/p}_p(\bR^-)$ and $h_1\in\bW^{s-1-1/p}_p(\bR^+)$ $($non-classical formulation \eqref{e0.7x}$)$, initially defined on the parts of the boundary $\bR=\bR^-\cup\bR^+$.
A solution to the BVP \eqref{e0.6x} is represented by the formula
\begin{eqnarray}\label{e1.28}
u(x)=\N_{\bR^2_+}f_1(x)+\W_\bR(g^0_1+\varphi^0)(x)-\V_\bR(h^0_1+\psi^0)(x),\qquad x\in\bR^2_+
\end{eqnarray}
(cf. \eqref{e1.20x} for the potential operators) and $\varphi^0$ and $\psi^0$ are solutions to the system of pseudodifferential equations
\begin{eqnarray}\label{e1.22}
&&\begin{array}{l}
\left\{\begin{array}{ll}\dst\frac12\varphi^0 - r_+\W_{\bR,0}\varphi^0 + r_+\V_{\bR,-1}\psi^0=G_1
\qquad \text{on}\quad \bR^+\\[3mm]
\dst\frac12\psi^0 + r_-\W^*_{\bR,0}\psi^0 - r_-\V_{\bR,+1}\varphi^0=H_1 \qquad \text{on}\quad \bR^-, \end{array}\right.
\end{array} \\[2mm]
\label{e1.23}
&&\varphi^0\in\wt\bW^{s-1/p}_p(\bR^+),\quad \R_*\psi^0\in\wt\bW^{s-1-1/p}_p(\bR^+),\quad G_1\in\bW^{s-1/p}_p(\bR^+),\quad \R_* H_1\in\bW^{s-1-1/p}_p(\bR^+),
\end{eqnarray}
where $r_+$ and $r_-$ are the restriction operators from the axis $\bR$ to the semi-axes $\bR^+$ and $\bR^-$, respectively.
The system of boundary pseudodifferential equations \eqref{e1.22} has a unique pair of solutions $\varphi^0$ and $\psi^0$ in the classical setting $p=2$, $s=1$.
\end{lemma}
{\bf Proof:} By repeating the proof of Theorem \ref{t0.3a} word for word, we prove the equivalence, via the representation formula \eqref{e1.28}, of the BVP \eqref{e0.6x} in the non-classical setting \eqref{e0.7x} and the system \eqref{e1.22}.
The existence and uniqueness of a solution to the BVP \eqref{e0.6x} in the classical setting \eqref{e0.7x} is stated in Proposition \ref{p1.4}, while for the system \eqref{e1.22} it follows from the proved equivalence with the BVP \eqref{e0.6x}. \QED
\begin{lemma}\label{l1.5a}
Let $1<p<\infty$, $s>\dst\frac1p$.
The system of boundary pseudodifferential equations \eqref{e1.22}
is locally invertible at $0$ if and only if the system \eqref{e0.10} is locally invertible at $0$ in the non-classical setting \eqref{e0.11a} and the space parameters are related as follows: $r=s-\frac1p>0$.
\end{lemma}
{\bf Proof:} Due to the equalities \eqref{e1.27} we have $r_+\W_{\bR,0}\varphi^0=0$ and $r_-\W^*_{\bR,0}\psi^0=0$, and the system \eqref{e1.22} acquires the form
\begin{eqnarray*}
\left\{\begin{array}{ll}\dst\frac12\varphi^0(t) + \dst\frac1{2\pi}\dst\int_{\bR^-}\ln|t-\tau|\psi^0(\tau)d\tau=G_1(t),
\qquad t\in\bR^+,\\[3mm]
\dst\frac12\psi^0(t) -\dst\frac1{2\pi}\dst\int_{\bR^+}\frac{(\pa_\tau\varphi^0)(\tau)
d\tau}{t-\tau}=H_1(t), \qquad t\in\bR^-. \end{array}\right.
\end{eqnarray*}
Multiply both equations by $2$, apply the differentiation $\pa_t$ to the first equation and replace $\vf:=\pa_t\vf^0$; apply the reflection $\R_* v(t)=v(-t)$ to the second equation and replace $\psi:=\R_*\psi^0$, also under the integral. We get the following:
\begin{eqnarray*}
\left\{\begin{array}{ll}\varphi(t) + \dst\frac1\pi\dst\int_{\bR^+}\pa_t\ln(t+\tau)\psi(\tau)d\tau
=\varphi(t) + \dst\frac1\pi\dst\int_{\bR^+}\dst\frac{\psi(\tau)d\tau}{t+\tau} =2\pa_tG_1(t)=:G(t), \\[5mm]
\psi(t) +\dst\frac1\pi\dst\int_{\bR^+}\frac{\varphi(\tau)d\tau}{t+\tau}=2H_1(-t)=:H(t), \qquad t\in\bR^+ \end{array}\right.
\end{eqnarray*}
and the obtained equation coincides with the system \eqref{e0.10}.
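For later use we note that the system \eqref{e0.10} can be rewritten in the operator-matrix form (a mere restatement)
\[
\cB\left(\begin{array}{c}\vf\\ \psi\end{array}\right):=
\left(\begin{array}{cc} I & \K^1_{-1}\\[1mm] \K^1_{-1} & I\end{array}\right)
\left(\begin{array}{c}\vf\\ \psi\end{array}\right)
=\left(\begin{array}{c}G\\ H\end{array}\right)
\qquad\text{on}\quad\bR^+,
\]
and it is the symbol of this $2\times2$ operator matrix (see \S\,\ref{sect4}) which governs the Fredholm property of \eqref{e0.10}.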
To prove the local equivalence at $0$ of the systems \eqref{e1.22} and \eqref{e0.10} note that the multiplication by $2$ and the reflection
\[
\R_*\;\;:\bW^r_p(\bR^+)\to\bW^r_p(\bR^-),\qquad
\R_*\;:\;\wt\bW^r_p(\bR^+)\to\wt\bW^r_p(\bR^-)
\]
are invertible operators, since $\R_*^2=I$ and $\R_*^{-1}=\R_*$, and are therefore locally invertible at $0$.
The differentiation
\[
\pa_t:=\dst\frac d{dt}\;:\;\bW^r_p(\bR^+)\to\bW^{r-1}_p(\bR^+),\qquad
\pa_t\;:\;\wt\bW^r_p(\bR^+)\to \wt\bW^{r-1}_p(\bR^+)
\]
is locally invertible at any finite point of $\bR$ because the operators
\[
\pa_t-iI\;:\;\bW^r_p(\bR^+)\to\bW^{r-1}_p(\bR^+),\qquad
\pa_t+iI\;:\;\wt\bW^r_p(\bR^+)\to \wt\bW^{r-1}_p(\bR^+)
\]
are isomorphisms (represent Bessel potentials, see Theorem \ref{t4.8} below, \cite[Lemma 5.1]{Du79} and \cite{Es81}). On the other hand, the embeddings
\[
iI\;:\;\bW^r_p(\bR^+)\to\bW^{r-1}_p(\bR^+),\qquad
iI\;:\;\wt\bW^r_p(\bR^+)\to \wt\bW^{r-1}_p(\bR^+)
\]
are locally compact due to Sobolev's embedding theorem, and a compact perturbation does not influence the local invertibility. \QED
\noindent
{\bf Proof of Theorem \ref{t0.5}:} By Theorem \ref{t0.3a} the system \eqref{e0.6} is Fredholm in the Sobolev-Slobode\v{c}kii space setting \eqref{e0.6a} if the BVP \eqref{e0.1} is Fredholm in the non-classical setting \eqref{e0.3}. On the other hand, by Lemma \ref{l1.4} the BVP \eqref{e0.1} is Fredholm in the non-classical setting \eqref{e0.3} if the BVP \eqref{e0.6x} is locally invertible at $0$ in the non-classical setting \eqref{e0.7x}. And, finally, by Lemma \ref{l1.5} and Lemma \ref{l1.5a} the BVP \eqref{e0.6x} is locally invertible at $0$ in the non-classical setting \eqref{e0.7x} if the system of boundary integral equations \eqref{e0.10} is locally invertible at $0$ in the Sobolev-Slobode\v{c}kii space setting \eqref{e0.11a}. This completes the proof of the first part of the assertion, concerning the solvability in the Sobolev-Slobode\v{c}kii space settings \eqref{e0.6a} and \eqref{e0.11a}.
The second part of the assertion, concerning the solvability in the Bessel potential space settings \eqref{e0.6b} and \eqref{e0.11b}, follows from the first part and Proposition \ref{p4.3}, exposed below and proved in \cite{Du15,DD14}, which states that these solvability properties are equivalent. \QED
\section{Fourier convolution operators in the Bessel potential spaces $\bH^s_p(\bR^+)$}
\label{sect2}
\setcounter{equation}{0}
To formulate the next theorem we need to introduce Fourier convolution and Bessel potential operators.
For spaces of scalar, vector and matrix functions we use the same notation whenever this leads to no confusion. For example, $\mathbb{L}_{\infty,loc}(\mathbb{R})$ may denote the space of locally bounded scalar, vector or matrix valued functions; the meaning will be clear from the context.
Let $a\in\mathbb{L}_{\infty,loc}(\mathbb{R})$ be a locally bounded $m\times m$ matrix function. The Fourier convolution operator (FCO) with the symbol $a$ is defined by
\[
W^0_a:=\mathcal{F}^{-1}a\mathcal{F}.
\]
Here
\begin{equation*}
\cF u(\xi):=\dst\int_{\bR^n}e^{i\xi x}u(x)dx,\quad \xi\in\bR^n,
\end{equation*}
is the Fourier transform and
\begin{equation*}
\cF^{-1}v(\xi):=\dst\frac{1}{(2\pi)^n}\int_{\bR^n}e^{-i\xi x}v(\xi)d\xi, \quad x\in\bR^n,
\end{equation*}
is its inverse transform. If the operator
\begin{equation*}
W^0_a:\mathbb{H}^s_p(\mathbb{R})\longrightarrow \mathbb{H}^{s-r}_p(\mathbb{R})
\end{equation*}
is bounded, we say that $a$ is an $\mathbb{L}_p$-multiplier of order $r$, and simply an ``$\mathbb{L}_p$-multiplier'' if the order is $0$. The set of all $\mathbb{L}_p$-multipliers of order $r$ (of order $0$) is denoted by $\mathfrak{M}^r_p(\mathbb{R})$ (by $\mathfrak{M}_p(\mathbb{R})$, respectively).
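For instance, the weight function $\lambda^r(\xi):=(1+|\xi|^2)^{r/2}$ belongs to $\mathfrak{M}^r_p(\mathbb{R})$ for all $1<p<\infty$: by the very definition of the Bessel potential spaces the operator
\[
W^0_{\lambda^r}=\langle D\rangle^r=\cF^{-1}(1+|\xi|^2)^{\frac r2}\cF\;:\;\mathbb{H}^s_p(\mathbb{R})\longrightarrow \mathbb{H}^{s-r}_p(\mathbb{R})
\]
is bounded (even an isometric isomorphism) for all $s\in\mathbb{R}$.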
Let
\[
\wt{\mathfrak{M}}^r_p(\mathbb{R}):=\bigcap_{p-\ve<q<p+\ve}\mathfrak{M}^r_q(\mathbb{R}),
\qquad\wt{\mathfrak{M}}_p(\mathbb{R}):=\bigcap_{p-\ve<q<p+\ve}\mathfrak{M}_q(\mathbb{R}).
\]
Note that $\wt{\mathfrak{M}}^r_p(\mathbb{R})$ and $\wt{\mathfrak{M}}_p(\mathbb{R})$ are independent of $\ve$ because, due to the interpolation theorem, $\mathfrak{M}^r_{p_0}(\mathbb{R})\subset\mathfrak{M}^r_{p_-}(\mathbb{R})\bigcap
\mathfrak{M}^r_{p_+}(\mathbb{R})$ for all $1<p_-<p_0<p_+<\infty$.
For an $\mathbb{L}_p$-multiplier of order $r$, $a\in\mathfrak{M}^r_p(\mathbb{R})$, the Fourier convolution operator (FCO) on the semi-axis $\mathbb{R}^+$ is defined by the equality
\begin{equation}\label{e4.7}
W_a=r_+W^0_a\; :\; \widetilde{\mathbb{H}}^s_p(\mathbb{R}^+)\longrightarrow
\mathbb{H}^{s-r}_p(\mathbb{R}^+)
\end{equation}
where $r_+:=r_{\mathbb{R}^+}:\mathbb{H}^s_p(\mathbb{R})\longrightarrow \mathbb{H}^s_p(\mathbb{R}^+)$ is the restriction operator to the semi-axis $\mathbb{R}^+$.
We did not use the parameter $s\in\mathbb{R}$ in the definition of the class of multipliers $\mathfrak{M}^r_p(\mathbb{R})$. This is due to the fact that $\mathfrak{M}^r_p(\mathbb{R})$ is independent of $s$: if the operator $W_a$ in \eqref{e4.7} is bounded for some $s\in\mathbb{R}$, it is bounded for all other values of $s$. An equivalent definition of the multiplier class $\mathfrak{M}^r_p(\mathbb{R})$ reads: $a\in\mathfrak{M}^r_p(\mathbb{R})$ if and only if $\lambda^{-r}a\in\mathfrak{M}_p(\mathbb{R})=\mathfrak{M}^0_p(\mathbb{R})$, where $\lambda^r(\xi):=(1+|\xi|^2)^{r/2}$. This assertion is one of the consequences of Theorem \ref{t4.8} below.
Consider the Bessel potential operators defined as follows
\begin{equation}\label{e4.1}
\begin{array}{l}
\mathbf{\Lambda}_\gamma^r=W^0_{\lambda^r_\gamma}\;:\;\widetilde{\mathbb{H}
}^s_p(\mathbb{R}^+)\rightarrow\widetilde{\mathbb{H}}^{s-r}_p(\mathbb{R}^+),
\,\\[3mm]
\mathbf{\Lambda}_{-\gamma}^r=r_+W^0_{\lambda^r_{-\gamma}}\ell\;:\;\mathbb{H}^s_p(
\mathbb{R}^+)\rightarrow\mathbb{H}^{s-r}_p(\mathbb{R}^+)\, ,\\[3mm]
\lambda^r_{\pm\gamma}(\xi):=(\xi\pm\gamma)^r,\qquad\xi\in\mathbb{R}, \qquad {\rm Im}\,\gamma>0
\end{array}
\end{equation}
for $s\geqslant0$. Here $\ell\;:\;\mathbb{H}^s_p(\mathbb{R}^+)\to\mathbb{H}^s_p(\mathbb{R})$ is some extension operator. In \eqref{e4.7} there is no need for an extension operator, since the space $\widetilde{\mathbb{H}}^s_p(\mathbb{R}^+)$ is automatically embedded in $\bH^s_p(\bR)$ once its elements are extended by $0$.
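Let us briefly indicate why $\mathbf{\Lambda}^r_\gamma=W^0_{\lambda^r_\gamma}$ maps $\widetilde{\mathbb{H}}^s_p(\mathbb{R}^+)$ into $\widetilde{\mathbb{H}}^{s-r}_p(\mathbb{R}^+)$ (a sketch of the standard Paley-Wiener argument): for $u$ supported in $\overline{\mathbb{R}^+}$ the Fourier transform
\[
\cF u(\xi)=\int_0^\infty e^{i\xi x}u(x)\,dx
\]
extends analytically into the upper half plane ${\rm Im}\,\xi>0$; the function $\lambda^r_\gamma(\xi)=(\xi+\gamma)^r$, ${\rm Im}\,\gamma>0$, is analytic and non-vanishing there, so the product $\lambda^r_\gamma\cF u$ has the same analyticity property and $W^0_{\lambda^r_\gamma}u$ is again supported in $\overline{\mathbb{R}^+}$.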
For a negative $s<0$ the Bessel potential operators $\mathbf{\Lambda}_{\pm\gamma}^r$ are defined by the duality between the spaces.
\begin{theorem}\label{t4.8}
Let $1<p<\infty$. Then:
\begin{enumerate}
\vskip+0.15cm
\item
For any $r,s\in\mathbb{R}$, $\gamma\in\mathbb{C}$, ${\rm Im}\,\gamma>0$ the Bessel potential
operators \eqref{e4.1} are isomorphisms of the corresponding spaces $($see {\rm \cite{Du79, Es81}}$)$ and are independent of the choice of an extension operator $\ell:\mathbb{H}^s_p(\mathbb{R}^+)\longrightarrow\mathbb{H}^s_p (\mathbb{R})$.
\vskip+0.15cm
\item
For any operator $\mathbf{A}:\widetilde{\mathbb{H}}^s_p(\mathbb{R}^+) \longrightarrow \mathbb{H}^{s-r}_p(\mathbb{R}^+)$ of order $r$, the following diagram is commutative
\begin{equation}\label{e4.10}
\begin{array}{ccc}\widetilde{\mathbb{H}}^s_p(\mathbb{R}^+) & \stackrel{\mathbf{A}}{\longrightarrow}
&\mathbb{H}^{s-r}_p(\mathbb{R}^+)\\
\uparrow\mathbf{\Lambda}^{-s}_\gamma & &\downarrow \mathbf{\Lambda}_{-\gamma}^{s-r}\\
\mathbb{L}_p(\mathbb{R}^+) & \stackrel{\mathbf{\Lambda}_{-\gamma}^{s-r}\mathbf{A}
\mathbf{\Lambda}^{-s}_\gamma}{\longrightarrow}& \mathbb{L}_p(\mathbb{R}^+). \end{array}
\end{equation}
The diagram \eqref{e4.10} provides an equivalent lifting of the
operator $\mathbf{A}$ of order $r$ to the operator
$\mathbf{\Lambda}_{-\gamma}^{s-r}\mathbf{A}\mathbf{\Lambda}^{-s}_\gamma:\mathbb{L}_p(
\mathbb{R}^+)\longrightarrow\mathbb{L}_p(\mathbb{R}^+)$ of order~$0$.
\vskip+0.15cm
\item
For any bounded convolution operator $W_a:\widetilde{\mathbb{H}}^s_p(\mathbb{R}^+) \longrightarrow \mathbb{H}^{s-r}_p(\mathbb{R}^+)$ of order $r$ and for any pair of complex numbers $\gamma_1, \gamma_2$ such that ${\rm Im}\,\gamma_j>0$, $j=1,2$, the lifted operator
\begin{equation}\label{e4.11}
\begin{array}{c}
\mathbf{\Lambda}_{-\gamma_1}^\mu W_a\mathbf{\Lambda}_{\gamma_2}^\nu
=W_{a_{\mu,\nu}}\;:\;\widetilde{\mathbb{H}}^{s+\nu}_p(\mathbb{R}^+) \longrightarrow\mathbb{H}^{s-r-\mu}_p(\mathbb{R}^+),\\[2mm]
a_{\mu,\nu}(\xi):=(\xi-\gamma_1)^\mu a(\xi)(\xi+\gamma_2)^\nu
\end{array}
\end{equation}
is again a Fourier convolution.
In particular, the lifted operator $W_{a_0}$ in $\mathbb{L}_p$-spaces, $\mathbf{ \Lambda}_{-\gamma}^{s-r}W_a\mathbf{\Lambda}^{-s}_\gamma:\mathbb{L}_p(\mathbb{R}^+)
\longrightarrow\mathbb{L}_p(\mathbb{R}^+)$ has the symbol
\begin{equation*}
a_{s-r,-s}(\xi)=\lambda^{s-r}_{-\gamma}(\xi)a(\xi)\lambda^{-s}_\gamma(\xi)
=\Big(\frac{\xi-\gamma}{\xi+\gamma}\Big)^{s-r}\,\frac{a(\xi)}{(\xi+\gamma)^r}\,.
\end{equation*}
\end{enumerate}
\end{theorem}
\begin{remark}\label{r4.12}
For any pair of multipliers $a\in\mathfrak{M}^r_p(\mathbb{R})$, $b\in\mathfrak{M}^s_p(\mathbb{R})$
the corresponding convolution operators on the full axes $W^0_a$ and $W_b^0$ have the property $W^0_aW^0_b=W^0_bW^0_a=W^0_{ab}$.
For the corresponding Wiener-Hopf operators on the half axis a similar equality
\begin{equation}\label{e4.13}
W_aW_b=W_{ab}
\end{equation}
is valid if at least one of the following conditions hold: the function $a(\xi)$ has an analytic extension in the lower half plane or the function $b(\xi)$ has an analytic extension in the upper half plane (see \cite{Du79}).
Note that \eqref{e4.11} is actually a consequence of \eqref{e4.13}.
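Indeed (a two-line sketch): $\lambda^\mu_{-\gamma_1}(\xi)=(\xi-\gamma_1)^\mu$ admits an analytic extension into the lower half plane and $\lambda^\nu_{\gamma_2}(\xi)=(\xi+\gamma_2)^\nu$ into the upper half plane, so applying \eqref{e4.13} twice we get
\[
\mathbf{\Lambda}_{-\gamma_1}^\mu W_a\mathbf{\Lambda}_{\gamma_2}^\nu
=W_{\lambda^\mu_{-\gamma_1}}W_aW_{\lambda^\nu_{\gamma_2}}
=W_{\lambda^\mu_{-\gamma_1}a\lambda^\nu_{\gamma_2}}=W_{a_{\mu,\nu}}.
\]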
\end{remark}
Let $\dR:=\bR\cup\{\infty\}$ denote the one point compactification of the real axis $\bR$ and $\overline{\bR}:=\bR\cup\{\pm\infty\}$ the two point compactification of $\bR$. By $C(\dR)$ (by $C(\overline{\bR})$, respectively) we denote the space of continuous functions $g$ on $\bR$ which have equal limits at infinity, $g(-\infty)=g(+\infty)$ (whose limits at infinity may differ, $g(-\infty)\not=g(+\infty)$, respectively). By $PC(\dR)$ we denote the space of piecewise continuous functions on $\dR$, having limits $a(t\pm0)$ at all points $t\in\dR$, including infinity.
\begin{proposition}[Lemma 7.1, \cite{Du79} and Proposition 1.2, \cite{Du87}]\label{p2.5} Let $1<p<\infty$, $a\in C(\dR{}^+)$, $b\in C(\dR)\cap\wt{\mathfrak{M}}_p(\dR)$ and $a(\infty)= b(\infty)=0$. Then the operators $aW_b,W_b\,aI:\mathbb{L}_p(\mathbb{R}^+)\longrightarrow \mathbb{L}_p(\mathbb{R}^+)$ are compact.
Moreover, these operators are compact in all Bessel potential and Besov spaces, where they are bounded, due to the Krasnoselskij interpolation theorem for compact operators.
\end{proposition}
\begin{proposition}[Lemma 7.4, \cite{Du79} and Lemma 1.2, \cite{Du87}]\label{p2.7}
Let $1<p<\infty$ and let $a$ and $b$ satisfy at least one of the following conditions:
\begin{itemize}
\vskip+0.15cm
\item[(i)] $a\in C(\overline{\mathbb{R}}{}^+)$, $b\in\wt{\mathfrak{M}}_p(\mathbb{R})\cap PC(\overline{\mathbb{R}})$,
\vskip+0.15cm
\item[(ii)] $a\in PC(\overline{\mathbb{R}}{}^+)$, $b\in C\wt{\mathfrak{M}}_p(\overline{\mathbb{R}})$.
\end{itemize}
Then the commutants $[aI,W_b]$ are compact operators in the space $\mathbb{L}_p(\mathbb{R}^+)$ and also, due to the Krasnoselskij interpolation theorem for compact operators, in all Bessel potential and Besov spaces where they are bounded.
\end{proposition}
\section{Mellin convolution operators in the space $\bH^s_p(\bR^+)$}
\label{sect3}
\setcounter{equation}{0}
In this section we expose auxiliary results from \cite{Du15} (also see \cite{Du79,Du87,DD14}), which are essential for the investigation of boundary integral equations from the foregoing section.
Let $a(\xi)$ be an $N\times N$ matrix function, $a\in C\mathfrak{M}^0_p(\bR)$, continuous on the real axis $\mathbb{R}$ with the only possible jump at infinity. Consider the Mellin convolution operator $\mathfrak{M}^0_a$ with the symbol $a$ in the Bessel potential spaces
\begin{eqnarray*}
\mathfrak{M}^0_a:=\cM^{-1}_\beta a\cM_\beta\;:\;\widetilde{\mathbb{H}}^s_p(
\mathbb{R}^+)\longrightarrow\mathbb{H}^s_p(\mathbb{R}^+),\quad s\in\mathbb{R},
\end{eqnarray*}
where
\begin{eqnarray*}
\begin{array}{c}
\mathcal{M}_\beta v(\xi):=\displaystyle\int_0^\infty \tau^{\beta-i\xi}v(\tau)\frac{d\tau}\tau,\quad
\xi\in\mathbb{R},\\[1ex]
\mathcal{M}^{-1}_\beta u(t):=\displaystyle\frac1{2\pi}
\displaystyle\int_{-\infty}^{\infty}t^{i\xi-\beta}
u(\xi)d\xi,\qquad t\in\mathbb{R}^+,
\end{array}
\end{eqnarray*}
are the Mellin transform and its inverse.
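With the substitution $\tau=e^x$ the Mellin transform becomes a Fourier transform up to a reflection of the dual variable,
\[
\cM_\beta v(\xi)=\int_{-\infty}^{\infty}e^{(\beta-i\xi)x}v(e^x)\,dx
=\cF\big[e^{\beta(\cdot)}v(e^{(\cdot)})\big](-\xi),\qquad \xi\in\bR,
\]
which explains why the theory of Mellin convolutions on $\bR^+$ parallels the theory of Fourier convolutions on $\bR$.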
The most important example of a Mellin convolution operator is an integral operator of the form
\begin{eqnarray}\label{e2.2}
\mathfrak{M}^0_a\mathbf{u}(t):=c_0\mathbf{u}(t)+\frac{c_1}{\pi i}
\int_0^\infty\frac{\mathbf{u}(\tau)\,d\tau}{\tau-t}+\int_{0}^\infty
\mathcal{K}\left(\frac t\tau\right)\mathbf{u}(\tau)\frac{d\tau}{\tau}
\end{eqnarray}
with $N\times N$ matrix coefficients $c_0$, $c_1$ and an $N\times N$ matrix kernel $\cK$ satisfying the condition
\begin{eqnarray}\label{e2.4}
\int_0^\infty t^{\bt-1}|\cK(t)|\,dt<\infty, \quad 0<\bt<1.
\end{eqnarray}
Then $\mathfrak{M}^0_a$ is a bounded operator in the weighted Lebesgue space of vector functions
\begin{eqnarray}\label{e2.3}
\fM^0_a\;:\;\bL_p(t^\gamma,\mathbb{R}^+)\longrightarrow\bL_p(t^\gamma,\mathbb{R}^+),\\[3mm] \bt:=\frac{1+\gamma}p, \quad 1<p<\infty,\quad -1<\gamma<p-1,\nonumber
\end{eqnarray}
endowed with the norm
\[
\|u|\bL_p(t^\gamma,\mathbb{R}^+)\|:=\left[\int_0^\infty t^\gamma|u(t)|^pdt
\right]^{1/p}
\]
(cf. \cite{Du79}). The symbol of the operator \eqref{e2.2} is expressed via the Mellin transform of the kernel:
\begin{eqnarray*}
a_\bt(\xi)&\hskip-3mm:=&\hskip-3mm c_0+c_1\coth\,\pi\left(i\bt
+\xi\right)+\cM_\bt\cK(\xi)\\
&\hskip-3mm=&\hskip-3mm c_0+c_1\coth\,\pi\left(i\bt+\xi\right)
+\int_{0}^{\infty}t^{\bt-i\xi}\cK(t)\frac{dt}t,\quad \xi\in\bR.
\end{eqnarray*}
Obviously, $\mathfrak{M}^0_a\mathfrak{M}^0_b\vf =\mathfrak{M}^0_{ab}\vf$ for $\vf\in C^\infty_0(\mathbb{R}^+)$.
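As an illustration, consider the operator $\K^1_{-1}$ from \eqref{e0.10}, i.e. the operator \eqref{e2.2} with $c_0=c_1=0$ and the kernel $\cK(t)=\dst\frac1{\pi(1+t)}$. The classical beta-integral $\int_0^\infty t^{\mu-1}(1+t)^{-1}dt=\pi/\sin\pi\mu$, $0<{\rm Re}\,\mu<1$, yields the symbol
\[
\cM_\bt\cK(\xi)=\frac1\pi\int_0^\infty\frac{t^{\bt-i\xi}}{1+t}\,\frac{dt}t
=\frac1{\sin\pi(\bt-i\xi)},\qquad \xi\in\bR,\quad 0<\bt<1,
\]
in agreement with the symbol $k_p$ of $\K^1_{-1}=\fM^0_{k_p}$ written in \S\,\ref{sect4} below (there $\bt=\dst\frac1p$, i.e. $\gamma=0$).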
\begin{theorem}\label{t2.1}
Let $1<p<\infty$ and $-1<\gamma<p-1$ (or $1\leqslant p\leqslant\infty$ provided $c_1=0$ in \eqref{e2.2}). The following three properties are equivalent:
\begin{itemize}
\item [i.]
The operator $\fM_a^0$ in \eqref{e2.2}--\eqref{e2.3} is Fredholm;
\item [ii.]
The symbol of the operator is invertible (is elliptic)
\begin{eqnarray*}
\inf_{\xi\in\bR}\left|\det\,a_\beta(\xi)\right|>0;
\end{eqnarray*}
\item [iii.]
The operator is invertible and the inverse operator is $\fM_{a^{-1}}^0$.
\end{itemize}
\end{theorem}
\begin{proposition}[Lemma 7.4, \cite{Du79} and Lemma 1.2, \cite{Du87}]\label{p2.7a}
Let $1<p<\infty$ and let $a$ and $b$ satisfy at least one of the following conditions:
\begin{itemize}
\vskip+0.15cm
\item[(i)] $a\in C(\overline{\mathbb{R}}{}^+)$, $b\in\wt{\mathfrak{M}}_p(\mathbb{R})\cap PC(\overline{\mathbb{R}})$,
\vskip+0.15cm
\item[(ii)] $a\in PC(\overline{\mathbb{R}}{}^+)$, $b\in C\wt{\mathfrak{M}}_p(\overline{\mathbb{R}})$.
\end{itemize}
Then the commutants $[aI,\mathfrak{M}^0_b]$ are compact operators in the space $\mathbb{L}_p(\mathbb{R}^+)$ and also, due to the Krasnoselskij interpolation theorem for compact operators, in all Bessel potential and Besov spaces where they are bounded.
\end{proposition}
Things are different in the Bessel potential spaces compared with the Lebesgue spaces. Let us recall some results from \cite[\S\, 2]{Du15}. Consider meromorphic functions in the complex plane $\mathbb{C}$, vanishing at infinity
\begin{eqnarray}\label{e2.6a}
\begin{array}{r}
\mathcal{K}(t):=\dst\sum_{j=0}^N\dst\frac{d_j}{(t-c_j)^{m_j}}
\end{array}
\end{eqnarray}
with poles at $c_0,c_1,\ldots,c_N\in\mathbb{C}\setminus\{0\}$, complex coefficients $d_j\in\mathbb{C}$ and multiplicities $m_j\in\mathbb{N}$.
\begin{definition}[see \cite{Du15}]\label{d2.2}
We call the kernel $\mathcal{K}(t)$ in \eqref{e2.6a} admissible if those poles $c_0,\ldots,c_\ell$ which belong to the positive semi-axis, $\arg\,c_0=\cdots =\arg\,c_\ell=0$, have multiplicity one, i.e., $m_0=\cdots=m_\ell=1$.
\end{definition}
For example, the Mellin convolution operator
\begin{eqnarray*}
\K^m_cv(t):=\dst\frac1\pi\int_0^\infty\frac{\tau^{m-1}v(\tau)d\tau}{(t-c\tau)^m},
\qquad t\in\bR^+, \quad v\in\bL_p(\bR^+),
\end{eqnarray*}
has an admissible kernel for arbitrary $m=1,2,\ldots$ provided $0<\arg\,c<2\pi$, while for real $c$, $\arg\,c=0$, the kernel is admissible only if $m=1$.
\begin{proposition}[see \cite{Du15}, Corollary 2.3, Theorem 2.4]\label{p2.4}
Let $1<p<\infty$ and $-1<\gamma<p-1$ (or $1\leqslant p\leqslant\infty$ provided $c_1=0$ in \eqref{e2.2}) and let $\mathcal{K}(t)$ in \eqref{e2.6a} be an admissible kernel. Then the Mellin convolution
\[
\mathfrak{M}^0_{a_\beta}\mathbf{u}(t):=c_0\mathbf{u}(t)+\int_{0}^\infty
\mathcal{K}\left(\frac t\tau\right)\mathbf{u}(\tau)\frac{d\tau}{\tau}
\]
is a bounded operator in the weighted Lebesgue space $\mathbb{L}_p(t^\gamma,\mathbb{R}^+)$ and, also, in the Bessel potential spaces $\mathfrak{M}^0_{a_\beta}\;:\;\widetilde{\mathbb{H}}^s_p(\mathbb{R}^+)\to
\mathbb{H}^s_p(\mathbb{R}^+)$ for all $s\in\bR$.
\end{proposition}
The next theorem provides the lifting of the Mellin convolution operator from a pair of Bessel potential spaces to the Lebesgue spaces.
\begin{theorem}[\cite{Du15}, Theorem 4.1]\label{t4.7}
Let $0<\arg\,c<2\pi$, $0<\arg\,\gamma<\pi$ and $r,s\in\mathbb{R}$, $1<p<\infty$. Then the operator $\K^1_c\;:\;\widetilde{\mathbb{H}}^s_p (\mathbb{R}^+)\to\mathbb{H}^s_p(\mathbb{R}^+)$ is lifted equivalently to the operator
\begin{eqnarray*}
\A^{1,s}_c:=\mathbf{\Lambda}^s_{-\gamma}\K^1_c\mathbf{\Lambda}^{-s}_{\gamma}
\;:\;\mathbb{L}_p(\mathbb{R}^+)\to\mathbb{L}_p(\mathbb{R}^+),
\end{eqnarray*}
where
\[
\A^{1,s}_c= c^{-s}\K^1_c W_{g^s_{-c\gamma,\gamma}}, \qquad c^{-s}:=|c|^{-s}e^{-s\arg\,c\,i}
\]
provided $0<\arg(-c\gamma)<\pi$.
If $0<\arg\,(c\,\gamma)<\pi$, choose any $\gamma_0\in\bC$ such that $0<\arg\,\gamma_0<\pi$ and $0<\arg(-c\,\gamma_0)<\pi$ (such a choice of $\gamma_0$ is possible since $c$ is not real, $\arg\,c\not=0$). Then
\begin{eqnarray*}
\A^{1,s}_c=c^{-s}W_{g^s_{-\gamma,-\gamma_0}}\K^1_c
W_{g^s_{-c\gamma_0,\gamma}}
=c^{-s}\mathbf{K}^1_cW_{g^s_{-\gamma,-\gamma_0}g^s_{-c\gamma_0,\gamma}}
\!+\!\mathbf{T},\\
\!+\!\mathbf{T},\\
g^s_{-c\gamma_0,\gamma}(\xi):=\left(\displaystyle\frac{\xi-c\gamma_0}{\xi
+\gamma}\right)^s,\quad g^s_{-\gamma,-\gamma_0}(\xi):=\left(\displaystyle\frac{\xi-\gamma}{\xi
-\gamma_0}\right)^s,
\end{eqnarray*}
where $\mathbf{T}\;:\;\mathbb{L}_p(\mathbb{R}^+)\to\mathbb{L}_p(\mathbb{R}^+)$ is a compact operator.
\end{theorem}
\section{Investigation of a lifted Mellin convolution operator}
\label{sect4}
\setcounter{equation}{0}
The results of the foregoing two sections together with results on a Banach algebra generated by Mellin and Fourier convolution operators (see \cite{Du87}) allow the investigation of lifted Mellin convolution operators. For this we need to write the symbol of a model operator
\begin{equation}\label{e4.1a}
\mathbf{A}:=d_0I+\sum_{j=1}^nd_j\mathbf{K}^1_{c_j}\;:\;\wt{\mathbb{H}}^s_p(\mathbb{R}^+)\to
\mathbb{H}^s_p( \mathbb{R}^+),
\end{equation}
where $\mathbf{K}^1_{c_1},\ldots,\mathbf{K}^1_{c_n}$ are Mellin convolution operators with admissible kernels.
To expose the symbol of the operator \eqref{e4.1a}, consider the infinite clockwise oriented ``rectangle'' $\mathfrak{R}:=\Gamma_1\cup\Gamma_2^-\cup\Gamma_2^+ \cup\Gamma_3$, where (cf. Figure~1)
$$ \Gamma_1:=\overline{\mathbb{R}}\times\{+\infty\},\;\;\Gamma^\pm_2:=\{\pm\infty\}\times\overline{\mathbb{R}}^+,\;\;
\Gamma_3:=\overline{\mathbb{R}}\times\{0\}. $$
\setlength{\unitlength}{0.4mm}
\vskip7mm
\hskip15mm
\begin{picture}(300,140)
\put(-00,40){\epsfig{file=Rectangle.pdf,height=40mm, width=80mm}}
\put(80,50){\makebox(0,0)[lc]{$(0,\xi)$}}
\put(80,130){\makebox(0,0)[lc]{$(\infty,\xi)$}}
\put(80,33){\makebox(0,0)[lc]{$\Gamma_3$}}
\put(80,145){\makebox(0,0)[lc]{$\Gamma_1$}}
\put(-13,90){\makebox(0,0)[lc]{$\Gamma^-_2$}}
\put(8,90){\makebox(0,0)[lc]{$(\eta,-\infty)$}}
\put(203,90){\makebox(0,0)[lc]{$\Gamma^+_2$}}
\put(160,90){\makebox(0,0)[lc]{$(\eta,+\infty)$}}
\put(-10,145){\makebox(0,0)[lc]{$(\infty,-\infty)$}}
\put(170,35){\makebox(0,0)[lc]{$(0,+\infty)$}}
\put(-10,35){\makebox(0,0)[lc]{$(0,-\infty)$}}
\put(170,145){\makebox(0,0)[lc]{$(\infty,+\infty)$}}
\put(0,15){\makebox(0,0)[lc]{The domain $\mathfrak{R}$ of definition of the symbol $\mathcal{A}^s_p(\omega)$.}}
\end{picture}
\noindent
According to \cite{DD14} the symbol $\mathcal{A}^s_p(\omega)$ of the operator $\mathbf{A}$ is
\begin{equation}\label{e4.2}
\mathcal{A}^s_p(\omega):=d_0\mathcal{I}^s_p(\omega)+\sum_{j=1}^nd_j\mathcal{K}^{1,s}_{c_j,p}(\omega),
\end{equation}
where
\begin{subequations}
\begin{eqnarray}\label{e4.3a}
\mathcal{I}^s_p(\omega)&\hskip-3mm:=&\hskip-3mm\begin{cases}
g^s_{-\gamma,\gamma,p}(\infty,\xi), & \omega=(\infty,\xi)\in\overline{\Gamma}_1,
\\[1ex]
\left(\displaystyle\frac{\eta-\gamma}{\eta+\gamma}\right)^{\mp s}, &
\omega=(\eta,\pm\infty)\in\Gamma^\pm_2, \\[1ex]
e^{\pi si}, &\omega=(0,\xi)\in\overline{\Gamma}_3, \qquad \xi,\eta\in\mathbb{R},\end{cases}\\[2ex]
&&\hskip-20mm g^s_{-\gamma,\gamma,p}(\infty,\xi):=\frac{e^{2\pi si}+1}2
+\frac{e^{2\pi si}-1}{2i}\cot\pi\Big(\frac1p-i\xi\Big)=e^{\pi si}\frac{\sin\pi\Big(\frac1p+s-i\xi\Big)}
{\sin\pi\Big(\frac1p-i\xi\Big)}, \quad \xi\in\mathbb{R},\nonumber
\end{eqnarray}
\begin{eqnarray}\label{e4.3b}
\mathcal{K}^{1,s}_{c,p}(\omega):=\begin{cases}
\displaystyle\frac{e^{-i\pi(\frac1p-i\xi-1)}c^{\frac1p-i\xi-s-1}}{
\sin\pi(\frac1p-i\xi)},&\omega=(\infty,\xi)
\in\overline{\Gamma}_1,\\[1ex]
0, &\omega=(\eta,\pm\infty)\in\Gamma^\pm_2,\\[1ex]
\displaystyle\frac{e^{-i\pi(\frac1p-i\xi-1)}c^{\frac1p-i\xi-s-1}}{
\sin\pi(\frac1p-i\xi)},&\omega=(0,\xi)
\in\overline{\Gamma}_3,\end{cases}\\[1.5ex]
0<\arg\,c<2\pi,\quad 0<|\arg(c\,\gamma)|<\pi,\quad 0<\arg\gamma<\pi
\nonumber
\end{eqnarray}
and $c^\delta=|c|^\delta e^{i\delta\arg\,c}$, $\delta\in\bR$.
\end{subequations}
Note that the Mellin convolution operator $\mathbf{K}^1_{-1}$,
\[
\begin{array}{c}
\displaystyle
\mathbf{K}^1_{-1}\varphi(t)=\mathbf{K}^1_{e^{i\pi}}\varphi(t)=\frac1\pi
\int\limits_0^\infty\displaystyle\frac{\varphi(\tau)\,d\tau}{t+\tau}
=\mathfrak{M}^0_{k_p}\varphi(t),\qquad k_p(\xi)=\displaystyle\frac{1}{\sin\pi\left(\frac1p-i\xi \right )},
\end{array}
\]
which we encounter in applications (see \eqref{e0.10} and Lemma \ref{l1.5}), has a rather simple symbol in the Bessel potential space $\mathbb{H}^s_p(\mathbb{R}^+)$: from \eqref{e4.3b} it follows that
\begin{equation}\label{e4.4}
\hskip-5mm\mathcal{K}^{1,s}_{-1,p}(\omega):=\begin{cases}
\displaystyle\frac{e^{-\pi si}}{\sin\pi(\frac1p-i\xi)},&\omega =(\infty,\xi)\in\overline{\Gamma}_1,\\
0, &\omega=(\eta,\pm\infty)\in\Gamma^\pm_2,\\
\displaystyle\frac{e^{-\pi si}}{\sin\pi(\frac1p-i\xi)},&\omega =(0,\xi)\in\overline{\Gamma}_3. \end{cases}
\end{equation}
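Indeed, substituting $c=e^{i\pi}$, so that $c^{\frac1p-i\xi-s-1}=e^{i\pi(\frac1p-i\xi-s-1)}$, into the first and third lines of \eqref{e4.3b} gives
\[
e^{-i\pi(\frac1p-i\xi-1)}\,e^{i\pi(\frac1p-i\xi-s-1)}=e^{-\pi si},
\]
which is exactly the numerator in \eqref{e4.4}.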
The image of the function $\det\mathcal{A}^s_p(\omega)$, $\omega\in\mathfrak{R}$, is a closed curve in the complex plane (this is easy to check by analyzing the symbol in \eqref{e4.3a}--\eqref{e4.3b}). Hence, if the symbol is elliptic, i.e. if
\[
\inf_{\omega\in\mathfrak{R}} \big|\det\mathcal{A}^s_p(\omega)\big|>0,
\]
the increment of the argument $(1/2\pi)\arg\det \mathcal{A}^s_p(\omega)$ when $\omega$ ranges through $\mathfrak{R}$ in the direction of orientation is an integer. It is called the winding number or the index of the curve $\Gamma:=\{z\in \mathbb{C}: z=\det\mathcal{A}^s_p(\omega),\;\omega \in \mathfrak{R}\}$ and is denoted by ${\rm ind}\,\det \mathcal{A}^s_p$.
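In other words,
\[
{\rm ind}\,\det \mathcal{A}^s_p=\frac1{2\pi}\big[\arg\det\mathcal{A}^s_p(\omega)\big]_{\omega\in\mathfrak{R}},
\]
where $[\,\cdot\,]_{\omega\in\mathfrak{R}}$ denotes the total increment of the argument as $\omega$ ranges through $\mathfrak{R}$ in the direction of orientation.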
Propositions \ref{p4.1}--\ref{p4.3}, stated below, are well known and will be applied in the next section in the proofs of the main theorems.
\begin{proposition}[\cite{Du15} and Theorem 5.4, \cite{DD14}]\label{p4.1}
Let $1<p<\infty$, $s\in\mathbb{R}$. The operator
\begin{equation}\label{e4.5}
\mathbf{A}:\widetilde{\mathbb{H}}{}^s_p(\mathbb{R}^+)\longrightarrow
\mathbb{H}^s_p (\mathbb{R}^+)
\end{equation}
defined in \eqref{e4.1a} is Fredholm if and only if its symbol $\mathcal{A}^s_p(\omega)$, defined in \eqref{e4.2}, \eqref{e4.3a}--\eqref{e4.3b}, is elliptic. If $\mathbf{A}$ is Fredholm, then
\[
{\rm Ind}\mathbf{A}=-{\rm ind}\det\mathcal{A}^s_p.
\]
The operator $\mathbf{A}$ in \eqref{e4.5} is locally invertible at $0$
if and only if its symbol $\mathcal{A}^s_p(\omega)$ is elliptic on the set $\Gamma_1$ only: $\inf_{\omega\in\Gamma_1} \big|\det\mathcal{A}^s_p(\omega)\big|>0$.
\end{proposition}
\begin{proposition}[\cite{Du15,DD14}]\label{p4.2}
Let $1<p<\infty$, $s\in\mathbb{R}$ and let $\mathbf{A}$ be defined by \eqref{e4.1a}. If the operator $\mathbf{A}\;:\;\widetilde{\mathbb{H}}{}^s_p (\mathbb{R}^+) \longrightarrow\mathbb{H}^s_p (\mathbb{R}^+)$ is Fredholm (is invertible) for all $s\in(s_0,s_1)$ and $p\in(p_0,p_1)$, where $-\infty<s_0<s_1 <\infty$, $1<p_0<p_1<\infty$, then $\mathbf{A}$ is Fredholm (is invertible, respectively) in the Sobolev-Slobode\v{c}kii space setting
\[
\mathbf{A}\;:\;\widetilde{\mathbb{W}}{}^s_p (\mathbb{R}^+) \longrightarrow
\mathbb{W}^s_p (\mathbb{R}^+),\qquad \text{for all}\quad s\in(s_0,s_1) \quad\text{and}\quad p\in(p_0,p_1)
\]
and has the same index
\[
{\rm Ind}\,\mathbf{A}=-{\rm ind}\,\det\,\mathcal{A}^s_p.
\]
\end{proposition}
\begin{proposition}[\cite{Du73,DNS95}]\label{p4.3}
Let two pairs of parameter-dependent Banach spaces $\fB^s_1$ and $\fB^s_2$, $s_1<s<s_2$, have intersections $\fB^{s'}_j\cap\fB^{s''}_j$ dense in $\fB^{s'}_j$ and in $\fB^{s''}_j$ for all $j=1,2$, $s',s''\in(s_1,s_2)$.
If a linear bounded operator $A\;:\;\fB^s_1\to\fB^s_2$ is Fredholm for all $s\in(s_1,s_2)$, it has the same kernel and co-kernel for all values of this parameter $s\in(s_1,s_2)$.
In particular, if $A\;:\;\fB^s_1\to\fB^s_2$ is Fredholm for all $s\in(s_1,s_2)$ and is invertible for only one value $s_0\in(s_1,s_2)$, it is invertible for all values of this parameter $s\in(s_1,s_2)$.
\end{proposition}
\section{Investigation of the boundary integral equations}
\label{sect5}
\setcounter{equation}{0}
The proof of Theorem \ref{t0.4} (see below) is based, besides Theorem \ref{t0.5}, on the following theorem.
\begin{theorem}\label{t5.1}
Let $1<p<\infty$, $r\in\bR$.
The system of boundary pseudodifferential equations \eqref{e0.10} is Fredholm in the Sobolev-Slobode\v{c}kii space setting \eqref{e0.11a} and in the Bessel potential space setting \eqref{e0.11b} if and only if condition \eqref{e0.8} holds. The system \eqref{e0.10} has a unique solution in both settings \eqref{e0.11a} and \eqref{e0.11b} if condition \eqref{e0.9} holds.
\end{theorem}
{\bf Proof:} Let us write the equation \eqref{e0.10} in operator form
\begin{subequations}
\begin{eqnarray}\label{e5.2a}
&\M\Phi=\F, \quad \M:=\left[\begin{array}{cc} I & \K^1_{-1}\\
\K^1_{-1}& I \end{array}\right],\\
\label{e5.2b}
&\Phi:=\left(\begin{array}{c}\vf\\ \psi\end{array}\right)\in
\widetilde{\bW}{}^r_p(\bR^+),\qquad {\bf F}:=\left(\begin{array}{c}G\\
H\end{array}\right)\in\bW^r_p(\bR^+),\\
\label{e5.2c}
&\Phi:=\left(\begin{array}{c}\vf\\ \psi\end{array}\right)\in
\widetilde{\bH}{}^r_p(\bR^+),\qquad {\bf F}:=\left(\begin{array}{c}G\\
H\end{array}\right)\in\bH^r_p(\bR^+)
\end{eqnarray}
\end{subequations}
and apply Proposition \ref{p4.1} to the investigation of equation \eqref{e5.2a} in the setting \eqref{e5.2c}.
Due to formulae \eqref{e4.3a} and \eqref{e4.4} the symbol of $\M$ on $\Gamma_1$ reads
\begin{eqnarray}\label{e5.3}
\cM^r_p(\omega)=
\left[\begin{array}{cc} e^{\pi r i}\dst\frac{\sin\pi(\Xi+r)}{\sin\pi\Xi} &\dst\frac{ e^{-\pi r i}}{\sin\pi\Xi}\\[3mm]
\dst\frac{ e^{-\pi r i}}{\sin\pi\Xi}& e^{\pi r i}\dst\frac{\sin\pi(\Xi+r)}{\sin\pi\Xi} \end{array}\right],\quad \omega=(\infty,\xi)\in\overline{\Gamma_1},
\end{eqnarray}
where $\Xi:=\dst\frac1p-i\xi$, $\xi\in\bR$. We have dropped the information about the symbol $\cM^r_p(\omega)$ on the contours $\Gamma^\pm_2$ and $\Gamma_3$ because, due to Theorem \ref{t0.5}, we are interested only in the local invertibility of the operator $\M$ at $0$. This information, due to the concluding part of Proposition \ref{p4.1}, is contained in the symbol $\cM^r_p(\omega)$ on the contour $\Gamma_1$ only.
According to formula \eqref{e5.3}, the symbol $\cM^r_p(\infty,\xi)$ is elliptic on the contour $\Gamma_1$ if and only if
\[
\det\cM^r_p(\infty,\xi)=\dst\frac{ e^{2\pi r i}\sin^2\pi\left(\dst\frac1p+r -i\xi\right) - e^{-2\pi r i}}{\sin^2\pi\left(\dst\frac1p
-i\xi\right)}\not=0, \qquad \omega\in\Gamma_1
\]
or, equivalently,
\[
\sin^2\pi\left(\dst\frac1p+r -i\xi\right)\not= e^{-4\pi r i}=\cos\,4\pi r-i\sin4\pi r\qquad \text{for all}\quad \xi\in\bR.
\]
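Here we have used that the matrix in \eqref{e5.3} is of the form $\left[\begin{array}{cc} \alpha & \beta\\ \beta & \alpha\end{array}\right]$ with $\alpha=e^{\pi r i}\dst\frac{\sin\pi(\Xi+r)}{\sin\pi\Xi}$ and $\beta=\dst\frac{e^{-\pi r i}}{\sin\pi\Xi}$, whose determinant is $\alpha^2-\beta^2$; this yields the expression for $\det\cM^r_p(\infty,\xi)$ displayed above.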
The symbol is non-elliptic if
\[
\sin4\pi r=0 \quad\text{and}\quad \sin^2\pi\left(\dst\frac1p+r\right)=\cos\,4\pi r=\pm1.
\]
These conditions have the following solutions
\begin{eqnarray}\label{e5.1}
4\pi r=2\pi k \quad\text{and}\quad \sin^2\pi\left(\dst\frac1p+\frac k2\right)=1, \qquad k=0,\pm1,\ldots,
\end{eqnarray}
because for $4\pi r=(2k+1)\pi$ the equation $\sin^2\pi\left(\dst\frac1p+r\right)=-1$ has no solution. Equation \eqref{e5.1} decomposes into the following two equations for even and odd $k$:
\[
\begin{array}{c}
r=k, \quad \sin^2\dst\frac\pi p=1\quad \Rightarrow\quad r=k,\quad p=2, \quad k=0,\pm1,\ldots,\\
r=k+\dst\frac12,\quad \cos^2\dst\frac\pi p=1\quad \Rightarrow \quad r=k+\dst\frac12,\quad p=1, \quad k=0,\pm1,\ldots.
\end{array}
\]
Due to Proposition \ref{p4.2} the operator $\M$ in \eqref{e5.2a} is Fredholm in the setting \eqref{e5.2b} if and only if the same condition \eqref{e0.8} holds.
From \eqref{e0.8} it follows that if conditions \eqref{e0.9} hold, the operator $\M$ is Fredholm in both settings \eqref{e5.2b} and \eqref{e5.2c}. On the other hand, for the values $p=2$, $r=-1/2$, which also satisfy the conditions \eqref{e0.9}, the operator $\M$ is invertible (see the concluding assertion in Lemma \ref{l1.5}). Then, due to Proposition \ref{p4.3}, $\M$ is invertible in both settings \eqref{e5.2b} and \eqref{e5.2c} for all those $r$ and $p$ which satisfy \eqref{e0.9}. \QED
\noindent
{\bf Proof of Theorem \ref{t0.4}:} The Fredholm criterion \eqref{e0.8} for the system of boundary pseudodifferential equations \eqref{e0.6} in the settings \eqref{e0.6a} and \eqref{e0.6b} is a direct consequence of Theorem \ref{t0.5} and Theorem \ref{t5.1}.
From \eqref{e0.8} it follows that, if conditions \eqref{e0.9} hold, the operator $\M_0$, corresponding to the system \eqref{e0.6}, is Fredholm in both settings \eqref{e0.6a} and \eqref{e0.6b}. On the other hand, for the values $p=2$, $r=-1/2$, which also satisfy the conditions \eqref{e0.9}, the operator $\M_0$ is invertible (see the concluding assertion in Theorem \ref{t0.3a}). Then, due to Proposition \ref{p4.3}, $\M_0$ is invertible in both settings \eqref{e0.6a} and \eqref{e0.6b} for all those $r$ and $p$ which satisfy \eqref{e0.9}. \QED
\noindent
{\bf Proof of Theorem \ref{t0.3}:} Due to Theorem \ref{t0.3a} and Theorem \ref{t0.4}, the BVP \eqref{e0.1} is Fredholm if the system \eqref{e0.6} in the non-classical setting \eqref{e0.7} is, provided $r=\dst\frac1p-s$; i.e., if the condition \eqref{e0.8} holds with $r=\dst\frac1p-s$, which is the same condition as \eqref{e0.4}.
From \eqref{e0.4} it follows that if conditions \eqref{e0.5} hold, the BVP \eqref{e0.1} is Fredholm in the non-classical setting \eqref{e0.3}. On the other hand, for the values $p=2$, $s=1$, which also satisfy the conditions \eqref{e0.5}, the BVP \eqref{e0.1} has a unique solution (see Theorem \ref{t0.1}). Then, due to Proposition \ref{p4.3}, the BVP \eqref{e0.1} has a unique solution in the non-classical setting \eqref{e0.3} for all those $s$ and $p$ which satisfy \eqref{e0.5}. \QED
\section{Some concentration inequalities}
\subsection{Concentration for independent non-identically distributed exponential random variables}\label{appendix_concentration_for_nonid_exp_rvs}
\begin{restatable}{lemma}{lemmaWeightedExpChernoff}\label{Lemma_weighted_exp_chernoff}
Let $\mathbf{u}\in\ensuremath{\mathbb{R}}^n_+$ be a vector with $\|\mathbf{u}\|_1=n$ and let $\mathbf{Z}$ be a random vector with independent $\Exp(1)$ components. Then for any $t\in[0,1)$, we have
\begin{equation}
\mathbb{P}(\mathbf{u}\cdot \mathbf{Z} \le tn) \le (te^{1-t})^n \prod_{i=1}^n u_i^{-1} \le (te)^n \prod_{i=1}^n u_i^{-1}.
\end{equation}
In fact, the upper bound given by the second inequality holds trivially when $t\ge 1$ and is invariant under simultaneous scaling of $\mathbf{u}$ and $t$.
Further, when $1/K \le u_i\le K$ for some constant $K\ge 1$, we have
\begin{equation}
\mathbb{P}(\mathbf{u}\cdot \mathbf{Z} \le tn) \ge e^{-\ensuremath{O}(n^{2/3})} (te^{1-Kt})^n \prod_{i=1}^n u_i^{-1}.
\end{equation}
For $t=o(1)$, this lower bound becomes
\[
e^{-\ensuremath{O}(n^{2/3})+(1-K)tn} (te^{1-t})^n \prod_{i=1}^n u_i^{-1} = e^{o(n)} (te)^n \prod_{i=1}^n u_i^{-1},
\]
indicating that the upper bound is tight up to a factor of $e^{o(n)}$.
In particular, when $t=\ensuremath{O}(n^{-1/3})$, the gap is $e^{\ensuremath{O}(n^{2/3})}$.
\end{restatable}
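For orientation, in the unweighted case $\mathbf{u}=\mathbf{1}$ (so that one may take $K=1$), the lemma reduces to the classical two-sided lower-tail estimate for $T:=Z_1+\cdots+Z_n\sim\Gamma(n,1)$:
\begin{equation*}
e^{-\ensuremath{O}(n^{2/3})} (te^{1-t})^n \le \mathbb{P}(T \le tn) \le (te^{1-t})^n, \qquad 0\le t<1.
\end{equation*}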
\begin{proof}
First we establish the upper bound. Directly applying Chernoff's method to $\mathbf{u}\cdot \mathbf{Z}$, we have
\begin{equation}
\mathbb{P}(\mathbf{u}\cdot \mathbf{Z} \le tn) \le \inf_{\lambda \ge 0} \frac{\ensuremath{\mathbb{E}}[\exp(-\lambda \mathbf{u}\cdot \mathbf{Z})]}{\exp(-\lambda tn)} = \inf_{\lambda\ge 0} e^{\lambda tn} \prod_{i=1}^n \frac{1}{1+\lambda u_i}.
\end{equation}
Taking $\lambda=1/t - 1$ (which is the minimizer when $\mathbf{u}=\mathbf{1}$) gives
\begin{equation}\label{Eqn_lemma_weighted_exp_chernoff_3}
\mathbb{P}(\mathbf{u}\cdot \mathbf{Z} \le tn) \le e^{n-tn} \prod_{i=1}^n \frac{t}{t+(1-t)u_i}.
\end{equation}
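To justify this choice, note that in the unweighted case $\mathbf{u}=\mathbf{1}$ the exponent $\lambda tn-n\log(1+\lambda)$ is convex in $\lambda$ with derivative $tn-n/(1+\lambda)$, which vanishes at $\lambda=1/t-1$; this value is nonnegative precisely because $t<1$.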
Notice that $\mathbf{u} \mapsto \sum_{i=1}^n \log u_i$ is a concave function on $\ensuremath{\mathbb{R}}_+^n$, and hence
\[
\prod_{i=1}^n (t+(1-t)u_i) \ge \left(\prod_{i=1}^n u_i\right)^{1-t} \ge \prod_{i=1}^n u_i
\]
since $\prod_{i=1}^n u_i \le \big(n^{-1}\sum_{i=1}^n u_i\big)^n=1$.
Plugging the above inequality into \eqref{Eqn_lemma_weighted_exp_chernoff_3} gives the desired upper bound.
Now we establish the tightness of the bound under the additional assumption that $1/K \le u_i \le K$ for all $i\in[n]$.
Consider independent random variables $W_i\sim \Exp(u_i R/t)$ for $i=1,\cdots,n$ with $R=1+n^{-1/3}$, so that by Chebyshev's inequality
\[
q_n := \mathbb{P}(\mathbf{u}\cdot \mathbf{W}\le tn) = \mathbb{P}_{T\sim \Gamma(n,1)}(T \le nR) \ge 1 - n^{-1/3}.
\]
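Indeed, since $T\sim\Gamma(n,1)$ has mean $n$ and variance $n$, Chebyshev's inequality gives
\[
\mathbb{P}(T > nR) = \mathbb{P}\big(T-n > n^{2/3}\big) \le \frac{n}{n^{4/3}} = n^{-1/3}.
\]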
For convenience, we similarly write
\[
p_n := \mathbb{P}(\mathbf{u}\cdot \mathbf{Z}\le tn)
\]
and write the (joint) distributions of $\mathbf{Z}$ and $\mathbf{W}$ as $P_n = \Exp(1)^{\otimes n}$ and $Q_n = \bigotimes_{i=1}^n \Exp(u_i R/t)$, respectively. Applying the data processing inequality to the channel $\mathcal{C}$ that maps $\mathbf{\zeta}\in\ensuremath{\mathbb{R}}^n$ to $\mathbbm{1}\{\mathbf{u}\cdot \mathbf{\zeta} \le tn\}$ gives
\begin{multline}\label{Eqn_lemma_weighted_exp_chernoff_tightness_data_proc_ineq}
D(Q_n\|P_n) \ge D(\mathcal{C}(Q_n) \| \mathcal{C}(P_n)) = q_n \log\frac{q_n}{p_n} + (1-q_n) \log\frac{1-q_n}{1-p_n} \\
= - q_n \log p_n + (1-q_n) \log(1-p_n) + (q_n\log q_n + (1-q_n)\log(1-q_n)),
\end{multline}
where $D(\cdot\|\cdot)$ denotes the Kullback-Leibler (KL) divergence between two probability distributions. A direct computation gives
\begin{align}
D(Q_n\|P_n) &= \sum_{i=1}^n \left(\frac{t}{Ru_i} - 1 - \log\frac{t}{Ru_i}\right) \nonumber\\
&\le \sum_{i=1}^n \left(\frac{Kt}{R} - 1 - \log\frac{t}{Ru_i}\right) \nonumber\\
&= -n + R^{-1} Ktn - n \log t + n\log R + \sum_{i=1}^n \log u_i \\
&\le -n + Ktn - n \log t + n^{2/3} + \sum_{i=1}^n \log u_i.\label{Eqn_lemma_weighted_exp_chernoff_tightness_compute_kl}
\end{align}
Combining this with \eqref{Eqn_lemma_weighted_exp_chernoff_tightness_data_proc_ineq} and letting $n\to\infty$ gives
\begin{equation}
-n + Ktn - n \log t + n^{2/3} + \sum_{i=1}^n \log u_i \ge - (1-O(n^{-1/3}))\log p_n + o(1),
\end{equation}
where we used the fact that
$\log(1-p_n)\to 0$ (due to our upper bound). Exponentiating both sides gives the desired lower bound for $p_n$.
\end{proof}
As a consequence, we have the following lemma.
\begin{restatable}{lemma}{lemmaWgtExpCondConcentration}\label{Lemma_wgt_exp_cond_concentration}
Let $\mathbf{u},\mathbf{v}\in\ensuremath{\mathbb{R}}^n_+$ be two vectors with bounded components, namely $\|\mathbf{u}\|_1=\|\mathbf{v}\|_1=n$ and $1/K \le u_i,v_i\le K$ for some fixed $K\ge 1$. For independent $Z_1,\cdots,Z_n\sim \Exp(1)$, we have
\begin{equation}\label{Eqn_main_Lemma_wgt_exp_cond_concentration}
\mathbb{P}\left(\left|\frac{\mathbf{u}\cdot \mathbf{Z}}{t \mathbf{u}\cdot \mathbf{v}^{-1}} - 1\right| >\zeta \;\middle|\; \mathbf{v}\cdot \mathbf{Z} < tn\right) \le \exp(-\Theta(n\zeta^2))
\end{equation}
for $t=o(1)$ and any fixed constant $\zeta>0$, where $\mathbf{v}^{-1}$ denotes the component-wise inverse of the vector $\mathbf{v}$.
\end{restatable}
Notice that this result is invariant under simultaneous scaling of $\mathbf{u}$, $\mathbf{v}$, and $t$. Essentially, we only need $tn/\|\mathbf{v}\|_1 = o(1)$ and bounded ratios among the entries of $\mathbf{u}$ and $\mathbf{v}$. Further, the result remains unchanged if $Z_i\sim\Exp(c_i)$ independently with the $c_i$'s bounded on some $[1/K', K']$; the $c_i$'s can simply be absorbed into $\mathbf{u}$ and $\mathbf{v}$.
\begin{proof}
We first prove the concentration bound for the lower tail.
Writing
\begin{multline}
\mathbb{P}(u\cdot x < (1-\zeta) tu\cdot v^{-1} | v\cdot x < tn) = \frac{\mathbb{P}(u\cdot x < (1-\zeta) tu\cdot v^{-1}, v\cdot x < tn)}{\mathbb{P}(v\cdot x < tn)} \\
\le \frac{\mathbb{P}((\lambda u+(1-\lambda)v)\cdot x < (1-\zeta)\lambda t u\cdot v^{-1} + (1-\lambda)tn)}{\mathbb{P}(v\cdot x < tn)}
\end{multline}
for some $\lambda \in(0,1)$ to be determined later, Lemma~\ref{Lemma_weighted_exp_chernoff} bounds the numerator by
\[
t^n \left(1-\lambda + \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{n - (1-\zeta)\lambda t u\cdot v^{-1} - (1-\lambda)tn} \prod_{i=1}^n \frac{1}{\lambda u_i + (1-\lambda)v_i}.
\]
For lower bounding the denominator $\mathbb{P}(v\cdot x<tn)$, Lemma~\ref{Lemma_weighted_exp_chernoff} indicates that for $t=o(1)$, the denominator is well approximated by $t^n e^{n-tn} \prod_i v_i^{-1}$, up to an error of $e^{o(n)}$. Taking the ratio between the two quantities gives
\begin{equation}\label{Eqn_proof_wgt_exp_cond_concentration_lower_p_ratio_up_to_e^o(n)_1}
\left(1-\lambda + \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{\lambda tn-(1-\zeta)\lambda t u\cdot v^{-1}} \prod_{i=1}^n \frac{1}{\lambda u_i v_i^{-1} + 1-\lambda}
\end{equation}
Focus on the quantity $\prod_{i=1}^n (\lambda u_i v_i^{-1} + 1-\lambda)$. We use the following claim for a bound on the gap between the arithmetic and geometric means.
\begin{claim}
For $z\in\ensuremath{\mathbb{R}}_+^n$ with $\bar{z} = n^{-1}\sum_{i=1}^n z_i$, the function $f:[0,1]\to\ensuremath{\mathbb{R}}$ given by $f(\alpha) = \sum_{i=1}^n \log(\bar{z} + \alpha (z_i-\bar{z}))$ is concave with a maximum at $\alpha=0$. (Indeed, the function $z\mapsto \sum_{i=1}^n \log z_i$ is concave on $\ensuremath{\mathbb{R}}_+^n$.) Hence,
\begin{equation}
0\le f(0) - f(1) \le -f'(1) = -\sum_{i=1}^n \frac{z_i-\bar{z}}{z_i} = -n + \bar{z}\sum_{i=1}^n\frac{1}{z_i}.
\end{equation}
Exponentiating both sides gives
\begin{equation}
\bar{z}^n \prod_{i=1}^n z_i^{-1} \le \exp\left( -n + \bar{z}\sum_{i=1}^n\frac{1}{z_i} \right).
\end{equation}
\end{claim}
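For concreteness, the derivative appearing in the claim is
\[
f'(\alpha)=\sum_{i=1}^n \frac{z_i-\bar{z}}{\bar{z}+\alpha(z_i-\bar{z})},
\]
so that $f'(0)=\bar{z}^{-1}\sum_{i=1}^n(z_i-\bar{z})=0$, confirming that the concave function $f$ attains its maximum at $\alpha=0$.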
Applying the above claim to the product in \eqref{Eqn_proof_wgt_exp_cond_concentration_lower_p_ratio_up_to_e^o(n)_1} with $z_i = \lambda u_i v_i^{-1} + 1-\lambda$, we obtain
\begin{equation}
\prod_{i=1}^n \frac{1}{\lambda u_i v_i^{-1} + 1-\lambda} \le \left(1-\lambda + \frac{\lambda u\cdot v^{-1}}{n}\right)^{-n} \exp\left( -n +\sum_{i=1}^n\frac{1-\lambda + \lambda u\cdot v^{-1}/n}{1-\lambda + \lambda u_i v_i^{-1}}\right)
\end{equation}
Hence, the conditional probability of interest is upper bounded, up to $e^{o(n)}$, by the following expression
\begin{equation}
\left(1-\lambda + \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{\lambda tn-(1-\zeta)\lambda t u\cdot v^{-1}} \left(1-\lambda + \frac{\lambda u\cdot v^{-1}}{n}\right)^{-n} \exp\left( -n +\sum_{i=1}^n\frac{1-\lambda + \lambda u\cdot v^{-1}/n}{1-\lambda + \lambda u_i v_i^{-1}}\right).
\end{equation}
Denote the negative logarithm of the $n$-th root of the quantity above by $\underline{\chi}(\lambda)$. That is,
\begin{equation}
\mathbb{P}(u\cdot x < (1-\zeta) tu\cdot v^{-1} | v\cdot x < tn)
\le \inf_{\lambda>0}e^{o(n) - n \underline{\chi}(\lambda)} = \exp\Big(o(n) - n \sup_{\lambda>0}\underline{\chi}(\lambda)\Big)
\end{equation}
where
\begin{multline}
\underline{\chi}(\lambda) := -\log\left(1-\lambda + \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n}\right) - \lambda t + \frac{(1-\zeta)\lambda t u\cdot v^{-1}}{n} + \log\left(1-\lambda + \frac{\lambda u\cdot v^{-1}}{n}\right) \\
+ 1 - \frac{1}{n}\sum_{i=1}^n\frac{1-\lambda + \lambda u\cdot v^{-1}/n}{1-\lambda + \lambda u_i v_i^{-1}}.\label{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_def}
\end{multline}
The $o(n)$ factor is of lower order, and it suffices to show that there exists some $\lambda$ such that $\underline{\chi}(\lambda)=\Theta(\zeta^2)$.
For $\lambda$ sufficiently small (e.g., $\lambda \le K^{-2}/2$, recalling that $u_i,v_i\in[1/K,K]$), we may expand the logarithm around $1$ and obtain
\begin{equation}
\log\left(1-\lambda + \frac{\lambda u\cdot v^{-1}}{n}\right) \ge \frac{\lambda u\cdot v^{-1}}{n} -\lambda - \left(\frac{\lambda u\cdot v^{-1}}{n} -\lambda\right)^2.
\end{equation}
Then the two log terms in \eqref{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_def} combined can be bounded below by
\begin{equation}
\lambda - \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n} + \frac{\lambda u\cdot v^{-1}}{n} -\lambda - \left(\frac{\lambda u\cdot v^{-1}}{n} -\lambda\right)^2 = \zeta \lambda \frac{u\cdot v^{-1}}{n} - \lambda^2 \left(\frac{u\cdot v^{-1}}{n}-1\right)^2.
\end{equation}
With $u_i,v_i\in[1/K,K]$, a naive lower bound is the following
\begin{equation}\label{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_lower_bound}
\underline{\chi}(\lambda) \ge \zeta \lambda K^{-2} - \lambda^2 K^4 - \lambda t + (1-\zeta)\lambda t K^{-2}
+ 1 - \frac{(2-2\lambda+K^2\lambda+K^{-2}\lambda)^2}{4(1-\lambda+K^2\lambda)(1-\lambda+K^{-2}\lambda)},
\end{equation}
where the summation at the end of \eqref{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_def} is bounded using Schweitzer's inequality \cite{schweitzer1914egy} for the ratio between arithmetic and harmonic means, stating
\[
\frac{1}{n}\sum_{i=1}^n\frac{\bar{z}}{z_i} \le \frac{(a+b)^2}{4ab}
\]
for $z\in\ensuremath{\mathbb{R}}^n$ with bounded components $0< a\le z_i\le b$. Further, we observe
\begin{equation}
1 - \frac{(2-2\lambda+K^2\lambda+K^{-2}\lambda)^2}{4(1-\lambda+K^2\lambda)(1-\lambda+K^{-2}\lambda)} = -\frac{(4K^2(K^{-2}-1)^2+(K-K^{-1})^4)}{4(1-\lambda+K^2\lambda)(1-\lambda+K^{-2}\lambda)}\lambda^2 \ge - 3K^4\lambda^2.
\end{equation}
Taking $\lambda = \zeta K^{-6}/8$ in \eqref{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_lower_bound} yields
\begin{equation}
\underline{\chi}\left(\frac{1}{8}\zeta K^{-6}\right) \ge \frac{1}{16}\zeta^2 K^{-8} - \frac{1}{8}t\zeta K^{-6} \ge \Theta(\zeta^2),
\end{equation}
hence finishing our proof for the lower tail.
The proof for the upper tail follows a similar structure.
Writing
\begin{multline*}
\mathbb{P}(u\cdot x > (1+\zeta) tu\cdot v^{-1} | v\cdot x < tn) = \frac{\mathbb{P}(u\cdot x > (1+\zeta) tu\cdot v^{-1}, v\cdot x < tn)}{\mathbb{P}(v\cdot x < tn)} \\
\le \frac{\mathbb{P}((-\lambda u+ (1+\lambda)v)\cdot x < -(1+\zeta)\lambda t u\cdot v^{-1} + (1+\lambda) tn)}{\mathbb{P}(v\cdot x < tn)}
\end{multline*}
for some $0 < \lambda < K^{-2}$ (so that $-\lambda u+ (1+\lambda)v\in\ensuremath{\mathbb{R}}_+^n$) to be determined later, the upper and lower bounds of Lemma~\ref{Lemma_weighted_exp_chernoff} together imply that the ratio is, up to $e^{o(n)}$,
\begin{equation}\label{Eqn_proof_wgt_exp_cond_concentration_upper_p_ratio_up_to_e^o(n)_1}
\left(1+\lambda - \frac{(1+\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{(1+\zeta)\lambda t u\cdot v^{-1} - \lambda tn} \prod_{i=1}^n \frac{1}{1+\lambda -\lambda u_i v_i^{-1}}
\end{equation}
As in the proof of the lower tail bound, the product term can be bounded by
\begin{equation}
\prod_{i=1}^n \frac{1}{1+\lambda-\lambda u_i v_i^{-1}} \le
\left(1+\lambda - \frac{\lambda u\cdot v^{-1}}{n}\right)^{-n} \exp\left( -n +\sum_{i=1}^n\frac{1+\lambda - \lambda u\cdot v^{-1}/n}{1+\lambda - \lambda u_i v_i^{-1}}\right),
\end{equation}
giving an upper bound, again up to $e^{o(n)}$, of
\begin{equation}
\left(1+\lambda - \frac{(1+\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{(1+\zeta)\lambda t u\cdot v^{-1}-\lambda tn} \left(1+\lambda - \frac{\lambda u\cdot v^{-1}}{n}\right)^{-n} \exp\left( -n +\sum_{i=1}^n\frac{1+\lambda - \lambda u\cdot v^{-1}/n}{1+\lambda - \lambda u_i v_i^{-1}}\right)
\end{equation}
for the conditional probability of interest.
Denote the negative logarithm of the $n$-th root of the quantity above by $\overline{\chi}(\lambda)$. That is,
\begin{equation}
\mathbb{P}(u\cdot x > (1+\zeta) tu\cdot v^{-1} | v\cdot x < tn)
\le \inf_{0<\lambda<K^{-2}} e^{o(n) - n \overline{\chi}(\lambda)}
\end{equation}
where
\begin{multline}
\overline{\chi}(\lambda) := -\log\left(1+\lambda - \frac{(1+\zeta)\lambda u\cdot v^{-1}}{n}\right) + \lambda t -\frac{(1+\zeta)\lambda t u\cdot v^{-1}}{n} + \log\left(1+\lambda - \frac{\lambda u\cdot v^{-1}}{n}\right) \\
+ 1 - \frac{1}{n}\sum_{i=1}^n\frac{1+\lambda - \lambda u\cdot v^{-1}/n}{1+\lambda - \lambda u_i v_i^{-1}}.
\end{multline}
Again, it suffices to prove that for some choice of $\lambda$ we have $\overline{\chi}(\lambda)=\Theta(\zeta^2)$.
By arithmetic similar to that in the proof for the lower tail, we observe that for $\lambda$ sufficiently small (e.g., $\lambda \le K^{-2}/2$)
\begin{equation}\label{Eqn_proof_wgt_exp_cond_concentration_upper_neg_log_prob_lower_bound}
\overline{\chi}(\lambda) \ge \zeta \lambda K^{-2} + \lambda t - (1+\zeta)\lambda t K^2 - 4 \lambda^2 K^4.
\end{equation}
Again, taking $\lambda = \zeta K^{-6}/8$ in \eqref{Eqn_proof_wgt_exp_cond_concentration_upper_neg_log_prob_lower_bound} gives the desired lower bound of $\Theta(\zeta^2)$ for $\sup_{0<\lambda<K^{-2}}\overline{\chi}(\lambda)$ and thus finishes our proof.
\end{proof}
\subsection{A generalized DKW inequality for independent and nearly identically distributed random variables}
\begin{lemma}\label{Lemma_dkw_non_identical}
Let $X_i$, $i=1,\cdots,n$, be independent random variables, each with a (possibly non-identical) distribution function $G_i$, and assume that there exists a constant $\delta>0$ and a distribution $F$ such that $\|G_i-F\|_\infty \leq \delta$ uniformly across all $i=1,\cdots,n$. Let $\hat{G}$ be the empirical distribution function of $\{X_i\}_{i=1}^n$. Then
\begin{equation}\label{Eqn_dkw_non_identical_lemma}
\mathbb{P}(\|\hat{G}-F\|_\infty > 2\delta + \epsilon) < 4\exp(-2n\epsilon^2/9).
\end{equation}
\end{lemma}
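In the i.i.d.\ case $G_1=\cdots=G_n=F$, one may take $\delta$ arbitrarily small, and \eqref{Eqn_dkw_non_identical_lemma} recovers the classical DKW bound up to the constants in the exponent and the prefactor:
\[
\mathbb{P}(\|\hat{G}-F\|_\infty > \epsilon) < 4\exp(-2n\epsilon^2/9).
\]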
\begin{proof}
Let $U_i = G_i(X_i)$ so that $U_1,\cdots,U_n$ are i.i.d. uniform on $[0,1]$, and denote their empirical distribution function by $\hat{J}$. Let $Y_i=F^{-1}(U_i)=F^{-1}(G_i(X_i))$ so that $Y_1,\cdots,Y_n$ are i.i.d. each with distribution function $F$, and denote their empirical distribution function by $\hat{F}$. Notice that
\begin{align*}
\|\hat{G}-F\|_\infty &= \sup_{x\in\mathbb{R}} \left| n^{-1} \sum_{i=1}^n I_{(-\infty,x)}(X_i) - F(x)\right|\\
&= \sup_{x\in\mathbb{R}} \left| n^{-1} \sum_{i=1}^n I_{(-\infty,x)}(Y_i) - F(x) + n^{-1} \sum_{i=1}^n \left(I_{(-\infty,x)}(Y_i) - I_{(-\infty,x)}(X_i)\right) \right|\\
&\leq \sup_{x\in\mathbb{R}} \left| n^{-1} \sum_{i=1}^n I_{(-\infty,x)}(Y_i) - F(x)\right| + \sup_{x\in\mathbb{R}}\left(n^{-1} \sum_{i=1}^n \left|I_{(-\infty,x)}(Y_i) - I_{(-\infty,x)}(X_i) \right|\right)\\
&= \|\hat{F}-F\|_\infty + \sup_{x\in\mathbb{R}}A(x).
\end{align*}
From the classical DKW inequality \cite{dvoretzky1956asymptotic} applied to $\hat{F}$ and $F$, we know that
\begin{equation}\label{Eqn_proof_dkw_non_identical_bound_1}
\mathbb{P}(\|\hat{F}-F\|_\infty > \epsilon/3) < 2\exp(-2n\epsilon^2/9).
\end{equation}
For the second supremum in the bound above, we now consider $U_i$, $i=1,\cdots,n$, as the underlying random variables. Each term in the summation in $A$ contributes 1 to the sum if and only if
\[F^{-1}(U_i)=Y_i<x\leq X_i=G_i^{-1}(U_i) \;\text{ or }\; G_i^{-1}(U_i)=X_i<x\leq Y_i=F^{-1}(U_i),\]
or alternatively
\[F(x)\wedge G_i(x) \leq U_i \leq F(x)\vee G_i(x),\]
where $\wedge$ and $\vee$ denote the operators of taking the minimum and maximum, respectively. (We may safely ignore the case where the two sides are equal, as it happens with probability zero.) Hence,
\begin{align*}
A(x) &= n^{-1} \sum_{i=1}^n \left|I_{(-\infty,x)}(Y_i) - I_{(-\infty,x)}(X_i) \right|\\
&= n^{-1} \sum_{i=1}^n I_{(F(x)\wedge G_i(x),F(x)\vee G_i(x))}(U_i)\\
&\leq n^{-1} \sum_{i=1}^n I_{(\bigwedge_j G_j(x)\wedge F(x),\bigvee_j G_j(x)\vee F(x))}(U_i)\\
&= \hat{J}(M(x)) - \hat{J}(m(x)),
\end{align*}
where $M$ and $m$ denote the maximum and minimum across $F$ and $G_i$, $i=1,\cdots,n$, respectively. By our assumption that $\|G_i-F\|_\infty \leq \delta$ across all $i$, we have that
\[
0\leq M(x) - m(x)\leq 2\delta
\]
for all $x\in\mathbb{R}$. Noticing that the true distribution function $J$ of $U_i$, $i=1,\cdots,n$ is the identity function on $[0,1]$, we have
\begin{align*}
A(x) &= \hat{J}(M(x)) - \hat{J}(m(x))\\
&\leq \left|\hat{J}(M(x)) - J(M(x))\right| + \left|\hat{J}(m(x)) - J(m(x))\right| + \left|J(M(x)) - J(m(x))\right|\\
&\leq 2\|\hat{J}-J\|_\infty + 2\delta
\end{align*}
on $\mathbb{R}$ uniformly. Therefore, applying DKW inequality again, we see that
\begin{equation}\label{Eqn_proof_dkw_non_identical_bound_2}
\mathbb{P}\Big(\sup_{x\in\mathbb{R}} A(x) > 2\delta + 2\epsilon/3\Big) \leq \mathbb{P}(\|\hat{J}-J\|_\infty > \epsilon/3) < 2\exp(-2n\epsilon^2/9).
\end{equation}
Combining \eqref{Eqn_proof_dkw_non_identical_bound_1} and \eqref{Eqn_proof_dkw_non_identical_bound_2} yields the desired bound in \eqref{Eqn_dkw_non_identical_lemma}.
\end{proof}
\section{Additional proofs}\label{appendix_extra_proofs}
\subsection{Proof of Corollary~\ref{Cor_subexp_num_stable_match}}
In this section, we prove Corollary~\ref{Cor_subexp_num_stable_match}, which is restated below for convenience. We will assume Proposition~\ref{Prop_EqXY_bound}, whose proof is deferred to Appendix~\ref{Append_proof_prop_EqXY}.
\corSubExpNumOfStableMatch*
\begin{proof}
The last part is simply Corollary~\ref{Cor_Rstar_likely}.
For the first part, observe that for each $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$ with $|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}$ and partial matching $\mu'$ between $\mathcal{M}'$ and $\mathcal{W}'$,
\begin{multline}
\ensuremath{\mathbb{P}}(\mu'\text{ is stable and satisfies }\mathcal{R}^*) \le e^{o(n)} \ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}^*}(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})] \le \\
e^{o(n)} \ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot\mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})] \le e^{o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i}
\end{multline}
by Proposition~\ref{Prop_EqXY_bound}. Summing over $\mathcal{M}'$, $\mathcal{W}'$, and $\mu'$ bounds the expected number of such stable partial matchings above by
\begin{align}
\ensuremath{\mathbb{E}}[N_\delta] &\le \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} e^{o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i} \nonumber \\
&\labelrel={Step1_num_stable} \frac{1}{\floor{\delta n}!} \sum_{\substack{\mu:\mathcal{M}\to\mathcal{W}\\\text{bijection}}} \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M}\\|\mathcal{M}'|=n-\floor{\delta n}}} e^{o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu(i)} b_{\mu(i),i} \nonumber \\
&\labelrel\le{Step2_num_stable} \sum_{\substack{\mu:\mathcal{M}\to\mathcal{W}\\\text{bijection}}} \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M}\\|\mathcal{M}'|=n-\floor{\delta n}}} e^{o_\delta(n)} \frac{1}{n!}\cdot C^{2\floor{\delta n}} \prod_{i\in\mathcal{M}} a_{i,\mu(i)} b_{\mu(i),i} \nonumber \\
&\labelrel\le{Step3_num_stable} e^{o_\delta(n)} \binom{n}{\floor{\delta n}} \cdot \frac{1}{n!} \Perm(\mathbf{A}\circ \mathbf{B}^\top) \nonumber \\
&\labelrel\le{Step4_num_stable} e^{o_\delta(n)}, \nonumber
\end{align}
where in \eqref{Step1_num_stable} we use an alternative counting of partial matchings, counting sub-matchings of size $n-\floor{\delta n}$ inside full matchings and then deduplicating by a factor of $\floor{\delta n}!$; in \eqref{Step2_num_stable} we use the boundedness assumption on the components of $\mathbf{A}$ and $\mathbf{B}$; in \eqref{Step3_num_stable} we merge $C^{2\floor{\delta n}}$ into $e^{o_\delta(n)}$; and finally in \eqref{Step4_num_stable} we merge $\binom{n}{\floor{\delta n}}=\exp(h(\delta) n + o(n))$ into $e^{o_\delta(n)}$ and bound the permanent term by $n^n \Perm(\mathbf{M}) = O(n!)$ using the moderate deviation property of $\mathbf{M}$ \citep[Sec.~3]{mccullagh2014asymptotic}.
\end{proof}
\subsection{Proof of Lemma~\ref{Prop_eigenvec_of_M_high_prob}}\label{Append_proof_prop_eigenvec_of_M}
In this section, we prove Lemma~\ref{Prop_eigenvec_of_M_high_prob}, which is restated below for convenience.
\PropEigenVecOfMHighProf*
To prepare for the proof of Lemma~\ref{Prop_eigenvec_of_M_high_prob}, let us denote the expectation in \eqref{Eqn_Prop_eigenvec_of_M_Expectation_is_small} by $E$, and express it as
\begin{align}
E &= \int_{0}^\infty \ensuremath{\mathbb{P}}\big(q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}\backslash\Omega_{\text{eig}}(\zeta)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) > s\big) \ensuremath{\,d} s \nonumber\\
&= \int_0^1 \ensuremath{\mathbb{P}}\big(\exp(-n\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'}) > s, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \in \mathcal{R} \backslash \Omega_{\text{eig}}(\zeta)\big) \ensuremath{\,d} s \nonumber \\
&= \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \in \mathcal{R}_2 \backslash \Omega_{\text{eig}}(\zeta), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber \\
&= \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_{\text{eig}}(\zeta), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t,
\end{align}
where $\bar{t} := t \wedge (c_2(\log n)^{1/8})$.
If we can find two families of regions $\Omega_1(\zeta;s),\Omega_2(\zeta;s)\subseteq\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n$ such that $\Omega_{\text{eig}}(\zeta) \supseteq \Omega_1(\Theta(\zeta);s)\cap\Omega_2(\Theta(\zeta);s)$ for all $0<s<c_2(\log n)^{1/8}$, then by a union bound and by relaxing the requirement that $\mathbf{X}_{\mathcal{M}'}$ (resp. $\mathbf{Y}_{\mathcal{W}'}$) is in $\mathcal{R}_1$, we will obtain
\begin{align}
E
&\le \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_1(\Theta(\zeta);\bar{t}), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber \\
&\qquad + \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_2(\Theta(\zeta);\bar{t}), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber\\
&\le \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_1(\Theta(\zeta);\bar{t}), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber \\
&\qquad + \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_2(\Theta(\zeta);\bar{t}), \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t. \nonumber
\end{align}
Rewriting the probabilities through conditioning and further relaxing the requirement gives
\begin{align}
E
&\le \int_0^\infty \ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_1(\Theta(\zeta);\bar{t}) \big| \mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1\big) \nonumber \\
&\qquad\qquad \cdot \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber \\
&\qquad + \int_0^\infty \ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_2(\Theta(\zeta);\bar{t}) \big| \mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \nonumber \\
&\qquad\qquad \cdot \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t. %
\label{Eqn_decompose_E_exp_qxy_on_Omega_eig_zeta}
\end{align}
Due to the symmetry between the two integrals, it then suffices to bound one of the two integrals (e.g., the latter) by showing
\begin{equation}\label{Eqn_unif_concentrat_MX_cond_on_XE}
\sup_{0 < t < c_2(\log n)^{1/8}} \ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_2(\Theta(\zeta);t) \big| \mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \le \exp(-\Theta(\zeta^2 n))
\end{equation}
and
\begin{equation}\label{Eqn_EexpXMY_over_Y_in_R1}
\int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t
\le e^{o(n)+o_\delta(n)} \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i},
\end{equation}
from which the desired upper bound immediately follows. Recognizing that
\begin{equation*}
\int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t = \ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\cdot\mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\cdot\mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})\big],
\end{equation*}
we reduce \eqref{Eqn_EexpXMY_over_Y_in_R1} to Proposition~\ref{Prop_EqXY_bound}. Our road map is to first find the desirable choices for $\Omega_1$ and $\Omega_2$ and establish \eqref{Eqn_unif_concentrat_MX_cond_on_XE}, and then prove Proposition~\ref{Prop_EqXY_bound} in Appendix~\ref{Append_proof_prop_EqXY}. Note that Proposition~\ref{Prop_EqXY_bound} is in fact independent of our choice of $\Omega_1$ and $\Omega_2$, but we will develop useful intermediate results to prepare for its proof.
Concretely, we consider events $\Omega_1$ and $\Omega_2$ as follows:
\begin{equation}
\Omega_1(\zeta;t) := \left\{(\mathbf{x},\mathbf{y})\in\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \max_{i\in[n]}\left|(1-\delta)\frac{\mathbf{M}_{i,\cdot}\cdot \mathbf{y}}{t \mathbf{M}_{i,\cdot} \cdot (\mathbf{M}^\top\mathbf{x})^{-1}_{\mathcal{W}'}} - 1\right| >\zeta\right\},
\end{equation}
\begin{equation}
\Omega_2(\zeta;t) := \left\{(\mathbf{x},\mathbf{y})\in\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \max_{j\in[n]}\left|(1-\delta)\frac{\mathbf{M}_{\cdot,j}\cdot \mathbf{x}}{t \mathbf{M}_{\cdot,j} \cdot (\mathbf{M}\mathbf{y})^{-1}_{\mathcal{M}'}} - 1\right| >\zeta\right\},
\end{equation}
where $\mathbf{M}_{i,\cdot}$ and $\mathbf{M}_{\cdot,j}$ denote the $i$-th row and the $j$-th column of $\mathbf{M}$, respectively; inverse is applied coordinate-wise on vectors; and $\mathbf{v}_{S}$ denotes the $n$-dimensional vector obtained by zeroing out the $i$-th component $v_i$ of $\mathbf{v}\in\ensuremath{\mathbb{R}}_+^n$ for all $i\in[n]\backslash S$ (with this operation performed after coordinate-wise inverse). We first verify the following lemma.
\begin{lemma}\label{lemma_Omega1_and_Omega2_suggests_Oeig}
There exist absolute constants $\zeta_0,\delta_0 > 0$ and $k_1, k_2 > 0$ such that for all $\zeta\in(0,\zeta_0)$, $\delta\in(0,\delta_0)$, and $t>0$ we have
\begin{equation}
\Omega_1(\zeta;t)\cap\Omega_2(\zeta;t) \subseteq \Omega_{\text{eig}}(k_1\delta + k_2\zeta).
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathbf{d} = \mathbf{M}^\top \mathbf{x}$ and $\mathbf{e} = \mathbf{M}\mathbf{y}$.
Under the event that $(\mathbf{x},\mathbf{y})\in\Omega_1(\zeta;t)\cap\Omega_2(\zeta;t)$,
we have
\begin{align}
\frac{1}{d_j} &= \frac{1}{\mathbf{M}_{\cdot,j}\cdot \mathbf{x}} \labelrel\le{Step_use_Omega2_Lemma_Omega12_gives_Oeig} \frac{1-\delta}{(1-\zeta) t\mathbf{M}_{\cdot,j} \cdot \mathbf{e}^{-1}_{\mathcal{M}'}} \labelrel\le{Step_bdd_comp_Lemma_Omega12_gives_Oeig} \frac{(1-\delta)(1+2C^2\delta)}{(1-\zeta) t\,\mathbf{M}_{\cdot,j} \cdot \mathbf{e}^{-1}} \nonumber \\
&\labelrel\le{Step_Jensen_Lemma_Omega12_gives_Oeig} \frac{(1-\delta)(1+2C^2\delta)}{(1-\zeta) t} \sum_{i=1}^n m_{ij} e_i = \frac{(1-\delta)(1+2C^2\delta)}{(1-\zeta) t} \sum_{i=1}^n m_{ij}\mathbf{M}_{i,\cdot}\cdot \mathbf{y} \nonumber \\
&\labelrel\le{Step_use_Omega1_Lemma_Omega12_gives_Oeig} \frac{(1-\delta)(1+2C^2\delta)}{(1-\zeta) t} \sum_{i=1}^n m_{ij} \left( \frac{1+\zeta}{1-\delta} t \mathbf{M}_{i,\cdot} \cdot \mathbf{d}^{-1}_{\mathcal{W}'}\right) \le \frac{(1+\zeta)(1+2C^2\delta)}{1-\zeta} (\mathbf{M}^\top\mathbf{M}\mathbf{d}^{-1})_j,
\end{align}
where \eqref{Step_use_Omega2_Lemma_Omega12_gives_Oeig} uses the definition of $\Omega_2(\zeta;t)$; \eqref{Step_bdd_comp_Lemma_Omega12_gives_Oeig} uses the fact that $\mathbf{M}$ and $\mathbf{e}$ both have bounded ratios among their entries and the assumption $\delta < 1/2$; \eqref{Step_Jensen_Lemma_Omega12_gives_Oeig} is due to Jensen's inequality (or equivalently, the harmonic-mean--arithmetic-mean inequality); and \eqref{Step_use_Omega1_Lemma_Omega12_gives_Oeig} uses the definition of $\Omega_1(\zeta;t)$.
Recall our assumption that $\mathbf{M}$ has entries bounded on $[1/(Cn), C/n]$. It is straightforward to verify that for any vector $\mathbf{v}\in\ensuremath{\mathbb{R}}_+^n$ with $\bar{v}=\frac{1}{n}\sum_{i=1}^n v_i$, we have $\max_{i\in[n]} (\mathbf{M}\mathbf{v})_i - \bar{v} \le \max_{i\in[n]} v_i - \frac{1}{C n} \cdot n(\max_{i\in[n]} v_i - \bar{v}) - \bar{v} = (1-C^{-1}) (\max_{i\in[n]} v_i - \bar{v})$. In the case of $\mathbf{v} = \mathbf{d}^{-1}$, this implies that
\begin{multline}\label{Eqn_chain_bound_d_i_star_and_harmonic_mean}
(1-C^{-1})^2 \big(d_{i^*}^{-1} - \bar{d}_{(H)}^{-1}\big) \ge \max_{i\in[n]}(\mathbf{M}^\top\mathbf{M}\mathbf{d}^{-1})_i - \bar{d}_{(H)}^{-1} \\
\ge (\mathbf{M}^\top\mathbf{M}\mathbf{d}^{-1})_{i^*} - \bar{d}_{(H)}^{-1} \ge \frac{1-\zeta}{(1+\zeta)(1+2C^2\delta)}d_{i^*}^{-1} - \bar{d}_{(H)}^{-1},
\end{multline}
where $i^*=\argmin_{i\in[n]} d_i$ and $\bar{d}_{(H)} = \big(n^{-1}\sum_{i=1}^n d_i^{-1}\big)^{-1}$ is the harmonic mean of $d_1,\ldots,d_n$. Solving \eqref{Eqn_chain_bound_d_i_star_and_harmonic_mean} gives
\begin{equation}
\frac{ d_{i^*}^{-1} - \bar{d}_{(H)}^{-1} }{ \bar{d}_{(H)}^{-1} } \le \Theta(\delta) + \frac{2\zeta}{1 - \zeta - (1-C^{-1})^2(1+\zeta)} \le \Theta(\delta + \zeta)
\end{equation}
with hidden constants independent of $\delta$ and $\zeta$, granted that $\zeta$ is sufficiently small. Hence, for all but $\sqrt{\delta+\zeta} n$ indices $i\in[n]$, we have $1-\Theta(\sqrt{\delta+\zeta}) \le \frac{\bar{d}_{(H)}}{d_i} \le 1 + \Theta(\delta+\zeta)$, implying that $(\mathbf{x},\mathbf{y})\in\Omega_{\text{eig}}(\Theta(\delta+\zeta))$.
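The counting in the last step is a standard averaging argument: write $m:=\bar{d}_{(H)}^{-1}=n^{-1}\sum_{i=1}^n d_i^{-1}$, so that every index satisfies $d_i^{-1}\le d_{i^*}^{-1}\le (1+\Theta(\delta+\zeta))m$. If $k$ indices had $d_i^{-1}<(1-\sqrt{\delta+\zeta})m$, then
\[
nm=\sum_{i=1}^n d_i^{-1} \le k\big(1-\sqrt{\delta+\zeta}\big)m+(n-k)\big(1+\Theta(\delta+\zeta)\big)m,
\]
which forces $k\le \Theta(\sqrt{\delta+\zeta})\,n$.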
\end{proof}
Let $\mathbf{D} = \mathbf{M}^\top \mathbf{X}_{\mathcal{M}'}$ and $\mathbf{E} = \mathbf{M}\mathbf{Y}_{\mathcal{W}'}$. Note that $\mathbf{D}$ and $\mathbf{E}$ both have bounded ratios among their components due to the bounded ratio assumption on $\mathbf{M}$, and in addition $\|\mathbf{D}\|_1=\|\mathbf{X}_{\mathcal{M}'}\|_1$ and $\|\mathbf{E}\|_1=\|\mathbf{Y}_{\mathcal{W}'}\|_1$. By Lemma~\ref{Lemma_wgt_exp_cond_concentration}, whenever $t\le c_2(\log n)^{1/8}$, we have for each column $\mathbf{M}_{\cdot,j}$ of $\mathbf{M}$, $j=1,\ldots,n$,
\begin{equation}
\mathbb{P}\left(\left|(1-\delta)\frac{\mathbf{M}_{\cdot,j}\cdot \mathbf{X}_{\mathcal{M}'}}{t \mathbf{M}_{\cdot,j} \cdot \mathbf{E}^{-1}_{\mathcal{M}'}} - 1\right| >\zeta \;\middle|\; \mathbf{X}_{\mathcal{M}'} \cdot \mathbf{E} < t, \|\mathbf{E}\|_1\ge \underline{c}_1 \log n\right) \le \exp(-\Theta(n\zeta^2)),
\end{equation}
where we note that the effective dimension of $\mathbf{X}_{\mathcal{M}'}$ is $n-\floor{\delta n}$ instead of $n$.
By a union bound over $j\in[n]$, this gives
\begin{equation}
\mathbb{P}\left(\max_{j\in[n]}\left|(1-\delta)\frac{\mathbf{M}_{\cdot,j}\cdot \mathbf{X}_{\mathcal{M}'}}{t \mathbf{M}_{\cdot,j} \cdot \mathbf{E}^{-1}_{\mathcal{M}'}} - 1\right| >\zeta \;\middle|\; \mathbf{X}_{\mathcal{M}'} \cdot \mathbf{E} < t, \|\mathbf{E}\|_1\ge \underline{c}_1 \log n \right) \le \exp(-\Theta(n\zeta^2)),
\end{equation}
which is simply \eqref{Eqn_unif_concentrat_MX_cond_on_XE}.
\subsection{Proof of Corollary~\ref{Cor_no_stable_outside_Oeigz}}\label{Append_proof_no_stable_outside_oeigz}
We now prove Corollary~\ref{Cor_no_stable_outside_Oeigz}, restated below.
\CorNoStableOutsideOeigz*
\begin{proof}
Summing over all partial matchings with size $n-\floor{\delta n}$ gives
\begin{multline}\label{Eqn_sum_expectation_Omega_zeta}
\sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{ bijection}}} \ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot \mathbbm{1}_{\mathcal{R}\backslash\Omega_{\text{eig}}(\zeta)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \\
\le \exp(o_\delta(n)-\Theta(\zeta^2 n)) \cdot \frac{(\delta n)!}{n!} \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} \prod_{i\in\mathcal{M}'} (n m_{i,\mu'(i)}).
\end{multline}
To bound the summation, notice that
\begin{align}
\sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} \prod_{i\in\mathcal{M}'} (n m_{i,\mu'(i)})
&= \frac{1}{(\floor{\delta n})!} \sum_{\substack{\mu:\mathcal{M}\to\mathcal{W}\\\text{bijection}}} \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M}\\|\mathcal{M}'|=n-\floor{\delta n}}} \prod_{i\in\mathcal{M}'} (n m_{i,\mu(i)}) \nonumber\\
&\le \frac{1}{(\floor{\delta n})!} \sum_{\substack{\mu:\mathcal{M}\to\mathcal{W}\\\text{bijection}}} \binom{n}{\floor{\delta n}} C^{\floor{\delta n}} \prod_{i\in\mathcal{M}} (n m_{i,\mu(i)}) \nonumber\\
&= \frac{e^{o_\delta(n)}}{(\delta n)!} n^n \Perm(\mathbf{M}).\label{Eqn_sum_expectation_Omega_zeta_summation_bound}
\end{align}
Under the assumption that the bistochastic matrix $\mathbf{M}$ is of moderate deviation (cf. \cite[Section~3]{mccullagh2014asymptotic}), we know that $n^n \Perm(\mathbf{M}) = O( n! )$. Hence, the quantity in \eqref{Eqn_sum_expectation_Omega_zeta} is bounded by $\exp(o_\delta(n)-\Theta(\zeta^2 n))$. Invoking Lemma~\ref{Lemma_reduction_to_q} finishes the proof.
\end{proof}
\subsection{Proof of Proposition~\ref{Prop_EqXY_bound}}\label{Append_proof_prop_EqXY}
In this section, we present the proof of Proposition~\ref{Prop_EqXY_bound}, restated below.
\propEqXYBound*
Denote the target expectation by $E$ and express it as an integral of tail probability
\begin{equation}\label{Eqn_proof_prop_6_6_goal}
E = \int_0^\infty \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M}\mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'} \in \mathcal{R}_1) \cdot ne^{-nt} \ensuremath{\,d} t,
\end{equation}
where $\bar{t} = t \wedge (c_2(\log n)^{1/8})$. It suffices to upper bound probabilities of the form $\ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M}\mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'} \in \mathcal{R}_1)$ for all $t\in(0,c_2(\log n)^{1/8})$. We will go one step further and prove a stronger result by relaxing the $\mathbf{Y}_{\mathcal{W}'} \in \mathcal{R}_1$ condition, which will eventually translate to a bound on $\ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})]$.
\begin{lemma}\label{Lemma_x_y_not_both_small_cond_on_XMY_and_R1}
There exists a positive constant $\beta$ such that, for any $t\in(0, c_2(\log n)^{1/8})$,
\begin{equation}
\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn}, \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn} \middle| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \le 0.1
\end{equation}
for $n$ sufficiently large.
\end{lemma}
\begin{proof}
Let $p$ denote the target probability. We have
\begin{align}
p &= \ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}, \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \nonumber \\
&\qquad\cdot \ensuremath{\mathbb{P}}\left(\|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn} \middle| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \nonumber \\
&\le \ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}, \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \nonumber \\
&= \frac{\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn}, \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}\right)}{\ensuremath{\mathbb{P}}\left(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}\right)} \nonumber \\
&\le \frac{\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}\right)}{\ensuremath{\mathbb{P}}\left(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}\right)} \nonumber \\
&\le \frac{\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn}\right)}{\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \sqrt{tn}/(C\beta)\right)} = \ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{X}_{\mathcal{M}'}\|_1 \le (C\beta)^{-1}\sqrt{tn}\right),
\end{align}
where the last inequality follows from the independence between $\mathbf{X}$ and $\mathbf{Y}$ and the fact that $n\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'}\le C\|\mathbf{X}_{\mathcal{M}'}\|_1 \|\mathbf{Y}_{\mathcal{W}'}\|_1$.
By choosing $\beta = (2C)^{-1/2}$, the upper bound becomes
\begin{equation*}
\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{X}_{\mathcal{M}'}\|_1 \le 2\beta\sqrt{tn}\right).
\end{equation*}
A direct invocation of Lemma~\ref{Lemma_wgt_exp_cond_concentration} implies an $\exp(-\Theta(n))$ upper bound for this probability.
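(Concretely, one may apply the lemma with $\mathbf{u}=\mathbf{v}=\mathbf{1}$, with $2\beta\sqrt{tn}$ playing the role of $tn$, and with $\zeta=1/2$: conditional on $\|\mathbf{X}_{\mathcal{M}'}\|_1 \le 2\beta\sqrt{tn}$, the norm $\|\mathbf{X}_{\mathcal{M}'}\|_1$ concentrates around $2\beta\sqrt{tn}$, so the conditional probability that it falls below $\beta\sqrt{tn}$ is $\exp(-\Theta(n))$. The bounded exponential rates of the components of $\mathbf{X}_{\mathcal{M}'}$ can be absorbed into $\mathbf{u}$ and $\mathbf{v}$, as noted after Lemma~\ref{Lemma_wgt_exp_cond_concentration}.)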
\end{proof}
\begin{lemma}\label{Lemma_high_prob_small_xnorm_cond_XMY}
There exists a positive constant $\gamma$ such that, for any $t\in(0, c_2(\log n)^{1/8})$,
\begin{equation}\label{Eqn_Lemma_high_prob_small_xnorm_cond_XMY_goal}
\ensuremath{\mathbb{P}}\left(\|\mathbf{Y}_{\mathcal{W}'}\|_1 \ge \gamma n (\log n)^{-7/8}
\middle| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \le 0.1
\end{equation}
for $n$ sufficiently large.
\end{lemma}
\begin{proof}
To free ourselves from always carrying the notation for the partial matching, let us observe that, once we relinquish the bistochasticity condition on $\mathbf{M}$, it becomes irrelevant that $\mu'$ is a partial matching between $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$ (instead of a complete one between $\mathcal{M}$ and $\mathcal{W}$), since the difference between the market sizes $|\mathcal{M}|=|\mathcal{W}|=n$ and $|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}$ does not affect the final asymptotics in the lemma. Hence, it suffices to establish a version of \eqref{Eqn_Lemma_high_prob_small_xnorm_cond_XMY_goal} with $\mathbf{X}_{\mathcal{M}'}$ and $\mathbf{Y}_{\mathcal{W}'}$ replaced by non-truncated value vectors $\mathbf{X}$ and $\mathbf{Y}$ in a complete (instead of partial) matching $\mu$, as long as we do not rely on the bistochasticity of $\mathbf{M}$.
In the simplified notation, let $\mathbf{Z} = \mathbf{a} \circ \mathbf{X}$ and $\mathbf{W} = \mathbf{b} \circ \mathbf{Y}$ with $\mathbf{a}=(a_{i,\mu(i)})_{i\in[n]}$ and $\mathbf{b}=(b_{j,\mu^{-1}(j)})_{j\in[n]}$, so that $\mathbf{Z},\mathbf{W}\sim \Exp(1)^n$ and are independent. Moreover, let $R=\|\mathbf{Z}\|_1$ and $\mathbf{U}=R^{-1}\mathbf{Z}$ so that, as is well known, $R\sim\Gamma(n, 1)$, $\mathbf{U}\sim\Unif(\Delta_{n-1})$, and $R$ and $\mathbf{U}$ are independent. Similarly, let $S=\|\mathbf{W}\|_1$, and $\mathbf{V}=S^{-1}\mathbf{W}$. Then
\begin{equation*}
\mathbf{X}^\top \mathbf{M}\mathbf{Y} = \mathbf{Z}^\top \diag(\mathbf{a}^{-1})\mathbf{M} \diag(\mathbf{b}^{-1}) \mathbf{W} = RS \mathbf{U}^\top \tilde{\mathbf{M}} \mathbf{V},
\end{equation*}
where $\tilde{\mathbf{M}}:=\diag(\mathbf{a}^{-1})\mathbf{M} \diag(\mathbf{b}^{-1})$ again has entries bounded on $[1/(C^2n),C^2/n]$. Since $\|\mathbf{Y}\|_1 = \Theta(S)$, it suffices to find a positive constant $\gamma$ such that
\begin{equation}
\ensuremath{\mathbb{P}}\left(S \ge \gamma n (\log n)^{-7/8}
\middle| RS \mathbf{U}^\top \tilde{\mathbf{M}} \mathbf{V} < t\right) < 0.1
\end{equation}
for all $n$ sufficiently large and $t\in(0, c_2(\log n)^{1/8})$.
Note that $1/(C^2n)\le \mathbf{U}^\top \tilde{\mathbf{M}} \mathbf{V} \le C^2/n$ a.s. By conditioning on all possible values of $\mathbf{U}^\top \tilde{\mathbf{M}} \mathbf{V}$, it suffices to show that for all $t'\in(0, c_2C^2(\log n)^{1/8})$
\begin{equation}
\ensuremath{\mathbb{P}}\left(S \ge \gamma n (\log n)^{-7/8}
\middle| RS < t' n\right) < 0.1 \label{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_reduced_goal_RS}
\end{equation}
asymptotically.
First, we write $\ensuremath{\mathbb{P}}(S \ge \gamma n (\log n)^{-7/8}, RS < t' n)$ as
\begin{equation*}
\ensuremath{\mathbb{P}}\big(S \ge \gamma n (\log n)^{-7/8}, RS < t' n\big) = \int_{\gamma n (\log n)^{-7/8}}^\infty G(t'n/s) g(s) ds,
\end{equation*}
where $g(x)=\frac{x^{n-1}e^{-x}}{(n-1)!}$ is the probability density function of $\Gamma(n,1)$ and $G$ is the corresponding CDF. Since $t'n/s \ll n$, we may use Lemma~\ref{Lemma_weighted_exp_chernoff} to upper bound $G(t'n/s)$, giving
\begin{align}
\ensuremath{\mathbb{P}}\big(S \ge \gamma n (\log n)^{-7/8}, RS < t' n\big) &\le \int_{\gamma n (\log n)^{-7/8}}^\infty \left(\frac{t'e}{s}\right)^n \frac{s^{n-1}e^{-s}}{(n-1)!} ds \nonumber \\
&= \frac{(t'e)^n}{(n-1)!} \int_{\gamma n (\log n)^{-7/8}}^\infty \frac{e^{-s}}{s} ds \nonumber \\
&\le \frac{(t'e)^n}{(n-1)!} e^{-\gamma n (\log n)^{-7/8}}. \label{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_upper_bound_numerator}
\end{align}
Next, we lower bound $\ensuremath{\mathbb{P}}(RS < t' n)$ by
\begin{equation*}
\ensuremath{\mathbb{P}}\left(n^{1/2} \le S \le n^{2/3}, RS < t' n\right) = \int_{n^{1/2}}^{n^{2/3}} G(t'n/s) g(s) ds.
\end{equation*}
Note that $t'n/s \le \ensuremath{O}(n^{2/3})$ for all $s\in[n^{1/2},n^{2/3}]$. Using the lower bound in Lemma~\ref{Lemma_weighted_exp_chernoff}, we have
\begin{align}
\ensuremath{\mathbb{P}}\left(n^{1/2} \le S \le n^{2/3}, RS < t' n\right) &\ge e^{-\ensuremath{O}(n^{2/3})} \int_{n^{1/2}}^{n^{2/3}} \left(\frac{t'e}{s}\right)^n \frac{s^{n-1}e^{-s}}{(n-1)!} ds \nonumber \\
&= e^{-\ensuremath{O}(n^{2/3})} \frac{(t'e)^n}{(n-1)!} \int_{n^{1/2}}^{n^{2/3}} \frac{e^{-s}}{s} ds \nonumber \\
&\ge \frac{(t'e)^n}{(n-1)!} n^{-2/3} e^{-\ensuremath{O}(n^{2/3})}. \label{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_lower_bound_denominator}
\end{align}
Comparing \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_upper_bound_numerator} with \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_lower_bound_denominator} establishes \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_reduced_goal_RS} and hence finishes the proof.
\end{proof}
\begin{remark}
Note that this lemma should be treated only as a technical result about the typical behavior of $\mathbf{X}_{\mathcal{M}'}$ and $\mathbf{Y}_{\mathcal{W}'}$ when $q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})=\exp(-n\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M}\mathbf{Y}_{\mathcal{W}'})$ is large, and should not be confused with any attempt to bound the number of stable (partial) matchings with women's total values in a certain range. For example, one might hope to replace $\gamma n (\log n)^{-7/8}$ with $\gamma n(\log n)^{-1}$ in the proof to conclude that stable matchings with $\|\mathbf{Y}_{\mathcal{W}'}\|_1\in[n^{1/2},n^{2/3}]$ are over $e^{n^{2/3}}$ times more common than those with $\|\mathbf{Y}_{\mathcal{W}'}\|_1\ge \Omega(n(\log n)^{-1})$. This, however, is generally not true, as we know from the classic case with uniformly random preferences. To see why this fact does not contradict our proof, recall from Proposition~\ref{Prop_ratio_p_q_high_prob} (see Section~\ref{sec_prep_proof} and Appendix~\ref{Append_weak_regular_scores}) that $q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})$ is only a good approximation to $p_{\mu'}(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})$ when, among other conditions, $\|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \Theta(n(\log n)^{-7/8})$; even then, the approximation is only valid up to an $e^{o(n)}$ factor. As the ratio between \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_upper_bound_numerator} and \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_lower_bound_denominator} is only $e^{o(n)}$, the quality of approximation is insufficient to rule out the possibility that a (partial) stable matching has $\|\mathbf{Y}_{\mathcal{W}'}\|_1\ge\Theta(n(\log n)^{-1})$: the man-optimal stable matching obtained from the man-proposing deferred acceptance algorithm provides such an example.
\end{remark}
\begin{corollary}\label{Cor_x_not_small_when_y_large_cond_on_XMY_and_R1}
There exists a positive constant $\gamma'$ such that, for any $t\in(0, c_2(\log n)^{1/8})$,
\begin{equation}
\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \gamma' t (\log n)^{7/8}, \|\mathbf{Y}_{\mathcal{W}'}\|_1 \ge \beta \sqrt{tn}
\middle| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \le 0.2
\end{equation}
for $n$ sufficiently large, where $\beta$ is the constant appearing in Lemma~\ref{Lemma_x_y_not_both_small_cond_on_XMY_and_R1}.
\end{corollary}
\begin{proof}
Note that $\|\mathbf{M}\mathbf{Y}_{\mathcal{W}'}\|_1 = \|\mathbf{Y}_{\mathcal{W}'}\|_1 \gtrsim \sqrt{tn} \gg t$ for $t\lesssim (\log n)^{1/8}$. For any $\mathbf{y}$ supported on coordinates indexed by $\mathcal{W}'$ with $t \ll \|\mathbf{y}\|_1 \le \gamma n (\log n)^{-7/8}$, Lemma~\ref{Lemma_wgt_exp_cond_concentration} implies
\begin{equation}
\ensuremath{\mathbb{P}}\Big(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le 0.9 \frac{t}{n-\floor{\delta n}}\big\|(\mathbf{M}\mathbf{y})_{\mathcal{M}'}^{-1}\big\|_1 \Big| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, \mathbf{Y}_{\mathcal{W}'}=\mathbf{y}\Big) \le e^{-\Theta(n)}.
\end{equation}
Plugging in $\big\|(\mathbf{M}\mathbf{y})_{\mathcal{M}'}^{-1}\big\|_1 \ge (n-\floor{\delta n}) \frac{n}{C\|\mathbf{y}\|_1} \ge \frac{n-\floor{\delta n}}{C\gamma}(\log n)^{7/8}$ gives
\begin{equation}
\ensuremath{\mathbb{P}}\big(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le 0.9(C\gamma)^{-1} t(\log n)^{7/8} \big| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, \mathbf{Y}_{\mathcal{W}'}=\mathbf{y}\big) \le e^{-\Theta(n)}.
\end{equation}
Marginalizing over all relevant values of $\mathbf{y}$ implies
\begin{equation}
\ensuremath{\mathbb{P}}\big(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \gamma' t(\log n)^{7/8}, \beta \sqrt{tn} \le \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \gamma n (\log n)^{-7/8} \big| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\big) \le e^{-\Theta(n)}
\end{equation}
with $\gamma'=0.9(C\gamma)^{-1}$. Combining this with Lemma~\ref{Lemma_high_prob_small_xnorm_cond_XMY} completes the proof.
\end{proof}
\begin{corollary}
For any $t\in(0, c_2(\log n)^{1/8})$ and $n$ sufficiently large,
\begin{equation}
\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \wedge \|\mathbf{Y}_{\mathcal{W}'}\|_1 \ge \gamma' t (\log n)^{7/8}, \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \ge \frac{1}{2} \ensuremath{\mathbb{P}}\left( \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right).
\end{equation}
\end{corollary}
\begin{proof}
This is a direct consequence of Lemma~\ref{Lemma_x_y_not_both_small_cond_on_XMY_and_R1} and Corollary~\ref{Cor_x_not_small_when_y_large_cond_on_XMY_and_R1}
\end{proof}
We are now ready to present the proof of Proposition~\ref{Prop_EqXY_bound}.
\begin{proof}[Proof of Proposition~\ref{Prop_EqXY_bound}]
Define events
\begin{equation*}
\begin{aligned}[c]
A_1(t)&: \|\mathbf{X}_{\mathcal{M}'}\|_1 \ge \gamma' t(\log n)^{1/8},\\
B_1(t)&: (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\Omega_1(\zeta;t),
\end{aligned}
\qquad
\begin{aligned}[c]
A_2(t)&: \|\mathbf{Y}_{\mathcal{W}'}\|_1 \ge \gamma' t(\log n)^{1/8},\\
B_2(t)&: (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\Omega_2(\zeta;t),
\end{aligned}
\end{equation*}
where $\Omega_1$ and $\Omega_2$ are defined in Appendix~\ref{Append_proof_prop_eigenvec_of_M}, and $\zeta$ is to be specified later.
We have
\begin{align}
\frac{1}{2}\ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t) &\le \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t)) \nonumber \\
&\le \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t), B_1(t), B_2(t)) \nonumber \\
&\qquad + \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), B_1(t)^c) + \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_2(t), B_2(t)^c) \nonumber \\
&\le \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t), B_1(t), B_2(t)) \nonumber \\
&\qquad + \ensuremath{\mathbb{P}}(B_1(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t)) \cdot \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t) \nonumber \\
&\qquad + \ensuremath{\mathbb{P}}(B_2(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_2(t)) \cdot \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t).
\end{align}
For any fixed $\delta>0$ and $\zeta=o_\delta(1)$, $\ensuremath{\mathbb{P}}(B_1(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t)) \to 0$ by Lemma~\ref{Lemma_wgt_exp_cond_concentration}; in particular, we may assume that $\ensuremath{\mathbb{P}}(B_1(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t))$ (and by symmetry $\ensuremath{\mathbb{P}}(B_2(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_2(t))$) is at most $1/8$.
Thus,
\begin{equation}
\ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t) \le 4 \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t), B_1(t), B_2(t)).
\end{equation}
By Lemma~\ref{lemma_Omega1_and_Omega2_suggests_Oeig}, $B_1(t)$ and $B_2(t)$ together imply that $n\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} = (1 + o_{\delta,\zeta}(1)) \|\mathbf{X}_{\mathcal{M}'}\|_1\|\mathbf{Y}_{\mathcal{W}'}\|_1$. Further, along with the events $\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t$, $A_1(t)$, and $A_2(t)$, they imply $t(\log n)^{1/8} \lesssim \|\mathbf{Y}_{\mathcal{W}'}\|_1 \lesssim n(\log n)^{-1/8}$. Hence,
\begin{align}
\ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} &\mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t), B_1(t), B_2(t)) \nonumber \\
&\le \ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1\|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \frac{nt}{1+o_{\delta,\zeta}(1)}, t(\log n)^{1/8} \le \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \Theta(n(\log n)^{-1/8})\right) \nonumber \\
&\le \ensuremath{\mathbb{E}}\left[\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1\|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \frac{nt}{1+o_{\delta,\zeta}(1)}, t(\log n)^{1/8} \le \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \Theta(n(\log n)^{-1/8}) \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1\right)\right] \nonumber \\
&\le e^{o_{\delta}(n)} \bigg(\frac{ent}{n-\floor{\delta n}}\bigg)^{n-\floor{\delta n}} \prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} \nonumber \\
&\qquad \cdot \ensuremath{\mathbb{E}}\left[\|\mathbf{Y}_{\mathcal{W}'}\|_1^{-n+\floor{\delta n}}; t(\log n)^{1/8} \le \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \Theta(n(\log n)^{-1/8})\right]. \label{Eqn_proof_prop_EqXY_P_XMY_and_A1A2B1B2}
\end{align}
It is straightforward, albeit a bit tedious, to explicitly bound the expectation term in \eqref{Eqn_proof_prop_EqXY_P_XMY_and_A1A2B1B2} by
\begin{equation*}
(\Theta(\log n) - \log t) e^{n-\floor{\delta n}} \prod_{i\in\mathcal{M}'} b_{\mu'(i),i},
\end{equation*}
again using Lemma~\ref{Lemma_weighted_exp_chernoff}. Carrying out the integral over $t$ in \eqref{Eqn_proof_prop_6_6_goal} finishes the proof.
\end{proof}
\subsection{Proof of Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr}}\label{Proof_prop_Oempe_likely}
In this section, we present the proof of Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr}, restated below.
\propOempeLikelyForHappinessEmpDist*
\begin{proof}
In light of Proposition~\ref{Prop_EqXY_bound}, it suffices to show that for all $\mathbf{y} \in \text{Proj}_y(\mathcal{R}\cap\Omega_{\text{eig}}(\zeta))$, we have
\begin{equation}\label{Eqn_want_ratio_EOempe_EqXY}
\frac{\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'} = \mathbf{y} \big]}{\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'} = \mathbf{y} \big]} \le \exp(-\Theta(\ensuremath{\epsilon}^2 n)).
\end{equation}
It then follows that
\begin{align}
\ensuremath{\mathbb{E}}\big[q(&\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \nonumber \\
&= \ensuremath{\mathbb{E}}\big[\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big| \mathbf{Y}_{\mathcal{W}'}\big]\big] \nonumber \\
&\le \ensuremath{\mathbb{E}}\big[\exp(-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'}] \cdot \mathbbm{1}_{\text{Proj}_y(\mathcal{R}\cap\Omega_{\text{eig}}(\zeta))}(\mathbf{Y}_{\mathcal{W}'})\big] \nonumber \\
&\le \exp(-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \ensuremath{\mathbb{E}}\big[\ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'}] \cdot \mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})\big] \nonumber \\
&\le \exp(-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})],
\end{align}
and Proposition~\ref{Prop_EqXY_bound} immediately implies the desired bound.
To show \eqref{Eqn_want_ratio_EOempe_EqXY}, notice that the quotient on the left-hand side is simply
\begin{equation}\label{Eqn_proof_distr_happi_equiv_emp_tail_bound_for_nearly_iid_X}
\ensuremath{\mathbb{P}}_{\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)}\big((\mathbf{X}_{\mathcal{M}'},\mathbf{y})\in\mathcal{R}\cap\Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)\big).
\end{equation}
Recall that $(\mathbf{X}_{\mathcal{M}'})_i = X_i$ for $i \in \mathcal{M}'$ and $(\mathbf{X}_{\mathcal{M}'})_i = 0$ for $i\notin\mathcal{M}'$. For any $\mathbf{y} \in \text{Proj}_y(\mathcal{R}\cap\Omega_{\text{eig}}(\zeta))$, there must exist $\hat y\in\ensuremath{\mathbb{R}}_+$ such that for all but at most $\sqrt{\zeta} n$ indices $i\in[n]$ we have $|(\mathbf{M} \mathbf{y})_i-\hat y| \le \sqrt{\zeta} \hat y$. In other words, under the distribution $\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)$, for all but at most $(\delta + \sqrt{\zeta}) n$ indices $i\in[n]$, we have $n\hat y X_i\sim \Exp\big(\lambda_i\big)$ for some
\begin{equation*}
\lambda_i = \frac{a_{i,\mu'(i)}}{n\hat y}+\frac{(\mathbf{M}\mathbf{y})_i}{\hat y} = 1 + \Theta(\sqrt{\zeta}) + \Theta(1/\log n),
\end{equation*}
where we used the fact that $n\hat y\ge\Theta(\|\mathbf{y}\|_1) \ge \Theta(\log n)$ as implied by $\mathbf{y}\in\text{Proj}_y(\mathcal{R}\cap\Omega_{\text{eig}}(\zeta))$. The generalized Dvoretzky–Kiefer–Wolfowitz (DKW) inequality (see Lemma~\ref{Lemma_dkw_non_identical}) for independent and nearly-identically distributed random variables implies that the probability \eqref{Eqn_proof_distr_happi_equiv_emp_tail_bound_for_nearly_iid_X} is upper bounded by
\begin{equation}
\ensuremath{\mathbb{P}}_{\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)}\big(\big\|\hat{\mathcal{F}}(\mathbf{X}) - F_{n\hat y}\big\|_\infty > \ensuremath{\epsilon} + \Theta(\delta + \sqrt{\zeta})\big) \le \exp(-\Theta(\ensuremath{\epsilon}^2 n)),
\end{equation}
which finishes the proof.
\end{proof}
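For intuition, the following Python sketch illustrates the classical (i.i.d.) form of the DKW inequality underlying Lemma~\ref{Lemma_dkw_non_identical}; the sample size, tolerance, and number of trials are arbitrary illustrative choices, and the nearly-identical case merely adds the $\Theta(\delta+\sqrt{\zeta})$ slack discussed above.
\begin{verbatim}
# Illustrative Monte Carlo check of the classical DKW inequality for
# iid Exp(1) samples: P(sup|F_hat - F| > eps) <= 2 exp(-2 n eps^2).
import numpy as np

rng = np.random.default_rng(4)
n, eps, trials = 5000, 0.03, 2000
fails = 0
for _ in range(trials):
    x = np.sort(rng.exponential(size=n))
    F = 1 - np.exp(-x)                  # true CDF at the sorted sample
    dist = max(np.max(np.arange(1, n + 1) / n - F),
               np.max(F - np.arange(n) / n))
    fails += dist > eps
print(f"empirical: {fails / trials:.4f}   "
      f"DKW bound: {2 * np.exp(-2 * n * eps ** 2):.4f}")
\end{verbatim}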
\subsection{Proof of Theorem~\ref{Thm_main_rank_dist_body}}\label{Append_proof_thm_main_rank}
Heuristically, we would expect the rank $R_i$ of a man to be proportional to his value $X_i(\mu)$. We will see below that this is approximately the case when $x_i\ll 1$. There are, however, going to be some $x_i$ of constant order, making it hard to say anything exact about $R_i$. But as we will soon see, for all but an $o(1)$ fraction of the $n$ men, we indeed have $x_i = o(1)$. As we are concerned with the empirical distribution, such a small fraction becomes negligible in the limit and can simply be ignored. This heuristic is formalized in the next lemma, and illustrated numerically in the sketch below.
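The following Python sketch (ours, with a hypothetical fitness row $a_{i\cdot}$ drawn uniformly from $[1/C,C]$) shows the heuristic in action: for a man with value $x_i$, the rank $1+\sum_j \mathbbm{1}\{X_{ij}\le x_i\}$ with $X_{ij}\sim\Exp(a_{ij})$ hovers around the prediction $x_i\sum_j a_{ij}$ once $x_i\ll 1$.
\begin{verbatim}
# Illustration (not part of the proof): rank vs. value for one man.
import numpy as np

rng = np.random.default_rng(0)
n, C = 10_000, 2.0
a = rng.uniform(1 / C, C, size=n)       # hypothetical fitness row a_{i,.}
for x in [0.001, 0.01, 0.1]:
    X = rng.exponential(1 / a)          # X_ij ~ Exp(a_ij), i.e. mean 1/a_ij
    rank = 1 + np.sum(X <= x)
    print(f"x={x:.3f}  rank={rank:6d}  prediction={x * a.sum():9.1f}")
\end{verbatim}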
\begin{lemma}\label{Lemma_probable_happiness_majority}
Fix any $\delta > 0$. Let $\mu'$ be a partial matching of size $n-\floor{\delta n}$ on $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$. For any $0<\xi<\rho < 1$, consider $\Omega_{\text{tail}}(\xi,\rho)$ defined as
\begin{equation}
\bigg\{(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \sum_{i=1}^n \mathbbm{1}\Big\{n x_i (\mathbf{M}\mathbf{y})_i\notin (F^{-1}(\xi/2)/2, F^{-1}(1-\xi/2))\Big\} \le \floor{\delta n} + \rho (n-\floor{\delta n})\bigg\}.
\end{equation}
Then
\begin{equation}\label{Eqn_Lemma_probable_happiness_majority_main_bound}
\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \backslash\Omega_{\text{tail}}(\xi,\rho)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \le \exp(o_\delta(n)-\Theta(D(\rho\|\xi)n)) \cdot \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'}a_{i,\mu'(i)}b_{\mu'(i),i},
\end{equation}
where $D(q\|p)$ denotes the KL divergence of $\Bern(q)$ from $\Bern(p)$.
\end{lemma}
\begin{proof}
The proof entirely mirrors that of Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr}. It suffices to show that for all $\mathbf{y}\in\text{Proj}_y(\mathcal{R})$ we have
\begin{equation}\label{Eqn_want_ratio_EOtailab_EqXY}
\frac{\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \backslash\Omega_{\text{tail}}(\xi,\rho)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'} = \mathbf{y} \big]}{\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'} = \mathbf{y} \big]} \le \exp(-\Theta(D(\rho\|\xi)n)).
\end{equation}
The quotient is simply
\begin{equation}\label{Eqn_proof_probable_happiness_majority_rewrite_x_distr_cond_on_y}
\ensuremath{\mathbb{P}}_{\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)}\big((\mathbf{X}_{\mathcal{M}'},\mathbf{y})\in\mathcal{R}\backslash\Omega_{\text{tail}}(\xi,\rho)\big) \le \ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'},\mathbf{y})\notin\Omega_{\text{tail}}(\xi,\rho)\big),
\end{equation}
where, for the rest of this proof, we take $\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)$.
Note that under this specified distribution, $\Big(\big(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i\big)X_i\Big)_{i\in\mathcal{M}'}$ are $n-\floor{\delta n}$ i.i.d. samples from $\Exp(1)$, each falling outside the interval $(F^{-1}(\xi/2), F^{-1}(1-\xi/2))$ with probability precisely $\xi$. Hence,
\begin{multline}\label{Eqn_proof_probable_happiness_majority_hoeffding_for_renormalized_x}
\ensuremath{\mathbb{P}}\bigg(\sum_{i\in\mathcal{M}'} \mathbbm{1}\Big\{X_i\big(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i\big)\notin (F^{-1}(\xi/2), F^{-1}(1-\xi/2))\Big\} > \rho (n-\floor{\delta n})\bigg) \\
= \ensuremath{\mathbb{P}}_{Z\sim\Binom(n-\floor{\delta n}, \xi)}(Z > \rho(n-\floor{\delta n})) \le \exp(-D(\rho\|\xi)(n-\floor{\delta n}))
\end{multline}
by the Chernoff--Hoeffding bound for the binomial distribution. Since $n(\mathbf{M}\mathbf{y})_i \le a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i \le 2n(\mathbf{M}\mathbf{y})_i$ across all $i\in\mathcal{M}'$ for $\mathbf{y}\in\mathcal{R}_1$ and $n$ sufficiently large, the probability \eqref{Eqn_proof_probable_happiness_majority_hoeffding_for_renormalized_x} upper bounds $\ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'},\mathbf{y})\notin\Omega_{\text{tail}}(\xi,\rho)\big)$. This establishes \eqref{Eqn_want_ratio_EOtailab_EqXY} and concludes the proof.
\end{proof}
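The Chernoff--Hoeffding estimate in \eqref{Eqn_proof_probable_happiness_majority_hoeffding_for_renormalized_x} is easy to check numerically; the parameters in the sketch below are arbitrary illustrative choices.
\begin{verbatim}
# Illustrative check: P(Binom(m, xi) > rho*m) <= exp(-m * D(rho||xi)).
import math
from scipy.stats import binom

def kl_bern(q, p):
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

m, xi, rho = 500, 0.1, 0.2
tail = binom.sf(rho * m, m, xi)         # P(Z > rho*m)
bound = math.exp(-m * kl_bern(rho, xi))
print(f"tail={tail:.3e}  bound={bound:.3e}")
assert tail <= bound
\end{verbatim}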
By fixing some small $\delta,\rho$ and choosing $\xi$ sufficiently small, we can make $D(\rho\|\xi)$ arbitrarily large and obtain the following corollary.
\begin{corollary}\label{Cor_no_stable_match_tilde_Otailab}
For any $0 < \delta,\rho < 1/2$, there exists a choice of $\xi > 0$ such that %
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\tilde{\Omega}_{\text{tail}}(\xi,\rho)) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$,
where $\tilde{\Omega}_{\text{tail}}(\xi,\rho)$ is defined as
\begin{equation}
\bigg\{ (\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \sum_{i=1}^n \mathbbm{1}\Big\{x_i\notin \Big(F^{-1}(\xi/2)\frac{(\log n)^{7/8}}{C^2\overline{c}_1 n}, 2F^{-1}(1-\xi/2)\frac{C^2}{\underline{c}_1 \log n}\Big)\Big\} \le (\delta + \rho) n \bigg\}.
\end{equation}
That is, with high probability, no stable matching has more than a $(\delta+\rho)$ fraction of the men's post-truncation values outside an interval $(\Theta(n^{-1}(\log n)^{7/8}), \Theta(1/\log n))$.
\end{corollary}
\begin{proof}
Observe that $\mathcal{R}\backslash\tilde{\Omega}_{\text{tail}}(\xi,\rho) \subseteq \mathcal{R}\backslash\Omega_{\text{tail}}(\xi,\rho)$ by our definition of $\mathcal{R}$ and the boundedness assumption on the entries of $\mathbf{M}$. Again, invoking Lemma~\ref{Lemma_reduction_to_q} with inequality \eqref{Eqn_Lemma_probable_happiness_majority_main_bound} from Lemma~\ref{Lemma_probable_happiness_majority} yields the $\Theta(e^{-n^c})$ upper bound on the probability that some stable (partial) matching $\mu$ has $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\tilde{\Omega}_{\text{tail}}(\xi,\rho)$.
\end{proof}
\begin{remark}
Recall that in a market with uniform preferences, the man-optimal (and woman-pessimal) stable matching realizes an average rank of $\Theta(\log n)$ for men and $\Theta(n/\log n)$ for women. Under the heuristic that values multiplied by $n$ roughly correspond to ranks (which we formalize below), Lemma~\ref{Lemma_probable_happiness_majority} nicely matches our expectation that, even in the most extreme cases, few individuals achieve a rank better (smaller) than $\Theta((\log n)^{7/8})$ or worse (larger) than $\Theta(n/\log n)$. The lower bound can be refined to $\Theta(\log n/\log \log n)$ with a more careful analysis.
\end{remark}
Now let us consider a specific partial matching $\mu'$ of size $n-\floor{\delta n}$ between $\mathcal{M}'$ and $\mathcal{W}'$ and condition on $\mu'$ being stable with value vectors $(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})=(\mathbf{x},\mathbf{y})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)$. That is, there exists a subset $\bar{\mathcal{M}}'\subseteq\mathcal{M}'$ with $|\bar{\mathcal{M}}'|\ge (1-\delta-\rho)n$ such that $\Theta(n^{-1}(\log n)^{7/8}) \le (X_{\mathcal{M}'})_i \le \Theta(1/\log n)$ for all $i\in\bar{\mathcal{M}}'$. By symmetry, we may further assume that there exists $\bar{\mathcal{W}}'\subseteq\mathcal{W}'$ with $|\bar{\mathcal{W}}'|\ge (1-\delta-\rho)n$ such that $\Theta(n^{-1}(\log n)^{7/8}) \le (Y_{\mathcal{W}'})_j \le \Theta(1/\log n)$ for all $j\in\bar{\mathcal{W}}'$. We want to show that, for $i\in\bar{\mathcal{M}}'$, the \emph{pre-truncation} rank $R_i$ of man $m_i$ (i.e., over the entire market, including the $\floor{\delta n}$ women outside $\mathcal{W}'$) is well characterized by his value $X_{i,\mu'(i)}$ in the matching, up to some proper scaling. From now on, we will consider some $i\in\bar{\mathcal{M}}'$ with value $X_{i,\mu'(i)}=x_i$, and write
\begin{equation}\label{Eqn_def_eqn_rank_i}
R_i = 1 + \sum_{j\ne \mu'(i)}\mathbbm{1}_{[0,x_i]}(X_{ij}).
\end{equation}
The condition that $\mu'$ is stable requires $(X_{ij}, Y_{ji}) \notin [0, x_i]\times[0,y_j]$ for all $j\in\mathcal{W}'\backslash\{\mu'(i)\}$. Thus, for all $j\in\mathcal{W}'\backslash\{\mu'(i)\}$,
\begin{multline}
\ensuremath{\mathbb{P}}(X_{ij} \le x_i | \mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'})_i=x_i, (\mathbf{Y}_{\mathcal{W}'})_j=y_j) = \frac{\ensuremath{\mathbb{P}}(X_{ij} \le x_i,Y_{ji} > y_j)}{1 - \ensuremath{\mathbb{P}}(X_{ij} \le x_i,Y_{ji} \le y_j)} \\
= \frac{(1-e^{-a_{ij}x_i})e^{-b_{ji}y_j}}{1-(1-e^{-a_{ij}x_i})(1-e^{-b_{ji}y_j})},
\end{multline}
and for all $j\in\mathcal{W}\backslash\mathcal{W}'$ (so $(\mathbf{Y}_{\mathcal{W}'})_j=0$),
\begin{equation}
\ensuremath{\mathbb{P}}(X_{ij} \le x_i | \mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'})_i=x_i) = 1-e^{-a_{ij}x_i}.
\end{equation}
Define
\begin{equation*}
p_{ij} = \begin{cases}
1 & \quad \text{ when }j=\mu'(i), \\
\frac{(1-e^{-a_{ij}x_i})e^{-b_{ji}y_j}}{1-(1-e^{-a_{ij}x_i})(1-e^{-b_{ji}y_j})} & \quad \text{ when } j\in\mathcal{W}'\backslash\{\mu'(i)\}, \\
1-e^{-a_{ij}x_i} & \quad\text{ when } j\in\mathcal{W}\backslash\mathcal{W}',
\end{cases}
\end{equation*}
and $I_{ij}\sim \Bern(p_{ij})$ independently for $j\in[n]$, so that $R_i = \sum_{j=1}^n I_{ij}$ conditional on $(\mathbf{X}_{\mathcal{M}'})_i = X_i=x_i$.
Note that for any $j\ne \mu'(i)$, we always have
\begin{equation}\label{Eqn_relate_rank_to_happi_p_ij_upperbound}
p_{ij} \le 1-e^{-a_{ij}x_i} \le a_{ij}x_i
\end{equation}
and, for any $j\in\bar{\mathcal{W}}'\setminus\{\mu'(i)\}$ or $j\in\mathcal{W}\backslash\mathcal{W}'$ (so that $b_{ji}y_j \le \Theta(1/\log n)$),
\begin{multline}\label{Eqn_relate_rank_to_happi_p_ij_lowerbound}
p_{ij} \ge \frac{(1-e^{-a_{ij}x_i})e^{-b_{ji}y_j}}{1-(1-e^{-a_{ij}x_i})(1-e^{-b_{ji}y_j})} \ge (1-e^{-a_{ij}x_i})e^{-b_{ji}y_j} \\
\ge e^{-\Theta(\frac{1}{\log n})}\bigg(1 - \Theta\Big(\frac{1}{\log n}\Big)\bigg) a_{ij}x_i = (1-o(1))a_{ij}x_i.
\end{multline}
For the remaining $j \in \mathcal{W}'\backslash(\bar{\mathcal{W}}'\cup\{\mu'(i)\})$, $p_{ij}$ admits the same upper bound \eqref{Eqn_relate_rank_to_happi_p_ij_upperbound} but only the trivial lower bound of zero.
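As a numerical aside (illustrative only, with hypothetical values of $a_{ij}$ and $b_{ji}$), one can confirm how quickly $p_{ij}$ approaches its linear approximation $a_{ij}x_i$ as the values shrink, in line with \eqref{Eqn_relate_rank_to_happi_p_ij_upperbound} and \eqref{Eqn_relate_rank_to_happi_p_ij_lowerbound}:
\begin{verbatim}
# Illustrative check: (1-e^{-ax}) e^{-by} / (1-(1-e^{-ax})(1-e^{-by}))
# lies in [(1-o(1)) a x, a x] as x, y -> 0.
import math

def p_ij(a, x, b, y):
    u, v = 1 - math.exp(-a * x), 1 - math.exp(-b * y)
    return u * (1 - v) / (1 - u * v)

a, b = 1.3, 0.9                          # hypothetical bounded fitness scores
for s in [1e-1, 1e-2, 1e-3]:             # s plays the role of 1/log n
    p = p_ij(a, s, b, s)
    assert p <= a * s
    print(f"s={s:.0e}  p/(a*x) = {p / (a * s):.6f}")
\end{verbatim}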
Hence, conditional on
\begin{equation}\label{Eqn_condition_stable_with_vx_vy}
\mu'\text{ stable and }(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})=(\mathbf{x},\mathbf{y})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)\cap \tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}) \cap\mathcal{R} \tag{$\dagger$}
\end{equation}
for any (fixed) $\xi,\rho,\ensuremath{\epsilon}> 0$,
we have the stochastic dominance
\begin{equation}
1 + \sum_{j\in \bar{\mathcal{W}}'\setminus\{\mu'(i)\}} \underline{I}_{ij} \preceq R_i \preceq 1 + \sum_{j\ne \mu'(i)} \overline{I}_{ij},
\end{equation}
where $\underline{I}_{ij}\sim\Bern((1-o(1))a_{ij}x_i)$ and $\overline{I}_{ij}\sim\Bern(a_{ij}x_i)$. Since $i\in\bar{\mathcal{M}}'$ by our assumption and thus $\Theta((\log n)^{7/8}/n)\le x_i \le \Theta(1/\log n)$, the expectation of $R_i/x_i$ can be upper bounded by
\begin{equation}
\ensuremath{\mathbb{E}}\bigg[\frac{R_i}{x_i} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg] \le \frac{1}{x_i} + \sum_{j\ne\mu'(i)}a_{ij} = (1+o(1))\sum_{j=1}^n a_{ij}
\end{equation}
and lower bounded by
\begin{equation}
\ensuremath{\mathbb{E}}\bigg[\frac{R_i}{x_i} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg] \ge (1-o(1))\sum_{j\in\bar{\mathcal{W}}'\setminus\{\mu'(i)\}}a_{ij} \ge (1-\Theta(\delta))\sum_{j=1}^n a_{ij}.
\end{equation}
Similarly, we may bound the variance of $R_i/x_i$ by
\begin{equation}
{\rm Var}\bigg(\frac{R_i}{x_i} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg) \le \sum_{j\ne\mu'(i)}a_{ij} (1-a_{ij}x_i) \le \sum_{j=1}^n a_{ij}.
\end{equation}
Hence, we have
\begin{equation}\label{Eqn_final_E_Var_bound_for_rank_happiness_ratio}
1-\Theta(\delta) \le \ensuremath{\mathbb{E}}\bigg[\frac{R_i}{x_i \sum_{j=1}^n a_{ij}} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg] \le 1+o(1) \enspace \text{ and } \enspace {\rm Var}\bigg(\frac{R_i}{x_i \sum_{j=1}^n a_{ij}} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg) \le \Theta(n^{-1}),
\end{equation}
with these quantities conditionally independent for all $i\in\bar{\mathcal{M}}'$ and the hidden constants depending only on $C$, implying concentration of $R_i$ around $x_i\sum_{j=1}^n a_{ij}$ in the following sense.
\begin{proposition}\label{Prop_most_have_good_rank_happi_ratio}
Conditional on \eqref{Eqn_condition_stable_with_vx_vy}, for any fixed $\theta > 0$ and $\delta,\rho,\gamma\in(0,1/2)$, we have
\begin{equation}
\ensuremath{\mathbb{P}}\left(\sum_{i=1}^n \mathbbm{1}_{(\theta + \Theta(\delta), \infty)}\bigg(\bigg|\frac{R_i}{x_i\sum_{j=1}^n a_{ij}} - 1\bigg|\bigg) \ge (\delta+\rho+\gamma)n \middle| \eqref{Eqn_condition_stable_with_vx_vy}\right)
\lesssim \ensuremath{\mathbb{P}}_{N\sim\Poi(\Theta(\theta^{-2}))}(N\ge \gamma n)
\le e^{-\Theta(\gamma n)}.
\end{equation}
\end{proposition}
\begin{proof}
By Chebyshev's inequality and \eqref{Eqn_final_E_Var_bound_for_rank_happiness_ratio}, $\ensuremath{\mathbb{P}}\big(\big|\frac{R_i}{x_i\sum_{j=1}^n a_{ij}} - \ensuremath{\mathbb{E}}\big[\frac{R_i}{x_i \sum_{j=1}^n a_{ij}} \big| \eqref{Eqn_condition_stable_with_vx_vy}\big]\big| \ge \theta \big| \eqref{Eqn_condition_stable_with_vx_vy}\big) \le \Theta\big((n\theta^2)^{-1}\big)$ for all $i\in\bar{\mathcal{M}}'$. Hence, by conditional independence of the ranks, $\sum_{i\in\bar{\mathcal{M}}'} \mathbbm{1}_{(\theta + \Theta(\delta), \infty)}\big(\big|\frac{R_i}{x_i\sum_{j=1}^n a_{ij}} - 1\big|\big)$ is stochastically dominated by $\Binom\big(n, \Theta\big((n\theta^2)^{-1}\big)\big)$, which converges to $\Poi(\Theta(\theta^{-2}))$ in total variation. The proposition follows from the well-known tail bound for $N\sim\Poi(\lambda)$ that $\ensuremath{\mathbb{P}}(N\ge \lambda + t) \le \exp\big(-\frac{t^2}{2(\lambda+t)}\big)$, which implies $\ensuremath{\mathbb{P}}_{N\sim\Poi(\Theta(\theta^{-2}))}(N\ge \gamma n) \le \exp\big(-\frac{(\gamma n - \Theta(\theta^{-2}))^2}{2\gamma n}\big) \lesssim \exp(-\gamma n/2)$.
\end{proof}
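The Poisson tail bound invoked in the last step can be verified directly; the sketch below uses arbitrary illustrative parameters.
\begin{verbatim}
# Illustrative check: P(N >= lam + t) <= exp(-t^2 / (2 (lam + t)))
# for N ~ Poi(lam).
import math
from scipy.stats import poisson

lam = 4.0
for t in [5.0, 20.0, 100.0]:
    tail = poisson.sf(lam + t - 1, lam)  # P(N >= lam + t), integer cutoff
    bound = math.exp(-t * t / (2 * (lam + t)))
    print(f"t={t:6.1f}  tail={tail:.3e}  bound={bound:.3e}")
    assert tail <= bound
\end{verbatim}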
\begin{proof}[Proof of Theorem~\ref{Thm_main_rank_dist_body}]
Note that in Corollary~\ref{Cor_no_stable_match_tilde_Otailab}, $\rho$ can be chosen arbitrarily small once $\delta$ is fixed. In particular, we may always guarantee $\rho \le \delta$. Similarly, we may assume $\theta \le \delta$ in Proposition~\ref{Prop_most_have_good_rank_happi_ratio}. Thus,
\begin{equation}
\ensuremath{\mathbb{P}}\left(\sum_{i=1}^n \mathbbm{1}_{(\Theta(\delta), \infty)}\bigg(\bigg|\frac{R_i}{x_i w_i} - 1\bigg|\bigg) \ge (2\delta+\gamma)n \middle| \eqref{Eqn_condition_stable_with_vx_vy}\right)
\le e^{-\Theta(\gamma n)},
\end{equation}
where $w_i = \sum_{j=1}^n a_{ij}$ is the fitness value of man $m_i$.
Marginalizing over all pairs of relevant value vectors $(\mathbf{x},\mathbf{y})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)\cap\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})\cap\mathcal{R}$ in the condition \eqref{Eqn_condition_stable_with_vx_vy}, we obtain
\begin{equation}
\ensuremath{\mathbb{P}}\left(\mathcal{E}_{\text{ratio}}(\delta,\gamma) \middle| (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)\cap\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})\cap\mathcal{R}\right)
\le e^{-\Theta(\gamma n)},
\end{equation}
where $\mathcal{E}_{\text{ratio}}(\delta,\gamma)$ denotes the undesirable event that $\sum_{i=1}^n \mathbbm{1}_{(\Theta(\delta), \infty)}\big(\big|\frac{R_i}{(\mathbf{X}_{\mathcal{M}'})_i w_i} - 1\big|\big) \ge (2\delta+\gamma)n$ for the partial matching $\mu'$.
By Proposition~\ref{Prop_EqXY_bound},
\begin{multline}
\ensuremath{\mathbb{P}}(\mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)\cap\tilde{\Omega}_{\text{emp}}(\eps)\cap\mathcal{R}) \le \ensuremath{\mathbb{P}}(\mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\mathcal{R}) \\
\le e^{o(n)+o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i}.
\end{multline}
By choosing $\gamma=\gamma(\delta)=o_\delta(1)$ sufficiently large relative to $\delta$ and following a similar computation as in Lemma~\ref{Lemma_reduction_to_q} and Corollary~\ref{Cor_subexp_num_stable_match}, we can ensure that with probability $1-\Theta(e^{-n^c})$ there exists no stable partial matching $\mu'$ for which both $\mathcal{E}_{\text{ratio}}(\delta,\gamma)$ and $(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in \tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}_0(\delta))\cap\mathcal{R}$ occur, where the function $\ensuremath{\epsilon}_0$ is defined in the proof of Theorem~\ref{Thm_main_happiness_dist_body}. Notice that by repeated uses of the triangle inequality,
\begin{equation}
(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in \tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}_0(\delta)), \mathcal{E}_{\text{ratio}}(\delta,\gamma)^c \enspace \Longrightarrow \enspace \|\hat{\mathcal{F}}(\mathbf{w}^{-1}\circ\mathbf{R}(\mu'))-F_\lambda\|_\infty \le \Theta(\delta) + \gamma(\delta) + \ensuremath{\epsilon}_0(\delta) = o_\delta(1)
\end{equation}
for the choice of $\lambda = \|\mathbf{Y}_{\mathcal{W}'}\|_1$. Combining this with \eqref{Eqn_proof_Thm_main_happiness_dist_tOemp} and Proposition~\ref{Prop_R_likely}, we conclude that with probability $1-\Theta(e^{-n^c})$, all stable matchings $\mu\in\mathcal{S}$ induce $\delta$-truncated stable partial matchings $\mu_\delta$ with $\|\hat{\mathcal{F}}(\mathbf{w}^{-1}\circ\mathbf{R}(\mu_\delta))-F_{\lambda(\mu)}\|_\infty = o_\delta(1)$, where $\lambda(\mu)=\|\mathbf{Y}_{\delta}(\mu)\|_1$. The $\delta$-truncation affects the distance by at most $\delta$, which can be absorbed into the $o_\delta(1)$ upper bound. Thus, by choosing $\delta$ sufficiently small relative to any fixed $\ensuremath{\epsilon} > 0$, we complete our proof of Theorem~\ref{Thm_main_rank_dist_body}.
\end{proof}
\subsection{Proofs of Theorem~\ref{Thm_dist_body_approx_stable} and Corollary~\ref{Cor_body_imbalance}}
\ThmMainApproxStable*
\begin{proof}
There are $\binom{n}{\alpha n}^2 = \exp(2 h_b(\alpha) n + O(\ln n))$ sub-markets of size at least $(1-\alpha) n$, where $h_b(p) = -p \log p - (1-p) \log (1-p)$ is the binary entropy function. Under Assumption~\ref{Assumption_C_bounded} for the whole market, each such sub-market also satisfies Assumption~\ref{Assumption_C_bounded}. Fix any $\ensuremath{\epsilon} > 0$. By Theorem~\ref{Thm_main_happiness_dist_body}, each such sub-market contains a stable matching whose men's empirical distribution of value deviates from the family of exponential distributions by at least $\ensuremath{\epsilon}/2$ in Kolmogorov--Smirnov distance with probability at most $\exp(-n^c)$ for any fixed $c\in(0,1/2)$. Whenever $\alpha < n^{-\eta}$ for some $\eta > 1/2$, we have $h_b(\alpha) n = O(n^{1-\eta}\ln n)$, which is $o(n^c)$ for any fixed $c > 1-\eta$. Choosing $c \in (1-\eta, 1/2)$ and applying a union bound over all relevant sub-markets gives the first part of \eqref{Eqn_happiness_dist_approx_stable}, since the additional $\alpha$ fraction of the market affects the empirical distribution by at most $\alpha = o(1)$. The second part follows analogously from Theorem~\ref{Thm_main_rank_dist_body}.
\end{proof}
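The sub-market count driving the union bound can be sanity-checked against Stirling's formula; the values of $n$ and $\alpha$ below are arbitrary illustrative choices.
\begin{verbatim}
# Illustrative check: log binom(n, alpha n)^2 ~ 2 h_b(alpha) n.
import math

def h_b(p):
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

n, alpha = 100_000, 0.01
k = int(alpha * n)
log_count = 2 * (math.lgamma(n + 1) - math.lgamma(k + 1)
                 - math.lgamma(n - k + 1))
print(f"log binom^2 = {log_count:.1f}   "
      f"2 h_b(alpha) n = {2 * h_b(alpha) * n:.1f}")
\end{verbatim}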
\CorImbalanceMarket*
\begin{proof}
The proof is entirely analogous to that of Theorem~\ref{Thm_dist_body_approx_stable}. The union bound covers all sub-markets of size $n-k$, that is, those consisting of all the men and a subset of the women. The rest is the same.
\end{proof}
\section{Proofs of typical behaviors of scores in stable matching}\label{Append_weak_regular_scores}
Recall that for a matching $\mu$ with (latent) value vectors $\mathbf{X}(\mu)$ and $\mathbf{Y}(\mu)$, we define $\mathbf{U}(\mu) = F(\hat{\mathbf{X}}(\mu))$ and $\mathbf{V}(\mu) = F(\hat{\mathbf{Y}}(\mu))$ with the standard exponential CDF $F(z) = 1-e^{-z}$ applied component-wise to the renormalized values $\hat{X}_i(\mu) = X_i(\mu) a_{i,\mu(i)}$ and $\hat{Y}_i(\mu) = Y_i(\mu) b_{i,\mu^{-1}(i)}$ for $i\in[n]$, so that $U_i,V_i\sim\Unif([0,1])$ and are mutually independent due to the way the score matrices are generated.
\begin{lemma}\label{Lemma_DA_prop_num}
For any $c \in (0,1/2)$, there exists a constant $\theta_1> 0$ (depending on $c$ and $C$) such that in a random instance of the market, at least $\theta_1 n\ln n$ proposals are made during man-proposing deferred acceptance with probability $1 - \exp(-n^c)$.
\end{lemma}
\begin{proof}
This result follows from a standard analysis of the deferred acceptance algorithm executed as in \cite[Section~3]{ashlagi2020tiered}. In \cite{ashlagi2020tiered}, the preference model involves tiers, where fitness values among different tiers differ by at most a constant factor. It turns out that this bounded ratio of fitness is the only thing used in the proofs, and is also satisfied by our matching market under Assumption~\ref{Assumption_C_bounded}. The main steps of analysis are as follows.
\begin{enumerate}
\item Consider (man-proposing) deferred acceptance \emph{with re-proposals}, where each time man $i$ proposes, his proposal goes to woman $j$ with probability proportional to $a_{ij}$, independent of all previous proposals (and their acceptance/rejection). The total number $T$ of proposals in this process is equal in distribution to the number of draws in the coupon collector problem, and from standard concentration bounds we can show $\ensuremath{\mathbb{P}}(T \ge k n^{1+c}) \le e^{-n^c-2}$ for $k$ sufficiently large and $\ensuremath{\mathbb{P}}(T < \alpha n\ln n) \le \exp(-n^c-2)$ for $\alpha>0$ sufficiently small (see also \cite{doerr2020probabilistic}). The details mirror Appendices~A and~B in \cite{ashlagi2020tiered}.
\item Analogous to Lemmas~3.5 and 3.6 in \cite{ashlagi2020tiered}, we can show that, with probability $1-\exp(-n^c-1)$, no single man makes more than $\ell n^{2c}$ proposals during deferred acceptance with re-proposal for $\ell$ sufficiently large.
\item Conditional on $T\ge \alpha n\ln n$ and the maximum number of proposals made by any single man being at most $\ell n^{2c}$, the fraction of re-proposals is at most $C\ell n^{2c-1}$ in expectation, since each proposal duplicates a previous proposal independently with probability at most $\frac{C\ell n^{2c}}{n}$. It follows immediately from a binomial concentration bound that the (conditional) probability that the number of repeated proposals exceeds $T/2$ is exponentially small. Hence, the actual number of proposals during deferred acceptance (without re-proposals) is at least $\frac{\alpha}{2} n\ln n$ with probability $1-\exp(-n^c)$.
\end{enumerate}%
\end{proof}
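Step 1 reduces the proposal count to a weighted coupon-collector process. The following simulation (ours, with a hypothetical weight profile obeying the bounded-ratio property of Assumption~\ref{Assumption_C_bounded}) illustrates the $\Theta(n\ln n)$ behavior; proposals are drawn in batches of $n$ for speed, so $T$ is measured up to batching granularity.
\begin{verbatim}
# Illustrative simulation of the weighted coupon collector behind step 1.
import numpy as np

rng = np.random.default_rng(1)
n, C = 1000, 2.0
a = rng.uniform(1 / C, C, size=n)       # hypothetical proposal weights
p = a / a.sum()
for trial in range(3):
    seen, T = np.zeros(n, bool), 0
    while not seen.all():
        seen[rng.choice(n, size=n, p=p)] = True   # batch of n proposals
        T += n
    print(f"T~{T}   n ln n = {n * np.log(n):.0f}   "
          f"ratio = {T / (n * np.log(n)):.2f}")
\end{verbatim}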
\begin{lemma}\label{Lemma_U1_ge_ln_n}
For any $c \in (0,1/2)$, there exists a constant $\theta_2> 0$ (depending on $c$ and $C$) such that,
with probability $1-\exp(-n^c)$,
there exist no stable matchings $\mu$ where $\|\mathbf{U}(\mu)\|_1 \le \theta_2\ln n$.
\end{lemma}
\begin{proof}
Note that $U_i(\mu) = 1-e^{-a_{i,\mu(i)} X_i(\mu)} \in [C^{-1}F(X_i(\mu)),C F(X_i(\mu))]$ since $a_{i,\mu(i)}\in[1/C, C]$. Moreover, for any stable matching $\mu$ we have $\mathbf{X}(\mu) \succeq \mathbf{X}(\mu_\text{MOSM})$ component-wise, where $\mu_\text{MOSM}$ denotes the man-optimal stable matching produced by man-proposing deferred acceptance, in which every man is simultaneously matched with his best achievable stable partner (and hence attains his best value). Therefore,
\begin{equation}
\|\mathbf{U}(\mu)\|_1 \ge \frac{1}{C} \|F(\mathbf{X}(\mu))\|_1 \ge \frac{1}{C} \|F(\mathbf{X}(\mu_\text{MOSM}))\|_1 \ge \frac{1}{C^2} \|\mathbf{U}(\mu_\text{MOSM})\|_1.
\end{equation}
Thus, it suffices to consider the event where $\|\mathbf{U}(\mu_\text{MOSM})\|_1 \le \theta_2 C^2 \ln n$.
Without loss of generality, we may assume that $\mu_\text{MOSM}$ matches man $i$ with woman $i$ for $i\in[n]$. Denote by $R_i\in[n]$ and $X_i=X_{ii}\in\ensuremath{\mathbb{R}}_+$ the (random) rank of partner and the latent value in $\mu_\text{MOSM}$ for man $i\in[n]$, and let $U_i = F(a_{ii}X_i)$. By definition, $X_i\sim\Exp(a_{ii})$ and $R_i = \sum_{j\ne i} \mathbbm{1}\{X_{ij}/a_{ij} < X_i\}$.
We condition on a specific execution of the man-proposing deferred acceptance algorithm, i.e., on the sequence of proposals, which specifies the rank $R_i$ and an ordering over his top $R_i$ most preferred partners for each man. Notice that the specific values $X_{ij}$ affect the execution only through the ordering of proposals; hence, conditional on a particular ordering, the values of the men are independent. Further, the value $X_i$ conditional on $R_i$ and an ordering $w_{j_1} \succeq_{m_i} w_{j_2} \succeq_{m_i} \cdots \succeq_{m_i} w_{j_{R_i}}$ is equal in distribution to $\tilde{X}_{(R_i)}$, i.e., the $R_i$-th order statistic of $(\tilde{X}_1,\ldots,\tilde{X}_n)$ with $\tilde{X}_j \sim \Exp(a_{ij})$, conditional on $\tilde{X}_{(k)} = \tilde{X}_{j_k}$ for all $k\in[R_i]$. By the representation of exponential order statistics given in \cite{nevzorov1986representations}, under such conditioning,
\begin{equation}
\tilde{X}_{(R_i)} \overset{d}{=} \sum_{t=1}^{R_i} \frac{Z_{i,t}}{\sum_{k=t}^n a_{i,j_k}}, %
\end{equation}
where $Z_{i,t}\sim\Exp(1)$ are independently sampled for $t\in[n]$. Conditional on $R_i$ and the sequence $j_1,\ldots,j_{R_i}$, we have
\begin{multline}
U_i = F(a_{ii} X_i) \overset{d}{=} F\bigg(\sum_{t=1}^{R_i} \frac{a_{ii} Z_{i,t}}{\sum_{k=t}^n a_{i,j_k}} \bigg) \ge F\bigg( \frac{1}{n}\sum_{t=1}^{R_i} \frac{Z_{i,t}}{C^2} \bigg) \\
\ge \frac{1}{n}\sum_{t=1}^{R_i}F(Z_{i,t}/C^2) \ge \frac{1}{C^2 n} \sum_{t=1}^{R_i} F(Z_{i,t}) = \frac{1}{C^2 n} \sum_{t=1}^{R_i} W_{i,t},
\end{multline}
where the second-to-last inequality follows from Jensen's inequality (applied to the concave function $F$), and $W_{i,t}=F(Z_{i,t}) \sim \Unif([0,1])$ independently.
Thus, conditional on $\mathbf{R}$, we have $C^2 n\|\mathbf{U}\|_1 \succeq \sum_{i=1}^n \sum_{t=1}^{R_i} W_{i,t}$, independent of the specific ordering (and the identity) of the proposals made. Therefore, we may marginalize over this ordering to get
\begin{equation}
\ensuremath{\mathbb{P}}\bigg(\|\mathbf{U}\|_1 \le \theta_2 C^2 \ln n\bigg| \mathbf{R}\bigg) \le \ensuremath{\mathbb{P}}\left( \sum_{i=1}^n \sum_{t=1}^{R_i} W_{i,t} \le \theta_2 C^4 n \ln n \middle| \mathbf{R}\right).
\end{equation}
Whenever $\|\mathbf{R}\|_1 \ge \theta_1 n \ln n$, the probability above is at most $\exp(-\Theta(n\ln n)) \ll \exp(-n^c)$ by Hoeffding's inequality, provided that we choose $\theta_2 < \frac{\theta_1}{2C^4}$,
and our proof is complete as we marginalize over all possible realizations of $\mathbf{R}$ with $\|\mathbf{R}\|_1\ge \theta_1 n\ln n$.
\end{proof}
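The representation of exponential order statistics used above is easy to verify by Monte Carlo in the i.i.d.\ case; the heterogeneous version cited from \cite{nevzorov1986representations} is analogous. Sample sizes below are arbitrary.
\begin{verbatim}
# Illustrative check of the Renyi representation: for iid Exp(1),
# X_(r) =_d sum_{t=1}^r Z_t / (n - t + 1) with Z_t iid Exp(1).
import numpy as np

rng = np.random.default_rng(2)
n, r, trials = 50, 10, 100_000
direct = np.sort(rng.exponential(size=(trials, n)), axis=1)[:, r - 1]
Z = rng.exponential(size=(trials, r))
renyi = (Z / (n - np.arange(r))).sum(axis=1)
print(f"direct: mean={direct.mean():.4f}  var={direct.var():.4f}")
print(f"renyi : mean={renyi.mean():.4f}  var={renyi.var():.4f}")
\end{verbatim}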
\begin{lemma}\label{Lemma_U1V1_le_O_n_ln}
For any $\kappa \ge 0$ and $\theta_3 > 0$, the expected number of stable matchings $\mu$ with $\|\mathbf{U}(\mu)\|_1 \|\mathbf{V}(\mu)\|_1 \ge \theta_3 n(\ln n)^{1/8}$ is upper bounded by $\exp(-\kappa n)$.
In particular, with high probability, no such stable matching exists.
\end{lemma}
\begin{proof}
It suffices to show that the probability that any fixed matching $\mu$ is stable and satisfies $\|\mathbf{U}(\mu)\|_1 \|\mathbf{V}(\mu)\|_1 \ge \theta_3 n(\ln n)^{1/8}$ is upper bounded by $o(e^{-\kappa n}/n!)$ for any $\theta_3>0$. Write $I=[0,1]$ for the unit interval. Let
\begin{equation*}
\Omega = \left\{(\mathbf{u},\mathbf{v})\in I^n\times I^n : \|\mathbf{u}\|_1\|\mathbf{v}\|_1 > \theta_3 n(\ln n)^{1/8}\right\}
\end{equation*}
and let
\begin{equation}
P := \ensuremath{\mathbb{P}}(\mu\in\mathcal{S}, (\mathbf{U}(\mu),\mathbf{V}(\mu))\in \Omega) = \int_{\ensuremath{\mathbb{R}}^n_+\times \ensuremath{\mathbb{R}}^n_+} p(\vx,\vy) \cdot \mathbbm{1}_{\Omega}(F(\hat{\mathbf{x}}), F(\hat{\mathbf{y}})) \cdot \prod_{i=1}^n f(\hat{x}_i)f(\hat{y}_i) \ensuremath{\,d}\hat{\mathbf{x}} \ensuremath{\,d}\hat{\mathbf{y}},
\end{equation}
where $f(t)=e^{-t}$ denotes the standard exponential density function. We apply the simple bound \eqref{Eqn_naive_bound_pxy} on $p(\vx,\vy)$ to obtain
\begin{align}
P &\le \int_{\ensuremath{\mathbb{R}}^n_+\times \ensuremath{\mathbb{R}}^n_+} \prod_{i\ne j} \Big(1 - \big(1-e^{-\hat{x}_i/C^2}\big)\big(1-e^{-\hat{y}_j/C^2}\big) \Big) \cdot \mathbbm{1}_{\Omega}(F(\hat{\mathbf{x}}), F(\hat{\mathbf{y}})) \cdot \prod_{i=1}^n f(\hat{x}_i)f(\hat{y}_i) \ensuremath{\,d}\hat{\mathbf{x}} \ensuremath{\,d}\hat{\mathbf{y}} \nonumber\\
&= \int_{I^n\times I^n} \prod_{i\ne j} \Big(1 - \big(1-(1-u_i)^{1/C^2}\big)\big(1-(1-v_j)^{1/C^2}\big) \Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}, \mathbf{v}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le \int_{I^n\times I^n} \prod_{i\ne j} \Big(1 - \frac{1}{C^4} u_iv_j \Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}, \mathbf{v}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le \int_{I^n\times I^n} \exp\Big(-\frac{1}{C^4}(\|\mathbf{u}\|_1\|\mathbf{v}\|_1 - \mathbf{u}\cdot\mathbf{v})\Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}, \mathbf{v}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le \int_{I^n\times I^n} \exp\Big(\frac{1}{C^4}(n - \|\mathbf{u}\|_1\|\mathbf{v}\|_1)\Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}, \mathbf{v}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v}\label{Eqn_Lemma_weak_reg_U_apply_naive_bound},
\end{align}
where we use the basic facts that $1-z^{1/C^2}\ge (1-z)/C^2$ for all $z\in[0,1]$ and $C\ge 1$ and that $1+z\le e^z$ for all $z\in\ensuremath{\mathbb{R}}$.
Let $s = \|\mathbf{u}\|_1$ and $t = \|\mathbf{v}\|_1$. It is well known (e.g., \cite[Ch.~I,~Sec.~9]{feller1971introduction}) that the probability density of $S=\|\mathbf{U}\|_1$ with $U_1,\ldots,U_n$ independent samples from $\Unif(I)$ is bounded above by $\frac{s^{n-1}}{(n-1)!}$. Hence,
\begin{equation}
P \le \int_{s,t\in[0,n]\,:\, st\ge \theta_3 n(\ln n)^{1/8}} e^{(n-st)/C^4} \cdot \frac{n^2(st)^{n-1}}{(n!)^2} \ensuremath{\,d} s \ensuremath{\,d} t.
\end{equation}
Note that when $st \ge n (\ln n)^2$, we have $\exp\big((n-st)/C^4\big) \le \exp\big((n-n(\ln n)^2)/C^4) = o(\exp(-\kappa n))/n!$ and therefore the region $\{s,t\in[0,n]\,:\, st\ge n(\ln n)^2\}$ contributes a negligible amount to the integral. Hence,
\begin{align}\label{Eqn_Lemma_weak_reg_U_p2_final_bound}
P &\le \frac{o(e^{-\kappa n})}{n!} + \int_{s,t\in[0,n]\,:\, \theta_3 n(\ln n)^{1/8} \le st\le n(\ln n)^2} e^{(n-st)/C^4} \cdot \frac{n^2(st)^{n-1}}{(n!)^2} \ensuremath{\,d} s \ensuremath{\,d} t \nonumber\\
&\labelrel\le{Rel_proof_lemma_u1v1_1} \frac{o(e^{-\kappa n})}{n!} + n^2\cdot e^{(n-\theta_3 n(\ln n)^{1/8})/C^4} \cdot \frac{n^2(n(\ln n)^2)^{n-1}}{(n!)^2} \nonumber\\
&\labelrel\le{Rel_proof_lemma_u1v1_2} \frac{o(e^{-\kappa n})}{n!} + \frac{1}{n!} \cdot \exp\Big(\frac{1}{C^4}(n-\theta_3 n(\ln n)^{1/8}) + 3\ln n + 2(n-1)\ln\ln n + n\Big) = \frac{o(e^{-\kappa n})}{n!},
\end{align}
where in step \eqref{Rel_proof_lemma_u1v1_1} we upper bound the integral by the product of the Lebesgue measure of its domain (bounded by $n^2$) and the supremum of its integrand, and in step \eqref{Rel_proof_lemma_u1v1_2} we invoke Stirling's approximation.
\end{proof}
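Both elementary ingredients of the proof, the inequality $1-z^{1/C^2}\ge(1-z)/C^2$ and the Irwin--Hall density bound $s^{n-1}/(n-1)!$, admit quick numerical checks; the parameters below are arbitrary illustrative choices.
\begin{verbatim}
# Illustrative checks of the two elementary facts used above.
import math
import numpy as np

C = 1.7
z = np.linspace(0, 1, 10_001)
assert np.all(1 - z ** (1 / C ** 2) >= (1 - z) / C ** 2 - 1e-12)

rng = np.random.default_rng(3)
n, s, trials = 8, 2.0, 500_000
S = rng.uniform(size=(trials, n)).sum(axis=1)
density = np.mean(np.abs(S - s) < 0.05) / 0.1   # crude density estimate
print(f"density at s={s}: {density:.4f}   "
      f"bound s^(n-1)/(n-1)!: {s ** (n - 1) / math.factorial(n - 1):.4f}")
\end{verbatim}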
\begin{corollary}\label{Cor_V1_le_n_over_ln}
For any constant $c\in(0,1/2)$,
there exists a constant $\theta_4> 0$ such that,
with probability $1-\exp(-n^c)$,
there exist no stable matchings $\mu$ with $\|\mathbf{U}(\mu)\|_1 \ge \frac{\theta_4 n}{(\ln n)^{7/8}}$.
\end{corollary}
\begin{proof}
This follows immediately from Lemmas~\ref{Lemma_U1_ge_ln_n} and~\ref{Lemma_U1V1_le_O_n_ln} (together with the counterpart of the former for $\mathbf{V}$, which holds by symmetry).
\end{proof}
\begin{proposition}\label{Prop_characterize_low_prob_events}
Let $\Omega\subset I^n$ be a (sequence of) regions in the $n$-dimensional hypercube. For $k\in\ensuremath{\mathbb{Z}}_+$, define interval $I_k = (2^{-k}n, 2^{-k+1}n]$. If
\begin{equation}\label{Eqn_prop_characterize_low_prob_events_main_assumpt}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in \Omega, \|\mathbf{W}\|_1\in I_k) \le \frac{g(n)}{e^{6n}C^{8n}}
\end{equation}
for some function $g(n)$ and uniformly for all $k\in\ensuremath{\mathbb{Z}}_+$, then the expected number of stable matchings $\mu$ with value vector $\mathbf{X}(\mu)\in\ensuremath{\mathbb{R}}^n$ satisfying $\mathbf{U}(\mu) \in \Omega$ is upper bounded by $g(n)+e^{-\Theta(n^2)}$; %
in particular, with high probability, no such stable matchings exist if $g(n) = o(1)$.
\end{proposition}
\begin{proof}
We focus on a fixed matching $\mu$ and by union bound, it suffices to show that
\begin{equation}
\ensuremath{\mathbb{P}}(\mu\in\mathcal{S}, \mathbf{U}(\mu)\in\Omega) \le \frac{g(n)+ e^{-\Theta(n^2)}}{n!}
\end{equation}
under the condition of \eqref{Eqn_prop_characterize_low_prob_events_main_assumpt}.
The same chain of reasoning as in \eqref{Eqn_Lemma_weak_reg_U_apply_naive_bound} gives
\begin{align}
P := \ensuremath{\mathbb{P}}(\mu\in\mathcal{S}, \mathbf{U}(\mu) \in \Omega) &\le \int_{I^n\times I^n} \exp\Big(\frac{1}{C^4}(n - \|\mathbf{u}\|_1\|\mathbf{v}\|_1)\Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le e^n \int_{I^n\times I^n} \exp( - \|\mathbf{u}\|_1\|\mathbf{v}\|_1/C^4) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v}
\end{align}
since $C\ge 1$.
Observing that $\|\mathbf{U}(\mu)\|_1=0$ and $\|\mathbf{V}(\mu)\|_1=0$ are both probability zero events, we may split the domain of the integral above into sub-regions according to which intervals $\|\mathbf{U}(\mu)\|_1$ and $\|\mathbf{V}(\mu)\|_1$ fall into, and then bound the value of the integral within each sub-region. That is, with the help of the monotone convergence theorem to interchange summation with integral,
\begin{align}
P &\le e^n \sum_{k,\ell=1}^\infty \int_{I^n\times I^n}\exp( - \|\mathbf{u}\|_1\|\mathbf{v}\|_1/C^4) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \mathbbm{1}_{I_k}(\|\mathbf{u}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{v}\|_1) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le e^n \sum_{k,\ell=1}^\infty \int_{I^n\times I^n}\exp( - 2^{-k-\ell}n^2/C^4) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \mathbbm{1}_{I_k}(\|\mathbf{u}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{v}\|_1) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v}.
\end{align}
For all $k\in\ensuremath{\mathbb{Z}}_+$, we have $e^{2n}\exp(-2^k\|\mathbf{u}\|_1)\ge 1$ whenever $\mathbf{u}\in I^n$ with $\|\mathbf{u}\|_1\in I_k$. Thus,
\begin{align}
P &\le e^{5n} \sum_{k,\ell=1}^\infty \int_{I^n\times I^n} \exp( - 2^{-k-l}n^2/C^4) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \mathbbm{1}_{I_k}(\|\mathbf{u}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{v}\|_1) \cdot \exp(-2^k\|\mathbf{u}\|_1 - 2^\ell\|\mathbf{v}\|_1) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&= e^{5n} \sum_{k,\ell=1}^\infty 2^{-(k+\ell)n} \exp( - 2^{-k-\ell}n^2/C^4) \ensuremath{\mathbb{E}}_{\mathbf{U}\sim\Exp(2^k)^{\otimes n},\mathbf{V}\sim\Exp(2^\ell)^{\otimes n}}\left[ \mathbbm{1}_{\Omega}(\mathbf{U})\mathbbm{1}_{I^n}(\mathbf{V})\mathbbm{1}_{I_k}(\|\mathbf{U}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{V}\|_1)\right].
\end{align}
Observe that all the terms with $k+\ell \ge n$ combined contribute at most
\begin{multline}
e^{5n}\sum_{k+\ell \ge n} 2^{-(k+\ell)n} = e^{5n} \left( \sum_{k=1}^{n-1} 2^{-kn} \sum_{\ell=n-k}^\infty 2^{-\ell n} + \sum_{k=n}^\infty 2^{-kn} \sum_{\ell=1}^\infty 2^{-\ell n} \right) \\
= e^{5n}\cdot 2^{-n^2-n} \cdot\frac{n(1-2^{-n})+1}{(1-2^{-n})^2} = \frac{e^{-\Theta(n^2)}}{n!},
\end{multline}
which is negligible. Therefore, we only need to consider $\ensuremath{O}(n^2)$ terms and
\begin{align}
P
&\le \frac{e^{-\Theta(n^2)}}{n!} + n^2 e^{5n} \max_{k,\ell\in\ensuremath{\mathbb{Z}}_+} \frac{\ensuremath{\mathbb{E}}_{\mathbf{U}\sim\Exp(2^k)^{\otimes n},\mathbf{V}\sim\Exp(2^\ell)^{\otimes n}}\left[ \mathbbm{1}_{\Omega}(\mathbf{U}) \mathbbm{1}_{I^n}(\mathbf{V}) \mathbbm{1}_{I_k}(\|\mathbf{U}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{V}\|_1)\right]}{2^{(k+\ell)n} \exp( 2^{-k-\ell}n^2/C^4 )} \nonumber\\
&\le \frac{e^{-\Theta(n^2)}}{n!} + e^{6n} \max_{k,\ell\in\ensuremath{\mathbb{Z}}_+} \frac{\ensuremath{\mathbb{P}}_{\mathbf{U}\sim\Exp(2^k)^{\otimes n}}(\mathbf{U}\in \Omega, \|\mathbf{U}\|_1\in I_k)}{2^{(k+\ell)n} \exp( 2^{-k-\ell}n^2/C^4 )}.
\end{align}
Let $\alpha = 2^{-(k+\ell)}$. When $\alpha\le C^8/n$, the denominator of the second term satisfies $\alpha^{-n}e^{\alpha n^2/C^4} \ge n^n/C^{8n} \ge n!/C^{8n}$; when $\alpha > C^8/n$, let $\tau = n\alpha > C^8$ and we have $\alpha^{-n}e^{\alpha n^2/C^4} = \frac{n^n}{\tau^n} e^{n\tau/C^4} = n^n \exp\big(n(\tau/C^4 - \ln \tau)\big) \ge n^n \exp\big(n(C^4 - 8\ln C)\big) \ge n!$. Thus, the denominator is bounded below by $n!/C^{8n}$ and
\begin{equation}\label{Eqn_proof_X1_O_V1_last_reusable_bound}
P \le \frac{e^{-\Theta(n^2)}}{n!} + \frac{e^{6n} C^{8n}}{n!} \max_{k\in\ensuremath{\mathbb{Z}}_+} \ensuremath{\mathbb{P}}_{\mathbf{U}\sim\Exp(2^k)^{\otimes n}}(\mathbf{U}\in \Omega, \|\mathbf{U}\|_1\in I_k) \le \frac{1}{n!}(e^{-\Theta(n^2)} + g(n)).
\end{equation}
The claim follows immediately.
\end{proof}
\begin{lemma}\label{Lemma_bdd_x_delta_infty_norm}
For any fixed $\delta > 0$ and $\kappa > 0$, there exists a constant $\theta_5$ (depending on $\delta$, $\kappa$, and $C$) such that, with probability $1-\exp(-\kappa n)$,
there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_\infty \ge \theta_5$. (Recall that $\mathbf{X}_\delta(\mu)$ is the value vector of the $(1-\delta)$-partial matching obtained from $\mu$ that excludes the least happy $\delta/2$ fraction of men and women.)
\end{lemma}
\begin{proof}
Since $\hat{\mathbf{X}}$ and $\mathbf{X}$ differ by at most a factor of $C$ component-wise, we have
\begin{equation}\label{Eqn_proof_Lemma_bdd_x_delta_infty_norm_translate_to_U_quantile}
\|\mathbf{X}_\delta(\mu)\|_\infty \le X_{(n-\floor{\delta n/2})}(\mu) \le C \hat{X}_{(n-\floor{\delta n/2})}(\mu) = -C\log\big(1-U_{(n-\floor{\delta n/2})}(\mu)\big).
\end{equation}
Thus, it suffices to bound the upper $\delta/2$ quantile $U_{(n-\floor{\delta n/2})}(\mu)$ away from $1$.
Let $\Omega = \{\mathbf{u}\in I^n:\mathbf{u}_{(n-\floor{\delta n/2})} > 1-e^{-s}\}$ for some $s\ge 1$ that we will specify later. Then $\mathbf{W}\in\Omega$ implies that $\sum_{i=1}^n \mathbbm{1}_{(1-e^{-s},1]}(W_i) \ge \delta n/2$. For $W_i\sim\Exp(2^k)$, we have
\begin{equation*}
\ensuremath{\mathbb{P}}(W_i\in (1-e^{-s},1]) = \int_{1-e^{-s}}^1 2^k e^{-2^k t} dt \le e^{-s}\cdot 2^ke^{-2^{k-1}} \le e^{-s}.
\end{equation*}
Thus, $\sum_{i=1}^n \mathbbm{1}_{(1-e^{-s},1]}(W_i)$ is stochastically dominated by a $\Binom(n, e^{-s})$ random variable, and as a result
\begin{multline*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) \le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega) \\
\le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}\left(\sum_{i=1}^n \mathbbm{1}_{(1-e^{-s},1]}(W_i) \ge \frac{\delta n}{2}\right)
\le \ensuremath{\mathbb{P}}_{Z\sim\Binom(n, e^{-s})}\bigg(Z\ge \frac{\delta n}{2}\bigg) \le \exp(-n D(\delta/2 \| e^{-s})).
\end{multline*}
Since $D(\delta/2 \| z)\to\infty$ as $z\to 0$, it suffices to take $s$ sufficiently large to guarantee $D(\delta/2\|e^{-s}) > \kappa + 6 + 8 \log C$. Proposition~\ref{Prop_characterize_low_prob_events} then guarantees with probability $1-\exp(-\kappa n)$ that no stable matchings $\mu$ have $\mathbf{U}(\mu) \in \Omega$, and as a result of \eqref{Eqn_proof_Lemma_bdd_x_delta_infty_norm_translate_to_U_quantile}, with at least the desired probability, no stable matchings $\mu$ should have $\|\mathbf{X}_\delta(\mu)\|_\infty > \theta_5 := C s$.
\end{proof}
\begin{lemma}\label{Lemma_X1_le_O_U1}
For any $\delta > 0$ and $\kappa > 0$, there exists an absolute constant $\theta_6$ (again depending on $\delta,\kappa$, and $C$) such that, with probability $1-\exp(-\kappa n)$, there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_1 \ge \theta_6 \|\mathbf{U}(\mu)\|_1$.
\end{lemma}
\begin{proof}
Take $\theta_6 = C \sup_{0<x\le C\theta_5} \frac{x}{F(x)} = \frac{C^2\theta_5}{1-e^{-C\theta_5}}$, where $\theta_5$ is the constant in Lemma~\ref{Lemma_bdd_x_delta_infty_norm}. Assume that $\|\mathbf{X}_\delta(\mu)\|_\infty \le \theta_5$ for all stable matchings $\mu$ since the probability otherwise is at most $\exp(-\kappa n)$ as desired. Note that for each $i$ in the support of $\mathbf{X}_\delta(\mu)$ (i.e., $(X_\delta)_i(\mu) > 0$), we have
\begin{equation}
\hat{X}_i(\mu) \le C X_i(\mu) = C (X_\delta)_i(\mu) \le C\theta_5,
\end{equation}
and subsequently
\begin{equation}
U_i(\mu) = F(\hat{X}_i(\mu)) \ge \frac{C\hat{X}_i(\mu)}{\theta_6} \ge \frac{X_i(\mu)}{\theta_6} = \frac{(X_\delta)_i(\mu)}{\theta_6},
\end{equation}
and this final inequality is trivial for any $i$ not in the support of $\mathbf{X}_\delta(\mu)$. The claim then follows immediately.
\end{proof}
\begin{lemma}\label{Lemma_Xdelta2sq_le_O_V1sq}
For any fixed $\delta > 0$ and $\kappa > 0$, there exists an absolute constant $\theta_7$ (depending on $\delta,\kappa$, and $C$) such that, with probability $1-\exp(-\kappa n)$,
there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_2^2 \ge \theta_7 \|\mathbf{U}(\mu)\|_1^2 /n$.
\end{lemma}
\begin{proof}
Taking advantage of Lemma~\ref{Lemma_bdd_x_delta_infty_norm}, let us assume that $X_{(n-\floor{\delta n/2})}(\mu) \le \theta_5$ is satisfied simultaneously by all stable matchings $\mu$ (see the proof, in particular \eqref{Eqn_proof_Lemma_bdd_x_delta_infty_norm_translate_to_U_quantile}, for more details); the event otherwise has probability bounded by $\exp(-\kappa n)$.
Notice that
\begin{equation*}
\|\mathbf{X}_\delta(\mu)\|_2^2 \le \sum_{i=1}^{n-\floor{\delta n/2}} X_{(i)}(\mu)^2 \le C^2 \sum_{i=1}^{n-\floor{\delta n/2}} \hat{X}_{(i)}(\mu)^2 \le \theta_6' \sum_{i=1}^{n-\floor{\delta n/2}} U_{(i)}(\mu)^2,
\end{equation*}
where the $(i)$ subscript denotes the $i$-th (lower) order statistics (and in particular, $\hat{X}_{(i)}(\mu)$ is the $i$-th smallest entry of $\hat{\mathbf{X}}(\mu)$) with $\theta_6' = C^2 \left(\frac{\theta_5}{F(\theta_5)}\right)^2$. Now it suffices to compare $\sum_{i=1}^{n-\floor{\delta n/2}} U_{(i)}(\mu)^2$ with $\|\mathbf{U}(\mu)\|_1^2 /n$.
Consider $\Omega := \{\mathbf{w} \in I^n : \sum_{i=1}^{n-\floor{\delta n/2}} w_{(i)}^2 \ge \gamma \|\mathbf{w}\|_1^2/n\}$ for some $\gamma\in\ensuremath{\mathbb{R}}_+$ to be specified. By Proposition~\ref{Prop_characterize_low_prob_events}, it suffices to show that for some appropriate value of $\gamma$ we have
\begin{equation*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) \le e^{-(\kappa+6)n}C^{-8n}
\end{equation*}
for all $k\in\ensuremath{\mathbb{Z}}_+$. Observe that
\begin{align*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) &= \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n/2}} W_{(i)}^2 \ge \frac{\gamma \|\mathbf{W}\|_1^2}{n}, 2^{-k}n<\|\mathbf{W}\|_1\le 2^{-k+1}n\bigg) \\
&\le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n/2}} W_{(i)}^2 \ge \gamma 2^{-2k} n\bigg) \\
&\le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(1)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n/2}} W_{(i)}^2 \ge \gamma n\bigg) \\
&\le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(1)^{\otimes n}}(W_{(n-\floor{\delta n/2})} \ge \sqrt{\gamma}) \\
&\le \ensuremath{\mathbb{P}}_{Z\sim\Binom(n, e^{-\sqrt{\gamma}})}\left(Z \ge \frac{\delta n}{2}\right).
\end{align*}
By the large deviation bound for binomial distribution, choosing $\gamma$ sufficiently large such that $D(\delta/2 \| e^{-\sqrt{\gamma}}) > \kappa + 6 + 8\log C$ ensures that this probability is $o(e^{-(\kappa+6)n}C^{-8n})$. This finishes the proof with the choice of $\theta_7 = \theta_6' \gamma$.
\end{proof}
\begin{remark}
This is the only part of our analysis that relies on the $\delta$-truncation of values. Without the truncation, $\frac{1}{n}\|\mathbf{W}\|_2^2$ would concentrate poorly -- in fact not even having a finite mean -- for $\mathbf{W}\sim\Exp(1)^{\otimes n}$.
\end{remark}
\begin{corollary}\label{Cor_X_delta_truncate_at_C_over_2_is_small}
For any constant $c \in (0, 1/2)$, there exists a constant $\theta_8> 0$ such that,
with probability $1-\exp(-n^c)$,
there exist no stable matchings $\mu$ with $\sum_{i=1}^n (X_\delta)_i \mathbbm{1}_{[2/C,\infty)}\big((X_\delta)_i\big) \ge \theta_8 \|\mathbf{U}\|_1/(\ln n)^{7/8}$.
\end{corollary}
\begin{proof}
Notice that
\begin{equation*}
\|\mathbf{X}_\delta\|_2^2 \ge \sum_{i=1}^n (X_\delta)_i^2 \mathbbm{1}_{[2/C,\infty)}\big((X_\delta)_i\big) \ge \frac{2}{C} \sum_{i=1}^n (X_\delta)_i \mathbbm{1}_{[2/C,\infty)}\big((X_\delta)_i\big).
\end{equation*}
The statement then follows from Lemma~\ref{Lemma_Xdelta2sq_le_O_V1sq} and Corollary~\ref{Cor_V1_le_n_over_ln}.
\end{proof}
\begin{lemma}\label{Lemma_Xdelta1_ge_U1}
For any fixed $\delta > 0$ and $\kappa > 0$, there exists an absolute constant $\theta_9$ (depending on $\delta,\kappa$, and $C$) such that, with probability $1-\exp(-\kappa n)$,
there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_1 \le \theta_9 \|\mathbf{U}(\mu)\|_1$.
\end{lemma}
\begin{proof}
Since $\mathbf{X} \succeq \mathbf{U}$ component-wise, we have $\|\mathbf{X}_\delta(\mu)\|_1 \ge \|\mathbf{U}_\delta(\mu)\|_1$. Thus, it suffices to consider the condition $\|\mathbf{U}_\delta(\mu)\|_1 \le \theta_9 \|\mathbf{U}(\mu)\|_1$.
Consider $\Omega := \{\mathbf{w} \in I^n : \exists S\subseteq[n], |S|=n-\floor{\delta n},\sum_{i\in S} w_i \le \alpha \|\mathbf{w}\|_1\}$ for some $\alpha\in\ensuremath{\mathbb{R}}_+$ to be specified. By union bound, for any $k\in\ensuremath{\mathbb{Z}}_+$,
\begin{align*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) &\le \binom{n}{\floor{\delta n}} \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n}} W_i \le \alpha \|\mathbf{W}\|_1\le 2^{-k+1}\alpha n\bigg) \\
&\le \binom{n}{\floor{\delta n}} \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(1)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n}} W_i \le 2\alpha n\bigg) \\
&\le \exp(h(\delta) n + o(n)) \cdot \bigg(\frac{2\alpha e}{1-\delta}\bigg)^{n-\floor{\delta n}},
\end{align*}
where in the last step we use Stirling's approximation to bound the first factor and standard (lower) concentration of $\Exp(1)$ to bound the probability term (e.g., see Lemma~\ref{Lemma_weighted_exp_chernoff}).
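Explicitly, Stirling's approximation gives $\binom{n}{\floor{\delta n}} = e^{h(\delta)n + o(n)}$, while bounding the joint density of $\Exp(1)^{\otimes m}$ by one and integrating over the simplex $\{\mathbf{w}\ge 0 : \sum_i w_i \le s\}$ gives
\begin{equation*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(1)^{\otimes n}}\bigg(\sum_{i=1}^{m} W_i \le s\bigg) \le \frac{s^m}{m!} \le \Big(\frac{es}{m}\Big)^m,
\end{equation*}
which, with $m = n-\floor{\delta n}$ and $s = 2\alpha n$, is at most $\big(\frac{2\alpha e}{1-\delta}\big)^{n-\floor{\delta n}}$.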
For $\alpha$ sufficiently small, e.g., $\alpha < \frac{1-\delta}{2e}\exp\big(-\frac{h(\delta)+\kappa + 6+8\ln C}{1-\delta}\big)$, we have
\begin{equation*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) \le e^{-(\kappa+6)n}C^{-8n}
\end{equation*}
for all $k\in\ensuremath{\mathbb{Z}}_+$. Invoking Proposition~\ref{Prop_characterize_low_prob_events} concludes the proof with $\theta_9 = \alpha$.
\end{proof}
\begin{corollary}\label{Cor_Xdelta1_ge_ln_n}
For any constant $c \in (0, 1/2)$, there exists a constant $\theta_{10}> 0$ such that,
with probability $1-\exp(-n^c)$,
there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_1 \le \theta_{10} \ln n$.
\end{corollary}
\begin{proof}
This follows from Lemmas~\ref{Lemma_U1_ge_ln_n} and \ref{Lemma_Xdelta1_ge_U1}, with $\theta_{10} = \theta_2\theta_9$.
\end{proof}
The following corollary combines all of the preceding bounds into a description of the typical behavior of value vectors in stable matchings.
\begin{corollary}\label{Cor_Rstar_likely}
Define $\mathcal{R}^\star(\mu)\subseteq \ensuremath{\mathbb{R}}_+^n \times \ensuremath{\mathbb{R}}_+^n$, in the context of a matching $\mu$, to be the set of all pairs of vectors $(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n \times \ensuremath{\mathbb{R}}_+^n$ that satisfy all of the following conditions:
\begin{align}
\theta_2 \ln n \le \|\mathbf{u}\|_1&,\|\mathbf{v}\|_1 \le \frac{\theta_4 n}{(\ln n)^{7/8}}, \label{Eqn_def_Rstar_1} \\
\|\mathbf{u}\|_1\|\mathbf{v}\|_1 &\le \theta_3 n(\ln n)^{1/8}, \label{Eqn_def_Rstar_2} \\
\|\mathbf{x}_\delta\|_1 \le \theta_6 \|\mathbf{u}\|_1
&\text{ and } \|\mathbf{y}_\delta\|_1 \le \theta_6 \|\mathbf{v}\|_1, \label{Eqn_def_Rstar_3} \\
\|\mathbf{x}_\delta\|_2^2 \le \frac{\theta_7 \|\mathbf{u}\|_1^2}{n}
&\text{ and } \|\mathbf{y}_\delta\|_2^2 \le \frac{\theta_7 \|\mathbf{v}\|_1^2}{n}, \label{Eqn_def_Rstar_4} \\
\sum_{i=1}^n (x_\delta)_i \mathbbm{1}_{[2/C,\infty)}\big((x_\delta)_i\big) \le \frac{\theta_8 \|\mathbf{u}\|_1}{(\ln n)^{7/8}}
&\text{ and } \sum_{i=1}^n (y_\delta)_i \mathbbm{1}_{[2/C,\infty)}\big((y_\delta)_i\big) \le \frac{\theta_8 \|\mathbf{v}\|_1}{(\ln n)^{7/8}}, \label{Eqn_def_Rstar_5} \\
\|\mathbf{x}_\delta\|_1,\|\mathbf{y}_\delta\|_1 &\ge \theta_{10} \ln n, \label{Eqn_def_Rstar_6}
\end{align}
where $u_i = F(a_{i,\mu(i)}x_i)$ and $v_j = F(b_{j,\mu^{-1}(j)}y_j)$ for $i,j\in[n]$; $\mathbf{x}_\delta$ and $\mathbf{y}_\delta$ denote the truncated versions of $\mathbf{x}$ and $\mathbf{y}$;
$\theta_2,\theta_3,\theta_4,\theta_6,\theta_7,\theta_8,\theta_{10}\in\ensuremath{\mathbb{R}}_+$ are absolute constants (independent of $\mu$) chosen appropriately as in Lemmas~\ref{Lemma_U1_ge_ln_n}, \ref{Lemma_U1V1_le_O_n_ln},
\ref{Lemma_X1_le_O_U1}, \ref{Lemma_Xdelta2sq_le_O_V1sq}, and Corollaries \ref{Cor_V1_le_n_over_ln}, \ref{Cor_X_delta_truncate_at_C_over_2_is_small}, and \ref{Cor_Xdelta1_ge_ln_n}. Then, for any $c\in(0,1/2)$, with probability $1-\exp(-n^c)$, $(\mathbf{X}(\mu),\mathbf{Y}(\mu)) \in \mathcal{R}^\star(\mu)$ for all stable matchings $\mu$.
\end{corollary}
The proof simply combines the aforementioned lemmas and corollaries and is omitted.
\propRatioPQHighProbeon*
\begin{proof}
Note that $1-e^{-tx} \ge tx - \frac{t^2 x^2}{2}$ for all $x, t \ge 0$. In particular, $1-e^{-tx} \ge \left(tx - \frac{t^2 x^2}{2}\right) \mathbbm{1}_{[0, 2/t]}(x) \ge 0$. Using this to approximate $p(\mathbf{x},\mathbf{y})$ gives
\begin{align}
p(\mathbf{x},\mathbf{y}) &= \prod_{\substack{i\ne j}} \left(1 - \big(1-e^{-a_{ij}x_i}\big)\big(1-e^{-b_{ji}y_j}\big) \right) \nonumber\\
&\le \prod_{\substack{i\ne j}} \left(1 - \mathbbm{1}_{[0, 2/a_{ij}]}(x_i) \mathbbm{1}_{[0, 2/b_{ji}]}(y_j) \bigg(a_{ij}x_i - \frac{a_{ij}^2}{2}x_i^2\bigg)\bigg(b_{ji}y_j-\frac{b_{ji}^2}{2}y_j^2\bigg) \right) \nonumber\\
&\le \exp\left( -\sum_{i\ne j} \mathbbm{1}_{[0, 2/C]}(x_i) \mathbbm{1}_{[0, 2/C]}(y_j) \bigg(a_{ij}x_i - \frac{a_{ij}^2}{2}x_i^2\bigg)\bigg(b_{ji}y_j-\frac{b_{ji}^2}{2}y_j^2\bigg) \right).
\end{align}
Taking logarithms and expanding the expression above gives
\begin{align}
\ln p(\mathbf{x},\mathbf{y}) & \le - \sum_{i\ne j} \Bigg( a_{ij}b_{ji} x_i y_j - \big(\mathbbm{1}_{(2/C,\infty)}(x_i) + \mathbbm{1}_{(2/C,\infty)}(y_j)\big) a_{ij} b_{ji} x_i y_j \nonumber \\
&\qquad\qquad - \mathbbm{1}_{[0, 2/C]}(x_i) \mathbbm{1}_{[0, 2/C]}(y_j) \bigg( a_{ij}^2 b_{ji} x_i^2 y_j + a_{ij} b_{ji}^2 x_i y_j^2
\bigg) \Bigg) \nonumber \\
&\le
-\sum_{i, j=1}^n a_{ij}b_{ji} x_i y_j
+ \sum_{i=1}^n C^2 x_i y_i \nonumber \\
&\qquad\qquad + \sum_{i, j=1}^n \Bigg( C^2 \big(\mathbbm{1}_{(2/C,\infty)}(x_i) + \mathbbm{1}_{(2/C,\infty)}(y_j)\big) x_i y_j + C^3 \bigg( x_i^2 y_j + x_i y_j^2 \bigg) \Bigg). %
\end{align}
Notice that $-\ln q(\mathbf{x},\mathbf{y}) = \sum_{i, j=1}^n a_{ij}b_{ji}\frac{x_i}{a_{ii}}\frac{y_j}{b_{jj}}$. Thus,
\begin{multline}\label{Eqn_proof_ratio_p_q_high_prob_diff_pq}
\ln \frac{p(\mathbf{x},\mathbf{y})}{q(\mathbf{x},\mathbf{y})} \le C^2 \mathbf{x}^\top \mathbf{y} + C^2\left(\|\mathbf{x}\|_1 \sum_{j=1}^n\mathbbm{1}_{(2/C,\infty)}(y_j) y_j + \|\mathbf{y}\|_1 \sum_{i=1}^n\mathbbm{1}_{(2/C,\infty)}(x_i) x_i\right) \\
+ C^3 \left(\|\mathbf{x}\|_2^2 \|\mathbf{y}\|_1 + \|\mathbf{x}\|_1 \|\mathbf{y}\|_2^2\right).
\end{multline}
In light of Corollary~\ref{Cor_Rstar_likely}, it suffices to upper bound $\ln\frac{p(\mathbf{x}_\delta,\mathbf{y}_\delta)}{q(\mathbf{x}_\delta,\mathbf{y}_\delta)}$ by $cn/(\ln n)^{1/2}$ for all $(\mathbf{x},\mathbf{y})\in\mathcal{R}^\star(\mu)$ and for all $\mu$. To simplify notation, we will make the dependency on $\mu$ implicit in the rest of the proof.
By Cauchy-Schwarz inequality, the first term in \eqref{Eqn_proof_ratio_p_q_high_prob_diff_pq}, up to a factor of $C^2$, is at most
\begin{equation*}
\|\mathbf{x}_\delta\|_2 \|\mathbf{y}_\delta\|_2 \le \frac{\theta_7\|\mathbf{u}\|_1 \|\mathbf{v}\|_1}{n} \le \theta_3\theta_7(\ln n)^{1/8} = o\left(\frac{n}{(\ln n)^{1/2}}\right)
\end{equation*}
by \eqref{Eqn_def_Rstar_4} and \eqref{Eqn_def_Rstar_2}.%
The middle term in \eqref{Eqn_proof_ratio_p_q_high_prob_diff_pq}, up to a factor of $2C^2$, is at most%
\begin{equation*}
\|\mathbf{x}_\delta\|_1 \sum_{j=1}^n\mathbbm{1}_{(2/C,\infty)}((y_\delta)_j) (y_\delta)_j \le \theta_6\|\mathbf{u}\|_1 \cdot \frac{\theta_8\|\mathbf{v}\|_1}{(\ln n)^{7/8}} \le \theta_3\theta_6\theta_8\frac{n}{(\ln n)^{3/4}}
\end{equation*}
by \eqref{Eqn_def_Rstar_3}, \eqref{Eqn_def_Rstar_5}, and \eqref{Eqn_def_Rstar_2}.
Finally, the last term, up to a factor of $2C^3$, is upper bounded by
\begin{multline*}
\|\mathbf{x}_\delta\|_2^2 \|\mathbf{y}_\delta\|_1 = \frac{\|\mathbf{x}_\delta\|_2^2}{\|\mathbf{u}\|_1^2} \frac{\|\mathbf{y}_\delta\|_1}{\|\mathbf{v}\|_1} \frac{1}{\|\mathbf{v}\|_1} (\|\mathbf{u}\|_1\|\mathbf{v}\|_1)^2 \\
\le \frac{\theta_7}{n} \cdot \theta_6 \cdot \frac{1}{\theta_2\ln n} \cdot \theta_3^2 n^2(\ln n)^{1/4} = \frac{\theta_7\theta_6\theta_3^2}{\theta_2}\frac{n}{(\ln n)^{3/4}}
\end{multline*}
due to \eqref{Eqn_def_Rstar_4}, \eqref{Eqn_def_Rstar_3}, \eqref{Eqn_def_Rstar_1}, and \eqref{Eqn_def_Rstar_2}. Putting these together gives the proposition.
\end{proof}
\section{Generalization to approximately stable matchings}\label{Sec_approx_stable}
Exact stability is arguably an overly stringent requirement. Our
analysis can be further extended to understand the behavior of matchings that are approximately stable in the following sense.
\begin{definition}
We say a matching $\mu$ between $\mathcal{M}$ and $\mathcal{W}$ is $\alpha$-stable for some $0 < \alpha < 1$ if there exists a sub-market of size at least $(1-\alpha) n$ on which $\mu$ is stable; that is, there exist subsets $\mathcal{M}'\subseteq \mathcal{M}$ and $\mathcal{W}' \subseteq \mathcal{W}$ both with cardinality $|\mathcal{M}'|=|\mathcal{W}'| \ge (1-\alpha) n$ such that $\mu(\mathcal{M}') = \mu(\mathcal{W}')$ and the partial matching induced by $\mu$ between $\mathcal{M}'$ and $\mathcal{W}'$ is stable (within this sub-market). We refer to the stable sub-matching between $\mathcal{M}'$ and $\mathcal{W}'$ as the \emph{stable part} of $\mu$.
Denote the set of $\alpha$-stable matchings by $\mathcal{S}_\alpha$.
\end{definition}
A simple adaptation of our previous results and proofs for fully stable matchings gives the following theorem, analogous to Theorems~\ref{Thm_main_happiness_dist_body} and~\ref{Thm_main_rank_dist_body}.
\begin{theorem}\label{Thm_dist_body_approx_stable}
Assume $\alpha = \alpha(n)$ is such that $h(\alpha) < n^{-\eta}$ for some constant $\eta > 1/2$. Then, for any fixed $\ensuremath{\epsilon} > 0$,
\begin{equation}\label{Eqn_happiness_dist_approx_stable}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}_\alpha} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
and
\begin{equation}\label{Eqn_rank_dist_approx_stable}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}_\alpha} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
where $\phi_i = \sum_{j=1}^n a_{ij}$.
\end{theorem}
The proof is straightforward and is deferred to Appendix~\ref{appendix_extra_proofs}. The key observation is that, if the entire market satisfies the contiguity assumption, then each of its sub-markets is also contiguous, and we can apply a union bound over all sub-markets of size $(1-\alpha)n$. This argument goes through even when the market is imbalanced.
As an immediate corollary, we have the following result for slightly imbalanced markets.
\begin{corollary}\label{Cor_body_imbalance}
Consider a market consisting of $n-k$ men and $n$ women, where $h(k/n) < n^{-\eta}$ for some constant $\eta > 1/2$. Assume that the contiguity condition holds as in Assumption~\ref{Assumption_C_bounded}, i.e., the pairwise scores are bounded in $[1/C, C]$. Then, for any fixed $\ensuremath{\epsilon} > 0$,
\begin{equation}\label{Eqn_happiness_dist_imbalance}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
and
\begin{equation}\label{Eqn_rank_dist_imbalance}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
where $\phi_i = \sum_{j=1}^n a_{ij}$.
\end{corollary}
This follows directly from Theorem~\ref{Thm_dist_body_approx_stable}, as we can backfill the market with $k$ men with unit scores toward all women, so that any stable matching in the original imbalanced market extends to a $\frac{k}{n}$-stable matching in the extended market.
\begin{remark}
The results will not hold if there is a linear imbalance in the market, i.e., $k \propto n$. This is because in such markets the men achieve a constant average rank \cite{ashlagi2017unbalanced}, and therefore the convergence to the exponential distribution is impossible.
\end{remark}
One should be able to further strengthen the results through a more careful analysis. However, we choose to leave these extensions for future work. %
\section{Conclusion}
We studied the welfare structure of stable outcomes in large two-sided matching markets with logit-based preferences. Under a contiguity condition that prevents agents from disproportionately favoring or disfavoring other agents, we characterized the outcomes of stable and almost-stable matchings in terms of the empirical distribution of latent values and ranks.
In particular, our results suggest that the welfare of an agent in a stable matching can be decomposed into three parts: a global parameter that determines the trade-off between the two sides, a personal intrinsic fitness computed from the systematic scores, and an exogenous factor behaving as a standard exponential random variable. In other words, given the market structure (i.e., the systematic scores), the average rank (or value) of the men (or women) is essentially a sufficient statistic for the outcome distribution.
\section{Empirical distribution of values and ranks}\label{sec_distribution}
\subsection{Empirical distribution of values}
Knowing the eigenspace property of the value vectors allows us to characterize the empirical distribution of values.
\begin{restatable}{lemma}{propOempeLikelyForHappinessEmpDist}\label{Prop_Oempe_likely_for_happiness_emp_distr}
Fix any $\delta,\zeta > 0$. Let $\mu'$ be a partial matching of size $n-\floor{\delta n}$ on $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$. For any $\ensuremath{\epsilon} > 0$, consider
\begin{equation}\label{Eqn_Prop_Oempe_likely_for_happiness_oempe_def}
\Omega_{\text{emp}}(\eps) := \left\{(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \exists \lambda\in\ensuremath{\mathbb{R}}_+, \big\|\hat{\mathcal{F}}(\mathbf{x}) - F_\lambda\big\|_\infty \le \ensuremath{\epsilon} + \Theta(\delta + \sqrt{\zeta})\right\}.
\end{equation}
Then
\begin{equation}
\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \le \exp(o_\delta(n)-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'}a_{i,\mu'(i)}b_{\mu'(i),i},
\end{equation}
where again the implicit constants are uniform over all $\mathcal{M}',\mathcal{W}'$, and $\mu'$.
\end{restatable}
The proof formalizes the intuition that, conditional on stability of $\mu'$ and $\mathbf{Y}_{\mathcal{W}'}=\mathbf{y}$, the value $X_i$ for $i\in\mathcal{M}'$ should behave approximately as $\Exp(\lambda_i)$ for some $\lambda_i = (1+ \Theta(\delta+\sqrt{\zeta})) \|\mathbf{y}\|_1$.
The full proof is deferred to Appendix~\ref{Proof_prop_Oempe_likely}.
Hence, instead of looking for the optimal $\lambda$ that minimizes $\|\hat{\mathcal{F}}(\mathbf{X}_{\mathcal{M}'})-\mathcal{F}(\Exp(\lambda))\|_\infty$ in the definition \eqref{Eqn_Prop_Oempe_likely_for_happiness_oempe_def} of $\Omega_{\text{emp}}(\eps)$, we may simply choose $\lambda = \|\mathbf{y}\|_1$, which only differs from the right choice by at most a tolerable $\Theta(\sqrt{\zeta}+\delta)$ factor. In other words, if we define
\begin{equation*}
\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}) := \left\{(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \big\|\hat{\mathcal{F}}(\mathbf{x}) - \mathcal{F}(\Exp(\|\mathbf{y}\|_1))\big\|_\infty \le \ensuremath{\epsilon} + \Theta(\delta + \sqrt{\zeta})\right\},
\end{equation*}
albeit with a worse implicit constant in $\Theta(\delta+\sqrt{\zeta})$,
the same conclusion holds as in Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr} with $\Omega_{\text{emp}}(\eps)$ replaced by $\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})$; that is,
\begin{equation}\label{Eqn_tilde_Oempe_likely_for_happi_emp_distr}
\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \le \exp(o_\delta(n)-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'}a_{i,\mu'(i)}b_{\mu'(i),i}.
\end{equation}
Using Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr}, we now prove our first main theorem about the uniform limit of the empirical distribution of the men's (or women's) values in stable matchings.
\begin{theorem}[Empirical distribution of value]\label{Thm_main_happiness_dist_body}
Fix any $\ensuremath{\epsilon}>0$ and $c\in(0,1/2)$. Then
\begin{equation}\label{Eqn_happiness_dist_main_body_thm_whp}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty > \ensuremath{\epsilon}\bigg) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$.
In particular, the infimum over $\lambda$ in \eqref{Eqn_happiness_dist_main_body_thm_whp} can be replaced with the choice of $\lambda=\|\mathbf{Y}_\delta(\mu)\|_1$ for $\delta$ sufficiently small.
\end{theorem}
\begin{proof}
Plugging \eqref{Eqn_tilde_Oempe_likely_for_happi_emp_distr} into Lemma~\ref{Lemma_reduction_to_q} and repeating the same arithmetic as in \eqref{Eqn_sum_expectation_Omega_zeta} and \eqref{Eqn_sum_expectation_Omega_zeta_summation_bound} immediately gives
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in \Omega_{\text{eig}}(\zeta)\backslash\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})) \le e^{-n^c} + \exp(o_\delta(n) - \Theta(\ensuremath{\epsilon}^2 n)) \lesssim e^{-n^c},
\end{equation}
granted that $\ensuremath{\epsilon} \ge \ensuremath{\epsilon}_0(\delta)$, where the function $\ensuremath{\epsilon}_0(\delta)\to 0$ as $\delta\to 0$.
Corollary~\ref{Cor_no_stable_outside_Oeigz} implies that with probability at least $1-\Theta(e^{-n^c})$ there exists no stable matching $\mu$ with $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin \Omega_{\text{eig}}(\zeta)$, and hence
\begin{equation}\label{Eqn_proof_Thm_main_happiness_dist_tOemp}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin \tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}/3)) \lesssim e^{-n^c},
\end{equation}
granted that $\ensuremath{\epsilon}_0(\delta) < \ensuremath{\epsilon} / 3$.
By choosing $\delta$ (and hence also $\zeta=\zeta(\delta)$) sufficiently small so that the $\Theta(\delta+\sqrt{\zeta})$ term in the definition of $\tilde{\Omega}_{\text{emp}}$ is upper bounded by $\ensuremath{\epsilon}/3$, we ensure
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, \big\|\hat{\mathcal{F}}(\mathbf{X}_\delta(\mu)) - \mathcal{F}(\Exp(\|\mathbf{Y}_\delta(\mu)\|_1))\big\|_\infty \ge 2\ensuremath{\epsilon}/3) \lesssim e^{-n^c}.
\end{equation}
By further restricting $\delta$ to be sufficiently small, we may absorb the difference caused by the $\delta$-truncation on $\mathbf{X}(\mu)$ into an extra term of $\Theta(\delta)\le \ensuremath{\epsilon}/3$, since $\|\hat{\mathcal{F}}(\mathbf{X}_\delta(\mu))-\hat{\mathcal{F}}(\mathbf{X}(\mu))\|_\infty \le \delta$. The theorem follows immediately.
\end{proof}
With essentially the same analysis as in Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr} and Theorem~\ref{Thm_main_happiness_dist_body}, except for replacing the DKW inequality with Bernstein's inequality for empirical averages, we can also deduce the following result. The proof is omitted.
\begin{proposition}
For any fixed $\ensuremath{\epsilon},\delta > 0$ and $0< c < 1/2$,
\begin{equation}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} |n^{-1}\|\mathbf{X}_\delta(\mu)\|_1 \|\mathbf{Y}_\delta(\mu)\|_1 - 1| > \ensuremath{\epsilon}\bigg) \lesssim e^{-n^c}.
\end{equation}
\end{proposition}
The effect of the $\delta$-truncation is nontrivial to remove because the sum of values can be sensitive to outliers, in particular given the heavy tail of the exponential distribution. We believe, however, that a refined analysis should suggest that $\sup_{\mu\in\mathcal{S}}\big|n^{-1}\|\mathbf{X}(\mu)\|_1\|\mathbf{Y}(\mu)\|_1 - 1\big| \overset{p}{\to} 0$. This is the analogue of the ``law of hyperbola'' in \citet{pittel1992likely}.
\subsection{Empirical distribution of ranks}
Based on the previous discussion on the empirical distribution of value, we now extend the result to ranks and prove our second main theorem.
\begin{theorem}
[Empirical distribution of ranks]\label{Thm_main_rank_dist_body}
For any fixed $\ensuremath{\epsilon}>0$ and $c\in(0,1/2)$,
\begin{equation}\label{Eqn_rank_dist_main_thm_body_whp}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty > \ensuremath{\epsilon}\bigg) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$,
where $\bm\phi$ is men's fitness vector.
As in Theorem~\ref{Thm_main_happiness_dist_body}, the infimum over $\lambda$ in \eqref{Eqn_rank_dist_main_thm_body_whp} can be replaced with the choice of $\lambda=\|\mathbf{Y}_\delta(\mu)\|_1$ for $\delta$ sufficiently small.
\end{theorem}
Heuristically, we would expect the rank $R_i(\mu)$ of a man to be proportional to his value $X_i(\mu)$ when stability of $\mu$ and the values $X_i(\mu)=x_i$ and $\mathbf{Y}(\mu)=\mathbf{y}$ are conditioned upon. Indeed, a woman $\ensuremath{\mathsf{w}}_j$ with $j\ne\mu(i)$ stands ahead of $\ensuremath{\mathsf{w}}_{\mu(i)}$ in the preference of man $\ensuremath{\mathsf{m}}_i$ exactly when $X_{ij}<x_i$ and $Y_{ji}>y_j$. As $X_{ij}$ and $Y_{ji}$ jointly follow the product distribution $\Exp(a_{ij})\otimes\Exp(b_{ji})$ conditional on the event that $X_{ij}<x_i$ and $Y_{ji}<y_j$ do not simultaneously happen (which would create a blocking pair), the conditional probability that $\ensuremath{\mathsf{w}}_j\succeq_{\ensuremath{\mathsf{m}}_i} \ensuremath{\mathsf{w}}_{\mu(i)}$ is $\frac{(1-e^{-a_{ij}x_i})e^{-b_{ji}y_j}}{1-(1-e^{-a_{ij}x_i})(1-e^{-b_{ji}y_j})} \approx a_{ij} x_i$. Summing over $j\ne \mu(i)$, we should expect the rank $R_i(\mu)$ to be in expectation close to $x_i \sum_{j\ne \mu(i)} a_{ij} \approx x_i \phi_i$; further, as a sum of independent Bernoulli random variables, $R_i(\mu)$ should concentrate at its expectation, leading to $R_i(\mu) \approx x_i \phi_i$ simultaneously for most $i\in [n]$.
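To see the displayed approximation, note that when $a_{ij}x_i$ and $b_{ji}y_j$ are both small,
\begin{equation*}
\frac{(1-e^{-a_{ij}x_i})e^{-b_{ji}y_j}}{1-(1-e^{-a_{ij}x_i})(1-e^{-b_{ji}y_j})} = \frac{a_{ij}x_i\big(1+O(a_{ij}x_i+b_{ji}y_j)\big)}{1+O(a_{ij}b_{ji}x_iy_j)} = a_{ij}x_i\big(1+O(a_{ij}x_i+b_{ji}y_j)\big).
\end{equation*}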
This intuition is formalized in the proof, which is given in Appendix~\ref{Append_proof_thm_main_rank}.
\section{Introduction}
This paper is concerned with welfare in random two-sided matching markets. In a two-sided matching market there are two kinds of agents, where each agent has preferences over potential partners of the other kind. We assume that the outcome is stable \citep{gale1962college}, meaning that there are no blocking pairs of agents who would rather match with each other than with their assigned partners.
A large literature initiated by \citet{gale1962college} has deepened our understanding of two-sided matching markets, generating a blend of rich theory and market designs.\footnote{See, e.g., \citet{roth1992two,roth2018marketplaces}.} Less understood, however, are welfare properties in typical markets. We study the welfare structure in matching markets when agents have latent preferences generated according to observed characteristics. Specifically, we are interested in the empirical welfare distribution of agents on each side of the market under stable outcomes, as well as the relation between the outcomes of the two sides of the market.%
We study this question in large randomly generated markets, which allow for both vertical and horizontal differentiation. The model assumes that every agent has an observed personal score for every other agent in the market, and her preferences follow a logit model based on these scores. We impose that no agent is a priori overwhelmingly more desirable than any other agent. We find that the observed characteristics alone determine the empirical welfare distribution on each side of the market. Moreover, the joint surplus in the market is fixed, and the average welfare of one side of the market is a sufficient statistic to determine the empirical welfare distribution on both sides of the market.
The model we consider has an equal number of men and women. For every man $\ensuremath{\mathsf{m}}_i$ and every woman $\ensuremath{\mathsf{w}}_j$, we are given non-negative scores $a_{ij}$ and $b_{ji}$, which %
can be viewed as generated from observed characteristics.
Each man and each woman has a strict preference ranking generated independently and proportionally to these scores, as in the logit model.\footnote{Numerous empirical papers that study two-sided matching markets assume agents' preferences follow a logit model (see e.g., \cite{agarwal2018demand,hitsch2010matching}).} Equivalently, each man $\ensuremath{\mathsf{m}}_i$ has a latent value from matching with woman $\ensuremath{\mathsf{w}}_j$ that is distributed exponentially with rate $a_{ij}$ (smaller values are considered better).\footnote{One can view the utility of an agent for her match as the negative of the corresponding latent value.} Women's latent values for men are generated similarly.\footnote{Special cases of this general model are markets with uniformly random preferences \citep{knuth1990stable,pittel1989average,knuth1997stable,pittel1992likely,ashlagi2017unbalanced} or markets in which agents have common public scores \citep{mauras2021two,ashlagi2020tiered}.}
We identify an intrinsic fitness for each agent that represents her relative competitiveness in the market, independent of the realized stable outcome. For every pair of agents on opposing sides of the market, we can obtain a mutual score of the pair's match. If we write these scores in a matrix, the intrinsic fitness values
correspond to scaling coefficients that make the mutual matrix bistochastic.\footnote{This representation is valid since preferences are invariant under such transformations.} Intuitively, this bistochastic mutual matrix can be thought of as consisting of {\em a priori} probabilities of each pair matching. In particular, this representation captures the interactions between the sides of the market. We exploit this representation to further analyze typical realized outcomes in the market.
We find that the welfare, or the ranks of the agents, when scaled by their intrinsic fitness, have an approximately exponential empirical distribution on each side of the market. Moreover, the average welfare of agents on one side of the market is sufficient to determine the average on the other side. Overall, each agent's welfare can be seen as determined by a global parameter, her intrinsic fitness, and an extrinsic factor with exponential distribution across the population. This characterization holds with high probability in every stable matching. In fact, this structure extends to matchings that are only approximately stable, which can tolerate a vanishing fraction of blocking pairs.
At its core, since our proof needs to apply to all stable matchings (and even to nearly-stable matchings), it is a union bound argument. We use inequalities derived from the integral formula for the probability that a given matching is stable, first introduced by \citet{knuth1976mariages}. The heterogeneity of preferences brings great difficulty, which we overcome with a truncation technique to accommodate the heavy tails of agents' outcomes and a fixed-point argument on the eigenspace of the characterizing matrix of the market. The exponential empirical distribution part of the result holds intuitively because there are not too many stable matchings in expectation, and the exponential distribution has the highest entropy of all non-negative distributions with a given mean.
Closely related to our work is the remarkable paper \cite{menzel2015large}, which finds that the joint surplus in the market is unique. The focus in \cite{menzel2015large} is on analyzing the matching rates between agents of different types, rather than the rankings and agents' welfare. Menzel's preference model is more general.\footnote{We note that both his and our model assume that the ratio between any two systematic scores is bounded.} Menzel establishes that, in the limit, agents choose partners according to a logit model from opportunity sets, while we consider large markets and assume agents' preferences are logit-based. There are several other key differences. First, his model requires many agents of each type (with the same characteristics), while every agent in our model may have different characteristics. Second, while in our model every agent is matched, he assumes agents have a non-negligible outside option, resulting in a large number of unmatched agents\footnote{\citet{menzel2015large} identifies how to scale the market under this assumption to capture realistic outcomes.}; this assumption allows him to apply a fixed-point contraction argument and establish the uniqueness and characterization result.\footnote{Technically, such substantial outside options keep rejection chains short and prevent them from cycling.}
\subsection{Literature}
The analysis of random two-sided markets goes back to \citet{knuth1990stable,pittel1989average,pittel1992likely}, who consider markets with uniformly random complete preference lists. These papers establish the number of stable matchings as well as the average ranks on each side. A key finding is that the product of the average ranks of agents on the two sides of the market is approximately the size of the market \citep{pittel1992likely}, implying that stable matchings essentially lie on a hyperbola. Our results generalize these findings to random logit markets. We also expand them to describe the distributional outcomes in the market.
Several papers consider markets with uniformly drawn preferences and an unequal number of agents on the two sides of the market \citep{ashlagi2017unbalanced,pittel2019likely,cai2019short}.
A key finding is that there is an essentially unique stable matching and that agents on the short side have a substantial advantage. We believe that similar findings hold in random logit markets. Since our results hold for approximately stable matchings, our findings extend to the imbalanced case as long as the imbalance is not too large.%
Our paper further contributes to the above literature by also considering outcomes that are only approximately stable.
Several papers study random markets in which (at least on one side) agents' preferences are generated proportionally to public scores. \citet{immorlica2015incentives,kojima2009incentives,ashlagi2014stability} look at the size of the core.\footnote{They further consider the related issue of strategizing under stable matching mechanisms.} Their analysis relies on a certain market structure (keeping preference lists short), which leaves many agents unmatched. %
\citet{gimbert2019popularity} and \citet{ashlagi2020tiered} assume agents have complete preference lists; their focus is on the size of the core or agents' average rank.
\subsection{Notations}
Denote $[n] = \{1,\ldots,n\}$. Boldface letters denote vectors (lower case) and matrices (upper case), e.g., $\mathbf{x} = (x_i)_{i\in[n]}$ and $\mathbf{A} = (a_{ij})_{i\in[n],j\in [m]}$, and capital letters denote random variables.
For two identically shaped matrices (or vectors) $\mathbf{M}$ and $\mathbf{N}$, $\mathbf{M}\circ \mathbf{N}$ denotes their Hadamard (entry-wise) product. For a vector $\mathbf{x}\in\ensuremath{\mathbb{R}}^n$ with non-zero entries, denote its coordinate-wise inverse by $\mathbf{x}^{-1}$. $\diag(\mathbf{x})$ denotes the diagonal matrix whose $i$-th entry on the diagonal is $x_i$.
$\Exp(\lambda)$ and $\Poi(\lambda)$ denote, respectively, the exponential distribution and the Poisson distribution with rate $\lambda$. We denote the probability density function (pdf) and cumulative distribution function (CDF) of $\Exp(\lambda)$ by $f_\lambda$ and $F_\lambda$, respectively. %
$\Bern(p)$ denotes the Bernoulli distribution with success probability $p\in[0,1]$. %
For distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ over space $\mathcal{X}$, $\mathcal{D}_1\otimes \mathcal{D}_2$ denotes their product distribution over $\mathcal{X}^2$.
$\hat{\mathcal{F}}(\mathbf{x})$ denotes the empirical distribution function for the components of a vector $\mathbf{x}$, treated as a function from $\ensuremath{\mathbb{R}}$ to $[0,1]$. $\mathcal{F}(\mathcal{D})$ denotes the CDF of a distribution $\mathcal{D}$ on $\ensuremath{\mathbb{R}}$.
For real-valued random variables $X$ and $Y$, $X\preceq Y$ denotes stochastic domination of $X$ by $Y$.
We use the standard $O(\cdot)$, $o(\cdot)$, $\Omega(\cdot)$, and $\Theta(\cdot)$ notations to hide constant factors. For functions $f,g:\ensuremath{\mathbb{N}}\to\ensuremath{\mathbb{R}}_+$, we say $f = O(g)$ (resp. $\Omega(g)$) if there exists an absolute constant $K\in(0,\infty)$ such that $f \le K g$ (resp. $f \ge K g$) for $n$ sufficiently large; $f=o(g)$ if $f/g \to 0$ as $n\to\infty$; and $f=\Theta(g)$ if $f=O(g)$ and $f=\Omega(g)$. We say $f = o_\alpha(g)$ if $f/g\to 0$ as $\alpha\to 0$ (uniformly over all other parameters, such as $n$). For example, $\sqrt{\ensuremath{\epsilon}} = o_\ensuremath{\epsilon}(1)$.
\section{Main results}
We denote the (random) set of stable matchings by $\mathcal{S}$. Recall that for a matching $\mu$, $\mathbf{X}(\mu)$ and $\mathbf{R}(\mu)$ denote men's value and rank vectors, respectively, under $\mu$. Denote by $\hat{\mathcal{F}}(\mathbf{v})$ the empirical distribution of the components of a vector $\mathbf{v}$ (viewed as a function from $\ensuremath{\mathbb{R}}$ to $[0,1]$), and $F_\lambda$ denotes the CDF of $\Exp(\lambda)$.
\begin{theorem}[Empirical distribution of values]\label{Thm_main_happiness_dist}
For any fixed $\ensuremath{\epsilon}>0$,
\begin{equation}\label{Eqn_happiness_dist_main_thm_whp}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty.
\end{equation}
That is, with high probability, in all stable matchings simultaneously, the empirical distribution of the men's values is arbitrarily close to some exponential distribution $\Exp(\lambda)$ in Kolmogorov-Smirnov norm, where the parameter $\lambda$ depends on the specific stable matching. In particular, the infimum over $\lambda$ in \eqref{Eqn_happiness_dist_main_thm_whp} can be replaced with a choice of $\lambda$ computed from the women's value vector $\mathbf{Y}(\mu)$.
\end{theorem}
\begin{theorem}
[Empirical distribution of ranks]\label{Thm_main_rank_dist}
For any fixed $\ensuremath{\epsilon}>0$,
\begin{equation}\label{Eqn_rank_dist_main_thm_whp}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
where $\bm{\phi}$ is the fitness vector (and can be computed as $\phi_i = \sum_{j=1}^n a_{ij}$).
That is, with high probability, in all stable matchings simultaneously, the empirical distribution of rescaled ranks of the men is arbitrarily close to some exponential distribution $\Exp(\lambda)$ in Kolmogorov-Smirnov norm, where the parameter $\lambda$ depends on the specific stable matching (yet the scaling does not). Again, we may replace the infimum with a choice of $\lambda$ computed from the women's latent value vector $\mathbf{Y}(\mu)$.%
\end{theorem}
\subsection{Discussion}
The results characterize outcomes of all stable matchings. A slight refinement of Theorem~\ref{Thm_main_happiness_dist} will imply that the average value of one side of the market is essentially sufficient to determine the average value of the other side. Roughly, for a given stable matching $\mu$, the value of $\lambda$ in \eqref{Eqn_happiness_dist_main_thm_whp} and \eqref{Eqn_rank_dist_main_thm_whp} is approximately the sum of the women's values in $\mu$.\footnote{Technically, the choice of $\lambda$ can be taken as the sum of the values of women after excluding a small fraction $\delta$ of the women who are the least satisfied (those with the highest latent values) under the matching $\mu$. This truncation, which is also done for technical reasons, avoids outliers and in fact shows that the predictions still hold under even weaker notions of stability. We believe that such trimming is unnecessary with a more careful analysis.} This suggests that the average value of men is approximately $1/\lambda \approx 1/\|\mathbf{Y}(\mu)\|_1$. Therefore, multiplying the average values of the two sides of the market gives approximately $1/n$ simultaneously in all stable matchings with high probability. While we will establish such an approximation, we believe that, with a refined analysis, one should be able to show $\sup_{\mu\in\mathcal{S}}\big|n^{-1}\|\mathbf{X}(\mu)\|_1\|\mathbf{Y}(\mu)\|_1 - 1\big| \overset{p}{\to} 0$. %
Moreover, the average value of men is also sufficient to predict the empirical value distribution on each side of the market. For example, if we find that $30\%$ of the men have value $h$ or higher, then we should expect $9\%$ to have value $2h$ or higher. %
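This is simply the multiplicative tail of the exponential distribution: if the empirical value distribution is close to $F_\lambda$, then
\begin{equation*}
\ensuremath{\mathbb{P}}(X>2h) = e^{-2\lambda h} = \big(e^{-\lambda h}\big)^2 = \ensuremath{\mathbb{P}}(X>h)^2,
\end{equation*}
so a fraction $0.3$ of values above $h$ translates into a fraction of about $0.3^2=0.09$ above $2h$.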
Theorem \ref{Thm_main_rank_dist} is similar but with respect to ranks; it implies that the product of the average scaled ranks of men and women should be asymptotically $n$, and the average rank on each side determines the empirical rank distributions.
Observe that the scaling in \eqref{Eqn_rank_dist_main_thm_whp} is consistent with the intuition of $\phi_i=\sum_{j=1}^n a_{ij}$ being the average fitness of $\ensuremath{\mathsf{m}}_i$. Within a stable matching, a more popular man should, on average, achieve a better (smaller) rank than a less popular one.
For instance, in a market with bounded public scores (Example~\ref{Ex_public_scores}), each man receives a number of proposals roughly inversely proportional to his fitness during the woman-proposing deferred acceptance algorithm,
implying that his rank is proportional to $\phi_i$ in the woman-optimal stable matching.
The proof of Theorem~\ref{Thm_main_happiness_dist} also offers evidence that the number of stable matchings should essentially be sub-exponential. This is formally stated in Corollary~\ref{Cor_subexp_num_stable_match}.
\subsection{Results for approximately stable matchings}
The proof suggests that the characterization further extends to matchings that are only approximately stable in the following sense.
\begin{definition}
We say a matching $\mu$ between $\mathcal{M}$ and $\mathcal{W}$ is $\alpha$-stable for some $0 < \alpha < 1$ if there exists a sub-market of size at least $(1-\alpha) n$ on which $\mu$ is stable; that is, there exist subsets $\mathcal{M}'\subseteq \mathcal{M}$ and $\mathcal{W}' \subseteq \mathcal{W}$ both with cardinality $|\mathcal{M}'|=|\mathcal{W}'| \ge (1-\alpha) n$ such that $\mu(\mathcal{M}') = \mu(\mathcal{W}')$ and the partial matching induced by $\mu$ between $\mathcal{M}'$ and $\mathcal{W}'$ is stable (within this sub-market). We refer to the stable sub-matching between $\mathcal{M}'$ and $\mathcal{W}'$ as the \emph{stable part} of $\mu$.
Denote the set of $\alpha$-stable matchings by $\mathcal{S}_\alpha$.
\end{definition}
The following theorem can be derived from the quantitative versions of Theorems~\ref{Thm_main_happiness_dist} and~\ref{Thm_main_rank_dist}, which will be presented in Section~\ref{sec_distribution}.
\begin{restatable}{theorem}{ThmMainApproxStable}\label{Thm_dist_body_approx_stable}
Assume $\alpha < n^{-\eta}$ for some constant $\eta > 1/2$. Then, as $n\to\infty$,
\begin{equation}\label{Eqn_happiness_dist_approx_stable}
\max_{\mu\in\mathcal{S}_\alpha} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \overset{p}{\to} 0 \quad\text{ and }\quad \max_{\mu\in\mathcal{S}_\alpha} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \overset{p}{\to} 0.
\end{equation}
\end{restatable}
The approximately exponential empirical distribution applies to any matching that is stable except for $o(\sqrt{n})$ agents. The key observation is that, if the entire market satisfies the contiguity assumption, then each sub-market of it is also contiguous, and we can apply union bound over all sub-markets of size $(1-\alpha)n$.
As a corollary, we have the following result for slightly imbalanced markets.
\begin{restatable}{corollary}{CorImbalanceMarket}\label{Cor_body_imbalance}
Consider a market consisting of $n-k$ men and $n$ women, where $k < n^{\beta}$ for some constant $\beta < 1/2$. Assume that the contiguity condition holds as in Assumption~\ref{Assumption_C_bounded}. Then, as $n\to\infty$,
\begin{equation}\label{Eqn_happiness_dist_imbalance}
\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \overset{p}{\to} 0 \quad\text{ and }\quad \max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \overset{p}{\to} 0.
\end{equation}
\end{restatable}
\begin{remark}
The results will not hold if there is a linear imbalance in the market, i.e., $k \propto n$. This is because in such markets the men achieve a constant average rank \cite{ashlagi2017unbalanced}, and therefore the convergence to the exponential distribution is impossible.
\end{remark}
These results are not necessarily tight and one may weaken the constraints on $\alpha$ and $k$ with a more careful analysis. Exploring other notions of approximate stability is left for future work. %
\section{Model}
We study two-sided matching markets with randomly generated preferences. Next, we formalize the model, describe how preferences are generated, and state the key assumptions.
\paragraph{Setup.} A matching market consists of two sets of agents, referred to as men $\mathcal{M}$ and women $\mathcal{W}$. Unless specified otherwise, we assume that $|\mathcal{M}| = |\mathcal{W}| = n$, men are labeled $\ensuremath{\mathsf{m}}_1,\ldots, \ensuremath{\mathsf{m}}_n$ and women are labeled $\ensuremath{\mathsf{w}}_1,\ldots, \ensuremath{\mathsf{w}}_n$.
Each man $\ensuremath{\mathsf{m}}_i$ has a complete strict preference list $\succ_{\ensuremath{\mathsf{m}}_i}$ over the set of women and each woman $\ensuremath{\mathsf{w}}_j$ has a complete strict preference list $\succ_{\ensuremath{\mathsf{w}}_j}$ over the set of men. A \emph{matching} is a bijection $\mu : \mathcal{M}\to\mathcal{W}$. To simplify the notation, men and women will be represented using the set of integers $[n]=\{1,2,\ldots,n\}$ and we write $\mu:[n]\to[n]$ so that $\mu(i)=j$ and $\mu^{-1}(j)=i$ means that $\ensuremath{\mathsf{m}}_i$ is matched with $\ensuremath{\mathsf{w}}_j$ in $\mu$. The \emph{rank} for man $\ensuremath{\mathsf{m}}_i$, denoted by $R_i(\mu)$, is the position of $\mu(i)$ on $\ensuremath{\mathsf{m}}_i$'s preference list (e.g., if an agent is matched to the second agent on her list, her rank is two). Write $\mathbf{R}(\mu):=(R_i(\mu))_{i\in[n]}$ for the men's rank vector in matching $\mu$.
The matching $\mu$ is \emph{unstable} if there is a pair of man $\ensuremath{\mathsf{m}}_i$ and woman $\ensuremath{\mathsf{w}}_j$ such that $\ensuremath{\mathsf{w}}_j \succ_{\ensuremath{\mathsf{m}}_i} \ensuremath{\mathsf{w}}_{\mu(i)}$ and $\ensuremath{\mathsf{m}}_i \succ_{\ensuremath{\mathsf{w}}_j} \ensuremath{\mathsf{m}}_{\mu^{-1}(j)}$. A matching is said to be \emph{stable} otherwise. It is well-known that the set of stable matchings is not empty.
\paragraph{Logit-based random markets: the canonical form.} We consider markets in which complete preferences are randomly generated as follows. %
For each man $\ensuremath{\mathsf{m}}_i$, we are given a stochastic vector $\hat{\mathbf{a}}_i = (\hat{a}_{ij})_{j\in[n]} \in\ensuremath{\mathbb{R}}^n_+$. Then, $\ensuremath{\mathsf{m}}_i$'s preference list is generated from a logit model based on $\hat{\mathbf{a}}_i$. In particular, let $\mathcal{D}_i$ be the distribution on $\mathcal{W}$ that places on $\ensuremath{\mathsf{w}}_j$ a probability proportional to $\hat{a}_{ij}$; then $\ensuremath{\mathsf{m}}_i$ samples his favorite partner from $\mathcal{D}_i$, and repeatedly samples from it without replacement for his next favorite partner until his list is complete.
Similarly, each woman $\ensuremath{\mathsf{w}}_j$'s preference list is generated from a logit model based on a given stochastic vector $\hat{\mathbf{b}}_j = (\hat{b}_{ji})_{i\in[n]}$.
Denote by $\hat{\mathbf{A}} = (\hat{a}_{ij})_{i,j\in[n]}$ and $\hat{\mathbf{B}} = (\hat{b}_{ji})_{j,i\in[n]}$ the row-stochastic matrices. We refer to this matrix representation of the preference model as the \emph{canonical form} and to $\hat{a}_{ij}$ (resp. $\hat{b}_{ji}$) as the \emph{canonical score} that $\ensuremath{\mathsf{m}}_i$ (resp. $\ensuremath{\mathsf{w}}_j$) assigns to $\ensuremath{\mathsf{w}}_j$ (resp. $\ensuremath{\mathsf{m}}_i$).
This model captures the multinomial logit (MNL) choice model, in which scores are closely related to the systematic utilities for agents over matches. The special case in which $\hat{a}_{ij} = \hat{b}_{ji} = 1/n$ for all $i,j\in[n]$ corresponds to the uniformly random preference model.
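For concreteness, the following minimal sketch (in Python; the function and variable names are ours and not part of the formal model) generates one preference list in the canonical form by repeated sampling without replacement:
\begin{verbatim}
import numpy as np

def sample_preference_list(a_hat, rng):
    # a_hat: canonical scores of one agent (nonnegative, summing to one).
    # Returns indices of partners ordered from most to least preferred.
    a = np.asarray(a_hat, dtype=float)
    remaining = list(range(len(a)))
    order = []
    while remaining:
        p = a[remaining] / a[remaining].sum()  # renormalize over remaining
        pick = rng.choice(len(remaining), p=p)
        order.append(remaining.pop(pick))
    return order

rng = np.random.default_rng(0)
print(sample_preference_list([0.5, 0.3, 0.2], rng))
\end{verbatim}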
\paragraph{Mutual matrix and intrinsic fitness: the balanced form.}
While the canonical form is a useful way to describe the market, it will be helpful for the analysis to describe it using an alternative scaling scheme, which we refer to as the \emph{balanced form}.
Observe that multiplying any row of $\hat{\mathbf{A}}$ or $\hat{\mathbf{B}}$ by a positive constant does not change the behavior of the market.
We look for scaling vectors $\bm{\phi},\bm{\psi}\in\ensuremath{\mathbb{R}}^n_+$ for the rows of $\hat\mathbf{A}$ and $\hat\mathbf{B}$ such that $\mathbf{M} = n^{-1} \mathbf{A} \circ \mathbf{B}^\top$ is bistochastic\footnote{The nonnegative matrix $\mathbf{M}$ is bistochastic if the sum of entries in each row and each column is one.}, where $\mathbf{A} = \diag(\bm{\phi}) \hat\mathbf{A}$ and $\mathbf{B} = \diag(\bm{\psi}) \hat\mathbf{B}$. As is shown by \citet[Theorem~1]{sinkhorn1964relationship}, such a bistochastic matrix $\mathbf{M}$ always exists and is unique, and the scaling vectors $\bm{\phi}$ and $\bm{\psi}$ are unique up to constant rescaling. That is, $\bm{\phi}$ and $\bm{\psi}$ jointly solve
\begin{equation}
\frac{1}{n} \diag(\bm{\phi}) (\hat{\mathbf{A}} \circ \hat{\mathbf{B}}^\top) \diag(\bm{\psi}) \mathbf{1} = \mathbf{1} \quad\text{ and }\quad \frac{1}{n} \diag(\bm{\psi}) (\hat{\mathbf{B}} \circ \hat{\mathbf{A}}^\top) \diag(\bm{\phi}) \mathbf{1} = \mathbf{1},
\end{equation}
where $\mathbf{1}$ is the vector consisting of all $1$'s.
The matrix $\mathbf{M}$ will be referred to as the \emph{mutual matrix}. %
In the remainder of the paper we assume without loss of generality that the market is described in the balanced form, using $\mathbf{A},\mathbf{B}$ and the mutual matrix $\mathbf{M}$.
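Computationally, the scaling vectors can be obtained by Sinkhorn--Knopp iteration, alternately normalizing the row and column sums of the mutual matrix. A minimal sketch (in Python; the names are ours, and the fixed iteration count is a simplification -- in practice one would iterate to convergence):
\begin{verbatim}
import numpy as np

def balanced_form(A_hat, B_hat, iters=1000):
    # A_hat: row-stochastic men-by-women scores; B_hat: women-by-men.
    n = A_hat.shape[0]
    K = A_hat * B_hat.T            # K[i, j] = a_hat[i, j] * b_hat[j, i]
    phi = np.ones(n)
    psi = np.ones(n)
    for _ in range(iters):
        phi = n / (K @ psi)        # make the row sums of M equal one
        psi = n / (K.T @ phi)      # make the column sums of M equal one
    M = phi[:, None] * K * psi[None, :] / n
    return phi, psi, M
\end{verbatim}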
The bistochasticity constraint implies the following relationship: if the $\hat{b}_{ji}$'s increase (resp. decrease) by a factor of $\alpha$ simultaneously for all $j\in[n]$, then the scaling factor $\phi_i$, and hence all the $a_{ij}$'s for $j\in[n]$, must decrease (resp. increase) by the same factor to maintain bistochasticity of $\mathbf{M}$. In other words, a uniform increase (resp. decrease) of $\ensuremath{\mathsf{m}}_i$'s popularity among the women will lead to a proportional decrease (resp. increase) in $\phi_i$. Thus, we can view the $\phi_i$ as reflecting the ``average popularity'' of man $\ensuremath{\mathsf{m}}_i$ among the women: Loosely speaking, the smaller $\sum_{j=1}^n a_{ij}$ is, the more popular $\ensuremath{\mathsf{m}}_i$ is (reflected by larger values of $b_{ji}$'s).
We refer to the vector $\bm{\phi}$ and $\bm{\psi}$ as the men's and women's \emph{intrinsic fitness} vector, respectively (and note that a smaller intrinsic fitness value means the agent is more competitive). Note that since $\hat{\mathbf{A}} = \diag(\bm{\phi})^{-1} \mathbf{A}$ is row-stochastic, we conveniently have $\phi_i = \sum_{j=1}^n a_{ij}$, and similarly $\psi_j = \sum_{i=1}^n b_{ji}$ in the balanced form.
\begin{example}
[Markets with public scores]\label{Ex_public_scores}
We say a matching market has public scores when $\hat{\mathbf{a}}_i=\hat{\mathbf{a}}\in\ensuremath{\mathbb{R}}_+^n$ for all $i\in[n]$ and $\hat{\mathbf{b}}_j=\hat{\mathbf{b}}\in\ensuremath{\mathbb{R}}_+^n$ for all $j\in[n]$. In other words, agents on the same side of the market share an identical preference distribution. %
The fitness vectors are simply $\bm{\phi} = \hat{\mathbf{b}}^{-1}$ and $\bm{\psi} = \hat{\mathbf{a}}^{-1}$, where the inverse is taken component-wise. The mutual matrix in this case is $\mathbf{M} = \mathbf{J} := (n^{-1})_{i,j\in[n]}$.
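Indeed, with this choice of scaling we have $a_{ij} = \hat{a}_j/\hat{b}_i$ and $b_{ji} = \hat{b}_i/\hat{a}_j$, so
\begin{equation*}
m_{ij} = \frac{1}{n}\, a_{ij} b_{ji} = \frac{1}{n}\cdot\frac{\hat{a}_j}{\hat{b}_i}\cdot\frac{\hat{b}_i}{\hat{a}_j} = \frac{1}{n},
\end{equation*}
and $\mathbf{M}=\mathbf{J}$ is indeed bistochastic.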
\end{example}
\paragraph{Latent values.}
The logit-based preference model can be generated equivalently in the following way.
Let $\mathbf{X},\mathbf{Y}\in\ensuremath{\mathbb{R}}_+^{n\times n}$ be two random matrices with independent entries $X_{ij}$ (resp. $Y_{ji}$) sampled from $\Exp(a_{ij})$ (resp. $\Exp(b_{ji})$). The preference profile is then derived from $\mathbf{X}$ and $\mathbf{Y}$ as follows:
\[
\ensuremath{\mathsf{w}}_{j_1}\succeq_{\ensuremath{\mathsf{m}}_i}\ensuremath{\mathsf{w}}_{j_2} \quad\Longleftrightarrow\quad X_{ij_1} < X_{ij_2},
\]
\[
\ensuremath{\mathsf{m}}_{i_1} \succeq_{\ensuremath{\mathsf{w}}_j} \ensuremath{\mathsf{m}}_{i_2} \quad\Longleftrightarrow\quad Y_{ji_1} < Y_{ji_2}.
\]
We refer to each $X_{ij}$ (resp. $Y_{ji}$) for $i,j\in[n]$ as the \emph{latent value} (or simply {\em value}) of $\ensuremath{\mathsf{m}}_i$ (resp. $\ensuremath{\mathsf{w}}_j$) if matched with $\ensuremath{\mathsf{w}}_j$ (resp. $\ensuremath{\mathsf{m}}_i$).
Note that for every agent, a lower rank implies a lower latent value (and therefore lower values of rank and latent value are better).
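The latent-value formulation also makes stability easy to check numerically. A minimal sketch (in Python; the variable names are ours, and the matching is encoded as a permutation array \texttt{mu} with \texttt{mu[i] = j}):
\begin{verbatim}
import numpy as np

def sample_latent_values(A, B, rng):
    # X[i, j] ~ Exp(a_ij), Y[j, i] ~ Exp(b_ji); rate r <=> scale 1/r.
    return rng.exponential(1.0 / A), rng.exponential(1.0 / B)

def is_stable(X, Y, mu):
    n = len(mu)
    mu_inv = np.empty(n, dtype=int)
    mu_inv[mu] = np.arange(n)
    x = X[np.arange(n), mu]        # men's realized values under mu
    y = Y[np.arange(n), mu_inv]    # women's realized values under mu
    # (i, j) blocks iff both strictly prefer each other to their partners
    blocking = (X < x[:, None]) & (Y.T < y[None, :])
    return not blocking.any()
\end{verbatim}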
\paragraph{Regularity assumption.}
We study the asymptotic behavior of two-sided matching markets as the market size grows large. Informally, we restrict attention to contiguous markets, in the sense that, ex ante, no agent finds any other agent (on the opposite side of the market) disproportionately favorable or unfavorable relative to other agents. The condition is formalized as follows.
A matrix $\mathbf{L}\in\ensuremath{\mathbb{R}}^{n\times n}$ with non-negative entries is called \emph{$C$-bounded} for some constant $C\ge 1$ if $\ell_{ij}\in[1/C,C]$ for all $1\le i,j\le n$. When $\mathbf{L}$ is (bi-)stochastic, we will abuse notation and say $\mathbf{L}$ is $C$-bounded if $n\mathbf{L}$ satisfies the definition above.
\begin{assumption}[Contiguity]\label{Assumption_C_bounded}
We assume that, by choosing an appropriate scaling of $\bm{\phi}$ and $\bm{\psi}$ in the balanced form,
there exist absolute constants $C\in[1,\infty)$ and $n_0<\infty$ such that $\mathbf{A}$, $\mathbf{B}$, and $n\mathbf{M}=\mathbf{A}\circ \mathbf{B}^\top$ are all $C$-bounded for all $n\ge n_0$; that is, there exists $C\in[1,\infty)$ such that
\begin{equation}\label{Eqn_assumpt_C_bound_main}
\frac{1}{C} \le \min_{i,j\in[n]} \min\{a_{ij}, b_{ji}, nm_{ij}\} \le \max_{i,j\in[n]} \max\{a_{ij}, b_{ji}, nm_{ij}\} \le C \quad\text{ for all }\; n \ge n_0.
\end{equation}
\end{assumption}
\begin{remark}
It is easy to verify that Assumption~\ref{Assumption_C_bounded} holds when no agent finds any potential partner disproportionately favorable or unfavorable based on their canonical scores:
If $\hat\mathbf{A}$ and $\hat\mathbf{B}$ are $C$-bounded, then there exists a choice of $\bm{\phi}$ and $\bm{\psi}$ with all entries in $[n/C^2, nC^2]$ in the balanced form; further, $\mathbf{M} $ is $C^4$-bounded.
Thus, Assumption~\ref{Assumption_C_bounded} is equivalent to the existence of an absolute upper bound on the ratio between pairs of entries within the same row of $\mathbf{A}$ or $\mathbf{B}$; that is
\begin{equation}
\limsup_{n\to\infty} \max_{i,j_1,j_2\in[n]} \frac{a_{ij_1}}{a_{ij_2}} < \infty \qquad\text{ and }\qquad \limsup_{n\to\infty} \max_{j,i_1,i_2\in[n]} \frac{b_{ji_1}}{b_{ji_2}} < \infty.
\end{equation}
This condition is agnostic to scaling of the matrices and hence easy to certify. However, the lower and upper bounds in \eqref{Eqn_assumpt_C_bound_main} are more convenient in our later analysis, where the constant $C$ will make an appearance (although often made implicit in the results).
\end{remark}
\begin{remark}
Assumption~\ref{Assumption_C_bounded} offers a strong contiguity condition on the market, in that the attractiveness among all pairs of men and women varies by at most an (arbitrarily large) constant factor as the market grows. We expect the results to hold under a weaker assumption, which can be described through the spectral gap of the matrix $\mathbf{M}$. Recall that, as a bistochastic matrix, $\mathbf{M}$ has a largest eigenvalue of $1$ and all other eigenvalues of magnitude at most $1$. We may think of the market as contiguous in this weaker sense if the spectral gap of $\mathbf{M}$, given by $1-|\lambda_{\max}(\mathbf{M}-\mathbf{J})|$, is bounded away from zero as the market grows. The spectral gap is a common and powerful notion when studying the structure of networks and communities.\footnote{In our model of the matching market, the spectral gap of $\mathbf{M}$ describes the extent to which the market interconnects globally (contiguity) or decomposes into multiple sub-markets (modularity). A larger spectral gap means that the market is more cohesive, with more uniform or homogeneous preferences. For instance, the uniform market with $\mathbf{M}=\mathbf{J}$ has a unit spectral gap, the maximum possible value. On the other hand, a smaller spectral gap means that the market is more clustered, with a clearer boundary between communities and poorly mixed preferences. For instance, any block-diagonal bistochastic matrix (with more than one block) has a zero spectral gap, and corresponds to a market that decomposes into two or more independent sub-markets --- one cannot hope to have a uniform structure result in such markets.} We impose Assumption \ref{Assumption_C_bounded} as it substantially simplifies the analysis and exposition.
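For concreteness, the spectral gap in question is straightforward to compute (a small sketch in Python; \texttt{M} is assumed bistochastic):
\begin{verbatim}
import numpy as np

def spectral_gap(M):
    n = M.shape[0]
    J = np.full((n, n), 1.0 / n)
    return 1.0 - np.abs(np.linalg.eigvals(M - J)).max()
\end{verbatim}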
\end{remark}
\subsection{Estimating the (unconditional) stability probability}\label{Subsec_uncond_stable_prob}
Using the concentration inequalities given in Lemmas~\ref{Lemma_weighted_exp_chernoff} and \ref{Lemma_wgt_exp_cond_concentration}, we derive the following upper bound, which essentially characterizes the (approximate) probability that a partial matching of size $n-\floor{\delta n}$ is stable with a probable value vector for the women (i.e., $\mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1$).
\begin{restatable}{proposition}{propEqXYBound}\label{Prop_EqXY_bound}
For a fixed partial matching $\mu'$ on $\mathcal{M}'$ and $\mathcal{W}'$ of size $n-\floor{\delta n}$,
\begin{equation}\label{Eqn_prop_EqXY_target_bound}
\ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot\mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})] \le e^{o(n)+o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i}.
\end{equation}
\end{restatable}
The proof of Proposition~\ref{Prop_EqXY_bound} will be deferred to Appendix~\ref{appendix_extra_proofs}, where we will develop intermediate results that characterize the typical behavior of $\mathbf{X}_{\mathcal{M}'}$ and $\mathbf{Y}_{\mathcal{W}'}$ relative to each other (see Appendix~\ref{Append_proof_prop_eigenvec_of_M}).
Proposition~\ref{Prop_EqXY_bound} provides evidence that, heuristically, the expected number of stable partial matchings should be sub-exponential.
\begin{restatable}[Number of stable partial matchings]{corollary}{corSubExpNumOfStableMatch}
\label{Cor_subexp_num_stable_match}
Fix any $\delta > 0$ and $c\in(0,1/2)$. Let $N_\delta$ denote the number of stable partial matchings of size $n-\floor{\delta n}$ satisfying the condition in Corollary~\ref{Cor_Rstar_likely} (i.e., $\mathcal{R}^\star$) in a random instance of the matching market. Then, $\ensuremath{\mathbb{E}}[N_\delta] \le \exp(o_\delta(n))$ provided that $n$ is sufficiently large. Further, with probability at least $1-e^{-n^c}$, the condition $\mathcal{R}^\star$ is satisfied by all $\delta$-truncated stable matchings.
\end{restatable}
\begin{remark}
Corollary~\ref{Cor_subexp_num_stable_match} falls short of establishing a sub-exponential bound for the expected number of stable matchings in two aspects.
\begin{itemize}
\item While stable matchings that violate $\mathcal{R}^\star$ (when truncated) will not exist with high probability, we have not yet proved a bound for the expected number of such stable matchings. We believe that this can be overcome with a refined analysis of deferred acceptance, which should lead to stronger results than Lemma~\ref{Lemma_DA_prop_num}. Note that all high probability results in Appendix~\ref{Append_weak_regular_scores} after this lemma come with an upper bound on the expected number of stable matchings under various conditions.
\item In general, it is possible to have multiple, in the worst case $\floor{\delta n}!$, stable matchings that produce the same $\delta$-truncated stable partial matching.
\end{itemize}
We believe that a sub-exponential bound for the number of stable matchings is possible with a more refined analysis.
\end{remark}
\subsection{Opportunity sets and an eigenspace property for the value vectors}\label{Subsec_proximity_eigensubsp}
Our next result states that value vectors in stable matchings are not only controlled in terms of their first and second moments, but also in a sense ``close'' to a constant vector $t\mathbf{1}$ for some $t\in\ensuremath{\mathbb{R}}_+$; such constant vectors are eigenvectors of $\mathbf{M}$ corresponding to its maximal eigenvalue $\lambda_1(\mathbf{M})=1$.
Let us fix the women's values to be $\mathbf{Y}_{\mathcal{W}'}=\mathbf{y}$ and consider the implication for the men's outcome in any (partial) matching $\mu'$. For a man $\ensuremath{\mathsf{m}}_i$ with value $x_i$, the expected number of blocking pairs between him and the women, conditional on $x_i$ and $\mathbf{y}$, is
\begin{equation*}
\sum_{j\ne \mu'(i)} (1-e^{-a_{ij} x_i}) (1-e^{-b_{ji} y_j}) \approx \sum_{j=1}^n a_{ij} b_{ji} x_i y_j = n (\mathbf{M} \mathbf{y})_i x_i.
\end{equation*}
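
As a quick numerical sanity check of this linearization (ours, not part of the argument), the following Python sketch compares the exact sum above with $n(\mathbf{M}\mathbf{y})_i x_i$ on a made-up instance; the normalization $m_{ij}=a_{ij}b_{ji}/n$ used below is how we read the definition of $\mathbf{M}$, stated here as an assumption.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 300
A = 1.0 + 0.2 * rng.random((n, n))   # hypothetical scores a_ij
B = 1.0 + 0.2 * rng.random((n, n))   # hypothetical scores b_ji
M = A * B.T / n                      # assumed normalization of M

mu = rng.permutation(n)              # an arbitrary matching
x = 0.02 * rng.random(n)             # small values, as in stable matchings
y = 0.02 * rng.random(n)

i = 0
exact = sum((1 - np.exp(-A[i, j] * x[i])) * (1 - np.exp(-B[j, i] * y[j]))
            for j in range(n) if j != mu[i])
approx = n * (M @ y)[i] * x[i]       # includes the negligible j = mu(i) term
print(f"exact {exact:.5f} vs linearized {approx:.5f}")   # close for small x, y
\end{verbatim}
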
The next result suggests that, in a typical market, the burden of avoiding blocking pairs falls roughly equally on the men in the sense that the entries of $\mathbf{M} \mathbf{Y}_{\mathcal{W}'}$ are largely the same.
\begin{restatable}{lemma}{PropEigenVecOfMHighProf}\label{Prop_eigenvec_of_M_high_prob}
Let $\mu'$ be a partial matching of size $n-\floor{\delta n}$ on $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$. Fix any $\zeta > 0$, and let
\begin{equation}
\Omega_{\text{eig}}(\zeta) := \left\{(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}^n : \exists t\in\ensuremath{\mathbb{R}}_+, \sum_{i=1}^n \mathbbm{1}\left\{|(\mathbf{M} \mathbf{y})_i-t| \ge \sqrt{\zeta} t \right\} \le \sqrt{\zeta} n\right\}.
\end{equation}
Then
\begin{equation}\label{Eqn_Prop_eigenvec_of_M_Expectation_is_small}
\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}\backslash\Omega_{\text{eig}}(\Theta(\delta)+\zeta)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big]
\le %
\exp(o_\delta(n)-\Theta(\zeta^2 n)) \cdot \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'}a_{i,\mu'(i)}b_{\mu'(i),i},
\end{equation}
with the implicit constants uniform over all $\mathcal{M}',\mathcal{W}'$, and $\mu'$.
\end{restatable}
The proof of Lemma~\ref{Prop_eigenvec_of_M_high_prob} is deferred to Appendix~\ref{Append_proof_prop_eigenvec_of_M}.
We note the following immediate corollary of this lemma; its proof is similar to that of Lemma~\ref{Lemma_reduction_to_q} and is deferred to Appendix~\ref{Append_proof_no_stable_outside_oeigz}.
\begin{restatable}{corollary}{CorNoStableOutsideOeigz}\label{Cor_no_stable_outside_Oeigz}
For $\delta > 0$ sufficiently small, there exists a choice of $\zeta =\zeta(\delta) > 0$ such that $\zeta\to 0$ as $\delta \to 0$ and that
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\Omega_{\text{eig}}(\zeta)) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$.
\end{restatable}
Corollary~\ref{Cor_no_stable_outside_Oeigz} roughly states that, in a contiguous market, conditioning on the women's outcomes in a stable matching has an almost even impact on the men's values.
\section{Intuition and Proof Ideas \label{sec_prep_proof}}
This section offers intuition and the key ideas behind the proofs.
\subsection{Intuition}
Let us start by providing high-level intuition for both why the result is true and how we should expect the proof to go. The actual proof does not follow the intuition exactly due to some technical difficulties that need to be overcome. It is possible that one can find a proof that follows the intuition below more directly.
At a very high level, the result follows from a {\em union bound}. There are $n!$ potential matchings. Based on {\em a-priori} preferences, for each matching $\mu$, one could compute the probability $P_\mu$ that $\mu$ is stable under the realized preferences. A union bound argument just establishes that
\begin{equation}\label{eq:int1}
\sum_{\text{$\mu$ does not satisfy the conditions of \eqref{Eqn_happiness_dist_main_thm_whp}}} P_\mu = \exp(-\Omega_\epsilon(n))
\end{equation}
Establishing \eqref{eq:int1} directly appears to be difficult. A more approachable statement is a universal bound on the number of stable matchings overall:
\begin{equation}\label{eq:int2}
\sum_{\mu} P_\mu = \exp(o(n)).
\end{equation}
That is, in expectation, there are not so many stable matchings.
Consider the triplet of random variables $(X,Y,\mu)$, where $X$ and $Y$ are preferences sampled according to the model, and $\mu$ is a uniformly random matching. Let $\mathcal{S}$ be the event that $\mu$ is stable under the preference profile $(X,Y)$. If there are few stable matchings overall, as \eqref{eq:int2} implies, then we have
\begin{equation}
\label{eq:int3}
\ensuremath{\mathbb{P}}(\mathcal{S})=\exp(o(n))\cdot n!^{-1}
\end{equation}
Another way of uniformly sampling from $\mathcal{S}$ is as follows.
First, sample $(X_1,Y,\mu)\in_U \mathcal{S}$. Then resample $X_2$ conditioned on $(X_2,Y,\mu)\in \mathcal{S}$. The triple $(X_2,Y,\mu)$ is a uniform element of $\mathcal{S}$. Note that for a fixed $(Y,\mu)$ the marginal distribution of $X_2$ conditioned on
$(X_2,Y,\mu)\in\mathcal{S}$ is fairly simple to reason about: each member should prefer the pairing assigned to them by $\mu$ to all other potential blocking matches. In such a resampling, as we shall see, the empirical exponential distribution appears naturally from large deviations theory.
Suppose we prove that for all $(Y,\mu)$,
\begin{equation}
\label{eq:int4}
\ensuremath{\mathbb{P}}_{X_2:(X_2,Y,\mu)\in\mathcal{S}}(\text{$X_2$ does not satisfy the conditions of \eqref{Eqn_happiness_dist_main_thm_whp}} ) = \exp(-\Omega_\epsilon(n))
\end{equation}
Putting these together, we would get
\begin{multline*}
\ensuremath{\mathbb{P}}\big((X_2,Y,\mu)\in \mathcal{S} \wedge(\text{$X_2$ does not satisfy the conditions of \eqref{Eqn_happiness_dist_main_thm_whp}} )\big) \\ = \ensuremath{\mathbb{P}}((X_1,Y,\mu)\in \mathcal{S}) \cdot \exp(-\Omega_\epsilon(n)) = n!^{-1}\cdot \exp(-\Omega_\epsilon(n)),
\end{multline*}
implying \eqref{eq:int1} (together with a similar statement about $Y$).
A certain amount of technical work is needed to make the above blueprint go through. In particular, since our bounds need to be
fairly tight, we need to worry about tail events. We end up having to perform the above resampling trick multiple times.
\paragraph{Why do we need the boundedness assumption~\ref{Assumption_C_bounded}?} It is worth noting that while the boundedness assumption might not be the weakest assumption under which our results hold, some assumptions on the market are inevitable. In particular, if the market can be split into two independent balanced markets $A$ and $B$, then there is no connection between the fortunes of the agents in market $A$ and those of the agents in market $B$, and the empirical distribution of values on each side will be a mixture of two exponential distributions.
Things will get even more complicated if markets $A$ and $B$ are not entirely independent, but are connected by a small number of agents. It is still possible that some version of Theorem~\ref{Thm_main_happiness_dist} holds, but it will need to depend on the eigenspaces corresponding to the large eigenvalues of the matrix.
It is worth noting that even \eqref{eq:int2} fails to hold without the boundedness assumption. Consider a market consisting of $n/2$ small markets with just $2$ men and $2$ women in each. Under uniform preferences, the expected number of stable matchings within each small market is $9/8$; thus, by independence across the small markets,
\begin{equation*}
\sum_{\mu} P_\mu = (9/8)^{n/2} \neq \exp(o(n)).
\end{equation*}
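
The $9/8$ figure can be checked by brute force; the following short Python sketch (ours) enumerates all $16$ equally likely preference profiles of a $2\times 2$ market and averages the number of stable matchings over the two perfect matchings.
\begin{verbatim}
from itertools import product

def stable(match, men_pref, women_pref):
    # match[m] = woman assigned to man m; pref[x][0] is agent x's favorite.
    for m in range(2):
        for w in range(2):
            if match[m] == w:
                continue
            m_prefers = men_pref[m].index(w) < men_pref[m].index(match[m])
            current = match.index(w)          # woman w's current partner
            w_prefers = women_pref[w].index(m) < women_pref[w].index(current)
            if m_prefers and w_prefers:       # (m, w) is a blocking pair
                return False
    return True

orders = [(0, 1), (1, 0)]
total = 0
for mp0, mp1, wp0, wp1 in product(orders, repeat=4):
    men_pref, women_pref = [mp0, mp1], [wp0, wp1]
    total += sum(stable(list(match), men_pref, women_pref)
                 for match in [(0, 1), (1, 0)])
print(total / 16)   # 1.125 = 9/8
\end{verbatim}
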
\subsection{Proof sketch}
\paragraph{Case with uniformly random preferences.}
Let us first look at the classic case where agents' preferences are generated independently and uniformly at random, i.e.,
all canonical scores equal $1/n$. %
The proof in this case is more straightforward due to the symmetry among agents and the established high probability bound on the number of stable matchings \citep{pittel1989average}. We nevertheless keep the discussion in this section informal.
For a given matching $\mu$, we study the conditional distribution of the value vectors $\mathbf{X}(\mu),\mathbf{Y}(\mu)\in\ensuremath{\mathbb{R}}^n$ conditional on stability of $\mu$. Conditional also on the women's value vector $\mathbf{Y}(\mu)=\mathbf{y}$, man $\ensuremath{\mathsf{m}}_i$'s value $X_i(\mu)$ must satisfy $X_i(\mu) < X_{ij}$ for each $j\ne \mu(i)$ with $Y_{ji} < y_j$ in order not to form a blocking pair. Since $X_{ij}$ and $Y_{ji}$ are i.i.d. samples from $\Exp(1)$ for each $j\ne \mu(i)$, one should expect $X_i(\mu)$ to be effectively bounded by the minimum of about $\sum_{j\ne \mu(i)} (1 - e^{-y_j}) \approx \|\mathbf{y}\|_1$ independent $\Exp(1)$ random variables. Such a constraint acts independently on each $X_i$ (conditional on $\mathbf{Y}(\mu)=\mathbf{y}$), and therefore in the posterior distribution one should expect $\mathbf{X}(\mu)$ to behave like i.i.d. samples from $\Exp(\|\mathbf{y}\|_1)$.
Concretely, condition on $\mathbf{U}(\mu)=\mathbf{u}=F(\mathbf{x})$ and $\mathbf{V}(\mu)=\mathbf{v}=F(\mathbf{y})$, where we recall that $F(z) = 1-e^{-z}$ is the CDF of $\Exp(1)$ and is applied component-wise to the value vectors. The likelihood of $(\mathbf{x},\mathbf{y})$ when $\mu$ is stable is $\prod_{i\in[n]}\prod_{j\ne \mu(i)} (1-u_iv_j)$ \citep{pittel1989average}. With a crude first-order approximation (namely, $1-z\approx e^{-z}$), this expression can be approximated by
\begin{equation}\label{Eqn_approx_intuition}
\prod_{\substack{i,j\in[n]\\j\ne \mu(i)}} (1-u_iv_j)\approx \exp\left(-\sum_{i,j\in[n]}x_i y_j\right)=\exp(-\|\mathbf{x}\|_1\|\mathbf{y}\|_1),
\end{equation}
where we also put in the terms $x_i y_{\mu(i)}$ for $i\in [n]$ despite their absence in the original product. By Bayes' rule, conditional on $\mathbf{Y}(\mu)=\mathbf{y}$ and that $\mu$ is stable, the distribution of $\mathbf{X}(\mu)$ is approximately $p(\mathbf{x}|\mu\in\mathcal{S}, \mathbf{Y}=\mathbf{y})\propto \exp(-\|\mathbf{x}\|_1\|\mathbf{y}\|_1)\cdot\prod_{i=1}^n e^{-x_i} = \prod_{i=1}^n \exp(-(1+\|\mathbf{y}\|_1)x_i)$. Note that this is the joint density of the $n$-fold product of $\Exp(1+\|\mathbf{y}\|_1)$. Our main theorems in the case with uniformly random preferences follow directly from the convergence of empirical measures.
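
The following simulation sketch (ours, purely heuristic and not part of the formal argument) illustrates this conclusion: it runs men-proposing deferred acceptance on a market with i.i.d. $\Exp(1)$ latent values (smaller is better) and measures how far the empirical distribution of the men's matched values, rescaled to unit mean, is from $\Exp(1)$; the market size and seed are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 1000
X = rng.exponential(size=(n, n))   # X[i, j]: man i's latent value for woman j
Y = rng.exponential(size=(n, n))   # Y[j, i]: woman j's latent value for man i

men_pref = np.argsort(X, axis=1)                        # best (smallest) first
women_rank = np.argsort(np.argsort(Y, axis=1), axis=1)  # rank of man i for woman j

# Men-proposing deferred acceptance.
next_prop = np.zeros(n, dtype=int)   # next position on each man's list
partner = -np.ones(n, dtype=int)     # partner[w] = man currently held by woman w
free = list(range(n))
while free:
    m = free.pop()
    w = men_pref[m, next_prop[m]]
    next_prop[m] += 1
    if partner[w] < 0:
        partner[w] = m
    elif women_rank[w, m] < women_rank[w, partner[w]]:
        free.append(partner[w])
        partner[w] = m
    else:
        free.append(m)

values = np.array([X[partner[w], w] for w in range(n)])  # men's matched values
z = np.sort(values / values.mean())                      # rescale to unit mean

# Crude Kolmogorov-Smirnov-type distance to the Exp(1) CDF.
emp = np.arange(1, n + 1) / n
print(np.abs(emp - (1 - np.exp(-z))).max())              # small for large n
\end{verbatim}
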
\paragraph{The general case.}
The entire result can be viewed abstractly through the following lens: For any matching $\mu$, we expect the value vectors $(\mathbf{X}(\mu),\mathbf{Y}(\mu))$ to behave ``nicely'' with very high probability conditional on stability of $\mu$, so that even if we apply a union bound over all stable matchings (which we will show to be ``rare'' separately) it is still unlikely to see any ``bad'' value vectors. Doing so requires a careful analysis of the conditional distribution of values (given stability) $\mathcal{D}_\mu\in\Delta(\ensuremath{\mathbb{R}}^n_+\times\ensuremath{\mathbb{R}}^n_+)$, which depends on both the (unconditional) preference distribution (the ``prior'') and the conditional probability that $\mu$ is stable given value vectors $\mathbf{X}(\mu)=\mathbf{x}$ and $\mathbf{Y}(\mu)=\mathbf{y}$, which we will denote by $p_\mu(\mathbf{x},\mathbf{y})$. We will define the ultimate ``nice'' event to be the $\ensuremath{\epsilon}$-proximity of the empirical distribution of values (or rescaled ranks) to some exponential distribution, but unsurprisingly it is hard to analyze this event directly from $\mathcal{D}_\mu$, which is itself complicated, in a single step. Instead, we will follow a ``layer-by-layer peeling'' of the desirable events. Namely, we will find a nested sequence of subsets $\ensuremath{\mathbb{R}}^n_+\times \ensuremath{\mathbb{R}}^n_+ = \Omega_0 \supseteq \Omega_1 \supseteq \cdots\supseteq \Omega_K$ representing events on the joint value vectors of a stable matching, with $\Omega_K$ the desired event that the empirical distribution of values for men is $\ensuremath{\epsilon}$-close to some exponential distribution. Step by step, we will show that a stable matching, conditional on its value vectors lying in $\Omega_i$, must have value vectors in $\Omega_{i+1}$ with very high probability. Here is the roadmap to establishing these increasingly ``nice'' events:
\begin{enumerate}[listparindent=1cm, label=(\alph*)] \item\label{Step_1_in_proof_sketch} As a first step, we approximate $p_\mu(\mathbf{x},\mathbf{y})$, the likelihood of value vectors $\mathbf{x}$ and $\mathbf{y}$ in a stable matching $\mu$, by the function $q(\mathbf{x},\mathbf{y})=\exp(-n\mathbf{x}^\top\mathbf{M}\mathbf{y})$. That is, the log likelihood of value vectors in a stable matching is approximately bilinear in the two vectors. To establish this, we identify a weak regularity condition on the value vectors of all stable matchings in terms of the first and second moments and extremal quantiles of the value vectors, under which the approximation holds (see Section~\ref{subsec_reg_of_happiness_moment_conditions}). Such a condition is met by all stable matchings with high probability (see Appendix~\ref{Append_weak_regular_scores} for details). The proof primarily consists of standard analysis of the deferred acceptance algorithm and careful use of first- and second-order approximation of $p_\mu(\mathbf{x},\mathbf{y})$. Here we use the fact that the men-proposing and women-proposing deferred acceptance algorithms output the extremal outcomes with respect to the two sides' values among all possible stable matchings.
\item\label{Step_2_in_proof_sketch} In the expression for $q(\vx,\vy)$, the value vectors relate through the matching matrix $\mathbf{M}$. However, we show next that, in stable matchings, we can further simplify things by approximately factoring $n\mathbf{X}(\mu)^\top\mathbf{M}\mathbf{Y}(\mu)$ into a product $\|\mathbf{X}(\mu)\|_1\|\mathbf{Y}(\mu)\|_1$ of sums of values on the two sides. More specifically, both $\mathbf{M}\mathbf{Y}(\mu)$ and $\mathbf{M}^\top\mathbf{X}(\mu)$ lie near the maximal eigenspace of $\mathbf{M}$, which is the span of $\mathbf{1}$ under Assumption~\ref{Assumption_C_bounded} (see Section~\ref{Subsec_proximity_eigensubsp}). The proof uses a fixed point argument to deduce that $\mathbf{M}\mathbf{Y}(\mu)$ depends almost deterministically on $\mathbf{M}^\top\mathbf{X}(\mu)$ and, symmetrically, $\mathbf{M}^\top\mathbf{X}(\mu)$ on $\mathbf{M}\mathbf{Y}(\mu)$, which forces both quantities to lie near the eigenspace.
Along the way, we also deduce an upper bound for the (unconditional) probability of $\mu$ being stable (see Section~\ref{Subsec_uncond_stable_prob}), suggesting a sub-exponential upper bound on the typical number of stable matchings (Corollary~\ref{Cor_subexp_num_stable_match}).
\item\label{Step_3_in_proof_sketch} Under the previous event, the men's values behave approximately like i.i.d. exponential samples with rate $\|\mathbf{Y}(\mu)\|_1$ conditional on stability of $\mu$ and $\mathbf{Y}(\mu)$ -- in fact, they are conditionally independent and nearly identically distributed. The result on the empirical distribution of men's values
follows immediately from a concentration inequality of Dvoretzky–Kiefer–Wolfowitz (DKW) type, generalized for nearly identically distributed independent random variables (Lemma~\ref{Lemma_dkw_non_identical}).%
\item Finally, we translate values into ranks. Using the classic first- and second-moment method, we show that, for a majority of the agents in the market, the rescaled rank (based on one's own scores) lies close to the value. This implies Theorem~\ref{Thm_main_rank_dist}.
\end{enumerate}
There is one caveat, however: In \ref{Step_1_in_proof_sketch}, a second-order expansion of $p_\mu(\mathbf{x},\mathbf{y})$ is required in order to justify the approximation with $q(\mathbf{x},\mathbf{y})$. As a result, we need to control the second-order behavior of the values, i.e., $\|\mathbf{X}(\mu)\|_2^2=\sum_{i=1}^n X_i(\mu)^2$, in any stable matching $\mu$. However, the second moment cannot be easily controlled due to the heavy tail of $\Exp(1)$ (indeed, the moment generating function of $X^2$ does not exist for $X\sim\Exp(1)$). To resolve this issue, we perform a truncation in the upper $\delta/2$-quantile of the values on each side. By choosing $\delta$ sufficiently small, we can ensure that the truncation only affects the empirical distribution by an arbitrarily small amount in the $\ell^\infty$ norm. As the price to pay, in \ref{Step_2_in_proof_sketch} and \ref{Step_3_in_proof_sketch}, we will have to deal not just with all stable matchings, but with all \emph{partial} matchings, on any $(1-\delta)$-fraction of the market, that are stable. See Section~\ref{Subsec_partial_match_and_truncation} for the technical definition of truncated and partial matchings.
\section{Preliminaries}
\subsection{Probability of stability and its approximation}
For each matching $\mu$, define the function $p_\mu: \ensuremath{\mathbb{R}}^n_+\times\ensuremath{\mathbb{R}}^n_+ \to [0,1]$ to be the probability that $\mu$ is stable given the values of the men and women in $\mu$. That is,
\begin{equation}\label{Eqn_def_pmu}
p_\mu(\mathbf{x},\mathbf{y}) = \ensuremath{\mathbb{P}}(\mu\in\mathcal{S} | \mathbf{X}(\mu)=\mathbf{x},\mathbf{Y}(\mu)=\mathbf{y}).
\end{equation}
Just like the integral formula used in \citet{knuth1976mariages} and \citet{pittel1989average,pittel1992likely} to study matching markets with uniformly random preferences, the probability of a matching $\mu$ being stable can be similarly characterized by an integral
\begin{multline}\label{Eqn_integral_formula_orig}
\ensuremath{\mathbb{P}}(\mu\in\mathcal{S}) = \ensuremath{\mathbb{E}}_{\mathbf{X}\sim\bigotimes_{i=1}^n\Exp(a_{i,\mu(i)}),\mathbf{Y}\sim\bigotimes_{i=1}^n\Exp(b_{i,\mu^{-1}(i)})}[p_\mu(\mathbf{X},\mathbf{Y})] \\
= \int_{\ensuremath{\mathbb{R}}_+^n\times \ensuremath{\mathbb{R}}_+^n} p_\mu(\mathbf{x},\mathbf{y}) \prod_{i=1}^n f_{a_{i,\mu(i)}}(x_i)f_{b_{i,\mu^{-1}(i)}}(y_i) \ensuremath{\,d}\mathbf{x} \ensuremath{\,d}\mathbf{y}.
\end{multline}
The function $p_\mu$ can be further expressed in closed form. Condition on the value vectors $\mathbf{X}(\mu)=\mathbf{x}$ and $\mathbf{Y}(\mu)=\mathbf{y}$ and sample the remaining values $X_{ij}$ and $Y_{ji}$ for all $j\ne \mu(i)$. Each pair $(i,j)$ with $j\ne \mu(i)$ forms a blocking pair when $X_{ij} < x_i$ and $Y_{ji} < y_j$, an event that happens with probability $(1-\exp(-a_{ij}x_i))(1-\exp(-b_{ji}y_j))$. For $\mu$ to be stable, there must be no blocking pairs and thus
\begin{equation}
p_\mu(\mathbf{x},\mathbf{y}) = \prod_{\substack{i,j\in[n]\\j\ne \mu(i)}} \left(1 - \big(1-e^{-a_{ij}x_i}\big)\big(1-e^{-b_{ji}y_j}\big) \right).
\end{equation}
Under Assumption~\ref{Assumption_C_bounded}, i.e., $\max_{i,j_1,j_2} a_{ij_1}/a_{ij_2} \le C^2$ and $\max_{j,i_1,i_2} b_{ji_1}/b_{ji_2} \le C^2$, we observe a simple upper bound
\begin{equation}\label{Eqn_naive_bound_pxy}
p_\mu(\mathbf{x},\mathbf{y}) \le \prod_{\mu(i)\ne j} \Big(1 - \big(1-e^{-\hat{x}_i/C^2}\big)\big(1-e^{-\hat{y}_j/C^2}\big) \Big),
\end{equation}
where $\hat{x}_i = x_i a_{i,\mu(i)}$ and $\hat{y}_j = y_j b_{j,\mu^{-1}(j)}$ for $i,j\in[n]$ are the renormalized values (thus named because they have unit mean).
This bound is fairly conservative and crude for our final purpose, but will prove useful for establishing preliminary results.
To further simplify the analysis, we recognize that, through a first-order approximation,
\begin{equation}\label{Eqn_goal_approx_p_with_q}
p_\mu(\mathbf{x},\mathbf{y}) \approx \prod_{\substack{i,j\in[n]\\j\ne \mu(i)}}\left(1-a_{ij}b_{ji} x_i y_j\right) \le \exp\Big(-\sum_{\substack{i,j\in[n]\\j\ne \mu(i)}} a_{ij}b_{ji} x_i y_j\Big) \approx \exp(-n \mathbf{x}^\top \mathbf{M} \mathbf{y}).
\end{equation}
Define the function
\begin{equation*}
q(\mathbf{x},\mathbf{y}) := \exp(-n \mathbf{x}^\top \mathbf{M} \mathbf{y}) = \exp\bigg(-n \sum_{i,j=1}^n m_{ij}x_i y_j\bigg).
\end{equation*}
In the next section we discuss conditions under which the function $q(\mathbf{x},\mathbf{y})$ offers a good approximation for $p_\mu(\mathbf{x},\mathbf{y})$.
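
As a preview, the following Python sketch (ours, with hypothetical scores and $\mu$ taken to be the identity matching) evaluates both sides of \eqref{Eqn_goal_approx_p_with_q} in log scale on value vectors of the size typical of stable matchings; as before, $m_{ij}=a_{ij}b_{ji}/n$ is our assumed normalization of $\mathbf{M}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n = 500
A = 1.0 + 0.3 * rng.random((n, n))   # hypothetical scores a_ij
B = 1.0 + 0.3 * rng.random((n, n))   # hypothetical scores b_ji
M = A * B.T / n                      # assumed normalization of M

# Value vectors at the scale typical of stable matchings (~ log(n)/n).
x = rng.exponential(size=n) * np.log(n) / n
y = rng.exponential(size=n) * np.log(n) / n

# Exact log p_mu for mu = identity: sum over j != i of
# log(1 - (1 - e^{-a_ij x_i})(1 - e^{-b_ji y_j})).
P = np.log1p(-(1 - np.exp(-A * x[:, None])) * (1 - np.exp(-B.T * y[None, :])))
log_p = P.sum() - np.trace(P)        # drop matched pairs j = mu(i) = i
log_q = -n * x @ M @ y
print(f"log p_mu = {log_p:.1f}, log q = {log_q:.1f}")   # close on the scale of n
\end{verbatim}
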
\subsection{Partial matchings and truncation} \label{Subsec_partial_match_and_truncation}
In order to study approximately stable matchings, as well as for technical reasons, we need to consider matchings that are stable on a significant subset of the market. We first formalize general partial matchings and then describe a
particular way to form stable partial matchings. %
Let $\mathcal{M}'\subseteq \mathcal{M}$ and $\mathcal{W}'\subseteq \mathcal{W}$ be subsets of the men and women with cardinality $|\mathcal{M}'|=|\mathcal{W}'|= n'$. A {\em partial matching} $\mu':\mathcal{M}'\to\mathcal{W}'$ is a bijection between $\mathcal{M}'$ and $\mathcal{W}'$. Denote the values of men among $\mathcal{M}'$ and women among $\mathcal{W}'$ in the partial matching $\mu'$ by $\mathbf{X}_{\mathcal{M}'}(\mu')$ and $\mathbf{Y}_{\mathcal{W}'}(\mu')$, respectively. While it may be natural to view $\mathbf{X}_{\mathcal{M}'}(\mu')$ and $\mathbf{Y}_{\mathcal{W}'}(\mu')$ as $n'$-dimensional vectors, we choose to view them as $n$-dimensional vectors whose components corresponding to men in $\mathcal{M}\backslash\mathcal{M}'$ and women in $\mathcal{W}\backslash\mathcal{W}'$ are zero (recall that since small is better, zero is the best possible latent value). Therefore, conditional on $\mathbf{X}_{\mathcal{M}'}(\mu')=\mathbf{x}'$ and $\mathbf{Y}_{\mathcal{W}'}(\mu')=\mathbf{y}'$ for $\mathbf{x}',\mathbf{y}'\in\ensuremath{\mathbb{R}}^n$ supported on $\mathcal{M}'$ and $\mathcal{W}'$, respectively, the probability that $\mu'$ is stable (as a matching between $\mathcal{M}'$ and $\mathcal{W}'$) is simply $p_{\mu'}(\mathbf{x}',\mathbf{y}')$.%
Given a full stable matching $\mu$ and any $\delta > 0$, we define the following routine to construct a stable partial matching of size $n-\floor{\delta n}$: Let $\bar{\mathcal{M}}_{\mu,\delta/2}\subseteq \mathcal{M}$ be the subset of $\floor{\delta n / 2}$ men with the largest values (i.e., the least happy men) in $\mu$, and similarly let $\bar{\mathcal{W}}_{\mu,\delta/2}\subseteq \mathcal{W}$ be the set of $\floor{\delta n / 2}$ least happy women. Construct a subset $\mathcal{M}'_{\mu,\delta}\subseteq \mathcal{M} \backslash (\bar{\mathcal{M}}_{\mu,\delta/2} \cup \mu(\bar{\mathcal{W}}_{\mu,\delta/2}))$ of cardinality $n-\floor{\delta n}$. This is always possible because $|\bar{\mathcal{M}}_{\mu,\delta/2} \cup \mu(\bar{\mathcal{W}}_{\mu,\delta/2})| \le 2 \floor{\delta n / 2} \le \floor{\delta n}$, and in fact there can be multiple ways to choose $\mathcal{M}'_{\mu,\delta}$. The specific way $\mathcal{M}'_{\mu,\delta}$ is chosen (when $|\mathcal{M} \backslash (\bar{\mathcal{M}}_{\mu,\delta/2} \cup \mu(\bar{\mathcal{W}}_{\mu,\delta/2}))| > n-\floor{\delta n}$) is irrelevant to our discussion, but it may be helpful to assume that the choice is made based on some canonical ordering of the men so that there is no extra randomness. Let $\mu_\delta:\mathcal{M}'_{\mu,\delta}\to\mu(\mathcal{M}'_{\mu,\delta})$ be the partial matching induced by $\mu$ on $\mathcal{M}'_{\mu,\delta}$ and their partners. Define the \emph{$\delta$-truncated value} vectors for $\mu$ to be $\mathbf{X}_\delta(\mu):= \mathbf{X}_{\mathcal{M}'_{\mu,\delta}}(\mu_\delta)$ and $\mathbf{Y}_\delta(\mu):= \mathbf{Y}_{\mu(\mathcal{M}'_{\mu,\delta})}(\mu_\delta)$.
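
For concreteness, the truncation routine just described can be transcribed as the following Python sketch; the function and variable names are ours, and $\mu$ is encoded as a permutation array.
\begin{verbatim}
import numpy as np

def truncate(mu, x_men, y_women, delta):
    """delta-truncation: drop the least happy ~delta/2 fraction on each side."""
    n = len(mu)                      # mu[i] = woman matched to man i (permutation)
    k = int(np.floor(delta * n / 2))
    mu_inv = np.argsort(mu)          # mu_inv[w] = man matched to woman w
    worst_men = set(np.argsort(x_men)[n - k:])     # largest values = least happy
    worst_women = set(np.argsort(y_women)[n - k:])
    blocked = worst_men | {int(mu_inv[w]) for w in worst_women}
    # Keep n - floor(delta n) unblocked men, chosen by canonical (index) order.
    keep = [i for i in range(n) if i not in blocked][: n - int(np.floor(delta * n))]
    x_t, y_t = np.zeros(n), np.zeros(n)   # zero-padded to dimension n, as in text
    for i in keep:
        x_t[i], y_t[mu[i]] = x_men[i], y_women[mu[i]]
    return keep, x_t, y_t
\end{verbatim}
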
\section{Regularity of values in stable matchings}\label{sec_reg_of_happiness}
In this section, we establish several (high probability) properties of stable matchings.
\subsection{Moment behavior and approximation of the conditional stable probability}\label{subsec_reg_of_happiness_moment_conditions}
We first consider a set of events in the value space, which can be thought of as regularity conditions for the approximation \eqref{Eqn_goal_approx_p_with_q} of $p_\mu(\mathbf{x},\mathbf{y})$ by $q(\mathbf{x},\mathbf{y})$. Define
\begin{equation}\label{Eqn_def_underR1}
\underline{\mathcal{R}}_1 = \{\mathbf{u}\in\ensuremath{\mathbb{R}}_+^n : \|\mathbf{u}\|_1 \ge \underline{c}_1 \log n\},
\end{equation}
\begin{equation}\label{Eqn_def_overR1}
\overline{\mathcal{R}}_1 = \{\mathbf{u}\in\ensuremath{\mathbb{R}}_+^n : \|\mathbf{u}\|_1 \le \overline{c}_1 n (\log n)^{-7/8}\},
\end{equation}
\begin{equation}\label{Eqn_def_R2}
\mathcal{R}_{2} = \{(\mathbf{u},\mathbf{v})\in\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \mathbf{u}^\top\mathbf{M}\mathbf{v} \le c_2 (\log n)^{1/8}\},
\end{equation}
where $\underline{c}_1,\overline{c}_1,c_2\in\ensuremath{\mathbb{R}}_+$ are constants to be specified later.
Let
\begin{equation*}
\mathcal{R}_1 = \underline{\mathcal{R}}_1\cap\overline{\mathcal{R}}_1 \enspace\text{ and }\enspace \mathcal{R} = (\mathcal{R}_1 \times \ensuremath{\mathbb{R}}_+^n) \cap (\ensuremath{\mathbb{R}}_+^n \times \mathcal{R}_1) \cap \mathcal{R}_2 = \{(\mathbf{x},\mathbf{y})\in\mathcal{R}_2: \mathbf{x},\mathbf{y}\in \mathcal{R}_1 \}.
\end{equation*}
The region $\mathcal{R}$ should capture the typical behavior of $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))$ for any stable matching $\mu$.
\begin{proposition}\label{Prop_R_likely}
For any fixed $\delta > 0$ and $c \in (0,1/2)$, the constants $\underline{c}_1,\overline{c}_1$, and $c_2$ in \eqref{Eqn_def_underR1}-\eqref{Eqn_def_R2} can be appropriately chosen such that
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\mathcal{R}) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$.
\end{proposition}
\begin{remark}
It is helpful to compare the bounds \eqref{Eqn_def_underR1}-\eqref{Eqn_def_R2} to classic results in the setting with uniform preferences (cf. \citep{pittel1989average,pittel1992likely}): Namely, the optimal average rank is $\Theta(\log n)$, the pessimal average rank is $\Theta(n/\log n)$, and the product of the average ranks on the two sides is asymptotic to $n$ in all stable matchings. Here, due to heterogeneity of preferences, we pay a small price of an extra constant or $(\log n)^{1/8}$ factor.
\end{remark}
We will defer the proof to Appendix~\ref{Append_weak_regular_scores}. %
In fact, we will establish even finer control over the truncated value vectors in stable matchings. For a matching $\mu$, we define $\mathbf{U}(\mu) = F(\hat{\mathbf{X}}(\mu))$ and $\mathbf{V}(\mu) = F(\hat{\mathbf{Y}}(\mu))$, where the (standard exponential CDF) function $F(z)=1-e^{-z}$ is applied coordinate-wise to the renormalized value vectors. By relating $\mathbf{X}$ and $\mathbf{Y}$ to $\mathbf{U}$ and $\mathbf{V}$, we will specify a subregion $\mathcal{R}^\star \subseteq \mathcal{R}$ in which $p_\mu(\mathbf{x},\mathbf{y})$ can be well approximated by $q(\mathbf{x},\mathbf{y})$.\footnote{Technically, $\mathcal{R}^\star$ has to be defined in the context of a matching $\mu$, as do $\mathbf{U}$ and $\mathbf{V}$. Here we drop the dependency for convenience. See Corollary~\ref{Cor_Rstar_likely} for the formal definition of $\mathcal{R}^\star(\mu)$.} We will see in Corollary~\ref{Cor_Rstar_likely} that with high probability no stable matching $\mu$ has $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))$ outside $\mathcal{R}^\star$, from which Proposition~\ref{Prop_R_likely} follows.
The conditions for $\mathcal{R}^\star$ are sufficiently strong to bound the functions $p_\mu$ and $q$ within an $\exp(o(n))$ factor of each other. This is formalized as follows.
\begin{restatable}{proposition}{propRatioPQHighProbeon}\label{Prop_ratio_p_q_high_prob}
For any $\delta>0$ and $c \in (0,1/2)$, there exists an absolute constant $\theta\in(0,\infty)$ such that the probability that a matching $\mu$ is stable with value vectors \emph{not} satisfying
\begin{equation}\label{Eqn_prop_ratio_p_q_high_tag}
\frac{p_\mu(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))}{q(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))} \le \exp\left(\frac{\theta n}{(\log n)^{1/2}}\right)\tag{$\star$}
\end{equation}
is at most $\frac{\exp(-n^c)}{n!}$.
In other words, with high probability, there exist no stable matchings $\mu$ whose post-truncation value vectors $\mathbf{X}_\delta(\mu)$ and $\mathbf{Y}_\delta(\mu)$ violate \eqref{Eqn_prop_ratio_p_q_high_tag}.
\end{restatable}
Again, the proof of Proposition~\ref{Prop_ratio_p_q_high_prob}
is deferred to Appendix~\ref{Append_weak_regular_scores}.
The reason for using $\delta$-truncated value vectors is that, when approximating $p_\mu(\mathbf{x},\mathbf{y})$ to second order, there will be terms involving $\|\mathbf{x}\|_2^2$ and $\|\mathbf{y}\|_2^2$, which are hard to control due to the heavy tail of the exponential distribution.%
\footnote{Note that the moment generating function does not exist for $X^2$ when $X\sim\Exp(1)$, so the classic Hoeffding- or Bernstein-type bounds fail to apply.} On the other hand, changing the values of a $\delta$ fraction of the agents affects the empirical CDF by at most $\delta$ in $\ell^\infty$ distance. Therefore, %
it suffices to show that for small enough $\delta$ all stable partial matchings of size $n-\floor{\delta n}$ have values and ranks empirically distributed close to some exponential distribution.
The function $p_\mu(\mathbf{x},\mathbf{y})$
cannot be approximated globally by $q(\mathbf{x},\mathbf{y}) = \exp(-n\mathbf{x}^\top \mathbf{M}\mathbf{y})$.
However, we can find a region in $\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n$ where $p_\mu(\mathbf{x},\mathbf{y})$ and $q(\mathbf{x},\mathbf{y})$ are close (uniformly for all stable matchings) in the sense that \eqref{Eqn_prop_ratio_p_q_high_tag} holds; meanwhile, Proposition~\ref{Prop_ratio_p_q_high_prob} states that with high probability, no stable matching will ever have $\delta$-truncated value vectors outside this region.
\subsection{A key reduction lemma}
Proposition~\ref{Prop_ratio_p_q_high_prob} allows us to study high probability behaviors in stable partial matchings obtained from truncating stable (full) matchings, pretending that the conditional probability of stability were given by $q(\mathbf{x},\mathbf{y})$. Concretely, consider a fixed constant $\delta > 0$ and a region $\Omega \subseteq \ensuremath{\mathbb{R}}_+^n \times \ensuremath{\mathbb{R}}_+^n$ that defines an event on the (truncated) value vectors. If there exists a stable matching $\mu$ whose truncated value vectors $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\Omega$, the induced partial matching $\mu_\delta$ of size $n-\floor{\delta n}$ between $\mathcal{M}'_{\mu,\delta}$ and $\mu(\mathcal{M}'_{\mu,\delta})$ must also be stable with value vectors $\mathbf{X}_{\mathcal{M}'_{\mu,\delta}}(\mu_\delta) = \mathbf{X}_\delta(\mu)$ and $\mathbf{Y}_{\mu(\mathcal{M}'_{\mu,\delta})}(\mu_\delta) = \mathbf{Y}_\delta(\mu)$. Thus, we will end up with either a stable matching whose truncated value vectors violate \eqref{Eqn_prop_ratio_p_q_high_tag}, or a stable partial matching of size $n-\floor{\delta n}$ whose value vectors (already truncated) lie in $\Omega\cap\mathcal{R}^\star$. By Proposition~\ref{Prop_ratio_p_q_high_prob}, the former event happens with probability $o(1)$. Therefore, we may focus on the second event, where a stable partial matching of size $n-\floor{\delta n}$ exists with value vectors in $\Omega\cap\mathcal{R}^\star$. This is summarized by the following lemma, which will be a major tool in the remainder of the proof.%
\begin{lemma}\label{Lemma_reduction_to_q}
Let $\delta > 0$, $c \in (0,1/2)$, and $\Omega \subseteq \ensuremath{\mathbb{R}}_+^n \times \ensuremath{\mathbb{R}}_+^n$. Then,
\begin{multline}
\ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\Omega) \le e^{-n^c} + \exp\left(\Theta\Big(\frac{n}{(\log n)^{1/2}}\Big)\right) \cdot \\
\sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} \ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot \mathbbm{1}_{\mathcal{R}\cap\Omega}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\big].
\end{multline}
\end{lemma}
\begin{proof}
Note that
\begin{multline}\label{Eqn_proof_reduction_to_q_1}
\ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\Omega) \le \ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\mathcal{R}^\star) \\
+ \ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\mathcal{R}^\star\cap\Omega),
\end{multline}
where the first term is bounded by $e^{-n^c}$ (up to a constant) by Corollary~\ref{Cor_Rstar_likely}.
Let $\mathcal{E}$ denote the event that there exists a {\em stable} partial matching $\mu'$ between $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$ with $|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}$ where $(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\in\mathcal{R}^\star\cap\Omega$. Clearly, the existence of a stable matching $\mu$ with $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\mathcal{R}^\star\cap\Omega$ implies $\mathcal{E}$.
Thus, the second term in \eqref{Eqn_proof_reduction_to_q_1} is bounded by
\begin{equation}
\ensuremath{\mathbb{P}}(\mathcal{E})
\le \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} \ensuremath{\mathbb{P}}(\mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\in\mathcal{R}^\star\cap\Omega).
\end{equation}
where the inequality follows from a union bound. For each $\mathcal{M}',\mathcal{W}'$, and $\mu'$ in the summation, we compute the above probability by conditioning on $\mathbf{X}_{\mathcal{M}'}(\mu')$ and $\mathbf{Y}_{\mathcal{W}'}(\mu')$ as
\begin{align}
\ensuremath{\mathbb{P}}(\mu'\text{ stable}&, (\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\in\mathcal{R}^\star\cap\Omega) \nonumber\\
&= \ensuremath{\mathbb{E}}\big[\ensuremath{\mathbb{P}}\big(\mu'\text{ stable} \big| \mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')\big); (\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\in\mathcal{R}^\star\cap\Omega\big] \nonumber\\
&= \ensuremath{\mathbb{E}}\big[p_{\mu'}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot\mathbbm{1}_{\mathcal{R}^\star\cap\Omega}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\big] \nonumber\\
&\le \ensuremath{\mathbb{E}}\left[\exp\left(\Theta\Big(\frac{n}{(\log n)^{1/2}}\Big)\right)q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot\mathbbm{1}_{\mathcal{R}^\star\cap\Omega}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\right] \nonumber\\
&\le \exp\left(\Theta\Big(\frac{n}{(\log n)^{1/2}}\Big)\right) \ensuremath{\mathbb{E}}\left[q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot\mathbbm{1}_{\mathcal{R}\cap\Omega}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\right],
\end{align}
which completes the proof.
\end{proof}
\begin{remark}
The choice of the constant $c\in(0,1/2)$ affects the implicit constants in the definitions of $\mathcal{R}$ and $\mathcal{R}^\star$. Once a target convergence rate of $e^{-n^c}$ is set for some $c\in(0,1/2)$, we regard $c$ as fixed for the rest of our discussion unless otherwise mentioned.
\end{remark}
Lemma~\ref{Lemma_reduction_to_q} will be a key tool in the proof to establish further likely behaviors of value (and rank) vectors. A recurring theme will be to first identify a likely region $\Omega_\text{likely}$ for truncated value vectors of stable (full) matchings to fall in ($\mathcal{R}$ to start with), then rule out a bad event $\Omega_\text{bad}$ within $\Omega_\text{likely}$ by showing that $\Omega_\text{likely}\cap\Omega_\text{bad}$ is unlikely for value vectors of any stable \emph{partial} matching, and finally apply Lemma~\ref{Lemma_reduction_to_q} to conclude that $\Omega_\text{bad}$ is unlikely for truncated value vectors of any stable matching and that $\Omega_\text{likely}\backslash\Omega_\text{bad}$ can be used as the likely region moving forward.
Based on Lemma~\ref{Lemma_reduction_to_q}, it now suffices to consider partial matchings $\mu'$ of size $n-\floor{\delta n}$ between $\mathcal{M}'$ and $\mathcal{W}'$ and upper bound $\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot \mathbbm{1}_\Omega(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\big]$.
From now on, we fix %
$\mathcal{M}'$, $\mathcal{W}'$, and $\mu'$, and make the dependency of $\mathbf{X}_{\mathcal{M}'}$ and $\mathbf{Y}_{\mathcal{W}'}$ on $\mu'$ implicit when the context is clear.
\section{Introduction}
\label{sec:intro} One of the most important limitations to the
sensitivity of long baseline gravitational wave detectors such as
LIGO~\cite{Abramovici}, VIRGO~\cite{VIRGO}, GEO 600~\cite{GEO}, and
TAMA~\cite{tama} is thermal noise associated with the test masses
and their suspensions. Designs for advanced detectors propose
either fused silica or sapphire test-masses. For fused silica
test-masses internal mode thermal noise is expected to be an
important source of noise from approximately 20 Hz to a few
hundred Hertz, whereas pendulum mode thermal noise is more
important below this range.
Pendulum mode thermal noise is due primarily to dissipation in the
suspending filaments. It is imperative, therefore, to minimize
the intrinsic losses in the filament. Many current detector
designs use metal wires to suspend the test-mass, but metals,
with $\phi \geq 10^{-5}$, result in unacceptably high levels of
pendulum mode thermal noise. Fused silica has lower loss
($\phi\approx 3\times 10^{-8}$) and monolithic fused silica
suspensions have been shown to have much higher $Q$ than metal
wire suspensions~\cite{Braginsky,Lunin,Andri}. Such a monolithic
suspension system is being developed and adopted for use in the
GEO 600 detector~\cite{Rowan}, while variations of this design
are being considered for LIGO II. In particular, fibers with
circular cross sections may be replaced with fused silica
ribbons~\cite{Rowan,Logan,white_paper}, which allow the suspension
filaments to be very thin and compliant in the direction of
motion~\cite{Weiss,Martin,Ju}.
Experiments indicate that in thin fused silica filaments much of
the dissipation takes place in a layer near the
surface~\cite{Andri}. The level at which surface loss affects the
total pendulum mode dissipation depends on the filament thickness
and geometry, and influences the choice of suspension design
parameters. To investigate this, we calculate the pendulum mode
thermal noise, including surface dependent loss, as a function of
the design parameters for fibers and for ribbons.
\section{Dissipation Dilution}
\label{diss_dil_sec}
In the absence of external sources of dissipation, the dissipation in the
fundamental mode of a pendulum suspended by a filament is given
by\cite{Saulson_PRD}
\begin{equation}
\label{single filament dissipation dilution}
\Phi=\frac{1}{2}\sqrt{\frac{YI}{MgL^2}}\,\phi,
\end{equation}
where $\phi$ is the loss angle of the unloaded suspension filament,
$Y$ is the Young's modulus of the filament material, $I$ is the
cross-sectional moment of inertia, $M$ is the supported mass, $g$ is the
acceleration due to gravity, and $L$ is the filament length. In
general, $\phi$ and hence $\Phi$ will be a function of frequency.
The coefficient of $\phi$ is called the dissipation dilution factor
and is the ratio of elastic energy (subject to dissipation) to the total
energy stored in the pendulum mode, which is predominantly gravitational
energy (non-dissipative).
The right hand side of this equation should be multiplied by two if the mass
is constrained from rotating in the plane of oscillation,
since bending then occurs both in the region where the filament leaves
the support and in the region where the filament leaves the mass.
If the test mass is suspended by $N$ filaments, Eq.~\ref{single filament
dissipation dilution}
should be multiplied by $\sqrt{N}$.
For fibers of diameter $d_f$, and ribbons of thickness
$d_r$ and width $w$, we have
\begin{equation}
I=\left\{
\begin{array}{ll}
(\pi/64)\, d_f^4 &\qquad\mbox{{fibers,}}\\
(1/12)\,wd_r^3 &\qquad\mbox{{ribbons,}}
\end{array}\right.
\end{equation}
where subscripts $f$ refer to fibers and subscripts $r$ to ribbons.
Rewriting Eq.~\ref{single filament dissipation dilution} using these
expressions for the cross-sectional moment of inertia, allowing multiple
filaments and assuming that the suspension constrains the filaments to bend
at both ends, we have
\begin{equation}
\label{dissipation dilution}
\Phi = \left\{
\begin{array}{ll}
\sqrt{\frac{YN_f \pi (d_f/2)^4}{4MgL^2}}\,\phi_f
&\qquad\mbox{fibers,}\\
\sqrt{\frac{YN_r wd_r^3}{12MgL^2}}\,\phi_r
&\qquad\mbox{{ribbons,}}
\end{array}\right.
\end{equation}
where $M$ is the suspended mass, $N_f$ is the number of suspension
fibers, $\phi_f$ is the loss angle of the unloaded fibers,
$N_r$ is the number of suspension ribbons, and $\phi_r$ is the
loss angle of the unloaded ribbons.
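
As a rough numerical illustration of Eq.~\ref{dissipation dilution} (ours; the mass, length, and filament dimensions below are assumed for the example and are not taken from a specific design), the following Python sketch compares a fiber and a ribbon of equal load-bearing cross-section and equal unloaded loss angle.
\begin{verbatim}
import math

Y = 7.2e10                           # Young's modulus of fused silica [Pa]
M, g, L, N = 30.0, 9.8, 0.6, 4       # mass [kg], gravity, length [m], filaments
phi = 3.3e-8                         # unloaded loss angle (thickness-independent here)

d_f = 0.5e-3                         # fiber diameter [m]
area = math.pi * d_f**2 / 4          # load-bearing cross-section
d_r = 100e-6                         # ribbon thickness [m]
w = area / d_r                       # ribbon width for equal cross-section

Phi_fiber = math.sqrt(Y * N * math.pi * (d_f / 2)**4 / (4 * M * g * L**2)) * phi
Phi_ribbon = math.sqrt(Y * N * w * d_r**3 / (12 * M * g * L**2)) * phi
print(f"fiber:  Phi = {Phi_fiber:.2e}")    # ~1e-10
print(f"ribbon: Phi = {Phi_ribbon:.2e}")   # several times smaller
\end{verbatim}
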
The limit to how much dissipation dilution we can obtain, and hence the
lower limit to the pendulum mode thermal noise, is set by the
values obtainable for the parameters in these equations. They are
limited by a number of material and technological concerns, especially
by the value achievable for the loss angle $\phi_f$ or $\phi_r$.
This loss angle may depend on a number of factors including the
bulk material loss angle, surface loss, and the filament geometry.
If $\phi_f$ and $\phi_r$ were independent of filament thickness and roughly equal,
Eq.~\ref{dissipation dilution} would indicate that by using very thin
but wide ribbons one could obtain lower dissipation $\Phi$, and
hence less pendulum mode thermal noise, than by using fibers of similar
load bearing capacity. However, since surface loss becomes increasingly
important as the filament thickness is reduced, the enhanced dissipation dilution
obtainable using thin ribbons is moderated by an increase in $\phi_r$.
\section{Thermal Noise in the Presence of Surface Loss}
The loss angle for a sample, including surface loss, may be
expressed as\protect\cite{Andri}
\begin{equation}
\label{intrinsic dissipation}
\phi = \phi_{\mathrm{bulk}}(1+\mu\frac{d_s}{V/S}),
\end{equation}
where $\phi_{\mathrm{bulk}}$ is the loss angle of the bulk material,
$\mu$ is a geometrical factor and $d_s$ is the dissipation depth which
parametrizes the filament size at which surface loss becomes important.
The geometrical factor $\mu$ describes the emphasis placed on the condition
of the surface due to the sample geometry and mode of oscillation, while the
dissipation depth $d_s$ describes the amount of surface damage and the depth to
which it penetrates. Equation~\ref{intrinsic dissipation} serves to
define $d_s$, whose value for a given sample may be determined by experiment.
The geometrical factor is given by
\begin{equation}
\mu = \frac{V}{S}\, \frac{\int_{\mathcal S} \epsilon^2({\vec r}) d^2r}
{\int_{\mathcal V} \epsilon^2(\vec{r}) d^3r},
\end{equation}
where ${\vec r}$ denotes a point in the sample, $\epsilon({\vec r})$ the
strain amplitude, $V$ is the volume of the sample, $S$ the surface area of
the sample,
${\mathcal V}$ is the set of points comprising the volume, and
${\mathcal S}$ is the set of points comprising the outer surface. For
transverse oscillations of fibers and ribbons we have
\begin{equation}
\label{mu's}
\mu=\left\{
\begin{array}{ll}
2 &\quad\mbox{fibers,} \\
(3+a)/(1+a) &\quad\mbox{ribbons}
\end{array}\right.
\end{equation}
where $a$ is the aspect ratio of the combined ribbons, $a\equiv d_r/W$
with $W\equiv N_r w$ being the total combined width of the ribbons.
Experiments suggest that $\phi_{\mathrm{bulk}}$ is approximately
constant over the frequency range of interest for
LIGO\cite{Lunin,Andri,Bill}. For simplicity, we will assume
$\phi_{\mathrm{bulk}}$ to be constant. Substituting
Eqs.~\ref{intrinsic dissipation}~and~\ref{mu's} into
Eq.~\ref{dissipation dilution} we obtain
\begin{equation}
\label{dd with surface effect}
\Phi= \left\{
\begin{array}{ll}
\sqrt{{Y}/{16\sigma L^2}}\,(d_f+8d_s)\phi_{\mathrm{bulk}}
&\quad\mbox{{fibers,}}
\vspace{3pt}\\
\sqrt{{Y}/{12\sigma L^2}}\,(d_r+(6+2a)d_s)\phi_{\mathrm{bulk}}
&\quad\mbox{{ribbons,}}
\end{array}\right.
\end{equation}
where $\sigma$ is the filament stress.
In both cases, the first term is the traditional expression for
dissipation dilution, while the term involving $d_s$ represents a reduction
of the dilution due to
the increasing importance of surface loss as the filament thickness is
decreased. For
very thin filaments the term involving $d_s$ dominates and the loss angle
becomes independent of the filament thickness.
From the fluctuation-dissipation theorem~\cite{Callen}, we find
the power spectrum of the pendulum mode thermal fluctuations above
the pendulum mode resonance, at angular frequency $\omega$:
\begin{equation}
\label{FDT pend}
x^2(\omega) = \frac{4k_BTg}{ML'\omega^5}\,{\Phi(\omega)},
\end{equation}
where $\omega \stackrel{>}{_\sim} \sqrt{g/L'}$, $T$ is the temperature of the suspending filaments and $L'$
is the radius of the arc traced out by the center of mass during
pendulum mode oscillation. For convenience, we will take $L'\approx L$.
Inserting $\Phi$ from Eq.~\ref{dd with surface effect}, and including
the contribution from thermoelastic damping
we have the expression for the pendulum mode thermal noise:
\begin{equation}
\label{pend fluct}
\begin{array}{l}
x^2(\omega) = \vspace{5pt}
\left\{\hspace{-4pt}
\begin{array}{ll}
\frac{4k_BTg}{ML^2\omega^5}\,\sqrt{\frac{Y}{16\sigma}}\,
\Big[d_f\left(\phi_{\mathrm{bulk}}+\phi_{\mathrm{th}}\right)+8d_s\phi_{\mathrm{bulk}}\Big]
&\quad\mbox{{fibers,}} \vspace{3pt}\\
\frac{4k_BTg}{ML^2\omega^5}\,\sqrt{\frac{Y}{12\sigma}}\,\Big[d_r\left(
\phi_{\mathrm{bulk}} +\phi_{\mathrm{th}}\right)+(6+2a)d_s\phi_{\mathrm{bulk}}\Big]
&\quad\mbox{{ribbons.}}
\end{array} \right.
\end{array}
\end{equation}
The thermoelastic damping term $\phi_{\mathrm{th}}$ is given by\cite{Zener}
\begin{equation}
\label{thermoelastic damping}
\phi_{\mathrm{th}}=\frac{Y\alpha^2T}{C}\frac{\omega\tau_d}{1+\omega^2\tau_d^2},
\vspace{3pt}
\quad \tau_d=\left\{\hspace{-4pt}
\begin{array}{ll}
d_f^2/13.55D & \quad\mbox{fibers,} \vspace{3pt}\\
d_r^2/\pi^2D & \quad\mbox{ribbons,}
\end{array} \right.
\end{equation}
where $\alpha$ is the thermal expansion coefficient of the filament
material, $C$ is the heat capacity per unit volume, and $D$ is the
thermal diffusion coefficient.
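
For orientation, the following sketch evaluates this thermoelastic loss at 10~Hz for several ribbon thicknesses, using representative room-temperature fused-silica constants that we assume for the example (they are not quoted in the text).
\begin{verbatim}
import numpy as np

# Assumed fused-silica constants: Y [Pa], alpha [1/K], C [J/(m^3 K)], D [m^2/s].
Y, alpha, T, C, D = 7.2e10, 5.5e-7, 290.0, 1.6e6, 8.5e-7
omega = 2 * np.pi * 10.0              # evaluate at 10 Hz

def phi_th(d, ribbon=True):
    tau = d**2 / (np.pi**2 * D) if ribbon else d**2 / (13.55 * D)
    return (Y * alpha**2 * T / C) * omega * tau / (1 + (omega * tau)**2)

for d in [50e-6, 100e-6, 400e-6, 1e-3]:
    print(f"d_r = {d*1e6:4.0f} um: phi_th = {phi_th(d):.2e}")
# The loss peaks near omega*tau = 1; at 10 Hz this occurs around d_r ~ 0.4 mm,
# consistent with the thermoelastic peak location quoted in the next section.
\end{verbatim}
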
\section{Advanced Interferometers}
\label{Advanced Interferometers} Using Eq.~\ref{pend fluct}, we
can now make estimates for the level of pendulum mode thermal
noise achievable in advanced interferometers and investigate the
dependence on filament thickness and geometry. Using the results
of previous experiments and design studies, most of the
parameters can be well bounded with some reasonable assumptions.
From these parameters, we can obtain upper and lower bounds on
the pendulum mode thermal noise at a given frequency as a
function of ribbon thickness. This analysis assumes that losses
extrinsic to the filaments (e.g. recoil of the suspending
structure or lossy filament-to-test-mass bonds) have been made
negligible.
The achievable fiber diameter depends on the achievable stress
$\sigma$ and on the mass $M$. The fiber diameter in
Eq.~\ref{pend fluct} can be replaced by
\begin{equation}
\label{fiber_diameter}
d_f=\sqrt{4Mg/\pi N_f\sigma} .
\end{equation}
The remaining parameters $M$, $L$, $N_f$, $\sigma$, and $d_s$ are
independent and the achievable pendulum mode thermal noise
depends on the bounds established for these parameters.
It is clear from Eq.~\ref{pend fluct} that an efficient way of
minimizing the thermal noise is to make the length of the
suspension as large as possible. However, the value of $L$ is
bounded above by the minimum allowable spacing $f_{\mathrm{min}}$
of the violin mode frequencies. The frequency spacing must be
kept above about 300~Hz to allow reasonably large intervals of
the spectrum to be free of violin modes. The frequency-spacing-limited $L$ is
\begin{equation}
\label{L_max}
L=\frac{1}{2f_{\mathrm{min}}}\sqrt{\frac{\sigma}{\rho}}
\end{equation}
where $\rho$ is the density of the filament material. The range of
possible lengths is determined by the
range of possible stress to which the filaments will be subject.
This in turn depends on the breaking strengths achievable for
fused silica filaments. Many measurements have been reported on
the breaking strength of fibers manufactured from both natural
and synthetic vitreous
silica\cite{Russian_book,Proctor}. Little is known about the
strength of ribbons, though one is tempted to assume their
strengths are similar. Values reported for the breaking strength
of fibers in tension at room temperature vary greatly depending
on the condition of the fibers~\cite{Ernsberger}, but strengths on
the order of several gigapascals at room temperature, in fibers
with diameters as large as 1~mm have been
reported~\cite{Hillig,Morley,Perugia}. By assuming that the
filaments are only loaded to a fraction of their breaking
strength we assign the range of possible stress to which the
filaments will be subject as
$0.1~\mathrm{GPa}\nobreak\leq\nobreak\sigma\nobreak\leq\nobreak
1.0~\mathrm{GPa}$. Substituting these values into Eq.~\ref{L_max}
we obtain the range of possible lengths
$0.36~\mathrm{m}\nobreak\leq\nobreak
L\nobreak\leq\nobreak1.1~\mathrm{m}$. In principle, the physical
design of the suspension also places an upper limit on the
length, but ultimately this limit is likely to be less stringent
than that due to the frequency spacing.
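
This range is a two-line computation; the following sketch (ours) reproduces it from Eq.~\ref{L_max} using the fused silica density $\rho=2200~\mathrm{kg/m^3}$.
\begin{verbatim}
from math import sqrt

rho, f_min = 2200.0, 300.0            # fused silica density, minimum mode spacing
for sigma in (0.1e9, 1.0e9):          # stress range quoted in the text [Pa]
    print(f"sigma = {sigma/1e9:.1f} GPa -> L = {sqrt(sigma/rho)/(2*f_min):.2f} m")
# 0.1 GPa -> 0.36 m and 1.0 GPa -> 1.12 m, in line with the range above.
\end{verbatim}
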
For the number of filaments we will choose $N_f=N_r=4$. This
reflects the most likely choice for the suspensions in advanced
detectors which require ``marionette'' control of the test
masses~\cite{white_paper}. Analytically, the number of ribbons
does not enter into the calculation as, for a given stress
$\sigma$ and ribbon thickness $d_r$, only the total combined
width of the ribbons $W$ is fixed.
In order to avoid excessive radiation pressure noise in LIGO II,
the suspended test masses must have a mass $M$ of 30~kg. However,
if this is not feasible, the LIGO I mass of 10~kg can be used as
a fall-back. We take 10~kg~$<M<$~100~kg to allow for possible
advanced designs that utilize even larger
masses~\cite{white_paper}.
For the bulk material dissipation in fused silica, Gretarsson and
Harry have measured
$\phi_{\mathrm{bulk}}=(3.3\pm0.3)\times10^{-8}$\cite{Andri}. Others
have measured similar values for the bulk material dissipation in
samples of different geometry~\cite{Lunin,Bill,Litten,Geppo:talk}.
We shall adopt the relatively reliable upper limit
$\phi_{\mathrm{bulk}}=3.6\times 10^{-8}$ and somewhat uncertainly
set the lower limit as $\phi_{\mathrm{bulk}}=2.5\times 10^{-8}$.
Finally, for untreated fused silica fibers drawn in a natural gas
flame, $d_s=180\pm20~\mu$m has been measured~\cite{Andri}. It
should be noted that the factors resulting in a given quality of
fiber surface are not well quantified. Fibers pulled from silica
rods with different initial surface conditions, or fibers drawn
using a different production method, may have a different
dissipation depth or surface loss than that found in the
measurements above. The geometry of a filament could also have
some effect on the quality of the surface layer, e.g. through
different cooling stresses during fabrication, and our assumption
that fused silica ribbons have the same surface properties as
fused silica fibers has not been tested. However, given these
assumptions we set an upper bound of $d_s=200~\mu$m, which should
be reliable. To estimate a lower bound for $d_s$ we will use a
$Q$-measurement of a ribbon of thickness $50~\mu$m, made of
natural fused quartz~\cite{Rowan}. One mode of this ribbon showed
a $Q$ much higher than others. After subtracting the loss due to
thermoelastic damping (Eq.~\ref{thermoelastic damping}) and
assuming the remaining loss is mainly surface loss, the
equivalent $d_s$ for a similarly limited fused silica ribbon can
be estimated at $30~\mu$m. We, therefore, set the range of
possible dissipation depths at 30~$\mu$m$\nobreak\,<d_s<200~\mu$m.
Table~\ref{estimates} summarizes the best and worst case
estimates for the parameters, and also a ``best guess'' for the
most probable values. Figure~\ref{xvsd} shows the levels of
pendulum mode thermal noise at 10~Hz as a function of filament
thickness for each of the three sets of estimates of the
parameters. The graphs for ribbon filaments all show a maximum
around $400~\mu$m. This is the thermoelastic damping peak, which
for all but the most optimistic case must be avoided if the
desired levels of pendulum mode thermal noise are to be achieved.
Figure~\ref{xvsd} also shows the value of using ribbons rather
than fibers as suspension filaments. Since the diameter of fibers
cannot be independently reduced, the pendulum mode thermal noise
is dominated by thermoelastic damping. The use of ribbons allows
us to evade this problem by residing in the surface loss limited
regime. To evade the thermoelastic regime and to obtain better
dissipation dilution one might be tempted to use the thinnest
ribbons possible. However, at small thicknesses, below the
thermoelastic peak, the graphs begin to level off. This is due
to surface loss, which sets a minimum achievable pendulum mode
thermal noise of
\begin{equation}
x^2(\omega)_{\mathrm{min}} =
\frac{24k_BTg}{ML^2\omega^5}\,\sqrt{\frac{Y}{12\sigma}}\,d_s\phi_{\mathrm{bulk}}.
\label{x_min}
\end{equation}
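
Evaluating this floor at 10~Hz for the extreme parameter sets bounded above gives a quick feel for the numbers; the sketch below (ours) uses the parameter ranges quoted in the text, together with assumed values for $Y$ and the temperature.
\begin{verbatim}
import numpy as np

kB, T, g, Y = 1.380649e-23, 290.0, 9.8, 7.2e10   # Y, T are our assumed values
omega = 2 * np.pi * 10.0

def x_min(M, L, sigma, d_s, phi_bulk):
    x2 = 24 * kB * T * g / (M * L**2 * omega**5) \
         * np.sqrt(Y / (12 * sigma)) * d_s * phi_bulk
    return np.sqrt(x2)

best = x_min(M=100, L=1.1, sigma=1.0e9, d_s=30e-6, phi_bulk=2.5e-8)
worst = x_min(M=10, L=0.36, sigma=0.1e9, d_s=200e-6, phi_bulk=3.6e-8)
print(f"best  case floor: {best:.1e} m/sqrt(Hz)")   # ~4e-21
print(f"worst case floor: {worst:.1e} m/sqrt(Hz)")  # ~2e-19, at the requirement
\end{verbatim}
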
It is clear from the plots that, for all but the most optimistic
values of $d_s$, reducing the ribbon thickness below about
$50~\mu$m (corresponding to individual ribbon widths $w$ of 5~mm,
3~mm, and 5~mm for the three cases) does not result in
significant reductions of pendulum mode thermal noise. To satisfy
LIGO II requirements, the pendulum mode thermal noise at 10~Hz
should be less than about $2\times
10^{-19}~\mathrm{m/\sqrt{Hz}}$~\cite{white_paper}. Even in the
presence of surface loss, only the worst case scenario does not
achieve this level. In the most probable case, pendulum mode
thermal noise will be lower than other noise sources at 10~Hz
(radiation pressure noise, fused silica internal mode thermal
noise), provided the ribbon thickness is kept below the
thermoelastic regime. In the most optimistic case, other noise
sources dominate the total noise at 10~Hz regardless of ribbon
thickness. The most probable estimate for fiber suspensions gives
pendulum mode thermal noise that is just acceptable for LIGO II.
If there are unforeseen problems with ribbon suspensions, fiber
suspensions may still prove an acceptable alternative. We
reiterate that in the comparison of fibers and ribbons, we have
assumed that the breaking strength of fibers is not significantly
greater than that of ribbons of equal cross-sectional area; we
have also assigned them identical surface properties. Further
research is required to test these assumptions. Research is
continuing on ribbon suspensions within the LIGO research
community, and additional emphasis on surface properties and
breaking strength is warranted.
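For concreteness, Eq.~\ref{x_min} is straightforward to evaluate
numerically. The short Python sketch below does so; the parameter
values are illustrative placeholders of plausible magnitude rather
than the entries of Table~\ref{estimates}, with only the physical
constants and the Young's modulus of fused silica being standard
values.
\begin{verbatim}
import numpy as np

# Illustrative evaluation of Eq. (x_min). All values below are
# placeholders of plausible magnitude, not the tabulated estimates.
kB    = 1.380649e-23   # Boltzmann constant [J/K]
T     = 290.0          # temperature [K]
g     = 9.81           # gravitational acceleration [m/s^2]
M     = 30.0           # test mass [kg] (assumed)
Lp    = 0.6            # pendulum length [m] (assumed)
Y     = 7.2e10         # Young's modulus of fused silica [Pa]
sigma = 7.7e8          # tensile stress in the filament [Pa] (assumed)
ds    = 180e-6         # dissipation depth [m] (assumed)
phi_b = 3e-8           # bulk loss angle (assumed)
omega = 2*np.pi*10.0   # angular frequency at 10 Hz

x2min = (24*kB*T*g/(M*Lp**2*omega**5)) \
        * np.sqrt(Y/(12*sigma)) * ds * phi_b
print(np.sqrt(x2min))  # amplitude spectral density in m/sqrt(Hz)
\end{verbatim}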
\section{Comparison of low frequency noise sources in LIGO II}
While studying the pendulum mode thermal noise at 10~Hz is a good way to
gain insight into the effect of the different physical parameters on the
level of this source of noise, the comparison with other sources of noise
must be done over the entire range of relevant frequencies.
For clarity, we now specialize to a single set of values for the
physical parameters of the suspension filaments: those proposed
for LIGO II~\cite{white_paper}. These parameters are shown in
the last column of Table~\ref{estimates}. With a four ribbon
suspension, it follows from the proposed stress that each ribbon
should have a cross-sectional area of $5.5 \times
10^{-7}~\mathrm{m}^{2}$, giving a width of 5.5~mm for the
$100~\mu$m ribbon thickness proposed. In general, the values for
all these parameters fall between the worst case and most probable
case scenarios. As such, the LIGO II proposal is fairly
conservative, and better noise performance may be achieved.
The LIGO II proposal does not, however, specify a value for $d_s$.
In keeping with the conservative spirit of the other parameters,
we choose $d_s = 180~\mu$m. This falls between the worst case
and most probable case and corresponds to the measured value in
fibers without strict handling requirements~\cite{Andri}.
Figure~\ref{noise_fibers} shows the pendulum mode thermal noise of
a fiber suspension in relation to estimates for the other sources
of noise in the interferometer. Figure~\ref{noise_ribbons} shows
the same comparison for the pendulum mode thermal noise of a
ribbon suspension~\cite{Braginsky_note}. Note that the noise due
to radiation pressure in the LIGO II design is greater than the
pendulum mode thermal noise for the ribbon suspension. For
greater low frequency sensitivity, radiation pressure can always
be reduced by lowering the amount of laser power at the beam
splitter. This could reduce the noise to the pendulum mode
thermal noise level in the low frequency band but will increase
the amount of laser shot noise at higher frequencies. From
figures~\ref{noise_fibers}~and~\ref{noise_ribbons} it is clear
that while a ribbon suspension leads to lower pendulum mode
thermal noise than a fiber suspension, the pendulum mode thermal
noise for the fiber suspension is still comparable to radiation
pressure noise in the relevant frequency band. If there are
unforeseen problems with ribbons (buckling, lower strength,
etc.), fibers do provide an acceptable, if less attractive,
alternative.
\section{Acknowledgments}
We would like to thank our colleagues at the University of
Glasgow, at Stanford University, and throughout the gravitational
wave community for their interest in this work. Additional
thanks to Ken Strain for his help with LIGO II parameters beyond
the white paper as well as Gabriela Gonzalez, Gary Sanders, David
Tanner, and Rai Weiss for their comments. This work was supported
by Syracuse University, U.S. National Science Foundation Grants
No. PHY-9602157 and No. PHY-9630172, the University of Glasgow,
and PPARC.
\section{Introduction}
Numerical simulations are an important tool in understanding complex problems in physics and engineering systems.
Many of these phenomena are multi-scale in nature, and are governed by nonlinear partial differential equations (PDEs).
With a wide range of scales at realistic conditions, like turbulence phenomena in fluid flows, the numerical solution of these equations becomes computationally very expensive.
Advances in computing technology have made it possible to carry out intensive simulations on massively parallel computers.
Currently, state-of-the-art simulations are routinely being done on tens or hundreds of thousands of processing elements (PEs) \cite{DJ2013,DASY2014,MC2016,LMM2013}.
It is known that, at extreme scales, data communication as well as synchronization between PEs pose a major challenge to the scalability of scientific applications \cite{DBMA+2011}.
In the case of PDE solvers, where the parallelism is typically realized by decomposing the computational domain among PEs, communications that affect the scalability
arise due to the computation of spatial derivatives in order to propagate the physical
information across the domain.
The problem becomes more acute in simulations of transient phenomena, where spatial derivatives are evaluated at each time step over an integration spanning a large number of steps.
Another issue concerning the scalability is related to the performance variations across the PEs in a parallel system.
In this case, sub-optimal performance of even a few PEs may lead to idling of others, as dictated by the data dependencies involved in the computations.
It is likely that in future Exascale computing systems, which will have an extremely large PE count, communication and synchronization will be a major bottleneck.
It is thus not surprising that there is substantially increased interest in developing numerical methods that minimize communication and relax data synchronization at the mathematical level \cite{GGSZTY2012,Betal2014}.
An early effort in solving PDEs in an asynchronous fashion has been presented in \cite{AAIT1994,AAII1996}.
Their method is based on finite-difference schemes and is restricted to the solution of parabolic PDEs with at most second order accuracy.
More recent work \cite{AD2012,DA2014}, again based on the finite-difference method, has suggested that due to the randomness in the arrival of messages at different PEs, the resulting algebraic difference equations are stochastic in nature.
In that work, a statistical framework to analyze such systems was developed to study the numerical properties of commonly used schemes in the presence of asynchrony.
Furthermore, they show that though the stability and consistency of the schemes can be maintained, their accuracy is significantly degraded.
They also proposed the possibility of deriving schemes that are tolerant to communication data asynchrony.
A follow-up of this work, applied to a specific equation and numerical scheme, has been presented in \cite{Mudigere2014}.
Although the authors were able to maintain second-order accuracy for their chosen scheme when asynchrony is present, one can show using Taylor series that such schemes are severely limited to low orders of accuracy.
However, as mentioned earlier, a number of natural and engineering systems are multi-scale in nature and will require higher order accurate schemes.
In this work, we present a general methodology to generate different classes of
high-order asynchrony-tolerant (AT) schemes. This is the main
objective of this paper.
The rest of the paper is organized as follows. We first briefly review the concept of asynchronous computing for PDEs in section 2. A general method to derive \ats schemes, the choices in stencil available in arriving at these schemes and their classification are presented in section 3. In section 4, we show a statistical framework to analyze the overall accuracy of a numerical method when \ats schemes are used. Numerical experiments to validate the performance of \ats schemes are shown in section 5. Conclusions and further discussions are presented in section 6.
\section{Concept}
Let $u(x,t)$ be a function of spatial coordinate $x$ and time $t$, which is governed by a time-dependent PDE in a one-dimensional domain.
\rfig{grid} illustrates the discretized domain which is decomposed into $P$ number of PEs.
Let $i$ and $n$ represent an arbitrary grid point in the domain and time level such that $u(x_i,t_n)=\U{i}{n}$.
For clarity in the exposition, we assume that the grid points are uniformly distributed in the domain with a spacing $\dx$.
A finite-difference approximation to a spatial derivative at point $i$ and time level $n$ can be expressed, in the most general case, as
\be
\left. \frac{\pd^d u}{\pd x^d} \right|_i^n =
\sum_{j=-\smin}^{\smax} c_{j}
\U{i+j}{n}
+ \ord(\dx^a) ,
\label{eq:general_deriv}
\ee
where $d$ is the order of the derivative,
$\smin$ and $\smax$ are the number of points to the left and right of point $i$ in the stencil, and
$c_j$ is the appropriate coefficient or weight of $\U{i+j}{n}$ such that the scheme is accurate to an order $a$ in space. The term $\ord(\dx^a)$ represents the truncation error of the scheme.
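For the synchronous case, the coefficients $c_j$ follow from the
standard method of undetermined coefficients. A minimal Python/NumPy
sketch of this construction (an illustration, with $\dx=1$ by
default) is:
\begin{verbatim}
import numpy as np
from math import factorial

def fd_coefficients(d, offsets, dx=1.0):
    """Coefficients c_j of Eq. (general_deriv) on the given stencil
    offsets, via the method of undetermined coefficients."""
    m = len(offsets)
    A = np.array([[(j*dx)**p / factorial(p) for j in offsets]
                  for p in range(m)])
    b = np.zeros(m)
    b[d] = 1.0          # coefficient of the d-th derivative is unity
    return np.linalg.solve(A, b)

# second-order central difference for d = 1: [-1/2, 0, 1/2] / dx
print(fd_coefficients(1, [-1, 0, 1]))
\end{verbatim}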
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{grid-a.eps}
\includegraphics[trim=0cm 0cm 0cm 6.6cm,clip=true,width=0.7\textwidth]{grid.eps}
\caption{Discretized one-dimensional domain decomposed into two PEs ($P=2$).}
\label{fig:grid}
\end{figure}
Usually, the numerical solution of a time-dependent PDE is obtained by advancing an initial condition according to an algebraic finite-difference equation in small steps of time $\dt$.
During each time advancement, say, marching from a time level $n$ to $n+1$, spatial derivatives are computed at each grid point using \req{general_deriv}.
In general, these computations are trivial to implement in a serial code, as the value of the function at all the grid points will be locally available in the memory of the PE.
However, if the domain is decomposed into multiple PEs, computations at points near PE boundaries may need values of the function at stencil points that are computed in the neighboring PEs.
Such values are commonly communicated into buffer or
ghost points,
as shown in \rfig{grid}.
Note that the number of values communicated across the left and right PE boundaries is equal to $\smin$ and $\smax$, respectively.
Let $I$ represent the set of physical grid points in the domain and $B$ represent the set of buffer points.
For convenience we divide the set $I$ further such that $I=\Ii\cup\Ib$.
The set of grid points near PE boundaries whose computations need data from neighboring PEs will be denoted by $\Ib$.
The complementary set of interior points, whose computations are independent of communication between PEs is denoted by $\Ii$.
In commonly used parallel algorithms, computations at a point $i\in\Ib$ cannot be advanced until the communication between PEs is complete.
This is typically ensured by enforcing communication synchronization after messages are issued from one PE to another.
As mentioned earlier, with a large number of PEs such synchronizations become expensive and result in poor scalability of codes at extreme scales.
We refer to this as \emph{synchronous} computing.
In the case of \emph{asynchronous} computing, communication between PEs is initiated at each time step, however, the data synchronization is not enforced.
This means that we cannot ensure that the time level of the function at buffer points is $n$.
It can be $n$, $n-1$, $n-2$, ... depending on the status of messages from successive time advancements.
Due to the random nature of the arrival of messages at different PEs \cite{HSL2008}, the availability of a particular time level at a buffer point is also random.
Let $\n=n-\k{j}$ be the latest available time level at a buffer point $j$, where $\k{j}$ is the corresponding random delay at that point\footnote{In order to distinguish random
from deterministic variables, we will use a tilde ( $\tilde{}$ ) over the variable for the former.}.
Note that $\n$ can be different at different locations and time levels.
If we restrict the maximum allowable delay levels to $L$, then $\n\in\left\{n, n-1, ..., n-L+1\right\}$ and $\k{j}\in \left\{ 0, 1 ,...,L-1\right\}$.
The scheme in \req{general_deriv}, when asynchrony is allowed, can be rewritten as
\be
\left. \frac{\pd^d u}{\pd x^d} \right|_i^n \approx
\sum_{j=-\smin}^{\smax} c_{j}
\U{i+j}{n-l}
,
\label{eq:general_deriv_async}
\ee
where $l=0$ for $i+j\in I$ and $l=\k{i+j}$ for $i+j\in B$.
Unlike the scheme in \req{general_deriv} which contains a single time level, this scheme uses multiple time levels when some of the points in the stencil belong to the set $B$.
It has been shown in \cite{DA2014} that the accuracy of common
finite-differences used in such an asynchronous fashion is significantly
affected.
In particular, accuracy drops to first order regardless of the original finite
difference used.
This motivates the derivation of AT schemes that maintain accuracy
even when there is a communication delay; we do this next.
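To make the setting concrete, the following Python sketch (a toy
illustration with assumed data, not a parallel implementation) mimics
the evaluation of a second-derivative approximation in the form of
\req{general_deriv_async} at a left PE-boundary point, where the
buffer value carries a random delay:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, nx = 3, 64                 # delay levels retained, grid size
dx = 2*np.pi/nx
x = np.arange(nx)*dx
# u_hist[l] holds the field at time level n-l (toy data standing in
# for the solution history retained on one PE)
u_hist = [np.sin(x)*np.exp(-0.01*l) for l in range(L)]

def d2dx2_at_boundary(i):
    """Second-order central difference at a left PE-boundary point i;
    the value at i-1 is a buffer value with random delay k."""
    k = int(rng.integers(0, L))   # delay of latest received message
    u_left = u_hist[k][i-1]
    return (u_left - 2*u_hist[0][i] + u_hist[0][i+1])/dx**2, k

print(d2dx2_at_boundary(1))
\end{verbatim}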
\section{Asynchrony-tolerant (AT) schemes}
\subsection{General methodology}
\label{sec:at-gm}
Taylor series and the method of undetermined coefficients provide a systematic procedure to derive finite-difference schemes.
As we show momentarily, this approach can also be used to construct \ats schemes to approximate spatial derivatives.
Let $\U{i+j}{n-l}$ represent the function at a generic point $i+j$ in the stencil with an arbitrary delay of $l$ levels to compute a spatial derivative at a point $i$ and time level $n$.
Using the $L$ possible time delays, we can express an \ats scheme as
\be
\left. \frac{\pd^d u}{\pd x^d} \right|_i^n \approx
\sum_{j=-\smin}^{\smax}
\sum_{l=0}^{L-1}
\co {j}{l}
\U{i+j}{n-l}
,
\label{eq:general_deriv_at}
\ee
where $\co{j}{l}$, for the range of $j$ and $l$, are the appropriate coefficients that have to be determined.
Note that this scheme represents the most general case with the function at all possible time levels at each point in the stencil.
However, depending on the delay at each grid point and time step, which is given by $\k{i+j}$, only one or a few time levels may be used in approximating the derivative.
The random nature of $\k{i+j}$ is, now, embedded into $\co{j}{l}$.
The merits of using older time levels not just at buffer points, but also at interior points will be discussed later.
The coefficients in the scheme expressed in \req{general_deriv_at} can be obtained by imposing constraints on different terms of the Taylor series, upon expansion of the function at each combination of point and time level in the stencil.
Let us consider the Taylor series of $\U{i+j}{n-l}$ about the point $i$ and time level $n$.
The series is an expansion in two variables, namely $\dx$ and $\dt$, which is given by
\be
u_{i+j}^{n-l}=\sum_{\pp=0}^\infty \sum_{\qq=0}^\infty u^{(\pp,\qq)} \frac{(j\dx)^\pp (-l\dt)^\qq}{\pp!\qq!},
\label{eq:taylor_series_async}
\ee
where $u^{(\pp,\qq)}$ denotes the $\pp$-th and $\qq$-th partial derivative in
space and time, respectively, of $u$ evaluated at $i$ and $n$.
When $l=0$, the function corresponds to a synchronous value of $u$.
This makes the terms in the series a function of $\dx$ only:
\be
u_{i+j}^{n}=\sum_{\pp=0}^\infty u^{(\pp,0)} \frac{(j\dx)^\pp}{\pp!}
\label{eq:taylor_series_sync}
\ee
To obtain the constraints that will assure a given order of accuracy, we substitute the Taylor series of $u$ in the right hand side of \req{general_deriv_at}.
\bea
\sum_{j=-\smin}^{\smax}
\sum_{l=0}^{L-1}
\co {j}{l}
\U{i+j}{n-l}
&=&
\sum_{j=-\smin}^{\smax}
\sum_{l=0}^{L-1}
\co {j}{l}
\sum_{\pp=0}^\infty \sum_{\qq=0}^\infty u^{(\pp,\qq)} \frac{(j\dx)^\pp (-l\dt)^\qq}{\pp!\qq!} \nonumber \\
& = & u^{(0,0)} \sum_{j=-\smin}^{\smax} \sum_{l=0}^{L-1} \co {j}{l}
+ u^{(1,0)} \dx \sum_{j=-\smin}^{\smax} \sum_{l=0}^{L-1} j \co {j}{l} - \nonumber \\
& & u^{(0,1)} \dt \sum_{j=-\smin}^{\smax} \sum_{l=0}^{L-1} l \co{j}{l}
- u^{(1,1)} \dx\dt \sum_{j=-\smin}^{\smax} \sum_{l=0}^{L-1}j l \co {j}{l} + \nonumber \\
& & \frac{ u^{(2,0)}} {2} \dx^2 \sum_{j=-\smin}^{\smax} \sum_{l=0}^{L-1} j^2 \co {j}{l}
+ \frac{ u^{(0,2)}}{2} \dt^2 \sum_{j=-\smin}^{\smax} \sum_{l=0}^{L-1} l^2 \co{j}{l}
+ \dots
\label{eq:exp}
\eea
The linear combination of the function values, in the above equation, represents a scheme when ($i$) the coefficient of the $d$-th derivative of $u$ in space is unity, and ($ii$) low-order terms are eliminated according to the desired accuracy of the scheme.
Let $a$ be the desired order of accuracy in space.
This means that the leading order term in the truncation error should vary with the grid spacing as $\dx^a$.
In the Taylor series of the function for synchronous schemes, as in \req{taylor_series_sync}, the lower order terms can be readily identified as the ones with the power of $\dx$ less than $d+a$.
However, when asynchrony is present this is not obvious. The terms in the series can now be a function of either or both $\dx$ and $\dt$, which are usually not independent.
In order to identify the lower order terms, let us assume the relation $\dt\sim\dx^r$.
Such a relation is often obtained from analysis of the scheme's numerical stability or other constraints posed by the physics of the problem.
Using this relation, we can arrive at the condition to identify lower order terms that need to be eliminated to obtain a scheme of order $a$. This expression is: $\pp+r\qq<d+a$.
Using \req{exp}, we can then summarize the constraints as
\be
\sum_{j=-\smin}^{\smax} \sum_{l=0}^{L-1} \co{j}{l} \frac{(j\dx)^\pp (-l\dt)^\qq}{\pp!\qq!} =
\begin{cases}
1 & \quad \text{for } (\pp,\qq) = (d,0) \\
0 & \quad \text{for } \pp+r\qq<d+a; (\pp,\qq) \ne (d,0). \\
\end{cases}
\label{eq:cond}
\ee
Clearly, the first condition in the above equation makes the coefficient of the term corresponding to $d$-th spatial derivative on the right hand side of \req{exp} unity.
The second condition will set to zero all the necessary lower order terms to obtain an overall accuracy $a$.
For a given stencil, these conditions give rise to a system of linear equations.
The number of equations in the system is one more than the number of lower order terms that have to be eliminated from \req{exp}.
Let $\bs{A}\bs{\tilde{c}}=\bs{b}$ represent this system, where $\bs{A}$ is the coefficient matrix whose elements are a function of $j$ and $l$, $\bs{\tilde{c}}$ is the vector of variables that contains coefficients in the scheme and $\bs{b}$ is the vector with zero elements except for the row corresponding to the order of the derivative to be approximated.
The solution to this system determines the coefficients of the scheme.
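For illustration, the assembly and solution of this linear system can
be sketched in a few lines of Python/NumPy. The function below is a
minimal sketch that assumes the chosen stencil yields a square,
full-rank system; the example reproduces the coefficients of the
uniform-delay scheme derived in Example 1 below (\req{ex1-1}).
\begin{verbatim}
import numpy as np
from math import factorial

def at_scheme(d, a, r, stencil, dx, dt):
    """Assemble and solve A c = b, Eq. (cond), for an AT stencil
    given as (j, l) pairs (spatial offset, time delay)."""
    terms = [(p, q) for p in range(d + a) for q in range(d + a)
             if p + r*q < d + a]       # lower-order terms plus (d,0)
    A = np.array([[(j*dx)**p * (-l*dt)**q
                   / (factorial(p)*factorial(q))
                   for (j, l) in stencil] for (p, q) in terms])
    b = np.array([1.0 if (p, q) == (d, 0) else 0.0
                  for (p, q) in terms])
    return np.linalg.solve(A, b)

# stencil of Example 1 with uniform delay k = 1 (d=1, a=2, r=2)
dx, dt = 0.1, 0.1**2
c = at_scheme(1, 2, 2, [(-1, 0), (0, 0), (1, 1), (2, 1)], dx, dt)
print(c*4*dx)    # -> [-3, 3, -1, 1], cf. Eq. (ex1-1)
\end{verbatim}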
Before getting into the discussion on the choice of stencil, we make a few observations regarding the linear system when asynchrony is present.
To aid the discussion we express the terms in the Taylor series of the function at the generic stencil point, $\U{i+j}{n-l}$, in a matrix format as shown in \rfig{ts1}.
This provides a simple format to visualize different terms in the series and help us easily identify the terms on which the conditions in \req{cond} have to be imposed.
\begin{figure}[h]
\begin{center}
{\ss
\begin{TAB}(e,1.5cm,1.5cm){:c:c:c:c:c:}{:c:c:c:c:c:}
\bmpc${u}$\empc & {\bmpc ${u^{(0,1)}}$ ${ \ldt}$\empc} & {\bmpc$u^{(0,2)}$ $\ldt^2$\empc} & {\bmpc$u^{(0,3)}$ $ \ldt^3$\empc} & { $\dots$} \\
{\bmpc${u}^{(1,0)}$ $\ldx$\empc } & {\bmpc${u^{(1,1)}}$ ${\ldx}$\\ ${\ldt}$ \empc} & {\bmpc$u^{(1,2)}$ $\ldx$ \\ $\ldt^2$\empc} & {\bmpc$u^{(1,3)}$ $\ldx$ \\ $\ldt^3$\empc} & { $\dots$} \\
{\bmpc${u}^{(2,0)}$ $\ldx^2$\empc } & {\bmpc${u^{(2,1)}}$ ${\ldx^2}$\\ ${\ldt}$ \empc} & {\bmpc$u^{(2,2)}$ $\ldx^2$ \\ $\ldt^2$\empc} & {\bmpc$u^{(2,3)}$ $\ldx^2$ \\ $\ldt^3$\empc} & { $\dots$} \\
{\bmpc${u}^{(3,0)}$ $\ldx^3$\empc } & {\bmpc${u^{(3,1)}}$ ${\ldx^3}$\\ ${\ldt}$ \empc} & {\bmpc$u^{(3,2)}$ $\ldx^3$ \\ $\ldt^2$\empc} & {\bmpc$u^{(3,3)}$ $\ldx^3$ \\ $\ldt^3$\empc} & { $\dots$} \\
{$\vdots$ } & {$\vdots$ } & {$\vdots$ } & {$\vdots$ } & {$\ddots$ } \\
\end{TAB}
}
{\setlength{\unitlength}{1cm}
\bp
\put(-8.5,1.7){\line(1,1){6.2}}
\put(-3,8.5){Line A, $r=1$,}
\put(-2.8,8.1){$a=2$}
\put(-8.5,0.2){\line(1,1){8.2}}
\put(-9.5,0.5){Line C,}
\put(-9.5,0.1){$r=1$,}
\put(-9.5,-0.2){$a=3$}
\qbezier(-7.9,1.7)(-5.7,6.03)(-4.5,8.2)
\put(-6.0,8.5){Line B, $r=2$,}
\put(-6.0,8.1){$a=2$}
\ep}
\caption{Terms in the Taylor series of $\U{i+j}{n-l}$ illustrated in a matrix format. Constants in each term are omitted for clarity. Lines A, B and C represent $\pp+r\qq=d+a$ for different sets of parameters.}
\label{fig:ts1}
\end{center}
\end{figure}
In this graphical representation, we omit constants in each term for the sake of clarity.
In words, \req{cond} implies constraints on the term containing the derivative of order $(d,0)$ and on all the terms that satisfy the inequality $\pp+r\qq<d+a$, that is, all terms above the $\pp+r\qq=d+a$ line in \rfig{ts1}.
With this representation, we can easily separate terms that need to be eliminated from those that do not.
To illustrate this, let us choose $d=1$, $a=2$ and $r=1$, which corresponds to a second-order approximation of the first derivative, using a convective-type CFL condition such that $\dt\sim\dx$.
For these parameters, conditions are imposed on the terms with $(\pp,\qq)=\{(0,0),(1,0),(2,0),(0,1),(1,1),(0,2)\}$, which are the terms above the line $A$ in the figure.
If asynchrony is absent, that is, $l=0$, the only terms that are non-zero in the table belong to the first column.
This shows that, for a given accuracy, the number of terms on which
conditions are imposed is larger when asynchrony is present, which thus results in a larger linear system.
The increase in the number of equations also depends on $r$, which relates $\dt$ and $\dx$.
For example, the situation for $r=2$ is also shown in \rfig{ts1} with line $B$.
The number of terms above the line $B$ (4 terms) is less than $A$ (6 terms),
which implies that a higher $r$ will reduce the number of lower order terms due
to asynchrony for a given accuracy.
The other aspect is the increase in stencil size with increase in accuracy.
In commonly used synchronous schemes ($l=0$), a successive increase in the
order of accuracy will impose a new condition on one more term in the Taylor
series,
which adds an additional equation to the linear system. The linear system is then solved by adding one more grid point to the stencil.
However, in deriving \ats schemes more than one additional equation may be
added to the system (compare the number of terms above lines A and C).
Thus, we expect the stencil of \ats schemes to grow larger than commonly used
synchronous schemes when the desired accuracy is increased.
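These counts are easy to verify by direct enumeration; the following
short Python sketch reproduces the number of constrained terms for
lines A, B and C of \rfig{ts1}:
\begin{verbatim}
def constrained_terms(d, a, r):
    """Taylor terms u^(p,q) with p + r*q < d + a."""
    return [(p, q) for p in range(d + a) for q in range(d + a)
            if p + r*q < d + a]

print(len(constrained_terms(1, 2, 1)))   # line A: 6 terms
print(len(constrained_terms(1, 2, 2)))   # line B: 4 terms
print(len(constrained_terms(1, 3, 1)))   # line C: 10 terms
\end{verbatim}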
\subsection{Choice of stencil}
\label{sec:choice}
In principle one can choose a stencil that consists of different grid points
and time levels to approximate spatial derivatives.
However, the stencils of commonly used synchronous schemes are constructed
exclusively with spatial grid points.
This has some advantages.
First, the function at the synchronous time level is available for spatial
derivative evaluation at all points in the domain.
Second, as argued in \cite{DA2014}, and elaborated in \rsec{at-gm} above, this
choice avoids the additional terms that will appear in Taylor series when the
stencil consists of delayed time levels.
When asynchrony is present, on the other hand, as is clear from \req{general_deriv_at}, the function can belong to multiple time levels.
Thus, we can take advantage of using the function at delayed time levels in deriving \ats schemes.
The choice of stencil should be made according to the nature of terms in Taylor series on which conditions in \req{cond} are imposed.
To understand this let us recall the tabular representation of the Taylor series of $\U{i+j}{n-l}$, as shown in \rfig{ts2}.
\begin{figure}[h]
\begin{center}
{\ss
\begin{TAB}(e,1cm,1cm){:c:c:c:c:c:}{:c:c:c:c:c:}
\bmpc${u}$\empc & {\cg\bmpc ${u^{(0,1)}}$ ${ \ldt}$\empc} & {\cg\bmpc$u^{(0,2)}$ $\ldt^2$\empc} & {\cg\bmpc$u^{(0,3)}$ $ \ldt^3$\empc} & {\cg $\dots$} \\
{\cb \bmpc${u}^{(1,0)}$ $\ldx$\empc } & {\colr \bmpc${u^{(1,1)}}$ ${\ldx}$\\ ${\ldt}$ \empc} & {\colr \bmpc$u^{(1,2)}$ $\ldx$ \\ $\ldt^2$\empc} & {\colr \bmpc$u^{(1,3)}$ $\ldx$ \\ $\ldt^3$\empc} & {\colr $\dots$} \\
{\cb $\vdots$ } & {\colr $\vdots$ } & {\colr $\vdots$ } & {\colr $\vdots$ } & {\colr $\ddots$ } \\
{\cb \bmpc ${u}^{(d,0)}$ $\ldx^d$ \empc} & {\colr \bmpc ${u^{(d,1)}}$ $\ldx^d$ \\ $\ldt$ \empc} & {\colr \bmpc $u^{(d,2)}$ $\ldx^d$ \\ $\ldt^2$ \empc} & {\colr \bmpc $u^{(d,3)}$ $\ldx^d$ \\ $\ldt^3$ \empc} & {\colr $\dots$} \\
{\cb $\vdots$ } & {\colr $\vdots$ } & {\colr $\vdots$ } & {\colr $\vdots$ } & {\colr $\ddots$ } \\
\end{TAB}
}
\caption{Terms in the Taylor series of $\U{i+j}{n-l}$ illustrated in a matrix format. Constants in each term are omitted for clarity. Different colors represent terms from different groups, as explained in \rsec{choice}.}
\label{fig:ts2}
\end{center}
\end{figure}
We can classify terms into four groups, as represented by the four different colors in the figure.
Terms in blue are a function of $\dx$ alone. These terms will appear in the Taylor series of the function when $j\ne 0$.
Similarly, terms that are a function of $\dt$ only are shown in green and they appear when $l\ne 0$.
Terms in red are a function of both $\dx$ and $\dt$, and these appear when $j\ne0$ and $l\ne 0$.
The term $u$ in black is a function of neither $\dx$ nor $\dt$ and is present in the Taylor series of the function at any point and time level.
In order to eliminate specific terms in the truncation error, it is apparent that we cannot arbitrarily choose the points and time levels in a stencil.
They have to be selected according to the number of terms in each of these groups.
For example, if a linear system consists of three equations that correspond to condition on terms that belong to the red group, then the scheme would need the function evaluated at a minimum of three combinations of $j$ and $l$ such that $j\ne0$ and $l\ne 0$.
If not, the linear system may have no solution, or may have a solution corresponding to a stencil completely biased towards the synchronous side, like forward and backward differences.
The choice of stencil has consequences also in terms of the performance of simulation codes on parallel machines.
Expanding the stencil in space will lead to larger message sizes to be sent over the network, which may be too expensive at extreme scales.
Using multiple levels in time will keep the messages relatively small, but will increase the memory requirements in each PE.
This choice, thus, would require information on the specific computing system to be used for the simulation.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{layout.eps}
\caption{A schematic of stencil layout for a particular
asynchrony-tolerant (AT) scheme.}
\label{fig:layout}
\end{figure}
The rectangular box in \rfig{layout} illustrates the layout of the stencil used in expressing the general scheme in \req{general_deriv_at}.
However, as mentioned earlier, not all the time levels at all points are required to approximate the derivative.
Instead, one can limit the number of time levels at each grid point in such a way as to introduce the exact number of coefficients that would make the linear system solvable. \req{general_deriv_at} then becomes:
\be
\left. \frac{\pd^d u}{\pd x^d} \right|_i^n \approx
\sum_{j=-\smin}^{\smax}
\sum_{l=\lmin{j}}^{\lmax{j}}
\co{j}{l}
\U{i+j}{n-l}
,
\label{eq:general_deriv_at2}
\ee
where \lmin{j}\ and \lmax{j}\ are the lower and upper limits on the time levels used at the point $i+j$.
These limits are computed according to the latest time level available and the number of time levels chosen at that point in the stencil.
As an example, in \rfig{layout} we identify a stencil to solve a system with four equations. At the two interior points the latest available time level is $n$, which has a zero delay.
Thus, the limits are $\lmin{j}=\lmax{j}=0$, for $j\in \{-1,0\}$.
As specified before, the latest available time level at the buffer point is given by $\n=n-\k{i+1}$, and we use two successive time levels at this point. The limits on the time level at this point are then $\lmin{1}=\k{i+1}$ and $\lmax{1}=\k{i+1}+1$.
A choice of stencil will lead to a scheme only when there exists a solution to the resulting linear system $\bs{A}\bs{\tilde{c}}=\bs{b}$.
Since $\bs{b}\neq \bs{0}$ due to the first condition in \req{cond}, the system is non-homogeneous and has a unique solution only when the matrix $\bs{A}$ is non-singular or has a full rank.
If $N_A$ is the size of the linear system, then the matrix has full rank when $rank(\bs{A})=N_A$.
We, then, obtain the scheme by solving the system and substituting the coefficients into \req{general_deriv_at2}.
On the other hand, when $rank(\bs{A})<N_A$, the matrix is singular and
the linear system possesses either no solution or infinitely many solutions.
We can distinguish these two cases by computing the rank of the augmented matrix $\bs{A}|\bs{b}$.
If $rank(\bs{A})\ne rank(\bs{A}|\bs{b})$, then the system is inconsistent and the choice of stencil does not result in a scheme.
In the case where $rank(\bs{A})= rank(\bs{A}|\bs{b})$, the linear system is consistent, but has infinitely many solutions.
The linear system, then, contains two or more equations that are linearly dependent.
This means that for the choice of stencil, conditions on at least two of the terms are mathematically equivalent, or a condition on at least one of the terms can be obtained from a linear combination of the others.
In such a situation, we can get a scheme with greater accuracy with the same stencil.
In some cases, it is possible to construct a smaller linear system (with a corresponding smaller stencil) comprised of linearly independent equations, which does in fact have a unique solution.
The greater the number of linearly dependent equations, the smaller the linear system with linearly independent equations will be.
This suggests that a judicious selection of grid points and time levels can be used to increase the number of linearly dependent equations in the resulting system, and thus reduce the stencil size which in turn reduces computations as well as the size of communication messages.
This will be of interest in deriving \ats schemes, which demand larger stencils due to the presence of additional terms arising from asynchrony.
Let us recall the second condition from \req{cond}, imposed to eliminate the terms due to asynchrony in deriving a scheme. After cancelling out the term $\dx^\pp \dt^\qq / \pp! \qq! $, which is constant across the equation for a given $(\pp,\qq)$, we get
\be
\sum_{j=-\smin}^{\smax} \sum_{l=\lmin{j}}^{\lmax{j}} \co{j}{l} j^\pp l^\qq = 0.
\label{eq:cond-async1}
\ee
It is evident from the above equation that the existence of linearly dependent equations rests on the values of $j$ and $l$ which are defined by the stencil, as well as $\pp$ and $\qq$ which represent the order of the derivative corresponding to the equation.
As mentioned earlier, the function in the stencil can belong to multiple time levels.
The time level of the function at the interior points has a zero delay, that is $l=0$, and hence, will not appear in equations corresponding to asynchrony terms.
If we choose a single uniform time level with a delay $\k{i+j}=\tilde{k}$ for all $i+j \in B$ in the stencil, then $\lmin{j}=\lmax{j}=\tilde{k}$ which leads to a uniform value of $l^\qq$ in \req{cond-async1}.
The equation then reduces to
\be
\sum_{i+j\in B} \co{j}{\tilde{k}} j^\pp = 0,
\label{eq:cond-async2}
\ee
which is independent of $\qq$, and shows that for a given $\pp$, equations corresponding to $\qq \ne 0$ are linearly dependent.
With reference to \rfig{ts2}, when we eliminate a term in the red or green groups, all the other terms in the corresponding row that are in the same group are also eliminated.
This illustration shows that it is possible to choose a stencil which results in linearly dependent equations in a system.
We conclude this section by summarizing the steps to derive \ats schemes.
\begin{enumerate}
\item List the terms on which conditions have to be imposed for a given $d$, $a$ and $r$
\item Identify an appropriate stencil ($\smin,\smax,\lmin{j},\lmax{j}$) according to the terms in the list
\item Compute the rank of the matrix $\bs{A}$
\bi
\item $rank(\bs{A})=N_A$: unique solution
\item $rank(\bs{A})<N_A$ and $rank(\bs{A})\ne rank(\bs{A}|\bs{b})$: no solution, identify a new stencil
\item $rank(\bs{A})<N_A$ and $rank(\bs{A})= rank(\bs{A}|\bs{b})$: infinite solutions, add more conditions to get greater accuracy or reduce the system size with only linearly independent equations and adjust the stencil size
\ei
\item Solve for $\bs{\tilde{c}}$ and substitute the coefficients into the general scheme
\end{enumerate}
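A minimal Python sketch of step 3, assuming the square system
$\bs{A}\bs{\tilde{c}}=\bs{b}$ has been assembled as in the earlier
listing, is:
\begin{verbatim}
import numpy as np

def classify_stencil(A, b):
    """Decide whether an assembled (square) system yields a scheme."""
    N_A = A.shape[1]                  # number of unknown coefficients
    rA  = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA == N_A:
        return "unique solution: solve for the coefficients"
    if rA != rAb:
        return "no solution: identify a new stencil"
    return ("infinitely many solutions: add conditions for greater "
            "accuracy, or keep only linearly independent equations")
\end{verbatim}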
\subsection{Alternative approach} \label{sec:alt-app}
It is often necessary to use schemes with a specific structure in terms of stencil and the corresponding coefficients to either improve computational performance or satisfy numerical properties.
In the context of \ats schemes, it is desirable to use schemes at PE boundary points that are similar in nature to those at interior points.
Such an implementation may improve the overall stability of a numerical method and mitigate the natural tendency of errors to concentrate near PE boundaries (e.g. see Fig. 3 in \cite{DA2014}).
Though the method described earlier gives the flexibility to choose a particular structure for the stencil, there is not much control over the nature of the resulting coefficients in the scheme.
This is because the necessary conditions imposed on the terms on the right hand side of \req{exp} are all solved in a single linear system; explicit conditions on the coefficients would have to be added to that system to address this issue.
In an alternative approach to derive schemes, we propose to impose necessary conditions (similar to \req{cond}) on the set of terms arising from Taylor series in a step-by-step process.
In each step, a subset of lower order terms are eliminated, while retaining the derivative order term using a particular stencil.
This process is repeated until the desired accuracy is achieved.
Linear systems of smaller size can be constructed in each step to enforce the conditions and obtain the coefficients.
The procedure described in step 3 of the summary in \rsec{at-gm} should be used in computing the solution of these systems.
We now proceed to outline the procedure to derive schemes similar to central differences using this approach, and will later provide a detailed illustration in example 3 of \rsec{class}.
Central difference schemes are widely used in solving parabolic and elliptic PDEs, and are shown to have low numerical dissipation, necessary to resolve all scales in multi-scale phenomena \cite{hirsch.I}.
If we consider the structure of central difference schemes, it can be
characterized by a symmetric stencil about the point of computation, and
symmetric coefficients (in absolute value).
A general synchronous central difference scheme can be expressed as
\be
\left. \frac{\pd^d u}{\pd x^d} \right|_i^n \approx
\sum_{j=0}^{J}
\phi_{j}
\left( \U{i+j}{n} + (-1)^d \U{i-j}{n} \right)
,
\label{eq:general_deriv_cds}
\ee
where $J$ determines the size of stencil and $\phi_j$ are the appropriate coefficients.
Let us consider this stencil in the presence of asynchrony.
In practical simulations each PE, typically, is assigned a large number of grid points.
When asynchrony is allowed in such cases, delays are experienced only on one side of the stencil, that is, either to the left or the right of the point of computation $i$, for $i\in\Ib$.
If we assume the delay on the left side, which implies $i-j$ is a buffer point, then the terms in the sum in \req{general_deriv_cds} take the form $\left( \U{i+j}{n}+ (-1)^d\U{i-j}{n-l} \right)$.
To maintain the above mentioned symmetries in \ats schemes, we use this sum to eliminate some of the lower order terms and retain the derivative order term in the Taylor series in the first step.
As delay is present only at $i-j$, none of the terms due to asynchrony in the expansion of $\U{i-j}{n-l}$ are cancelled out in the sum of the function at the two points.
However, some of the terms, which are not a function of $\dt$, cancel out depending on the order of derivative $d$.
If $d$ is odd, then terms that correspond to even powers of $\dx$ cancel out, as shown next.
Consider the difference $\U{i+j}{n}- \U{i-j}{n}$ where we choose $l=0$ to simplify the analysis.
The conclusions, though, are valid for arbitrary delays $l>0$. A Taylor series expansion can then be written as
\be
\U{i+j}{n}- \U{i-j}{n} = 2 \left[ u^{(1,0)} \frac{(j\dx)}{1!} + u^{(3,0)} \frac{(j\dx)^3}{3!} + u^{(5,0)} \frac{(j\dx)^5}{5!} + \dots \right]
\label{eq:deriv11}
\ee
Similarly, if we consider the sum, $\U{i+j}{n}+\U{i-j}{n-l}$, terms with odd powers of $\dx$ will vanish.
This reduces some of the terms on which conditions need to be imposed, as we move on to the next step.
A further decrease in the number of conditions can be achieved by artificially imposing the same delay of $l$ levels on the other side of the stencil, that is, $\left( \U{i+j}{n-l} + (-1)^d \U{i-j}{n-l} \right)$.
The Taylor series expansion of this difference, for odd $d$, is
\bea
\U{i+j}{n-l}-\U{i-j}{n-l} & = & 2 \left[ u^{(1,0)} \frac{(j\dx)}{1!} + u^{(1,1)} \frac{(j\dx)(-l\dt)}{1!1!} \right. \nonumber\\
& & \left. + u^{(3,0)} \frac{(j\dx)^3}{3!} + u^{(1,2)} \frac{(j\dx)(-l\dt)^2}{1!2!} + \dots \right],
\label{eq:deriv22}
\eea
which shows that all the terms with even powers of $\dx$, regardless of the power of $\dt$, are absent.
Indeed, imposing this artificial delay on the function at the interior points, though it demands the storage of additional time levels at each grid point, leads to a smaller number of constraints. Thus, schemes with delay on both sides need a smaller stencil to compute derivatives.
In \cite{Mudigere2014}, this approach was used to recover the drop in accuracy due to delay in communication in a particular application using central difference schemes.
The authors further suggested that imposing delay on both sides of the stencil in central differences would suffice to maintain the accuracy under asynchronous conditions.
However, it can be shown from a Taylor series expansion that the schemes cannot be accurate beyond second order under the conditions they presented.
It is essential to increase the stencil size to achieve higher order accuracy when asynchrony is present, as shown in this work.
The remaining lower order terms can be eliminated by expanding the stencil with additional terms of the form $\left( \U{i+j}{n}+ (-1)^d\U{i-j}{n-l} \right)$ for different values of $j$ or $l$.
This can be done either in a single or multiple steps, and both of them will ensure symmetry in the coefficients.
Assuming delay on the left of the stencil, the resultant \ats scheme takes the
form:
\be
\left. \frac{\pd^d u}{\pd x^d} \right|_i^n \approx
\sum_{j=0}^{J}
\sum_{l=\lmin{-j}}^{\lmax{-j}}
\cats{j}{l}
\left( \U{i+j}{n} + (-1)^d \U{i-j}{n-l} \right)
\label{eq:general_deriv_cds-at}
\ee
It is often useful to derive \ats schemes that reduce to central difference schemes when all delays are zero, that is $\k{j}=0$ for $j\in B$.
Such schemes can be derived by expanding the stencil in space, using the sum at different $j$, to eliminate terms that are not a function of $\dt$, and by using the sum at different levels in time to cancel out the terms due to asynchrony.
This approach, which contains the essence of the alternative procedure
presented in this section, will be illustrated in detail as Example 3 below.
\subsection{Classification of schemes}\label{sec:class}
In arriving at \ats schemes there are several choices available in terms of the points and time levels in a stencil and the nature of the coefficients. We first provide a simple classification of \ats schemes based on these choices, and then present some examples.
Let us consider the stencil of the general \ats scheme in \req{general_deriv_at2}, which is given by the limits $\smin$ and $\smax$ in space and $\lmin{j}$ and $\lmax{j}$ in time.
If $\smin=\smax$, the number of points is equal on either side of the point of computation $i$.
We refer to this as a {\em symmetric} stencil in space.
Otherwise, $\smin \ne \smax$ and the stencil is {\em asymmetric}.
Regarding the nature of the delays, schemes can potentially have different delay values at different points in a stencil.
However, enforcing a uniform delay across all the buffer points in a scheme, that is $\k{i+j}=\k{}$ for all $i+j\in B$, may lead to linearly dependent conditions and a simpler implementation of schemes.
We can, thus, classify schemes according to the presence or absence of uniform delay in schemes.
In addition to the uniformity of delays, schemes can also be classified with respect to the time levels chosen at interior points.
The function at these points can be either at the synchronous time level or at artificially imposed levels which, as we have shown, provide some numerical advantages.
Schemes can also be classified on the basis of the nature of coefficients.
When asynchrony is present, a stencil with symmetric points may not necessarily give rise to symmetry in its coefficients.
This is due to the non-uniform time levels in the stencil at these points.
To obtain symmetry in coefficients, as in standard central difference schemes, we have earlier proposed to use a sum or difference of the function at symmetric grid points.
In this regard, we classify schemes with symmetric coefficients as the ones which have $|\co{-j}{0}|= |\co{j}{0}|$ (i.e. when $\k{i+j}=0$).
A summary of these classifications is given in \rtab{class}.
\begin{table}[h]
\begin{center}
\begin{tabular}{l|l l}
\hline
Feature & Classification \\
\hline
\hline
Layout of & symmetric & asymmetric \\
grid points & $\smin=\smax$ & $\smin \neq \smax$ \\
\hline
Nature of delay & unconstrained & uniform delay \\
at buffer points & & $\k{i+j} = \tilde{K} \ \ \forall \ \ i+j\in B$ \\
\hline
Artificial delay & zero delay & non-zero delay \\
at interior point & $\k{i+j}=0 \ \ \forall \ \ i+j\in I$ & $\k{i+j}\ge 0 \ \ \forall \ \ i+j\in I$ \\
\hline
Coefficients & symmetric & asymmetric \\
& $|\co{-j}{0}|= |\co{j}{0}|$ & $|\co{-j}{0}|\neq |\co{j}{0}|$ \\
\hline
\end{tabular}
\caption{Summary of classification of asynchrony-tolerant (AT) schemes.}
\label{tab:class}
\end{center}
\end{table}
From the discussions in previous sections, it is clear that each of these classifications will have consequences in terms of numerical properties and computational performance of schemes.
We will now proceed to derive three \ats schemes and demonstrate how the conditions corresponding to the classification can be implemented in arriving at them.
\vspace{0.25cm}
\noindent{\em{Example 1}}: first derivative - second order accurate ($d=1$, $a=2$) \newline
Using $r=2$,
conditions in \req{cond} are imposed on terms that satisfy the inequality $\pp+2\qq<3$.
This gives rise to a linear system with four equations corresponding to the terms with $(\pp,\qq)=\{(0,0),(1,0),(2,0),(0,1)\}$ in the Taylor series.
The next step is to select a stencil with the function defined at four different combinations of points and time levels.
Let, as before, $n-\k{i+j}$ be the latest available time level at a point $i+j \in B$ with $j>0$.
Further, let us choose the function set $\{\U{i-1}{n},\U{i}{n},\U{i+1}{n-\k{i+1}}, \U{i+2}{n-\k{i+2}}\}$ to construct the linear system $\bs{A}\bs{\tilde{c}}=\bs{b}$.
This results in
\be
\left[
\begin{array}{cccc}
1 & 1 & 1 & 1 \\
-\dx & 0 & \dx & 2\dx \\
\frac{\dx^2}{2} & 0 & \frac{\dx^2}{2} & 2\dx^2 \\
0 & 0 & -\k{i+1}\dt & -\k{i+2}\dt
\end{array}
\right]
\left[
\begin{array}{c}
\co{i-1}{0} \\
\co{i}{0} \\
\co{i+1}{\k{i+1}} \\
\co{i+2}{\k{i+2}} \\
\end{array}
\right]
= \left[
\begin{array}{c}
0 \\
1 \\
0 \\
0 \\
\end{array}
\right].
\label{eq:lsx1}
\ee
The rank of the coefficient matrix in the above equation is $4$, which is equal to the size of the system.
Thus, the choice of stencil results in a scheme without any further adjustments.
After solving for the coefficients, we obtain the scheme as
\bea
\left. \frac{\pd u}{\pd x} \right|_i^n & = & \frac{(-4\k{i+1}+\k{i+2})\U{i-1}{n}+3\k{i+1}\U{i}{n}-\k{i+2}\U{i+1}{n-\k{i+1}}+\k{i+1}\U{i+2}{n-\k{i+2}}}{2(3\k{i+1}-\k{i+2})\dx} \nonumber \\
& & + \ord\left( \frac{6\k{i+1}-\k{i+2}}{18\k{i+1}-6\k{i+2}}\dx^2,-\frac{\k{i+1}\k{i+2}}{3\k{i+1}-\k{i+2}}\dt\right).
\label{eq:ex1-0}
\eea
Note that the coefficients are a function of the random delay at buffer points.
It is easy to see by inspection that this scheme has to be complemented in two specific circumstances.
First, when $3\k{i+1}-\k{i+2}=0$ the coefficients become infinite.
This can be avoided by artificially altering the delays such that $3\k{i+1}-\k{i+2}\ne0$.
Second, when the function at both buffer points is at a synchronous time level,
i.e., no delay, the above scheme will result in an indeterminate form.
In that case one can use
\be
\left. \frac{\pd u}{\pd x} \right|_i^n = \frac{-3\U{i-1}{n}+3\U{i}{n}-\U{i+1}{n-\k{}}+\U{i+2}{n-\k{}}}{4\dx}+ \ord\left( \dx^2,\k{}\dt \right),
\label{eq:ex1-1}
\ee
which is obtained by substituting $\k{i+1}=\k{i+2}=\k{}$ and simplifying
\req{ex1-0}.
It is interesting to see that
the coefficients in the above scheme, with a uniform delay across the buffer points, are independent of the delay value, which eliminates the limitations of the scheme in \req{ex1-0}.
Similar schemes can be derived by considering delays on the left of the stencil.
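The closed-form coefficients of \req{ex1-0} can be checked directly
against the linear system in \req{lsx1}; a short NumPy verification
(with arbitrary sample values of $\dx$, $\dt$ and the delays) is:
\begin{verbatim}
import numpy as np

dx, dt, k1, k2 = 0.1, 0.01, 1, 2      # sample values only
A = np.array([[1.0,     1.0, 1.0,     1.0],
              [-dx,     0.0, dx,      2*dx],
              [dx**2/2, 0.0, dx**2/2, 2*dx**2],
              [0.0,     0.0, -k1*dt,  -k2*dt]])
b = np.array([0.0, 1.0, 0.0, 0.0])
c = np.linalg.solve(A, b)
closed = np.array([-4*k1 + k2, 3*k1, -k2, k1]) / (2*(3*k1 - k2)*dx)
print(np.allclose(c, closed))         # -> True
\end{verbatim}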
\vspace{0.25cm}
\noindent{\em{Example 2}}: second derivative - fourth order accurate ($d=2$, $a=4$) \newline
The relationship between the time step and grid spacing is
assumed as $\dt\sim\dx$, that is $r=1$.
For this set of parameters, \req{cond} enforces conditions on 21 terms in the Taylor series, which are highlighted in red in \rfig{ts-ex2}.
\begin{figure}[h]
\begin{center}
{\ss
\begin{TAB}(e,1cm,1cm){:c:c:c:c:c:c:c:}{:c:c:c:c:c:c:c:}
\bmpc${\colr u}$\empc & {\colr\bmpc ${u^{(0,1)}}$ ${ \ldt}$\empc} & {\colr\bmpc$u^{(0,2)}$ $\ldt^2$\empc} & {\colr\bmpc$u^{(0,3)}$ $ \ldt^3$\empc} & {\colr\bmpc$u^{(0,4)}$ $ \ldt^4$\empc}& {\colr\bmpc$u^{(0,5)}$ $ \ldt^5$\empc}& {$\dots$} \\
{\colr \bmpc${u}^{(1,0)}$ $\ldx$\empc } & {\colr \bmpc${u^{(1,1)}}$ ${\ldx}$\\ ${\ldt}$ \empc} & {\colr \bmpc$u^{(1,2)}$ $\ldx$ \\ $\ldt^2$\empc} & {\colr \bmpc$u^{(1,3)}$ $\ldx$ \\ $\ldt^3$\empc} & {\colr \bmpc$u^{(1,4)}$ $\ldx$ \\ $\ldt^4$\empc}& {\bmpc$u^{(1,5)}$ $\ldx$ \\ $\ldt^5$\empc} & {$\dots$} \\
{\colr \bmpc${u}^{(2,0)}$ $\ldx^2$\empc } & {\colr \bmpc${u^{(2,1)}}$ ${\ldx^2}$\\ ${\ldt}$ \empc} & {\colr \bmpc$u^{(2,2)}$ $\ldx^2$ \\ $\ldt^2$\empc} & {\colr \bmpc$u^{(2,3)}$ $\ldx^2$ \\ $\ldt^3$\empc} & { \bmpc$u^{(2,4)}$ $\ldx^2$ \\ $\ldt^4$\empc}& { \bmpc$u^{(2,5)}$ $\ldx^2$ \\ $\ldt^5$\empc} & { $\dots$} \\
{\colr \bmpc${u}^{(3,0)}$ $\ldx^3$\empc } & {\colr \bmpc${u^{(3,1)}}$ ${\ldx^3}$\\ ${\ldt}$ \empc} & {\colr \bmpc$u^{(3,2)}$ $\ldx^3$ \\ $\ldt^2$\empc} & {\bmpc$u^{(3,3)}$ $\ldx^3$ \\ $\ldt^3$\empc} & { \bmpc$u^{(3,4)}$ $\ldx^3$ \\ $\ldt^4$\empc}& { \bmpc$u^{(3,5)}$ $\ldx^3$ \\ $\ldt^5$\empc} & {$\dots$} \\
{\colr \bmpc${u}^{(4,0)}$ $\ldx^4$\empc } & {\colr \bmpc${u^{(4,1)}}$ ${\ldx^4}$\\ ${\ldt}$ \empc} & {\bmpc$u^{(4,2)}$ $\ldx^4$ \\ $\ldt^2$\empc} & {\bmpc$u^{(4,3)}$ $\ldx^4$ \\ $\ldt^3$\empc} & {\bmpc$u^{(4,4)}$ $\ldx^4$ \\ $\ldt^4$\empc}& {\bmpc$u^{(4,5)}$ $\ldx^4$ \\ $\ldt^5$\empc} & {$\dots$} \\
{\colr \bmpc${u}^{(5,0)}$ $\ldx^5$\empc } & {\bmpc${u^{(5,1)}}$ ${\ldx^5}$\\ ${\ldt}$ \empc} & {\bmpc$u^{(5,2)}$ $\ldx^5$ \\ $\ldt^2$\empc} & {\bmpc$u^{(5,3)}$ $\ldx^5$ \\ $\ldt^3$\empc} & {\bmpc$u^{(5,4)}$ $\ldx^5$ \\ $\ldt^4$\empc}& {\bmpc$u^{(5,5)}$ $\ldx^5$ \\ $\ldt^5$\empc} & {$\dots$} \\
{ $\vdots$ } & { $\vdots$ } & {$\vdots$ } & {$\vdots$ } & {$\vdots$ } & {$\vdots$ } & {$\ddots$ } \\
\end{TAB}
}
\caption{Terms in the Taylor series of $\U{i+j}{n-l}$ illustrated in a matrix format. Constants in each term are omitted for clarity. To obtain the scheme in Example 2 conditions in \req{cond} are imposed on the red color terms.}
\label{fig:ts-ex2}
\end{center}
\end{figure}
The resulting linear system has 21 equations which, in principle, will need the function at 21 combinations of points and time levels.
However, from \req{cond-async1} we can see that a stencil with only two time levels, $n$ for interior points and $n-\k{}$ for buffer points, will lead to linearly dependent equations in the system.
We use this choice of stencil to reduce the size of the linear system.
Choosing the limits $\{\smin,\smax\}=\{5,6\}$ in space and assuming the buffer points are on the right side of the stencil, leads to a smaller linear system with 11 equations that has a unique solution.
Upon solving the system, the resulting \ats scheme is
\bea
\left. \frac{\pd^2 u}{\pd x^2} \right|_i^n & = & \frac{1}{12\dx^2}\left[ 35\U{i-5}{n}-164\U{i-4}{n}+294\U{i-3}{n}-236\U{i-2}{n}+71\U{i-1}{n} -45\U{i+1}{n-\k{}} \right. \nonumber\\
& & \left. +225\U{i+2}{n-\k{}}-450\U{i+3}{n-\k{}}+450\U{i+4}{n-\k{}}-225\U{i+5}{n-\k{}}+45\U{i+6}{n-\k{}} \right] \nonumber \\
& & +\ord\left( \dx^4 , \k{} \dx^3 \dt \right).
\label{eq:ex2-1}
\eea
Note that a single linear system has been used to obtain the above scheme.
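As a consistency check, the coefficients of \req{ex2-1} can be seen
to satisfy the synchronous moment conditions of \req{cond} and, since
the delay is uniform, the reduced conditions of \req{cond-async2}.
A short NumPy verification (taking $\dx=1$) is:
\begin{verbatim}
import numpy as np
from math import factorial

j_syn = np.array([-5, -4, -3, -2, -1])
c_syn = np.array([35.0, -164, 294, -236, 71])/12
j_del = np.array([1, 2, 3, 4, 5, 6])
c_del = np.array([-45.0, 225, -450, 450, -225, 45])/12
j = np.r_[j_syn, j_del]
c = np.r_[c_syn, c_del]
# q = 0 conditions: sum_j c_j j^p / p! is 1 for p = d = 2, else 0
print([round(float(np.sum(c*j**p))/factorial(p), 9)
       for p in range(6)])
# uniform-delay conditions, Eq. (cond-async2): the moments of the
# delayed coefficients vanish for p = 0, ..., 4
print([round(float(np.sum(c_del*j_del**p)), 9) for p in range(5)])
\end{verbatim}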
\vspace{0.25cm}
\noindent{\em{Example 3}}: second derivative - fourth order accurate ($d=2$, $a=4$) \newline
In this example, we will use the alternative step-by-step approach described in \rsec{alt-app} to derive an \ats scheme that reduces to a standard central difference scheme in the absence of delays.
If we consider the Taylor series of $u$ at a generic point and time level in a stencil, and assume $\dt\sim\dx^2$, conditions have to be imposed on terms with
\bea
(\pp,\qq) & = & \{(0,0), (1,0), (2,0), (3,0), (4,0), (5,0) \nonumber \\
& & (0,1), (1,1), (2,1), (3,1), (0,2), (1,2)\} .
\label{eq:set}
\eea
In order to maintain a symmetry in the stencil points and coefficients, we use the sum $\left( \U{i+j}{n} + \U{i-j}{n-l} \right)$ in the first step which, upon Taylor series expansion, eliminates the terms with odd $\pp$ and $\qq=0$, that is, $(\pp,\qq)=\{(1,0),(3,0),(5,0)\}$. In the second step, conditions are enforced on the remaining terms that are only a function of $\dx$ (the terms with $\qq=0$) by expanding the stencil in space.
These are the three terms corresponding to $(\pp,\qq)=\{(0,0),(2,0),(4,0)\}$, which result in three equations using the function $\left( \U{i+j}{n} + \U{i-j}{n-l} \right)$ for $j=0,1,2$:
\be
\left[
\begin{array}{ccc}
2 & 2 & 2 \\
0 & \dx^2 & 4\dx^2 \\
0 &\frac{\dx^4}{12} & \frac{4\dx^4}{3} \\
\end{array}
\right]
\left[
\begin{array}{c}
\cats{0}{l} \\
\cats{1}{l} \\
\cats{2}{l} \\
\end{array}
\right]
= \left[
\begin{array}{c}
0 \\
1 \\
0 \\
\end{array}
\right]
\label{eq:lsx2}
\ee
After solving the equations we obtain a linear combination of the function that is free from lower order synchronous terms.
We can express the linear combination as
\be
\left. \frac{\pd^2 u}{\pd x^2} \right|_i^n = \frac{-\U{i+2}{n} + 16 \U{i+1}{n} - 30\U{i}{n} + 16\U{i-1}{n-l} - \U{i-2}{n-l}}{12\dx^2} + \ord(\dx^4,l\dt,l\dt/\dx^2).
\label{eq:cd4-async}
\ee
When $l=0$, the terms in the truncation error due to asynchrony disappear from the above expression, and we recover the standard fourth order central difference scheme.
In the next step, we eliminate the remaining lower order terms which appear due to asynchrony.
If we expand the stencil further in space, which corresponds to $j>2$, then the scheme would possess the required symmetries, but will not reduce to the fourth order central difference in the absence of delay.
On the other hand, when the stencil size is increased in time, i.e., $l\in\{\k{},\k{}+1,\k{}+2,\dots\}$, we get a scheme that does resemble a standard central difference.
Imposing the conditions on the six asynchrony terms from the set in \req{set} would, in principle, need \req{cd4-async} at six time levels.
However, with the use of multiple time levels in the stencil,
the resulting linear system has three linearly dependent conditions.
We find that conditions on terms with the same $\qq$ are all mathematically equivalent.
This reduces the size of the linear system that uses the linear combination in \req{cd4-async} at $l\in\{\k{},\k{}+1,\k{}+2\}$ to three equations:
\be
\left[
\begin{array}{ccc}
-\k{}\frac{5\dt}{4\dx^2} & -(\k{}+1)\frac{5\dt}{4\dx^2} & -(\k{}+2)\frac{5\dt}{4\dx^2} \\
\k{}^2 \frac{5\dt^2}{8\dx^2} & (\k{}+1)^2\frac{5\dt^2}{8\dx^2} & (\k{}+2)^2\frac{5\dt^2}{8\dx^2} \\
1 & 1 & 1 \\
\end{array}
\right]
\left[
\begin{array}{c}
\cats{0}{\k{}} \\
\cats{1}{\k{}+1} \\
\cats{2}{\k{}+2} \\
\end{array}
\right]
= \left[
\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}
\right]
\label{eq:lsx3}
\ee
The solution to this linear system results in the scheme
\bea
\left. \frac{\pd^2 u}{\pd x^2} \right|_i^n & = & \frac{1}{2}(\k{}^2+3\k{}+2)\frac{-\U{i+2}{n} + 16 \U{i+1}{n} - 30\U{i}{n} + 16\U{i-1}{n-\k{}} - \U{i-2}{n-\k{}}}{12\dx^2} \nonumber \\
& & -(\k{}^2+2\k{})\frac{-\U{i+2}{n} + 16 \U{i+1}{n}- 30\U{i}{n} + 16 \U{i-1}{n-\k{}-1} - \U{i-2}{n-\k{}-1}}{12\dx^2} \nonumber \\
& & +\frac{1}{2}(\k{}^2+\k{})\frac{-\U{i+2}{n} + 16 \U{i+1}{n}- 30\U{i}{n} + 16 \U{i-1}{n-\k{}-2} - \U{i-2}{n-\k{}-2}}{12\dx^2} \nonumber \\
& & + \ord\left(\dx^4,\k{}(\k{}+1)(\k{}+2)\dt^3,\k{}(\k{}+1)(\k{}+2) \dt^3/\dx^2 \right).
\label{eq:cd4-at}
\eea
Clearly, in the absence of delay, $\k{}=0$, the scheme reduces to a standard fourth order central difference scheme.
Note that, like the scheme in Example 1, the coefficients in the above scheme are a function of the random delay $\k{}$.
However, unlike \req{ex1-0} in Example 1, this scheme remains well defined for any delay value in the range $[0,L-1]$.
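The structure of \req{cd4-at} can also be verified symbolically: the
three weights multiplying \req{cd4-async} at the delays $\k{}$,
$\k{}+1$ and $\k{}+2$ must have unit sum and vanishing first and
second moments in the delay. A short SymPy check is:
\begin{verbatim}
import sympy as sp

k = sp.symbols('k', nonnegative=True)
w = [sp.Rational(1, 2)*(k**2 + 3*k + 2),
     -(k**2 + 2*k),
     sp.Rational(1, 2)*(k**2 + k)]
lev = [k, k + 1, k + 2]
print(sp.simplify(sum(w)))                                  # -> 1
print(sp.simplify(sum(wi*li for wi, li in zip(w, lev))))    # -> 0
print(sp.simplify(sum(wi*li**2 for wi, li in zip(w, lev)))) # -> 0
print([wi.subs(k, 0) for wi in w])  # -> [1, 0, 0]: reduces to CD4
\end{verbatim}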
\ \\
Some other useful examples with their leading
order term in the truncation error are collected in
\rtabs{atschemes-left}{atschemes-right}.
These will be used later on
when we assess the numerical performance of
different \ats schemes.
\section{Error analysis}
\label{sec:error}
In previous sections, we presented a method to derive \ats schemes of arbitrary accuracy.
As explained earlier, these schemes are, typically, used at PE boundaries ($i\in\Ib$) where asynchrony is experienced.
The number of computations that are carried out asynchronously in a domain depends on the number of PEs used to solve the problem, the stencil size of the schemes used at interior points, and the statistics of the random delays, which in turn depend on the characteristics of communications in a computing system.
These dependencies bring new challenges when trying to understand the overall accuracy of these \ats schemes.
First, due to the random nature of the delay, the associated truncation error is also random in nature.
Second, schemes to compute spatial derivatives at interior points are not the same as \ats schemes at PE boundary points and thus have different truncation errors. These issues result in a non-homogeneity of the error in the domain, in both space and time.
In our previous work \cite{DA2014}, we have proposed a statistical description to analyze the overall error and determine the accuracy of the numerical solution.
We follow a similar procedure in this work.
Before we develop the error analysis, we present some necessary definitions that will be used.
First, let us define the probability of having a time level $\n=n-\k{i}$ at a grid point $i$ as $\prob{k}{i}$.
The sum of probabilities of all levels at point $i$ is obviously
\begin{equation}
\sum_{k[i]=0}^{L-1} \prob{k}{i} = 1.
\label{eq:c6}
\end{equation}
To obtain the statistics of the error, we define two types of averages for a variable $f$: a space average and an ensemble average.
The space average can be performed over all points in the set $I$ or the subsets $\Ii$ and $\Ib$.
If the average is over the entire domain, that is $i\in I$, it is denoted by angular brackets and given by $\la f\ra = \sum_{i=1,N} f_i / N$.
On the other hand, the average over the points in the subsets $\Ii$ and $\Ib$ are given by $\xaveb{f}= \sum_{i\in \Ib} f_i / \Nb$ and
$\xavei{f} = \sum_{i\in \Ii} f_i / \Ni$,
respectively.
The random nature of delays is taken into account by ensemble averages, which is denoted by an overline $\eave{f}$.
A common measure of the error incurred by using a finite difference representation of the original PDE is given by the so called truncation error. Formally, it is given by the difference between the PDE and the approximate finite difference equation (FDE), that is $E=PDE-FDE$.
As introduced in \cite{DA2014}, the assessment of the error of asynchronous schemes, which is random in nature and heterogeneous in space, can be done by applying the two averages described above. That is,
\be
\ave{E} = {1\over N}\sum_{i=1,N} \eave{\E{i}{n}},
\ee
where $\E{i}{n}$ is the truncation error at the point $i$ and time level $n$.
Due to the non-uniform expression for the truncation error at interior and PE boundary points, it is convenient to split the error according to the two sets of points:
\be
\ave{E} = {1\over N}\left[ \sum_{i\in\Ii} \E{i}{n} + \sum_{i\in\Ib} \eave{\ranE{i}{n}} \right]
\label{eq:aveE_split}
\ee
Note that the error due to interior points does not possess randomness due to delays and is, hence, unaffected by the ensemble average.
On the other hand, errors at PE boundary points have both random asynchronous and deterministic synchronous components.
This allows us to further split the error in the set $\Ib$ as
\be
\ave{E} = {1\over N}\left[ \sum_{i\in\Ii} \E{i}{n} + \sum_{i\in\Ib} {\E{i}{n}}|_s + \sum_{i\in\Ib} \eave{\ranE{i}{n}}|_a \right],
\label{eq:aveE_split2}
\ee
where the subscripts $s$ and $a$ denote the synchronous and asynchronous components, respectively.
It is clear that in the absence of delays $\eave{\ranE{i}{n}}|_a = 0$.
The order of accuracy of a scheme will depend on the leading order term in each of the error terms in the above equation.
These terms comprise the sum of the truncation error due to all the
derivatives in the original PDE, including the time derivative.
Thus, it is important to choose the accuracy of time integration to match the order of accuracy of space derivatives.
We will discuss this topic next and then present an example to illustrate the effect of asynchrony on the error.
We will end this section with a generalization of the results on accuracy of asynchronous schemes.
\subsection{Time integration}
\label{sec:time-disc}
To understand the effect of time discretization on the overall order of accuracy, let us consider the equation $\pd u/\pd t = f$, where $f$ depends on spatial derivatives of $u$, integrated using the Euler scheme.
The scheme is first order in time with the leading order term being $-u^{(0,2)}\dt / 2$.
As mentioned in \rsec{at-gm}, if we assume a relation of the form $\dt\sim \dx^r$, then the leading order term is equivalent to $\ord(\dx^r)$ in space.
When the accuracy of the space derivatives is greater than $r$, the total error will, very likely, be dominated by the temporal term, which will then dictate the order of accuracy of the solution.
Thus, if a certain order is desired for space derivatives, it is important to select a time discretization with the same (or greater) order to keep the overall order unchanged.
We will follow this practice as we demonstrate the accuracy of the proposed \ats schemes next.
For this, we choose a linear multi-step method to compute the time derivative. A general expression with $T$ time steps is given by
\be
u^{n+1}_i=u^{n}_i+ \dt \sum_{m=0}^{T-1} \beta_m f_i^{n-m},
\label{eq:time_disc}
\ee
where the coefficients $\beta_m$ determine the particular temporal scheme \cite{Stoer2013}.
The advantage of using a temporal scheme of the form \req{time_disc} is that the terms $f_i^{n-m}$ can be computed using \ats schemes and are, thus, free of asynchrony errors to the desired order of accuracy; the same then holds for the linear combination of $f_i$ at different time steps.
For example, if one uses an \ats scheme that is fourth order accurate, with $r=2$ (i.e. $\dt\sim \dx^2$), then one needs a temporal scheme with second order accuracy to maintain fourth order accuracy globally.
This can be accomplished by a two-step Adams-Bashforth method
\be
u^{n+1}_i=u^{n}_i+ \dt \left(\frac{3}{2} f_i^n - \frac{1}{2} f_i^{n-1} \right),
\label{eq:time_ab2}
\ee
which is readily shown to be second order in time \cite{Stoer2013}. The generalization to higher orders is straightforward.
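As an illustration, the sketch below (in Python) marches \req{time_ab2} in
time with a synchronous second-order diffusion stencil standing in for the
spatial right-hand side; in the AT setting, the stencil near PE boundary
points would instead use delayed values. All names and parameter values are
illustrative assumptions.
\begin{verbatim}
import numpy as np

def rhs(u, alpha, dx):
    # placeholder spatial operator: second-order central diffusion;
    # an AT scheme would replace this near PE boundary points
    return alpha * (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / dx**2

N, alpha, r_alpha = 128, 0.1, 0.1
dx = 2.0 * np.pi / N
dt = r_alpha * dx**2 / alpha       # dt ~ dx^2, i.e. r = 2
x = np.arange(N) * dx
u = np.sin(2.0 * x)

f_prev = rhs(u, alpha, dx)
u = u + dt * f_prev                # bootstrap the two-step method with Euler
for n in range(100):
    f_now = rhs(u, alpha, dx)
    # u^{n+1} = u^n + dt*(3/2 f^n - 1/2 f^{n-1})
    u = u + dt * (1.5 * f_now - 0.5 * f_prev)
    f_prev = f_now
\end{verbatim}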
\subsection{Example: heat equation with fourth-order accurate \ats schemes}
Let us consider the 1D heat equation,
\begin{equation}
\frac{\pd u}{\pd t}=\alpha\frac{\pdd u}{\pd x^2},
\label{eq:heat}
\end{equation}
where $u(x,t)$ is the temperature and $\alpha$ is the thermal diffusivity of the medium.
The above equation is solved on a uniform grid shown in \rfig{grid} with periodic boundary conditions.
The equation is approximated with the second order Adams-Bashforth scheme shown in \req{time_ab2} and standard fourth order central difference for the space derivative at interior points.
At the PE boundary points, the space derivative is computed with a fourth
order \ats scheme which, with delay in the left boundary,
is given by \req{cd4-at} derived in Example 3.
Using Taylor series, the truncation error at interior points is
\be
\E{i}{n} = \left( -\frac{1}{6} \U{}{(0,3)} - \frac{1}{4} \alpha \U{}{(2,2)} \right) \dt^2 - \frac{1}{90} \alpha \U{}{(6,0)} \dx^4 + \ord \left( \dx^6,\dt^3,\dx^4\dt \right) .
\label{eq:te-sync}
\ee
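The spatial contribution above can be checked symbolically. The short sketch
below (in Python, assuming the sympy package is available) recovers the
$-\frac{1}{90}u^{(6,0)}\dx^4$ term of the standard fourth order central
difference, up to the factor $\alpha$ from the heat equation.
\begin{verbatim}
import sympy as sp

x, dx = sp.symbols('x dx', positive=True)
u = sp.Function('u')

# standard fourth-order central difference for the second derivative
fd = (-u(x + 2*dx) + 16*u(x + dx) - 30*u(x)
      + 16*u(x - dx) - u(x - 2*dx)) / (12*dx**2)

# Taylor expansion in dx minus the exact derivative = truncation error;
# the leading term should be -u''''''(x) * dx**4 / 90
print(sp.series(fd - u(x).diff(x, 2), dx, 0, 6))
\end{verbatim}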
As mentioned above, at PE boundary points, the truncation error can be split into the synchronous and asynchronous components,
\be
\ranE{i}{n}|_{\k{}=k} = {\E{i}{n}}|_s + \ranE{i}{n}|_{a,\k{}=k}.
\label{eq:aveE_split3}
\ee
Because, by construction, the \ats scheme in \req{cd4-at} reduces to the standard central difference in the absence of delay, the synchronous component of the error, ${\E{i}{n}}|_s$,
is the same as \req{te-sync}. Note that since this scheme has
a uniform delay
at buffer points, we drop the subscript of $\k{}$ in the above
expression for simplicity.
The asynchronous component of the error, considering delays only on the left side of the stencil, can be readily shown to be
\be
\ranE{i}{n}|_{a,\k{}=k} = - \frac{5}{24}\left( k^3 + 3 k^2 + 2 k \right) \alpha \U{}{(0,3)} \frac{\dt^3}{\dx^2} + \ord\left( k^3\dt^3/\dx \right).
\label{eq:te-async}
\ee
The leading order term in the error remains the same when the delays are experienced on the right of the stencil.
Clearly, when $\k{}=0$, we have $\ranE{i}{n}|_{a,\k{}=k}=0$ and thus also its ensemble average.
On the other hand, for a general random delay $\k{}\ge0$, the ensemble average is
\bea
\eave{\ranE{i}{n}|_{a}}
& \ap&
\sum_{k=0}^{L-1} p_k{\ranE{i}{n}|_{a,\k{}=k}} \nonumber \\
& \ap&
\sum_{k=0}^{L-1} p_{k}\left( - \frac{5}{24}\left( k^3 + 3 k^2 + 2 k \right) \alpha \U{}{(0,3)} \frac{\dt^3}{\dx^2} \right) \nonumber \\
& \ap&
\left( - \frac{5}{24} \alpha \U{}{(0,3)} \frac{\dt^3}{\dx^2} \right)
\sum_{k=0}^{L-1} p_{k}\left( k^3 + 3 k^2 + 2 k \right) \nonumber \\
& \ap&
\left( - \frac{5}{24} \alpha \U{}{(0,3)} \frac{\dt^3}{\dx^2} \right)
\left( \kmom{3} + 3 \kmom{2} + 2 \kmom{} \right),
\label{eq:te-async1}
\eea
where moments are given by $\eave{\k{}^n}=\sum_{k=0,L-1} p_{k}k^n$.
It is interesting that the average error in the presence of asynchrony depends not just on the mean of the delay, as in \cite{DA2014}, but also on its higher-order moments.
The implication of this result is that assessing the performance of asynchronous numerical schemes requires a certain degree of detail about the architecture of the computing system, such as the probability density function of the delays $\k{}$. Conversely, one can quantitatively compare the performance of different computing systems by comparing moments of $\k{}$.
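For instance, given a delay distribution $\{p_k\}$, the moment combination in
\req{te-async1} can be evaluated directly, as in this short sketch (in Python,
with an assumed distribution):
\begin{verbatim}
import numpy as np

def delay_moment(p, n):
    # n-th moment <k^n> = sum_k p_k * k**n of the delay distribution
    k = np.arange(len(p))
    return np.sum(p * k**n)

p = np.array([0.5, 0.3, 0.2])          # illustrative: L = 3 levels
factor = (delay_moment(p, 3)
          + 3.0 * delay_moment(p, 2)
          + 2.0 * delay_moment(p, 1))  # <k^3> + 3<k^2> + 2<k>
print(factor)  # multiplies -(5/24) alpha u^(0,3) dt^3/dx^2
\end{verbatim}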
We now substitute the leading order terms in \reqs{te-sync}{te-async1} into \req{aveE_split3}. Assuming the statistics of the delays are homogeneous in space, the average error is
\bea
\ave{E} & \ap & {1\over N}\left[ \sum_{i\in\Ii} \left( \left( -\frac{1}{6} \U{}{(0,3)} - \frac{1}{4} \alpha \U{}{(2,2)} \right) \dt^2 - \frac{1}{90} \alpha \U{}{(6,0)} \dx^4 \right) \right. \nonumber \\
& & + \sum_{i\in\Ib} \left( \left( -\frac{1}{6} \U{}{(0,3)} - \frac{1}{4} \alpha \U{}{(2,2)} \right) \dt^2 - \frac{1}{90} \alpha \U{}{(6,0)} \dx^4 \right) \nonumber \\
& & \left. + \sum_{i\in\Ib} \left( \left( - \frac{5}{24} \alpha \U{}{(0,3)} \frac{\dt^3}{\dx^2} \right) \left( \kmom{3} + 3 \kmom{2} + 2 \kmom{} \right) \right) \right].
\label{eq:aveE_split4}
\eea
The first two sums on the right hand side are due to synchronous
computations and can be conveniently combined by noting that $I=\Ii\cup\Ib$.
To determine the spatial accuracy of the solution, we use the stability parameter $r_\alpha = \alpha \dt / \dx^2$ to express the time step $\dt$ in terms of $\dx$.
This corresponds to $r=2$ in the formulation presented in \rsec{at-gm}. The above equation then reduces to
\bea
\ave{E} & \ap & {1\over N}\left[ \sum_{i\in I} \left( -\frac{1}{6} \frac{r_\alpha^2}{\alpha^2} \U{}{(0,3)} - \frac{1}{4} \frac{ r_\alpha^2}{\alpha} \U{}{(2,2)} - \frac{1}{90} \alpha \U{}{(6,0)}\right) \dx^4 \right. \nonumber \\
& & \left. + \sum_{i\in\Ib} \left( - \frac{5}{24} \frac{ r_\alpha^3}{\alpha^2} \U{}{(0,3)} \right) \left( \kmom{3} + 3 \kmom{2} + 2 \kmom{} \right) \dx^4 \right],
\label{eq:aveE_split5}
\eea
which can be rewritten as
\bea
\ave{E} & \ap & \left[ -\frac{1}{6} \frac{ r_\alpha^2}{\alpha^2} \la\U{}{(0,3)}\ra - \frac{1}{4} \frac{ r_\alpha^2}{\alpha} \la \U{}{(2,2)} \ra - \frac{1}{90} \alpha \la\U{}{(6,0)}\ra \right] \dx^4 \nonumber \\
& & + \left[\frac{N_B}{N} \left( \kmom{3} + 3 \kmom{2} + 2 \kmom{} \right) \left( - \frac{5}{24} \frac{ r_\alpha^3}{\alpha^2} \la \U{}{(0,3)} \ra_B \right) \right] \dx^4.
\label{eq:aveE_split6}
\eea
The average error is clearly seen to possess components due to synchronous and asynchronous computations.
Either of the terms can dominate the average error depending on physical
parameters ($\alpha$, initial conditions, etc.), numerical parameters
($\dx$, $r_\alpha$, etc.), and simulation parameters ($P$, network
performance, etc.).
If the synchronous part dominates the overall error, then the resulting
scheme is fourth order accurate, that is $\ave{E}\sim \ord(\dx^4)$.
If, on the other hand, the asynchronous component dominates, the error is given by
\be
\ave{E} \ap \frac{P(\smin+\smax)}{N} \left( \kmom{3} + 3 \kmom{2} + 2 \kmom{} \right) \left( - \frac{5}{24} \frac{r_\alpha^3}{\alpha^2} \la \U{}{(0,3)} \ra_B \right) \dx^4,
\label{eq:aveE_2}
\ee
where we have used $N_B=(\smin+\smax)P$, with $\smin$ and $\smax$ being the stencil sizes in space at interior points.
Using $N=\mathcal{L}/\dx$, where $\mathcal{L}$ is the length of the domain, and with all other parameters kept constant, the average error is found to scale as
\bea
\ave{E} & \sim & \frac{P}{N} \left( \kmom{3} + 3 \kmom{2} + 2 \kmom{} \right) \dx^4 \nonumber \\
& \sim & {P}\left( \kmom{3} + 3 \kmom{2} + 2 \kmom{} \right) \dx^5
\label{eq:aveE_scaling}
\eea
Interestingly, the order of accuracy of the numerical method now depends on how the problem is scaled on a parallel machine.
In the case of weak scaling, where the computational effort per PE is kept constant, that is $P/N=constant$, the error varies as $\dx^4$ and the method is fourth order accurate in space.
On the other hand, when the total computational effort is kept constant ($N=constant$) and the simulations are carried out on an increasingly large number of PEs, the average error is $\ave{E}\sim \ord(\dx^5)$ and the method is fifth order accurate.
We also observe that the error scales linearly with $P$.
In some situations the errors due to the synchronous and
asynchronous components may be
comparable. In such cases, the overall error will
depend on the sign of each contribution.
If the synchronous and asynchronous components
have opposite signs, then it is possible to expect some error
cancellation.
The order of accuracy though would remain unaltered.
\subsection{Generalization}
We now proceed to generalize the expressions for average error ($\ave{E}$) presented in \req{aveE_scaling}.
For this, we restate the conditions and assumptions that lead to \req{aveE_scaling}.
First, we assumed that the asynchronous component dominates the overall error.
Second, the \ats scheme used in the analysis is fourth order accurate, which leads to an $\ord(\dx^4)$ leading term due to asynchrony.
We have also assumed a uniform random delay in the stencil, that is $\k{i+j}=\k{}$ for all $i+j \in B$.
Also, the scheme uses three successive asynchronous time levels (with delays $\k{}$, $\k{}+1$, $\k{}+2$), which results in a cubic polynomial in $k$ in the leading order error term.
With the above observations, we can arrive at a general case that uses \ats schemes with $\mathcal{T}$ successive asynchronous time levels and is accurate to an order $a$.
If the asynchronous component of the error
dominates the average error, then it is easy to
generalize \req{aveE_scaling} as:
\bea
\ave{E} & \sim & \frac{P}{N} \dx^a \sum_{m=1}^{\mathcal{T}}\gamma_m\kmom{m} \nonumber \\
& \sim & {P}\dx^{a+1} \sum_{m=1}^{\mathcal{T}}\gamma_m\kmom{m}
\label{eq:aveE_scaling_gen}
\eea
Note that the average error still scales linearly with the number of PEs.
However, higher order moments of the delay are necessary to characterize the error when the stencil size of \ats schemes is expanded in time.
A minimum accuracy of order $a$ is then assured, regardless of how simulations are scaled up.
\section{Numerical Simulations}
\label{sec:simulations}
In this section, we verify the numerical performance of AT schemes.
Let us consider the general PDE:
\be
{\partial u\over \partial t} = \sum_{d=1,\cal{D}} \beta_d {\partial^d u \over
\partial x^d}
\label{eq:general}
\ee
where $\cal{D}$ is the highest derivative and the coefficient $\beta_d$
determines the characteristics of the physical process associated with the
$d$-th derivative.
Of particular interest are the heat equation (${\cal D}=2$ with $\beta_1=0$ and
$\beta_2=\alpha$), and the advection-diffusion equation (${\cal D}=2$ with
$\beta_1=c$ and $\beta_2=\alpha$), where $\alpha$ is the thermal or viscous
diffusivity and $c$ is the advection speed.
When the coefficients $\beta_d$ are constant, \req{general}
is linear and usually
possesses an analytical solution, which will be used here to evaluate the error in
numerically computed solutions.
The so-called nonlinear viscous Burgers' equation, which is widely used in
understanding physical properties of fluid flows, is obtained with ${\cal D}=2$,
$\beta_1=u(x,t)$ and $\beta_2=\alpha$.
We also perform simulations of this equation to demonstrate the feasibility
of AT schemes in solving multi-scale phenomena with non-linear couplings.
\subsection{Simulation details}
The equations described in the above section are solved in a periodic domain of length $2\pi$. For initial conditions, we use a multi-scale spectrum given by superimposed sinusoidal waves:
\be
u(x,0) = \sum_\kappa A(\kappa)\sin(\kappa x + \phik),
\label{eq:IC}
\ee
where $\kappa$ denotes the wavenumber. $A(\kappa)$ and $\phik$ are the
amplitude and phase angle corresponding to each wavenumber $\kappa$.
The phases $\phik$ are included in order to avoid circumstances like
the coincidence of PE boundaries with zero gradients in the function, which may
result in very special cancellations
of some of the error terms due to asynchrony.
The results presented below are in fact
ensemble averages of multiple simulations with different phases.
In addition to avoiding special cases in terms of accuracy as mentioned
above, this procedure
provides a probability space over which ensemble averages can be obtained.
Simulations are carried out using several configuration cases with different
governing equations to study the behavior of AT schemes in different regimes.
The synchronous computations of spatial derivatives are carried out using
standard central difference schemes.
Close to PE boundary points, the AT schemes summarized in
\rtabs{atschemes-left}{atschemes-right} are used.
The time derivatives are discretized according to the procedure described in
\rsec{time-disc}.
The details of each numerical experiment
are tabulated in \rtab{cases}.
\begin{table}[h]
\begin{center}
\begin{tabular}{c|c c c c}
\hline
Case & Equation & Time derivative & \multicolumn{2}{c}{Space derivatives} \\
& & & Synchronous & Asynchrony-tolerant \\
\hline
\hline
1 & AD & Eul & CD2 & $(1,2,2)b$, $(2,2,2)b$ \\
2 & D & Eul & CD2 & $(2,1,2)$ \\
3 & AD & Eul & CD2 & $(1,2,2)a$, $(2,2,2)a$ \\
4 & AD & AB2 & CD4 & $(1,4,2)$, $(2,4,2)$ \\
5 & D & AB3 & CD6 & $(2,6,2)$ \\
6 & VB & AB2 & CD4 & $(1,4,2)$, $(2,4,2)$ \\
\end{tabular}
\caption{Parameters of numerical experiments. In the table: AD - linear advection-diffusion equation, D - diffusion equation, VB - non-linear viscous Burgers' equation; Eul - first order Euler scheme, AB2 and AB3 - second and third order Adams-Bashforth schemes; CD2, CD4 and CD6 - second, fourth and sixth order central difference schemes; AT schemes referred according to $(d,a,r)$ notation in \rtabs{atschemes-left}{atschemes-right}.}
\label{tab:cases}
\end{center}
\end{table}
\afterpage{%
\clearpage
\thispagestyle{empty}
\begin{landscape}
\begin{table}[h]
\begin{center}
{\tabulinesep=0.5mm
\begin{tabu}{|c|c|c|}
\hline
Scheme & {Scheme at} & Leading order terms \\
$(d,a,r)$ & left boundary & \\
\hline
\hline
$(2,1,2)$
&
{ $
\scriptsize \everymath{\displaystyle}
\begin{array} {c}
\left({\U{i+1}{n}-\U{i}{n}-\U{i-1}{n-\k{}}+\U{i-2}{n-\k{}}}\right)/{2\dx^2}
\end{array}$}
&
{ $
\scriptsize \everymath{\displaystyle}
\begin{array} {c}
\frac{1}{2} \k{} u^{(1,1)}\frac{\dt}{\dx},
\frac{1}{2} u^{(3,0)}{\dx}
\end{array}$}
\\
\hline
$(1,2,2)a$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\left({3\U{i+1}{n}-3\U{i}{n}-\U{i-1}{n-\k{}}-\U{i-2}{n-\k{}}}\right)/{4\dx}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{4} {\k{}} u^{(1,1)}\dt,
\frac{5}{12} u^{(3,0)}{\dx^2}
\end{array}$}
\\
\hline
$(2,2,2)a$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\left({2\U{i+2}{n}-4\U{i+1}{n}+2\U{i}{n}+\U{i-1}{n-\k{}}-2\U{i-2}{n-\k{}}+\U{i-3}{n-\k{}}}\right)/{3\dx^2}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{3} {\k{}} u^{(2,1)}\dt,
\frac{13}{12} u^{(4,0)}{\dx^2}
\end{array}$}
\\
\hline
$(1,2,2)b$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\left({\U{i+1}{n}-(\k{}+1)\U{i-1}{n-\k{}}+\k{}\U{i-1}{n-\k{}-1}}\right)/{2\dx}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{6} u^{(3,0)}{\dx^2}
\end{array}$}
\\
\hline
$(2,2,2)b$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\left({\U{i+1}{n}-2\U{i}{n}+(\k{}+1)\U{i-1}{n-\k{}}-\k{}\U{i-1}{n-\k{}-1}}\right)/{\dx^2}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{2} {\k{}(\k{}+1)} u^{(0,2)}\frac{\dt^2}{\dx^2},
\frac{1}{12} u^{(4,0)}{\dx^2}
\end{array}$}
\\
\hline
$(1,4,2)$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{2}(\k{}^2+3\k{}+2)\left({-\U{i+2}{n} + 8 \U{i+1}{n} - 8\U{i-1}{n-\k{}} + \U{i-2}{n-\k{}}}\right)/{12\dx} \\
-(\k{}^2+2\k{})\left({-\U{i+2}{n} + 8 \U{i+1}{n} - 8 \U{i-1}{n-\k{}-1} + \U{i-2}{n-\k{}-1}}\right)/{12\dx} \\
+\frac{1}{2}(\k{}^2+\k{})\left({-\U{i+2}{n} + 8 \U{i+1}{n} - 8 \U{i-1}{n-\k{}-2} + \U{i-2}{n-\k{}-2}}\right)/{12\dx}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{30} u^{(5,0)}{\dx^4}
\end{array}$}
\\
\hline
$(2,4,2)$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{2}(\k{}^2+3\k{}+2)\left({-\U{i+2}{n} + 16 \U{i+1}{n} - 30 \U{i}{n} + 16\U{i-1}{n-\k{}} - \U{i-2}{n-\k{}}}\right)/{12\dx^2} \\
-(\k{}^2+2\k{}) \left({-\U{i+2}{n} + 16 \U{i+1}{n} - 30 \U{i}{n} + 16\U{i-1}{n-\k{}-1} - \U{i-2}{n-\k{}-1}}\right)/{12\dx^2} \\
+\frac{1}{2}(\k{}^2+\k{}) \left({-\U{i+2}{n} + 16 \U{i+1}{n} - 30 \U{i}{n} + 16\U{i-1}{n-\k{}-2} - \U{i-2}{n-\k{}-2}}\right)/{12\dx^2}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{5}{24} {\k{}(\k{}+1)(\k{}+2)} u^{(0,3)}\frac{\dt^3}{\dx^2}, \\
\frac{1}{90} u^{(6,0)}{\dx^4}
\end{array}$}
\\
\hline
$(2,6,2)$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{6}(\k{}^3+6\k{}^2+11\k{}+6) \\
\left({2\U{i+3}{n}-27\U{i+2}{n} + 270 \U{i+1}{n} - 490 \U{i}{n} + 270 \U{i-1}{n-\k{}} - 27\U{i-2}{n-\k{}} + 2\U{i-3}{n-\k{}}}\right)/{180\dx^2} \\
- \frac{1}{2}(\k{}^3+5\k{}^2+6\k{}) \\
\left({2\U{i+3}{n}-27\U{i+2}{n} + 270 \U{i+1}{n} - 490 \U{i}{n} + 270 \U{i-1}{n-\k{}-1} - 27\U{i-2}{n-\k{}-1} + 2\U{i-3}{n-\k{}-1}}\right)/{180\dx^2} \\
+ \frac{1}{2}(\k{}^3+4\k{}^2+3\k{}) \\
\left({2\U{i+3}{n}-27\U{i+2}{n} + 270 \U{i+1}{n} - 490 \U{i}{n} + 270 \U{i-1}{n-\k{}-2} - 27\U{i-2}{n-\k{}-2} + 2\U{i-3}{n-\k{}-2}}\right)/{180\dx^2} \\
- \frac{1}{6}(\k{}^3+3\k{}^2+2\k{}) \\
\left({2\U{i+3}{n}-27\U{i+2}{n} + 270 \U{i+1}{n} - 490 \U{i}{n} + 270 \U{i-1}{n-\k{}-3} - 27\U{i-2}{n-\k{}-3} + 2\U{i-3}{n-\k{}-3}}\right)/{180\dx^2} \\
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{49}{864} {\k{}(\k{}+1)(\k{}+2)(\k{}+3)} u^{(0,4)}\frac{\dt^4}{\dx^2}, \\
-\frac{1}{560} u^{(8,0)}{\dx^6}
\end{array}$}
\\
\hline
\end{tabu}}
\caption{Asynchrony-tolerant (AT) schemes for the left boundary used in numerical simulations (in \rsec{simulations}). The name of each scheme is represented by the triplet $(d,a,r)$. To distinguish between two different schemes that have the same triplet, we append ``a'' or ``b'' to the triplet. Note: the minus sign ($-$), if present, has been dropped in the leading order terms.}
\label{tab:atschemes-left}
\end{center}
\end{table}
\end{landscape}
\clearpage
}
\afterpage{%
\clearpage
\thispagestyle{empty}
\begin{landscape}
\begin{table}[h]
\begin{center}
{\tabulinesep=0.5mm
\begin{tabu}{|c|c|c|}
\hline
Scheme & {Scheme at} & Leading order terms \\
$(d,a,r)$ & right boundary & \\
\hline
\hline
$(2,1,2)$
&
{ $
\scriptsize \everymath{\displaystyle}
\begin{array} {c}
\left({\U{i+2}{n-\k{}}-\U{i+1}{n-\k{}}-\U{i}{n}+\U{i-1}{n}}\right)/{2\dx^2}
\end{array}$}
&
{ $
\scriptsize \everymath{\displaystyle}
\begin{array} {c}
\frac{1}{2} \k{} u^{(1,1)}\frac{\dt}{\dx},
\frac{1}{2} u^{(3,0)}{\dx}
\end{array}$}
\\
\hline
$(1,2,2)a$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\left({\U{i+2}{n-\k{}}-\U{i+1}{n-\k{}}+3\U{i}{n}-3\U{i-1}{n}}\right)/{4\dx}
\end{array}$}
&
{$\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{4} {\k{}} u^{(1,1)}\dt,
\frac{5}{12} u^{(3,0)}{\dx^2}
\end{array}$}
\\
\hline
$(2,2,2)a$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\left({\U{i+3}{n-\k{}}-2\U{i+2}{n-\k{}}+\U{i+1}{n-\k{}}+2\U{i}{n}-4\U{i-1}{n}+2\U{i-2}{n}}\right)/{3\dx^2}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{3} {\k{}} u^{(2,1)}\dt,
\frac{13}{12} u^{(4,0)}{\dx^2}
\end{array}$}
\\
\hline
$(1,2,2)b$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\left({(\k{}+1)\U{i+1}{n-\k{}}-\k{}\U{i+1}{n-\k{}-1}-\U{i-1}{n}}\right)/{2\dx}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{6} u^{(3,0)}{\dx^2}
\end{array}$}
\\
\hline
$(2,2,2)b$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\left({(\k{}+1)\U{i+1}{n-\k{}}-\k{}\U{i+1}{n-\k{}-1}-2\U{i}{n}+\U{i-1}{n}}\right)/{\dx^2}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{2} {\k{}(\k{}+1)} u^{(0,2)}\frac{\dt^2}{\dx^2},
\frac{1}{12} u^{(4,0)}{\dx^2}
\end{array}$}
\\
\hline
$(1,4,2)$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{2}(\k{}^2+3\k{}+2)\left({-\U{i+2}{n-\k{}} + 8 \U{i+1}{n-\k{}} - 8\U{i-1}{n} + \U{i-2}{n}}\right)/{12\dx} \\
-(\k{}^2+2\k{})\left({-\U{i+2}{n-\k{}-1} + 8 \U{i+1}{n-\k{}-1} - 8 \U{i-1}{n} + \U{i-2}{n}}\right)/{12\dx} \\
+\frac{1}{2}(\k{}^2+\k{})\left({-\U{i+2}{n-\k{}-2} + 8 \U{i+1}{n-\k{}-2} - 8 \U{i-1}{n} + \U{i-2}{n}}\right)/{12\dx}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{30} u^{(5,0)}{\dx^4}
\end{array}$}
\\
\hline
$(2,4,2)$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{2}(\k{}^2+3\k{}+2) \left({-\U{i+2}{n-\k{}} + 16 \U{i+1}{n-\k{}} - 30 \U{i}{n} + 16\U{i-1}{n} - \U{i-2}{n}}\right)/{12\dx^2} \\
-(\k{}^2+2\k{}) \left({-\U{i+2}{n-\k{}-1} + 16 \U{i+1}{n-\k{}-1} - 30 \U{i}{n} + 16\U{i-1}{n} - \U{i-2}{n}}\right)/{12\dx^2} \\
+\frac{1}{2}(\k{}^2+\k{}) \left({-\U{i+2}{n-\k{}-2} + 16 \U{i+1}{n-\k{}-2} - 30 \U{i}{n} + 16\U{i-1}{n} - \U{i-2}{n}}\right)/{12\dx^2}
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{5}{24} {\k{}(\k{}+1)(\k{}+2)} u^{(0,3)}\frac{\dt^3}{\dx^2}, \\
\frac{1}{90} u^{(6,0)}{\dx^4}
\end{array}$}
\\
\hline
$(2,6,2)$ &
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{1}{6}(\k{}^3+6\k{}^2+11\k{}+6) \\
\left({2\U{i+3}{n-\k{}}-27\U{i+2}{n-\k{}} + 270 \U{i+1}{n-\k{}} - 490 \U{i}{n} + 270 \U{i-1}{n} - 27\U{i-2}{n} + 2\U{i-3}{n}}\right)/{180\dx^2} \\
- \frac{1}{2}(\k{}^3+5\k{}^2+6\k{}) \\
\left({2\U{i+3}{n-\k{}-1}-27\U{i+2}{n-\k{}-1} + 270 \U{i+1}{n-\k{}-1} - 490 \U{i}{n} + 270 \U{i-1}{n} - 27\U{i-2}{n} + 2\U{i-3}{n}}\right)/{180\dx^2} \\
+ \frac{1}{2}(\k{}^3+4\k{}^2+3\k{}) \\
\left({2\U{i+3}{n-\k{}-2}-27\U{i+2}{n-\k{}-2} + 270 \U{i+1}{n-\k{}-2} - 490 \U{i}{n} + 270 \U{i-1}{n} - 27\U{i-2}{n} + 2\U{i-3}{n}}\right)/{180\dx^2} \\
- \frac{1}{6}(\k{}^3+3\k{}^2+2\k{}) \\
\left({2\U{i+3}{n-\k{}-3}-27\U{i+2}{n-\k{}-3} + 270 \U{i+1}{n-\k{}-3} - 490 \U{i}{n} + 270 \U{i-1}{n} - 27\U{i-2}{n} + 2\U{i-3}{n}}\right)/{180\dx^2} \\
\end{array}$}
&
{ $\scriptsize \everymath{\displaystyle} \begin{array} {c}
\frac{49}{864} {\k{}(\k{}+1)(\k{}+2)(\k{}+3)} u^{(0,4)}\frac{\dt^4}{\dx^2}, \\
-\frac{1}{560} u^{(8,0)}{\dx^6}
\end{array}$}
\\
\hline
\end{tabu}}
\caption{Asynchrony-tolerant (AT) schemes for the right boundary used in numerical simulations (in \rsec{simulations}). The name of each scheme is represented by the triplet $(d,a,r)$. To distinguish between two different schemes that have the same triplet, we append ``a'' or ``b'' to the triplet. Note: the minus sign ($-$), if present, has been dropped in the leading order terms.}
\label{tab:atschemes-right}
\end{center}
\end{table}
\end{landscape}
\clearpage
}
In numerical simulations, we use a random number generator to simulate communication delays ($\k{j}$) at PE boundaries.
This provides complete control over the statistics of the delays, thus allowing us to compare the results against the theoretical predictions in different parameter regimes.
At each time advancement, the delay at a buffer point is computed from a random number drawn with a given initial seed from a uniform distribution in the interval $[0,1]$.
This interval is divided into $L$ bins according to the probabilities $\{\prob{0}{j}, \prob{1}{j}, \dots, \prob{L-1}{j}\}$
corresponding to delays $\k{j}=0, 1, 2, \dots, L-1$, respectively.
When a random number is drawn, it is matched with the corresponding bin which determines the delay.
As we use i.i.d.\ random sequences at different PE boundaries in the simulations, there is no dependence on the location and hence, we drop the subscript $j$ in probabilities for simplicity and write $\{p_{0}, p_{1}, \dots, p_{L-1}\}$.
As an example, if we choose $L=3$ and the set $\{p_{0}, p_{1},
p_{2}\}=\{0.6,0.3,0.1\}$, then the probability of having $\k{}=0$, $\k{}=1$
and $\k{}=2$ is
$0.6$, $0.3$ and $0.1$, respectively.
In the case of schemes which use a uniform delay in their stencil (like in \req{cd4-at} of Example 3), a single random number is drawn at each PE boundary to obtain the uniform delay at all the buffer points at that PE boundary.
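A minimal sketch of this sampling procedure is given below (in Python; the
probabilities and seed are the illustrative values used above):
\begin{verbatim}
import numpy as np

def sample_delay(p, rng):
    # match a uniform random number in [0,1) to its bin; the bins are
    # delimited by the cumulative probabilities of the delay levels
    return int(np.searchsorted(np.cumsum(p), rng.random()))

rng = np.random.default_rng(seed=1234)  # fixed initial seed
p = [0.6, 0.3, 0.1]                     # L = 3, as in the example above
delays = [sample_delay(p, rng) for _ in range(10000)]
print([delays.count(k) / 1e4 for k in range(3)])  # approx. [0.6, 0.3, 0.1]
\end{verbatim}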
The error is computed by comparing the numerical solution
against the analytical solution.
With periodic boundary conditions and an initial condition given in \req{IC}, the analytical solution (denoted by subscript $a$) for the linear advection-diffusion equation is
\be
u_a(x,t) = \sum_{\kappa}e^{-\alpha\kappa^2 t}A(\kappa)\sin(\kappa (x - ct) + \phik).
\label{eq:ana_sol}
\ee
For the heat equation, the analytical solution is given by the above expression with $c=0$.
In the case of the nonlinear Burgers' equation, the error is evaluated against the solution from a highly resolved simulation.
The error at a point $i$ and time level $n$ is computed as $\E{i}{n}=\U{i}{n}-u_a(x_i,t_n)$. The overall error in the domain is obtained using the different averages presented in \rsec{error}.
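Schematically, the error evaluation proceeds as in the sketch below (in
Python); the numerical solution is replaced here by a stand-in array, and all
parameter values are illustrative assumptions.
\begin{verbatim}
import numpy as np

def u_analytical(x, t, amps, kappas, phis, alpha, c=0.0):
    # u_a(x,t) for the linear advection-diffusion equation; c=0 gives
    # the heat-equation limit
    return sum(A * np.exp(-alpha * k**2 * t) * np.sin(k*(x - c*t) + ph)
               for A, k, ph in zip(amps, kappas, phis))

N, alpha, t_n = 128, 0.1, 0.5
x = np.linspace(0.0, 2.0*np.pi, N, endpoint=False)
u_a = u_analytical(x, t_n, [2.0, 0.5, 1.5], [3, 4, 5],
                   [0.1, 0.2, 0.3], alpha)
u_num = u_a + 1e-6 * np.random.randn(N)  # stand-in for the FDE solution
E = u_num - u_a                          # pointwise error E_i^n
print(np.mean(np.abs(E)))                # space-averaged error
\end{verbatim}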
\subsection{Results}\label{sec:results}
\subsubsection{Linear equations}
\rfig{at-sol1} shows results from simulations with three different schemes:
synchronous (solid black lines), asynchronous-standard (dashed red lines) and
AT (green lines) schemes.
Note that by asynchronous-standard schemes we mean standard (synchronous)
schemes used in an asynchronous fashion.
The governing equation and schemes are those corresponding to Case 1 in
\rtab{cases}. The simulation parameters are $N=128$, $\kappa=\{2,3,5\}$ (the
vector $\kappa$ here contains the wavenumbers used in the initial condition
defined by \req{IC}), $P=4$, and three allowable time levels for
asynchronous computations according to $\{p_0,p_1,p_2\}=\{0.5,0.3,0.2\}$.
In part (a) of the figure, we show the time evolution of function $u$.
We observe that the initial condition, which is a combination of sine waves, is convected with wave speed $c$ and simultaneously damped by diffusive action, as expected.
To highlight the differences between these cases, we show the evolution of the error in part (b) of the figure.
The error in the case of asynchronous standard schemes is an order of magnitude greater near the PE boundaries (indicated by vertical dash-dotted lines).
As discussed in \cite{DA2014}, this is due to the asynchrony in the data available at buffer points. This error, which is initially localized near PE boundaries, propagates into the interior with time.
In the case of \ats schemes, which are designed to mitigate the effect of asynchrony, the error at PE boundaries is of similar magnitude as for the synchronous schemes.
\begin{figure}[h]
\centering
\subfigure{\includegraphics[width=0.49\textwidth]{sol-at.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{err-at.eps}}
\begin{picture}(0,0)
\put(-260,0){$x$}
\put(-85,0){$x$}
\put(-340,65){\rotatebox{90} {$u$}}
\put(-178,55){\rotatebox{90} {$u-u_a$}}
\put(-310,60){\vector(1,2){15}}
\put(-120,80){\vector(1,1){20}}
\put(-295,95){$t$}
\put(-98,105){$t$}
\put(-215,115){(a)}
\put(-40,115){(b)}
\end{picture}
\caption{Typical time evolution of the numerical solution of the
advection-diffusion equation
using synchronous (solid black lines), asynchronous standard (dashed red lines) and \ats schemes (green lines).
(a) The velocity field. (b) Error $\E{i}{n}=\U{i}{n} - u_a(x_i,t_n)$.
Vertical dash-dotted lines correspond to PE boundaries.
Simulation parameters: $N=128$, $P=4$, $L=3$, with
$\{p_0,p_1,p_2\}=\{0.5,0.3,0.2\}$ for the asynchronous computations.}
\label{fig:at-sol1}
\end{figure}
To verify the formal order of accuracy of AT schemes and the effect of simulation parameters ($N$, $P$, $\kmom{}$, etc.) on the overall error, we now proceed to the statistical description of the error.
An example of the effect of asynchrony on the overall error for standard
central difference schemes (Case 4 in \rtab{cases}) is shown in \rfig{at-ord4-async}.
For the fourth-order scheme used in these simulations, the error for $p_0=1.0$ decreases with a slope of $-4$, as expected in synchronous computing.
In the presence of asynchrony ($p_0<1.0$) the slope reduces to $-1$, depicting
a first order accurate solution \cite{DA2014}. Also, the absolute error for a given grid
resolution increases when asynchrony is increased ($p_0$ is reduced).
This drastic decrease in accuracy is mitigated when
AT schemes are used, as we show next.
\begin{figure}[h]
\centering
\subfigure{\includegraphics[width=0.49\textwidth]{ad-ab2-cd4-async.eps}}
\begin{picture}(0,0)
\put(-85,0){$N$}
\put(-180,65){\rotatebox{90} {$\ave{E}$}}
\put(-85,105){$-1$}
\put(-95,30){$-4$}
\put(-85,0){$N$}
\end{picture}
\caption{ Convergence plot of the average overall error with increasing grid
resolution. Results are obtained from the simulations of advection-diffusion
equation with fourth order standard central difference schemes. Different
lines correspond to varying degree of asynchrony introduced in the
simulations: $p_0=1.0$ (red), $p_0=0.7$ (green), $p_0=0.3$ (blue) and
$p_0=0.0$ (magenta). Dashed lines with a slope of $-1$ and $-4$ are shown for
reference.}
\label{fig:at-ord4-async}
\end{figure}
\rfig{at-prob-all} shows the effect of asynchrony in different configurations when AT schemes are used. The parameters used in these numerical experiments are: $\kappa=\{3,4,5\}$, $A(\kappa)=\{2.0,0.5,1.5\}$, $P=16$, $L=3$.
Different colors in the graphs represent results from different sets of $p_k$ ($k=0,1,2$) and their values are given in the caption of the figure.
Part (a) shows results from Case 2 in \rtab{cases}. A second-order central
difference scheme for synchronous computations and a first-order asymmetric
stencil AT scheme for asynchronous computations are used.
The error in the absence of delays (red line) decreases
with a slope of $-2$, as expected.
We observe that, asymptotically,
this is also the case for the asynchronous runs ($p_0<1$).
The reason for second-order accuracy even with a first-order AT scheme can be
explained with the strong scaling argument presented in \req{aveE_2}.
An increase in the amount of asynchrony is seen to increase the magnitude of
the error, leaving the asymptotic rate of convergence unchanged.
In part (b), we show results for Case 3 in \rtab{cases}.
A second-order accurate asymmetric stencil AT
scheme is used for asynchronous computations. In this case too, we see an
asymptotic convergence rate of order 2. Also, the magnitude of error increases
with the amount of asynchrony.
Note that in both parts (a) and (b), the AT schemes are constructed by
expanding the stencil in space to improve the accuracy in the presence of
asynchrony.
These schemes do not reduce to or have the same form as the central difference
schemes when $\k{}=0$.
In parts (c) and (d) of \rfig{at-prob-all}, results are shown
for AT schemes derived by
expanding the stencil in time (instead of space) to maintain accuracy in the
presence of
asynchrony. These are Cases 4 and 5 in \rtab{cases}.
These schemes have symmetric stencils and coefficients, and they
reduce to central difference schemes when $\k{}=0$.
As expected from the theory, the error in part (c) converges with an accuracy of
order 4. The effect of asynchrony is hardly noticeable at higher resolutions.
This can be attributed to the fact that the AT schemes, in this case, reduce to
the synchronous central schemes in the absence of asynchrony and result in
homogeneous synchronous truncation error terms across the domain. A similar
observation is also found in part (d), which uses a sixth order accurate scheme
for space derivative.
\begin{figure}[h]
\centering
\subfigure{\includegraphics[width=0.49\textwidth]{d-eul-cd2-at1-1-case2-prob.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{ad-eul-cd2-at2-1-case3-prob.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{ad-ab2-cd4-at4-1-case4-prob.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{d-ab3-cd6-at6-1-case5-prob.eps}}
\begin{picture}(0,0)
\put(-260,0){$N$}
\put(-85,0){$N$}
\put(-260,140){$N$}
\put(-85,140){$N$}
\put(-350,62){\rotatebox{90} {$\ave{E}$}}
\put(-180,62){\rotatebox{90} {$\ave{E}$}}
\put(-350,202){\rotatebox{90} {$\ave{E}$}}
\put(-180,202){\rotatebox{90} {$\ave{E}$}}
\put(-218,252){(a)}
\put(-40,252){(b)}
\put(-218,112){(c)}
\put(-40,112){(d)}
\put(-268,190){$-2$}
\put(-278,55){$-4$}
\put(-95,180){$-2$}
\put(-100,55){$-6$}
\end{picture}
\caption{ Convergence plot of the average overall error for Cases 2, 3, 4 and
5 listed in \rtab{cases}, simulated with \ats schemes at
communication delayed buffer points. Different lines in each graph correspond
to a varying degree of asynchrony introduced in the simulations: $p_0=1.0$
(red), $p_0=0.7$ (green), $p_0=0.3$ (blue) and $p_0=0.0$ (magenta). Dashed
lines with constant slope (value shown adjacent to line) shown for
reference.}
\label{fig:at-prob-all}
\end{figure}
\begin{figure}[h]
\centering
\subfigure{\includegraphics[width=0.49\textwidth]{ord2-at2-strong-inset.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{ord2-at2-weak-inset.eps}}
\begin{picture}(0,0)
\put(-260,0){$N$}
\put(-85,0){$N$}
\put(-350,60){\rotatebox{90} {$\ave{E}$}}
\put(-178,65){\rotatebox{90} {$\ave{E}$}}
\put(-220,60){\small $P$}
\put(-53,70){\small $N/P$}
\put(-315,105){(a)}
\put(-145,105){(b)}
\put(-308,63){$-2$}
\put(-280,35){$-3$}
\put(-270,85){$-3$}
\put(-110,50){$-2$}
\end{picture}
\caption{Effect of number of PEs on the average error for the Case 1 with
$L=3$, $p_k=\{0.2,0.5,0.3\}$ and $r_\alpha = 0.1$.
(a)
Strong scaling: cases with constant $P$. Different lines correspond to $P =
2$ (red), $4$ (green), $8$ (blue), $16$ (magenta), $32$ (black). Inset: plot
of average error with $P$ at $N=512$. (b) Weak scaling: cases with constant
$P/N$. Different lines correspond to $P/N = 1/64$ (magenta), $1/32$ (blue),
$1/16$ (green), $1/16$ (red). Inset: plot of average error with $N/P$ at
$N=128$. Dashed lines with constant slope (value shown adjacent to line) are
included for reference.}
\label{fig:at-ord2-procs}
\end{figure}
Let us now recall \req{aveE_scaling_gen}, which describes the scaling of the average error $(\ave{E})$ with $P$, $N$ and $\k{}$:
\bea
\ave{E} & \sim & \frac{P}{N} \dx^a \sum_{m=1}^{\mathcal{T}}\gamma_m\kmom{m} \nonumber \\
& \sim & {P}\dx^{a+1} \sum_{m=1}^{\mathcal{T}}\gamma_m\kmom{m}
\label{eq:aveE_scaling_gen2}
\eea
Note that this scaling holds only when the error due to asynchrony
dominates the overall error.
Otherwise, the error in the leading order may have a synchronous component
and may show a different dependence on simulation
parameters.
In \rfig{at-ord2-procs}
we show numerical data from Case 1.
According to \req{aveE_scaling_gen2},
the order of accuracy is one more than the order of the AT scheme
when $P$ is fixed.
Part (a) of the figure shows the convergence of the error for different $P$.
For low $P$, the leading order terms of the error contain both synchronous and
asynchronous
contributions, and thus show a convergence slope between $-3$ and $-2$.
However, when $P$ increases, the error due to asynchrony dominates and shows a
convergence of $-3$, as predicted by the theory.
The linear scaling of the error with
$P$ is verified in the inset of part (a).
Results for weak scaling, that is when both $P$ and $N$ increase
such that $P/N$ is kept constant, are shown in part (b).
As expected, the error for different $P/N$ asymptotically converges to second
order accuracy. In the inset of the figure, an inverse dependence of the error on
$N/P$ is observed, as also predicted by the theory.
Unlike standard synchronous schemes used in an asynchronous fashion,
for which the error due to asynchrony
depends only on the average of the delays ($\kmom{}$) \cite{DA2014}, the error
for \ats schemes can also depend on higher order moments of $\k{}$, as
shown in \rsec{error}.
This dependence is indeed confirmed in \rfig{at-kmom}. Results
in parts (a) and (b) of the figure are for Cases 3 and 4, respectively
and can be understood as follows.
For these schemes, which use two and three delayed time levels
in the stencil
($\mathcal{T}=2$ and $3$), respectively, the average error scales as:
\bea
\ave{E} & \sim & \left( \kmom{2} + \kmom{} \right) \text{\hspace{2.3cm}for Case 3} \nonumber \\
& \sim & \left(\kmom{3}+3 \kmom{2} +2 \kmom{} \right) \text{\hspace{1.15cm}for Case 4}
\label{eq:aveE_scaling_gen_kmom}
\eea
To reduce the dependence of the error to a single variable, the probability of
occurrence of a level $k$ for a given $L$ is chosen as $p_k=1/L$.
For example, if $L=3$, then $\{p_0,p_1,p_2\}=\{1/3,1/3,1/3\}$. This reduces
the scaling of the average error to
\bea
\ave{E} & \sim & \left( L^2 - 1 \right) \text{\hspace{2.5cm} for Case 3} \nonumber \\
& \sim & \left(L^3 + 2 L^2 - L - 2 \right) \text{\hspace{0.75cm} for Case 4}.
\label{eq:aveE_scaling_gen_L}
\eea
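These closed forms follow from the moments of the uniform distribution and
are easily spot-checked numerically, as in this short sketch (in Python):
\begin{verbatim}
import numpy as np

for L in range(2, 7):
    k = np.arange(L)
    p = np.full(L, 1.0 / L)                    # p_k = 1/L
    case3 = np.sum(p * (k**2 + k))             # <k^2> + <k>
    case4 = np.sum(p * (k**3 + 3*k**2 + 2*k))  # <k^3> + 3<k^2> + 2<k>
    # closed forms, up to constant prefactors of 1/3 and 1/4
    print(L, case3, (L**2 - 1) / 3.0,
          case4, (L**3 + 2*L**2 - L - 2) / 4.0)
\end{verbatim}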
As mentioned earlier, this scaling is only valid for the asynchrony component of
the error, or for the total error when the former is dominant at leading order.
Thus, to compare with \req{aveE_scaling_gen_L}, we compute the total error
and subtract from it the error of a completely synchronous, but otherwise
identical, simulation.
In \rfig{at-kmom} we show the asynchronous part of the
error thus obtained, normalized as $(\ave{E}-\ave{E}_s)/\ave{E}_s$.
The
dashed curves in the graphs correspond to \req{aveE_scaling_gen_L}.
There is very good agreement between the theoretical prediction and the
data from simulations.
\begin{figure}[h]
\centering
\subfigure{\includegraphics[width=0.49\textwidth]{kmean-ord2-l8-new.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{kmean-ord4-l9-new.eps}}
\begin{picture}(0,0)
\put(-260,-4){$L$}
\put(-85,-4){$L$}
\put(-348,30){\rotatebox{90} {$(\ave{E}-\ave{E}_s)/\ave{E}_s$}}
\put(-177,30){\rotatebox{90} {$(\ave{E}-\ave{E}_s)/\ave{E}_s$}}
\put(-213,110){(a)}
\put(-40,110){(b)}
\end{picture}
\caption{Scaling of normalized average error ($(\ave{E}-\ave{E}_s)/\ave{E}_s$) with moments of the
delay ($\k{}$). In parts (a) and (b), circles are obtained from simulation of
Cases 3 and 4, respectively, with parameters $N=512$ and $P=16$. The
dashed curves are polynomials in \req{aveE_scaling_gen_L} obtained from
theory.}
\label{fig:at-kmom}
\end{figure}
\subsubsection{Nonlinear equations}
In the above numerical experiments, we have verified the performance of \ats
schemes for linear equations.
However, a number of natural and engineering systems are governed by highly
nonlinear processes like fluid turbulence phenomena.
The viscous Burgers' equation is often used as a proxy
to understand these nonlinear effects in fluid flows
with negligible pressure effects. Thus, we use this equation
to assess \ats schemes in a more realistic setup.
\rfig{at-ord4-nl} shows the convergence of the error for
the fourth-order schemes described in \rtab{cases} as Case 6.
Clearly, even with an increase in the degree of asynchrony the error converges
with fourth-order accuracy.
In nonlinear problems like turbulence, one is interested not only in
statistical moments of
the velocity field but also in its gradients as they exhibit very strong
but localized fluctuations, a phenomenon known as intermittency \cite{SA97}.
Thus, we investigated
the variation of central moments of velocity ($u$) and velocity gradients
($\pd u /\pd x$) with the resolution, $N$.
An example is shown in \rfig{at-conv}.
For this problem, most of the contribution to the velocity field
comes from low wavenumbers, while most of the contribution for its
gradients comes from high wavenumbers.
Thus, it is not surprising that asynchrony effects are more evident for velocity
gradients at low grid resolution.
Nevertheless, both synchronous and asynchronous cases seem to converge at
the same grid resolution ($N=256$). This is consistent with the results
for lower-order statistics in \cite{DA2014}.
While our numerical experiments show accurate results even for high
order statistics of velocity gradients, this result is not
expected to be general for other equations or in higher-dimensional
spaces. This is indeed an area that needs further investigation.
\begin{figure}[h]
\centering
\subfigure{\includegraphics[width=0.49\textwidth]{vb-ab2-cd4-at4-1-case5-prob.eps}}
\begin{picture}(0,0)
\put(-90,-5){$N$}
\put(-95,55){$-4$}
\put(-180,60){\rotatebox{90} {$\ave{E}$}}
\end{picture}
\caption{ Convergence plot of the average overall error with increasing grid
resolution. Results are from simulations of the nonlinear viscous
Burgers' equation (Case 6 in \rtab{cases}).
Different lines correspond to varying degree of asynchrony introduced in the
simulations: $p_0=1.0$ (red), $p_0=0.7$ (green), $p_0=0.4$ (blue) and
$p_0=0.3$ (magenta). Dashed line is a reference power law with slope $-4$.}
\label{fig:at-ord4-nl}
\end{figure}
\begin{figure}[h]
\centering
\subfigure{\includegraphics[width=0.49\textwidth]{conv-u-2-new.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{conv-du-2-new.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{conv-u-3-new.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{conv-du-3-new.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{conv-u-4-new.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{conv-du-4-new.eps}}
\begin{picture}(0,0)
\put(-260,0){$N$}
\put(-85,0){$N$}
\put(-260,73){$N$}
\put(-85,73){$N$}
\put(-260,145){$N$}
\put(-85,145){$N$}
\put(-280,175){{$\xave{u-\xave{u}}^2$}}
\put(-280,100){{$\xave{u-\xave{u}}^3$}}
\put(-280,25){{$\xave{u-\xave{u}}^4$}}
\put(-130,175){{$\xave{\pd u/\pd x -\xave{\pd u/\pd x}}^2$}}
\put(-130,100){{$\xave{\pd u/\pd x -\xave{\pd u/\pd x}}^3$}}
\put(-130,25){{$\xave{\pd u/\pd x -\xave{\pd u/\pd x}}^4$}}
\put(-210,195){(a)}
\put(-38,195){(b)}
\put(-210,117){(c)}
\put(-38,117){(d)}
\put(-210,45){(e)}
\put(-38,45){(f)}
\end{picture}
\caption{Variation of normalized central moments of velocity and velocity gradients with grid resolution. Computations are done using Case 6 in \rtab{cases}. Graphs (a), (c) and (e) show the second, third and fourth moments of velocity, respectively. Graphs (b), (d) and (f) show the second, third and fourth moments of velocity gradients, respectively. Symbols represent the probability sets $\{p_0,p_1,p_2\}$: $\{1,0,0\}$ (circle), $\{0.7,0.2,0.1\}$ (plus), $\{0.4,0.4,0.2\}$ (square), $\{0.3,0.5,0.2\}$ (triangle).}
\label{fig:at-conv}
\end{figure}
\section{Conclusions}
A number of natural and engineering systems are governed by PDEs
whose solutions contain a wide range of scales, which
can only be captured by
high-fidelity
simulations (using high-order numerical schemes) on massive computational systems.
At extreme scales, global communications and synchronizations will likely
become an obstacle to sustained performance for a number of scientific codes.
In this work we have presented a general methodology to analyze
and derive schemes that remove these two main obstacles by allowing
some tunable level of asynchrony.
The concept relies on finite differences to approximate
derivatives of general order using values of
the function from neighboring points. Close to PE
boundaries, current computational methodologies stall until communication
between PEs is completed. In previous work
we have shown that one can relax this
forced synchronization at the mathematical level such that
computations can proceed using values from past time levels.
Here we generalized the concept, established conditions
under which schemes can be obtained, classified the
resulting schemes in terms of their properties,
and provided a general framework in which
schemes of arbitrary order can be obtained for any derivative of
a function. These schemes are referred to as asynchrony-tolerant or \ats schemes.
By analyzing in detail
the truncation error of general finite differences
when asynchrony
is allowed, we described the mathematical conditions needed to
obtain a scheme of arbitrary order under asynchronous conditions.
In particular, we showed that asynchrony errors
can be eliminated by extending the stencil either in
space or in time. These two alternatives lead
to
schemes with different properties and limitations.
However, depending on the order of the scheme and the
type of asynchrony allowed (e.g.\ on both sides of the stencil,
uniform across the stencil, etc.) not all expansions of the
stencil size will result in an \ats scheme.
The kinds of stencils that do lead to a scheme
have also been presented; identifying them requires
examining the nature of the terms present in the truncation error.
The coefficients are obtained by solving a linear
system of equations.
An alternative method was also presented where
successive terms in the truncation errors are
eliminated by a step-by-step method. The process
ends when the desired accuracy is achieved.
The resulting schemes can be classified according to the nature of
their coefficients. We presented four conditions
for the classification:
(i) symmetric layout of grid points,
(ii) unconstrained or uniform delay at boundary points,
(iii) artificial delay at interior points, and
(iv) symmetry of the coefficients.
Each has
different numerical and performance properties.
Actual examples of these different schemes were also
put forth.
The truncation error was analyzed in a statistical
framework that takes into account both the stochasticity
of delays as well as the non-uniformity of delays in
space. We have further shown that multi-step time-integration
methods can be used successfully to obtain solvers of
arbitrary order using AT schemes. The general form of
the error, given by \req{aveE_scaling_gen}, shows that the average
error depends on the number of processors as well as moments of
the distribution of delays, which in turn depend on the
characteristics of the computing system simulations are run on.
Theoretical predictions on the accuracy of the schemes were compared to
numerical experiments for
linear as well as non-linear equations. Good agreement was found across
the parameter space explored. The work presented here provides a strong
foundation for mathematically asynchronous computing methods for PDEs at extreme
scales. Application of this method to more complex phenomena and realistic
conditions is a part of our ongoing research.
\section{Acknowledgments}
The authors gratefully acknowledge NSF
(Grants OCI-1054966 and CCF-1439145)
for financial support. The authors also
thank NERSC and XSEDE for computer time on their systems. The authors benefited
from
discussions with Lawrence Rauchwerger, Raktim Bhattacharya and
Jacqueline H.\ Chen.
\bibliographystyle{model1-num-names}
\section{Introduction}
Transition metal oxides (TMO) exhibit a plethora of fascinating physical
behaviors, such as metal-insulator transitions~\cite{imada98},
multiferroicity~\cite{cheong07,tokura10,tokura14}, colossal
magnetoresistance~\cite{ramirez97, tokura06}, and high-temperature
superconductivity~\cite{pickett89,lee06}. The ongoing progress in fabricating
high-quality TMO thin films led to the emergence of a new class of artificial
materials: oxide heterostructures. Their properties are often markedly
different from those of their TMO constituents. An archetypical example is
given by heterostructures of two band insulators, SrTiO$_3$ (STO) and LaAlO$_3$ (LAO):
if the latter component reaches the critical thickness of four atomic
monolayers, a two-dimensional (2D) electron gas emerges at the
interface~\cite{ohtomo04}. Even more striking effects are observed for TMO
with a partly filled $d$ shell. Here, electronic correlations become essential.
For example, the emergence of a ferromagnetic metal at LaMnO$_3$/SrMnO$_3$
interfaces was reported~\cite{bhattacharya08}, where both constituents are bulk
antiferromagnetic insulators; or the polar field and the Mott insulating gap of
STO/LaVO$_3$ can be employed as a solar cell~\cite{assmann13, wu15}.
Although the variety of TMO gives rise to a very large number of possible
binary combinations, the count of studied oxide heterostructures grows slowly.
Following the pioneering works of Hwang and Ohtomo~\cite{ohtomo02,ohtomo04},
the research has been largely focused on superlattices whose constituents have
a (possibly distorted) perovskite structure in the bulk, and the direction of
growth was typically chosen to be along the (pseudo)cubic $[$001$]$ direction.
Against this backdrop, Xiao \emph{et~al.}~\cite{xiao11} argued that bilayers
grown along the trigonal axis, i.e.\ in the $[$111$]$ direction, form a
honeycomb lattice, with an excellent potential to create correlated analogs of
graphene~\cite{xiao11}. By employing the tight-binding (TB) approximation,
they studied different fillings of the correlated $d$ shell in the presence of
the spin-orbit coupling (SOC), and demonstrated that such (111) bilayers can
host various topologically nontrivial phases. Also Haldane's~\cite{haldane88}
quantum anomalous Hall state can be realized in (111) bilayers of SrRuO$_3$ on
STO~\cite{si17}.
\begin{figure}[tb]
\includegraphics[width=8.6cm]{fig1}
\caption{\label{fig:str}(Color online) (a) In the trigonal unit cell used for
DFT+DMFT calculations, (111) nickelate bilayers are separated by four LaAlO$_3$
layers. For NiO$_6$ octahedra, the local coordinate axes $x,y,z$ are indicated.
Bond disproportionation gives rise to a breathing distortion: stretched (b) and
squeezed (c) NiO$_6$ octahedra alternate in the lattice (d). (e)
High-symmetry points of the Brillouin zone (BZ). (f) Zigzag edges of the
honeycomb lattice used for the edge state calculations.}
\end{figure}
The ensuing numerical studies extended the TB analysis by including electronic
interaction effects on a mean-field level~\cite{yang11,ruegg11}, and
identified LaNiO$_3$ bilayers (2LNO) in an LAO matrix as a promising candidate
for the realization of topological states~\cite{yang11,ruegg12}. In the
simplest ionic approximation, Ni$^{3+}$ has the $d^7$ electronic configuration,
whereby six electrons fully occupy the low-lying $t_{2g}$ states and render
them inactive. Hence, all charge, orbital, and spin degrees of freedom in the
2LNO/$n$LAO superlattices (Fig.~\ref{fig:str}, a) pertain to the single
electron in the $e_g$ orbitals that remain degenerate in the trigonal
symmetry~\cite{xiao11}. This resilient degeneracy, in contrast to
(001) superlattices~\cite{hansmann09}, gives room for spontaneous ordering of
complex orbitals and thus topologically nontrivial states emerge despite the
small SOC~\cite{ruegg11}. While first analyses involving realistic
tight-binding Hamiltonians evaluated by means of density functional theory
(DFT)~\cite{ruegg12} as well as DFT+$U$ calculations~\cite{ruegg12,ruegg13}
suggested the stability of a Dirac semimetal state, later DFT+$U$ studies
established a key role of a breathing distortion of NiO$_6$ octahedra
(Fig.~\ref{fig:str} b, c) which opens a gap and competes with the topological
states~\cite{ruegg13, doennig14}. The breathing distortion is accompanied by a
polarization, rendering (111) LNO bilayers a prospective multiferroic with a
sizable spin polarization~\cite{doennig14}.
On the experimental side, transport measurements indicated a
semiconducting behavior, with the gap showing a sizable dependence on the
thickness of the LAO layer~\cite{middey12}. Recently, both the activated
behavior and the sensitivity of the gap size were corroborated by an
independent study, which reported gaps between 17 and 162\,meV for different
LAO thicknesses~\cite{wei16}. The nature of the gap, in particular whether it
is topological or related to a breathing distortion, remains an open question.
Here, we employ a combination of DFT and dynamical mean-field theory
(DMFT)~\cite{anisimov97,lichtenstein98,kotliar06,held07,janson18} to explore
the phase diagram of 2LNO/4LAO heterostructures (Fig.~\ref{fig:str}, a). By
performing detailed structural relaxations, we demonstrate that the presence of
a breathing distortion can be neither proved, nor disproved: both structure
types feature the same DFT+$U$ total energies within the error bars. Hence, we
carry out DFT+DMFT calculations for the uniform as well as the
bond-disproportionated structure. Both show, despite the symmetry breaking
associated with the distortion, actually quite similar physics.
This paper is organized as follows. The methods employed, including the
structure optimization, DMFT, and the evaluation of spectral functions, are
described in Sec.~\ref{sec:method}. DFT+DMFT results for the uniform and the
bond-disproportionated structures are presented in Sec.~\ref{sec:results}. A
discussion of the topological properties as well as comparison of our numerical
results with the available experimental data are given in Sec.~\ref{sec:disc}.
We conclude our paper and provide a brief outlook in Sec.~\ref{sec:summary}.
\section{\label{sec:method}Method}
\subsection{Optimization of the crystal structure}
DFT+DMFT results generally depend on the structural input from DFT
calculations. Hence, accurate information on the crystal structure is of
crucial importance. This is particularly challenging for superlattices that
are not amenable to standard x-ray or neutron diffraction measurements. A
common approach is to evaluate the structural input computationally, by
allowing for a relaxation of the atomic coordinates, but keeping the lattice
constants fixed to those of the substrate and minimizing the total
energy~\cite{janson18}. However, for correlated materials, the
underestimation of electronic correlations can have a drastic impact on the
crystal structure. A prominent example is KCuF$_3$, where orbital ordering can
give rise to a distortion of the lattice, known as the cooperative Jahn-Teller
effect. This can be assisted by lattice effects, which may also be a driving
force. While conventional DFT functionals yield a spurious undistorted
structure, DFT+$U$ captures the underlying physics and reproduces the
experimentally observed distortion~\cite{liechtenstein95}.
A distinct trait of bulk nickelates is their tendency towards bond
disproportionation, i.e.\ developing a breathing distortion of NiO$_6$
octahedra~(e.g.,~\cite{alonso99, medarde08, garcia-munoz09}). Although it is
not the case for bulk LaNiO$_3$, the compressive strain exerted by the
LaAlO$_3$ substrate can stabilize the respective distortion. Similarly to
KCuF$_3$, this physics is not captured by DFT, and conventional functionals
disfavor such a bond disproportionation~\cite{ruegg13}. Hence, structural
optimizations of nickelate superlattices performed using a conventional DFT
functional can possibly lead to spurious results, and correlations have to be
accounted for in the course of a structural optimization. The optimal solution
would be a self-consistent DFT+DMFT scheme with an atomic force calculation at
every step, but such calculations require enormous computational efforts and
remain unfeasible for multisite and multi-orbital systems such as nickelate
heterostructures. Therefore, in this work, we restrict ourselves to DFT+$U$
structural optimizations that generally capture the structural details in the
rare earth nickelates~\cite{hampel17}.
We employ the generalized gradient approximation (GGA)+$U$ functional with $U$
in the range 4.0 to 6.0\,eV and $J$\,=\,1.0\,eV as implemented in
\textsc{vasp-5.3}~\cite{vasp, *vasp_2}. Ionic relaxations are performed until
all forces are below 0.005\,eV/\r{A}. The in-plane unit cell parameters are
fixed to those of bulk LAO. To optimize the $c$ parameter, we construct cells
with different $c$ and subsequently optimize the atomic coordinates. In this
way, we find that $c$\,=\,13.30\,\r{A} yields the lowest total energy
independent of the $U$ value. Next, we consider two trial structures as a
starting point --- a uniform structure and a structure with a breathing
distortion (BD), see Fig.~\ref{fig:str} b,c --- and relax the atomic
coordinates while keeping the unit cell parameters fixed. Despite the general
trend that the uniform structure has a lower energy for smaller $U_d$, the
energy differences are of the order of several K per cell, i.e.\ on a par with
the accuracy of DFT total energies. We conclude that the elastic energy due to
the BD and the concomitant change in the electric potential are well-balanced, so
that DFT+$U$ calculations cannot provide an unambiguous answer as to which of the
two structure types is realized in 2LNO/4LAO. We therefore perform DFT+DMFT
calculations for both the uniform and the BD structure (Table~\ref{tab:poly}).
\begin{table}[h]
\caption{\label{tab:poly} Comparison of the relevant structural parameters in
the uniform as well as the bond-disproportionated (111) 2LNO/4LAO structures
optimized in the GGA+$U$ with the experimental structure of bulk
LaNiO$_3$~\cite{garcia-munoz92}. Atomic coordinates of the uniform and the
bond-disproportionated structure are provided in the Appendix.
}
\begin{ruledtabular}
\begin{tabular}{rrrr}
structure & $\langle$Ni--O$\rangle$, \r{A} & $V_{\text{NiO$_6$}}$, \r{A}$^3$ & $\measuredangle_{\text{Ni--O--Ni}}$, $^{\circ}$\\ \hline
uniform & 1.938 & 9.70 & 166.05 \\
BD & 1.952\,/\,1.925 & 9.91\,/\,9.52 & 166.04 \\
bulk LaNiO$_3$ (298\,K) & 1.935 & 9.65 & 165.22 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{DFT+DMFT}
Subsequently, self-consistent DFT calculations for both optimized structures
(uniform and BD) were performed using \textsc{wien2k}~\cite{wien2k}. Both GGA
band structures (not shown) feature a well-separated manifold crossing
the Fermi level; the respective bands are formed by the antibonding combination
of Ni $e_g$ and O $p$ states. Since we have two Ni atoms per cell (see
Fig.~\ref{fig:str}, a, d) and two $e_g$ orbitals per Ni, the total number of
bands in the manifold is four. For these four bands, we construct the
maximally localized Wannier functions (WF) using
\textsc{wannier90}~\cite{wannier90} via the \textsc{wien2wannier}
interface~\cite{wien2wannier}. Finally, we calculate $H(k)$ by Fourier
transforming the Wannier Hamiltonian on a 48$\times$48$\times$1 $k$-mesh.
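As an illustration, a minimal sketch of this last step is given below. It
assumes that the real-space hoppings $H(\mathbf{R})$ from the Wannier
construction are available as plain arrays (the names \texttt{h\_r} and
\texttt{r\_vecs} are illustrative; the degeneracy weights of the Wigner-Seitz
grid points supplied by \textsc{wannier90} are omitted for brevity):
\begin{verbatim}
import numpy as np

def hk_from_hr(h_r, r_vecs, nk=(48, 48, 1)):
    """Fourier transform the real-space Wannier Hamiltonian H(R)
    to H(k) on a regular nk[0] x nk[1] x nk[2] k-mesh.

    h_r    : (n_R, m, m) complex array of hopping matrices
    r_vecs : (n_R, 3) lattice vectors in reduced coordinates
    """
    grids = [np.arange(n) / n for n in nk]
    kpts = np.stack(np.meshgrid(*grids, indexing="ij"),
                    axis=-1).reshape(-1, 3)
    phases = np.exp(2j * np.pi * kpts @ r_vecs.T)   # (n_k, n_R)
    h_k = np.einsum("kr,rab->kab", phases, h_r)     # (n_k, m, m)
    return kpts, h_k
\end{verbatim}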
DMFT calculations were performed using the continuous-time quantum Monte Carlo (CT-QMC)
in the hybridization expansion (CT-HYB)~\cite{werner06} as implemented in
\textsc{w2dynamics}~\cite{parragh12, w2dynamics}. We used the rotationally
invariant Kanamori interaction $U'\!=\!U\!-\!2J$, which, in addition to the
density-density interaction, accounts also for the spin flip and pair hopping
terms. By fixing the Hund's exchange $J$ to 0.75\,eV~\footnote{Note that SU(2)
symmetric interactions require a smaller $J$ compared to the Slater
parametrization used in DFT+$U$: $U_\text{Kanamori} = U_\text{Slater} +
\frac{8}{7}J_\text{Slater}$ and $J_\text{Kanamori} =
\frac{5}{7}J_\text{Slater}$. See Supplementary Note\,1 in \cite{hausoel17} for
details.}, we varied the inter-orbital Coulomb repulsion $U'$ from 2 to
5\,eV~\footnote{The $U_d$ value in the DMFT calculation is considerably smaller
than $U_d$ in DFT+$U$ due to the different spatial extent of the orbitals to
which they are applied: DMFT operates on Wannier orbitals, which have substantial
Ni and O contributions, while in DFT+$U$, the repulsion pertains to the spatially
confined atomic $d$-orbital of Ni.} and scanned the temperature range between
145 and 450\,K ($80\geq\beta\geq25$\,eV$^{-1}$). At every DMFT step, two
independent two-orbital impurity problems (one for each Ni atom in the unit cell)
were solved. State-of-the-art DFT+DMFT calculations involve full
charge-self-consistency, which plays an important role in heterostructures with
a sizable electron transfer~\cite{lechermann13, *lechermann15}. Here, we
employed the non-charge-self-consistent scheme, i.e.\ the chemical potential
was adjusted to have two electrons per unit cell in each DMFT iteration. We
found, however, that the resulting per-atom occupations in DMFT were similar to
the GGA ones. The absence of appreciable charge transfer between the Ni sites
\emph{a posteriori} justifies the usage of a non-charge-self-consistent
DFT+DMFT~(see the discussion in Ref.~\onlinecite{bhandary16}).
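As a side note on the interaction parametrization quoted in the footnote above,
the Slater-to-Kanamori conversion can be encapsulated in a small helper (a
sketch; the numerical example is illustrative arithmetic only, not a statement
about the parameters employed here):
\begin{verbatim}
def kanamori_from_slater(u_slater, j_slater):
    """Kanamori parameters for the e_g subspace from Slater (U, J):
       U_K = U_S + (8/7) J_S,  J_K = (5/7) J_S,  U' = U_K - 2 J_K."""
    u_k = u_slater + 8.0 / 7.0 * j_slater
    j_k = 5.0 / 7.0 * j_slater
    return u_k, j_k, u_k - 2.0 * j_k

# e.g. J_Slater = 1.05 eV maps onto J_Kanamori = 0.75 eV
\end{verbatim}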
The quasiparticle renormalization is estimated from the slope of the imaginary
part of the self-energy, $\text{Im}{\Sigma(i\omega_n)}$, which depends on
$\omega_n$ linearly at low Matsubara frequencies:
\begin{equation}
\label{eq:Z}
Z\simeq\left(1-\frac{\partial\text{Im}{\Sigma(i\omega_n)}}{\partial\omega_n}\right)^{-1}.
\end{equation}
For each DMFT calculation, we considered only those
$\text{Im}{\Sigma(i\omega_n)}$ that still lie on a straight line according to a
$\chi^2$ fit.
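In practice, this amounts to a linear fit of the lowest Matsubara points. A
minimal sketch is given below (assuming the self-energy is available on a
frequency grid; the fixed cutoff \texttt{n\_fit} stands in for the $\chi^2$
linearity criterion described above):
\begin{verbatim}
import numpy as np

def quasiparticle_weight(w_n, im_sigma, n_fit=4):
    """Z from Eq. (1): slope of Im Sigma(i w_n) at low frequencies,
       Z ~ [1 - d Im Sigma / d w_n]^(-1).

    w_n      : Matsubara frequencies (2n+1) pi / beta, ascending
    im_sigma : Im Sigma(i w_n) on the same grid
    """
    slope = np.polyfit(w_n[:n_fit], im_sigma[:n_fit], 1)[0]
    return 1.0 / (1.0 - slope)
\end{verbatim}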
\subsection{Spectral functions}
For selected values of $\beta$ and $U$, we present spectral functions. For
these, we used self-energies $\Sigma(i\omega)$ computed with the worm
algorithm~\cite{gunacker15}, and analytically continued them to the real axis using
\textsc{Maxent}~\cite{levy17}, which employs the maximum entropy
method~\cite{jarrell96}. The resulting self-energies $\mathbf{\Sigma}(\omega)$
are used to calculate the interacting Green's function on the real frequency
axis:
\begin{equation}
\label{eq:Gw}
\textbf{G}^{-1}(\vec{k},\omega) = \left(\omega\!+\!i\delta\!+\mu\right)\mathbf{I}\!-\!\mathbf{H}(\vec{k})\!-\!\mathbf{\Sigma}(\omega)\!-\!\mathbf{\Sigma}_{\text{dc}},
\end{equation}
where bold symbols denote matrices in the orbital and site indices of the cell,
$\mathbf{\Sigma}_{\text{dc}}$ is the double-counting correction in the fully
localized limit~\cite{anisimov93}, and $\mathbf{I}$ is the identity matrix. The
$\vec{k}$-resolved and $\vec{k}$-integrated spectral functions can be obtained as
\begin{equation}
\label{eq:Awk}
A(\vec{k},\omega) = -\frac{1}{\pi}\left(\frac{1}{m}\right)\text{Tr}\left[\text{Im}\textbf{G}(\vec{k},\omega)\right]
\end{equation}
and
\begin{equation}
\label{eq:Aw}
A(\omega) = \left(\frac{1}{N_{\vec{k}}}\right)\sum_{\vec{k} \in \text{BZ}}{A(\vec{k},\omega)},
\end{equation}
respectively, where $m$ is the dimension of the $\textbf{H}(\vec{k})$ matrix.
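For concreteness, a direct (and deliberately unoptimized) sketch of
Eqs.~(\ref{eq:Gw})--(\ref{eq:Aw}) is given below, assuming
$\mathbf{H}(\vec{k})$, $\mathbf{\Sigma}(\omega)$, and
$\mathbf{\Sigma}_{\text{dc}}$ are available as arrays:
\begin{verbatim}
import numpy as np

def spectral_functions(h_k, sigma_w, omegas, mu, sigma_dc,
                       delta=0.01):
    """A(k,w) and A(w) for a (n_k, m, m) Hamiltonian h_k and a
    (n_w, m, m) real-frequency self-energy sigma_w."""
    n_k, m, _ = h_k.shape
    eye = np.eye(m)
    a_kw = np.empty((n_k, len(omegas)))
    for i, w in enumerate(omegas):
        for k in range(n_k):
            g_inv = ((w + 1j * delta + mu) * eye
                     - h_k[k] - sigma_w[i] - sigma_dc)
            g = np.linalg.inv(g_inv)
            a_kw[k, i] = -np.trace(g.imag) / (np.pi * m)
    return a_kw, a_kw.mean(axis=0)   # A(k,w) and A(w)
\end{verbatim}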
These spectral functions are based on $H(\vec{k})$ with the periodic boundary
conditions along both in-plane directions, i.e.\ the Hamiltonian is defined on
a torus. To address the edge states, we resort to mixed boundary conditions of
a cylinder, which is periodic along $x$ and open along $y$~\footnote{ We note
that the honeycomb lattice allows for two inequivalent terminations: the zigzag
edge and the armchair edge. All calculations in this study are performed for
the zigzag edge.}. The respective Hamiltonian $\textbf{H}(k_x)$ is now an
$n_y m\times{}n_y m$ matrix, where $n_y$ is the number of unit cells along the
open direction, and the Green's function is
\begin{equation}
\label{eq:Gwedge}
\textbf{G}^{-1}(k_x,\omega) = \left(\omega\!+\!i\delta\!+\mu\right)\mathbf{I}-\!\mathbf{H}(k_x)\!-\!\mathbf{\Sigma}(\omega)\!-\!\mathbf{\Sigma}_{\text{dc}}.
\end{equation}
The respective spectral function is
\begin{equation}
\label{eq:Awkx}
A(k_x,\omega) = -\frac{1}{\pi}\left(\frac{1}{n_ym}\right)\text{Tr}\left\{\text{Im}\left[\textbf{G}(k_x,\omega)\right]\right\}.
\end{equation}
Since we are primarily interested in the edge states, we also explicitly
calculate their contribution to the spectral weight as:
\begin{equation}
\label{eq:Awkx_edge}
A^{\text{edge}}(k_x,\omega) = -\frac{1}{\pi}\left(\frac{1}{2m}\right)\text{Tr}\left\{\text{Im}\left[\textbf{G}_{\text{TT}}(k_x,\omega) + \textbf{G}_{\text{BB}}(k_x,\omega)\right]\right\},
\end{equation}
where $\textbf{G}_{\text{TT}}$ ($\textbf{G}_{\text{BB}}$) denotes the Green's
function projected onto the top (bottom) cell.
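The construction of the cylinder Hamiltonian entering
Eq.~(\ref{eq:Gwedge}) and the edge projection of Eq.~(\ref{eq:Awkx_edge}) can
be sketched as follows (assuming the same illustrative real-space hoppings
\texttt{h\_r}, \texttt{r\_vecs} as above; hoppings leaving the strip are simply
dropped, which implements the open boundary):
\begin{verbatim}
import numpy as np

def h_cylinder(h_r, r_vecs, k_x, n_y):
    """H(k_x) on a cylinder: Fourier transform along the periodic
    x direction only; open boundary along y. Returns an
    (n_y*m, n_y*m) matrix."""
    m = h_r.shape[-1]
    h = np.zeros((n_y * m, n_y * m), dtype=complex)
    for (r_x, r_y, _), t in zip(r_vecs, h_r):
        phase = np.exp(2j * np.pi * k_x * r_x)
        for i_y in range(n_y):
            j_y = i_y + int(round(r_y))
            if 0 <= j_y < n_y:
                h[i_y*m:(i_y+1)*m, j_y*m:(j_y+1)*m] += phase * t
    return h

def edge_weight(g, m):
    """Edge-projected weight: Im G traced over the top (TT) and
    bottom (BB) unit cells of the cylinder."""
    tr = np.trace(g[:m, :m].imag) + np.trace(g[-m:, -m:].imag)
    return -tr / (2 * np.pi * m)
\end{verbatim}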
\subsection{Choice of the model}
Before turning to the DMFT results, we address the controversially discussed
issue of the minimal model for nickelates. The hybridization of Ni $e_g$
states and the $\sigma$-bonded O $p$ states gives rise to molecular-like
$dp_{\sigma}$ orbitals. In a unit cell of $n$ Ni atoms, the antibonding states
form an isolated $\frac14$-filled manifold of $2n$ bands at the Fermi energy.
For low-energy excitations, it is seemingly natural to restrict the analysis to
these states and use the respective antibonding $dp_{\sigma}$ orbitals as a
basis in real space. This minimal two-orbital model, known as the $d$-only
model, has been employed in early DMFT studies~\cite{hansmann09, hansmann10}.
On the other hand, the high oxidation state of Ni$^{3+}$ can lead to a very
small, possibly even negative charge transfer gap. In this case, also the
low-energy physics will be largely affected by charge transfer processes
between $d$ and $p$ states. Indeed, DMFT calculations for such $d+p$ models
yielded qualitatively different results~\cite{han11}, mainly because the
$e_g^2$ oxygen ligand hole (L) configuration resulting from the negative charge
transfer forms a spin $S$\,=\,1 on the Ni sites~\cite{parragh13}. It has been
further suggested that every second Ni site forms a spin singlet with two
ligand holes~\cite{park12, green16, haule17}, leaving only localized $S=1$
states on the other half of the Ni sites. One should carefully note, however,
that whether one has a negative charge transfer ($d^8L$) or not ($d^7$) depends very
sensitively on the relative position of the oxygen and Ni $e_g$ states. In
DFT, the oxygen bands are too close to the Fermi level, which would favor the
negative charge transfer $d^8L$ picture. On top of this, the DFT+DMFT double
counting and possible inclusion of the $d$-$p$ interaction make a theoretical
prediction unreliable. Hence, in our view, this question has to be answered by
experiment eventually. In this respect, there are indications of a $d^8L$
configuration from x-ray absorption spectroscopy for smaller rare earth cations
such as NdNiO$_3$~\cite{bisogni16}, but not for bulk LaNiO$_3$: very recent
single-crystal experiments~\cite{guo18} yield an ordered magnetic moment of
$\sim$0.3\,$\mu_{\text{B}}$, which is far too low for $S$\,=\,1.
In fact, a BD scenario can also be realized in a $d$-only model, as was
acknowledged long ago~\cite{mazin07}. Recent DFT+DMFT calculations by
Subedi~\emph{et al.} showed that the BD phase sets in if $(U_d-3J_d)$ is
smaller than the difference between the on-site energies of the $e_g$
orbitals~\cite{subedi15}, which in our case is zero (degenerate $e_g$
orbitals). This result demonstrates that the emergence of the BD phase, and
hence the nature of the metal-insulator transition in bulk nickelates, are
reproduced by a $d$-only model, albeit with a strongly reduced Coulomb
interaction $U_d$.
In view of this and the unclear experimental situation, we restrict ourselves
to the $d$-only model. It features a considerably smaller number of free (and
prospectively very sensitive) parameters; and because the effective Coulomb
repulsion in the $d$-model can be strongly reduced~\cite{subedi15}, we scan a
broad range of $U_d$.
\section{\label{sec:results}DFT+DMFT results}
\subsection{Uniform structure}
We start with the uniform structure, for which DFT+$U$ calculations yield a
ferromagnetic Dirac metal, nearly independent of the $U$ value. Our DMFT ($U'$,$T$) phase
diagram (Fig.~\ref{fig:phasediag}, left) reveals a much more involved picture,
with four different phases: a ferromagnetic metal (FM) at low $U'$, a
paramagnetic metal (PM), an antiferro-orbitally ordered insulator (AOI), and a
paramagnetic insulator (PI); see Fig.~\ref{fig:aw} and Fig.~\ref{fig:awk} for
the $k$-integrated and $k$-resolved $e_g$ spectral functions, respectively.
The long-range ferromagnetic ordering transition temperature $T_{\text{C}}$
depends on the onsite Coulomb repulsion: while values $U'$\,$\leq$\,3\,eV yield a
ferromagnetic state at room temperature, larger values $U'$\,$\geq$\,3.5\,eV strongly
disfavor spin polarization in the studied temperature range. In contrast, the
metal-insulator transition (the thick line in Fig.~\ref{fig:phasediag}) occurs
at a critical $U'$ slightly smaller than 4\,eV. The nearly vertical
line separating the PM and insulating phases indicates that thermal fluctuations play a
minor role in the metal-insulator transition. In the insulating part of the
phase diagram, the high-temperature paramagnetic phase (PI) develops an orbital
polarization upon cooling, with a gradual crossover to the AOI phase.
\begin{figure}[tb]
\includegraphics[width=8.6cm]{fig2}
\caption{\label{fig:phasediag}(Color online) DFT+DMFT phase diagram for the
uniform structure (left) and the bond-disproportionated structure (right). FM,
PM, PI, and AOI stand for ferromagnetic metal, paramagnetic metal, paramagnetic
insulator, and antiferro-orbitally-ordered insulator. Each point denotes a
separate DMFT calculation.}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=8.6cm]{fig3}
\caption{\label{fig:aw}(Color online) Spectral functions $A(\omega)$
$[$Eq.~(\ref{eq:Aw})$]$ of the uniform structure calculated with DMFT at room
temperature ($T$\,=\,290\,K) for the FM phase ($U'$\,=\,2\,eV), the PM phase
($U'$\,=\,3.5\,eV), and the AOI phase ($U'$\,=\,4.5\,eV).} \end{figure}
\paragraph{FM phase} The existence of an FM phase is seemingly in agreement
with the DFT+$U$ results that yield a Dirac metal state for the uniform
structure. However, the spin-resolved spectral function $A(\omega,k)$
(Fig.~\ref{fig:awk}, top) reveals that our FM phase is not a Dirac metal: the
band crossing at the K point lies $\sim$0.1\,eV above the Fermi level.
This effect comes primarily from the real part of the
self-energy, which shifts the majority states towards higher frequencies.
Instead, the majority states at the Fermi surface form a loop around the K
point (Fig.~\ref{fig:fs}, middle).
\begin{figure}[tb]
\includegraphics[width=8.6cm]{fig4}
\caption{\label{fig:awk}(Color online) $k$-resolved spectral functions
$[$Eq.~(\ref{eq:Awk}$)]$ of the uniform structure calculated with DMFT at room
temperature ($T$\,=\,290\,K) for the FM phase ($U'$\,=\,2\,eV), the PM phase
($U'$\,=\,3.5\,eV) and the AOI phase ($U'$\,=\,4.5\,eV). In the FM phase, the
Dirac point at K lies $\sim$100\,meV above the Fermi level. The two colors
correspond to the spectral weight in the majority ($|\!\downarrow\rangle$) or
minority ($|\!\uparrow\rangle$) channel. }
\end{figure}
\begin{figure}[tb]
\includegraphics[width=8.6cm]{fig5}
\caption{\label{fig:fs}(Color online) Fermi surfaces of the uniform structure
at room temperature. Left: the non-interacting Hamiltonian (DFT). Middle:
the ferromagnetic metal ($U'$\,=\,2\,eV). Right: the paramagnetic metal
($U'$\,=\,3.5\,eV).
}
\end{figure}
\paragraph{PM phase} Above $T_{\text{C}}$, the LNO bilayer is a paramagnetic
metal, where both Ni sites, both orbitals, and both spin channels are equally
occupied, with 0.25 electrons per site and orbital. Correlation effects
manifest themselves in the sizable quasiparticle renormalization
$[$Eq.~(\ref{eq:Z})$]$ which amounts to $\sim$0.35--0.60 depending on the $U'$
value. The spectral function (Fig.~\ref{fig:awk}, middle) shows a weakly
dispersive feature at the Fermi level, which is largely broadened by the
enhanced $\text{Im}\Sigma(0)$. As a result, the Fermi surface plot lacks
sharp features (Fig.~\ref{fig:fs}, right).
\paragraph{PI and AOI phases} As in the PM phase, both orbitals and both
spin channels are equally populated in the PI phase, but the spectral
function has a gap which grows with $U'$. An orbital
disproportionation between the two $e_g$ orbitals sets in at $\sim$350\,K,
and already at room temperature a sizable orbital polarization develops. The
two neighboring NiO$_6$ octahedra have different predominantly occupied
orbitals, giving rise to an antiferro-orbital order (AOI). Interestingly, this
spontaneous symmetry breaking occurs despite the degeneracy of the $e_g$
orbitals, and hence is of a purely electronic origin. The spectral function
(Fig.~\ref{fig:awk}, bottom) shows a wide gap between two incoherent continua
--- the lower and the upper Hubbard bands.
The degree of the orbital polarization $p$ in nickelates is typically
defined~\cite{wu13, park16} as
\begin{equation}
\label{eq:orb}
p = \left|\frac{n_{3z^2-r^2}-n_{x^2-y^2}}{n_{3z^2-r^2}+n_{x^2-y^2}}\right|,
\end{equation}
where $n_{3z^2-r^2}$ and $n_{x^2-y^2}$ are orbital occupations (a summation
over both spin channels is implied). The polarization $p$ is shown in
Fig.~\ref{fig:orb} (left) as a function of temperature for $U'$\,=\,4\,eV. A
sharp increase of orbital polarization is seen below $\sim$350\,K, signaling
the phase transition from the PI to the AOI phase.
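Evaluating Eq.~(\ref{eq:orb}) is straightforward once the DMFT occupations are
known; a minimal sketch is given below (the numbers in the comment are purely
illustrative, not results of this work):
\begin{verbatim}
def orbital_polarization(n_z2, n_x2y2):
    """p = |n1 - n2| / (n1 + n2); occupations summed over spins."""
    return abs(n_z2 - n_x2y2) / (n_z2 + n_x2y2)

# illustration: n_z2 = 0.8, n_x2y2 = 0.2 gives p = 0.6
\end{verbatim}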
\begin{figure}[tb]
\includegraphics[width=8.6cm]{fig6}
\caption{\label{fig:orb}(Color online)
DMFT orbital polarization
$p=|(n_{3z^2-r^2}-n_{x^2-y^2})/(n_{3z^2-r^2}+n_{x^2-y^2})|$ in the insulating
phase ($U'$\,=\,4\,eV) as a function of temperature. The lines are a guide to
the eye.
}
\end{figure}
\subsection{Bond-disproportionated structure}
For the BD structure, DFT+$U$ calculations with $U$ from a reasonable range
yield a semiconductor with a gap of about 0.05\,eV~\cite{doennig14}, in
contrast to the Dirac metal state of the uniform structure. Surprisingly, our
DFT+DMFT phase diagram for the BD structure (Fig.~\ref{fig:phasediag}, right)
is very similar to that of the uniform structure. The four emerging phases are
analogous to those of the uniform structure, except for the slight charge
disproportionation between the Ni sites that naturally occurs for a BD
structure~\cite{han11,park12}. Two noticeable differences are i) the shift of
boundaries of both phase transitions (FM$\rightarrow$PM and
PM$\rightarrow$PI/AOI) towards larger $U'$ values and ii) the crossover between
the AOI and PI phases showing an even weaker dependence on $U'$ than in the
uniform case. Note that the charge disproportionation of the starting
BD Hamiltonian is very small and hardly affected by the DMFT correlations.
Instead, DMFT correlations again support the orbital polarization, see
Fig.~\ref{fig:orb}~(right), which is not present in the DFT-derived BD Wannier
Hamiltonian.
\section{\label{sec:disc}Discussion}
\subsection{\label{sec:topo}Topological properties}
The honeycomb lattice has an excellent potential for the formation of
topological edge states. The emergence of topological states in (111) bilayers
of $e_g$ electrons has been addressed on the model level~\cite{xiao11, ruegg11}
and in the context of nickelate heterostructures~\cite{yang11, ruegg12,
ruegg13, okamoto14}. Hartree-Fock calculations~\cite{yang11, ruegg11,
ruegg12} yield a rich phase diagram with orbitally ordered and topological
phases, but direct DFT+$U$ calculations favor a conventional
ferromagnetic phase~\cite{ruegg12}. Lattice distortions, and in particular,
the breathing distortion can drive the system away from a topological
phase~\cite{ruegg13}, as confirmed by direct DFT+$U$ calculations for LNO
bilayers~\cite{doennig14}. But in the absence of lattice distortions, DFT+$U$
yields a Dirac metal state.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{fig7}
\caption{\label{fig:awkx}(Color online) DFT+DMFT $A(k_x,\omega)$ at room
temperature on cylinders of 50 unit cells. In each panel, the left spectrum
shows the total $A(k_x,\omega)$ $[$Eq.~(\ref{eq:Awkx})$]$, while the right
spectrum shows the weight of the edge states $A^{\text{edge}}(k_x,\omega)$
$[$Eq.~(\ref{eq:Awkx_edge})$]$, as schematically depicted in the top-right
corners of each plot. Note the topological edge state visible in the
edge-projected (right) spectrum, lying 0.1--0.2\,eV above the Fermi energy for
$U'$\,=\,2.0\,eV (left panel).}
\end{figure*}
In the above studies, with the notable exception of Ref.~\onlinecite{okamoto14},
electronic correlations were either neglected or taken into account at the
Hartree-Fock level. DMFT accounts for all local Feynman diagrams, and in this
way represents a systematic and substantial improvement over the Hartree-Fock
method. On the DMFT level, the electronic correlations are described by the
frequency-dependent self-energy. In general, the self-energy is a matrix with
nonzero orbital off-diagonal elements. However, for LNO bilayers, the proximity
of the Ni--O--Ni angles (165.81 and 165.58$^{\circ}$ in the uniform and the BD
structure, respectively) to 180$^{\circ}$ leads to vanishingly small
off-diagonal elements between the $3z^2-r^2$ and $x^2-y^2$ orbitals. As a result,
we can safely neglect the off-diagonal elements of the hybridization function
$F(i\omega_n)$ in our impurity problems, leading to self-energies
$\Sigma(i\omega_n)$ that are diagonal in the site-orbital-spin basis.
The resulting DMFT self-energies are used to calculate the interacting Green's
function using Eq.~(\ref{eq:Gwedge}) and, subsequently, the spectral functions
using Eqs.~(\ref{eq:Awkx}) and (\ref{eq:Awkx_edge}).
To this end, we use the DMFT self-energy of our bulk calculation for all sites
and consider periodic boundary conditions along the $x$ axis and open boundary
conditions for the $y$ axis, leading to the cylinder geometry. The spectral
functions for the FM, PM, and AOI states are shown in Fig.~\ref{fig:awkx}.
Only the FM state shows a distinct edge state, which, however, lies entirely in
the unoccupied part of the spectrum, $\sim$0.1--0.2\,eV above the Fermi
energy. This agrees with the position of the Dirac point in
Fig.~\ref{fig:awk}~(top). Both PM and AOI phases yield very incoherent
features, without any distinct edge states. We therefore conclude that the
emergence of topological states in (111) LNO bilayers is unlikely.
\subsection{\label{sec:compar_model}Comparison with model DMFT calculations} By
performing DFT+DMFT calculations in the $d$-basis, we actually solve a two-site
two-orbital Hubbard model at quarter filling ($n=1$). The phase diagram of
this model based on a simplified (typically, semi-elliptic) density of states
has been studied in the literature, in particular within DMFT, and it is
tempting to put our DFT+DMFT results into this context.
The early Hirsch-Fye QMC results showed a remarkable stabilization of
ferromagnetism in a two-orbital Hubbard model away from half-filling due to the
Hund's coupling~\cite{held98}. A pronounced tendency towards orbital ordering
at quarter filling was found in the same study~\cite{held98}. The
interplay between magnetic and orbital ordering has been further addressed by
Peters~\emph{et~al.}~\cite{peters10a,peters10b}, who employed the numerical
renormalization group (NRG) to solve auxiliary impurity problems. For
quarter filling at zero temperature, they found a first-order metal-insulator
transition between the FM state and the ferromagnetic counterpart of our AOI
state, driven by the increased interorbital interaction $U'$~\cite{peters10a}.
Looking at our DFT+DMFT phase diagrams, one may speculate that the FM metallic
phase extends to large $U'$ values at temperatures lower than those accessible by
CT-QMC and shown in Fig.~\ref{fig:phasediag}. Further, the AOI order actually
supports an insulating ferromagnetic phase through superexchange. Hence, one may
speculate further that at lower temperatures an additional ferromagnetic ordering
occurs in the AOI phase. This would yield two close-by ferromagnetic phases,
similar to what was discussed in Ref.~\onlinecite{peters10a} for a
two-orbital model. In this case, a huge magnetoresistance can be
expected~\cite{peters10a}.
A subsequent paper reports a detailed model study of the quarter-filled
case~\cite{peters10b}. The ground-state phase diagram features, in addition to
two ferromagnetic phases and a paramagnetic phase, also the AOI phase, albeit in a
narrow region of the phase diagram, where the inter-orbital repulsion $U'$
largely exceeds the Hund's exchange. One of the main results of
Ref.~\onlinecite{peters10b} is the stabilization of the orbital order without
Jahn-Teller distortions. Our DFT+DMFT calculations not only corroborate this
conclusion, but strengthen it further: the presence of the AOI phase in the BD
structure implies that the antiferro-orbital order is resilient to the
competing mechanism of a breathing distortion.
\subsection{\label{sec:compar_exp}Comparison with experiments}
Recent transport measurements on 2LNO/4LAO heterostructures yield a band gap of
120\,meV~\cite{wei16}, which grows as the thickness of the LAO layer increases;
this has been argued to stem from the accumulation of defects in thicker
layers. Nevertheless, the insulating nature of the LNO bilayers can be
regarded as a sound experimental result as it also concurs with the earlier
report~\cite{middey12}. Thus, we infer that the inter-orbital repulsion $U'$
in LNO bilayers exceeds 4\,eV, as only such high values yield an insulating
phase (Fig.~\ref{fig:phasediag}).
At first glance, we do not have arguments for or against the breathing
distortion: the insulating part of the phase diagram is similar for both
structures, and the DFT+$U$ energies are essentially degenerate. But the very
emergence of the AOI phase already indicates a tendency towards a cooperative
Jahn-Teller distortion (orbital ordering). Hence, in the insulating state, the
electronic degrees of freedom disfavor the competing mechanism of a breathing
distortion even in the BD structure. The presence of such an orbital order can
be verified experimentally by measuring x-ray linear dichroism (see
Ref.~\onlinecite{disa15}), because both the uniform and the BD structure show a very
strong orbital polarization $p$ already at room temperature
(Fig.~\ref{fig:orb}). Nickelate heterostructures with a sizable orbital
polarization do exist for the (001) case~\cite{disa15}. In contrast to the
(001) case, the AOI state in (111) LNO bilayers is an insulating state, and a
large orbital polarization is easier to achieve. In several cases, the
Mott-Hubbard metal-insulator transition is indeed accompanied by an orbital
polarization, e.g.\ in V$_2$O$_3$~\cite{keller04} and SrVO$_3$
films~\cite{zhong14}.
Magnetic properties of LNO bilayers remain hitherto unexplored, but the recent
study of NdNiO$_3$ bilayers on LAO, reporting antiferromagnetic correlations
and orbital order~\cite{middey16}, demonstrates that experimental insight is
feasible. According to our DFT+DMFT results, the insulating phases, PI and
AOI, do not show any magnetic order above $\sim$150\,K, and it would be
interesting to verify this result experimentally.
The good agreement between our DFT+DMFT and model DMFT
studies~(Sec.~\ref{sec:compar_model}) gives hope that the low-temperature
physics of (111) LNO bilayers can be even more exciting. In particular, a very
high magnetoresistance was found at the boundary between the
antiferro-orbitally-ordered (insulating) and the orbitally-disordered
(metallic) ferromagnetic phases in model DMFT~\cite{peters10b}. Although we
do not see the former phase in the phase diagram (Fig.~\ref{fig:phasediag}), it
may become stabilized upon cooling. Unfortunately, performing CT-QMC at low
temperatures becomes prohibitively challenging, although very recent
developments such as the superstate sampling method~\cite{kowalski18} can
largely alleviate the computational effort. Nonetheless, our phase diagram sets
the stage for what orders can be expected in experiment.
\section{\label{sec:summary}Summary and outlook}
Using DFT+DMFT calculations, we evaluated the phase diagram of a (111) oxide
heterostructure formed by LaNiO$_3$ bilayers interleaved with four layers of
LaAlO$_3$, in a wide range of temperatures and values of the inter-orbital
Coulomb repulsion $U'$. Independent of the presence or absence of breathing
distortions that are typical for bulk nickelates, we find four phases: a
ferromagnetic and a paramagnetic metal, a paramagnetic insulator, as well as an
antiferro-orbitally-ordered insulator. Spectral functions calculated on
cylinders feature edge states in the ferromagnetic metallic state, whereas both
insulating phases are topologically trivial. Taking the experimentally
observed activated behavior as an indication for an insulating state, we argue
that LaNiO$_3$ bilayers can develop a sizable orbital polarization at room
temperature. Based on earlier model DMFT studies, we can expect ferromagnetic
ordering at lower temperatures, offering the intriguing possibility of a
transition between metallic and insulating ferromagnetic phases, with a
concomitant high magnetoresistance.
Compared to DFT+$U$, DFT+DMFT provides a more realistic treatment of electronic
correlations and gives access to finite temperature properties. However, DMFT
is restricted to local correlations. For a quasi-2D system
with a low coordination number, such as nickelate (111) bilayers, nonlocal
correlation effects can play an important role. A natural extension of our
study would be the application of cluster~\cite{lichtenstein00,kotliar01} or
diagrammatic~\cite{rohringer18} extensions of DMFT to the phase diagram of
2LNO/4LAO.
\begin{acknowledgments}
We acknowledge financial support by the European Research Council under the
European Union's Seventh Framework Program (FP/2007-2013)/ERC through grant
agreement n.\ 306447. OJ was supported by the Austrian Science Fund (FWF)
through the Lise Meitner programme, project no.\ M2050. We thank Marta Gibert,
Sumanta Bhandary, and Gang Li for fruitful discussions. Calculations have been
done on the Vienna Scientific Cluster~(VSC).
\end{acknowledgments}
\section{Introduction}
High-energy physics uses quantum field theory mainly
to describe scattering experiments through the S-matrix.
In solid-state or molecular physics, we are rather interested in the
value of physical observables, such as the charge and current densities
inside the sample or the response to an external perturbation.
At the quantum field theory (QFT) level, these quantities are calculated as
expectation values of Heisenberg operators. For example,
the current density for a system in a state $|\Phi\rangle$
is $\langle \Phi| \mathbf{J}(x) |\Phi\rangle$, where
$|\Phi\rangle$ and $\mathbf{J}(x)$ are written in the
Heisenberg picture.
The first QFT calculation of Heisenberg operators
was made by Dyson in two difficult papers \cite{Dyson51I,Dyson51II}
that were completely ignored.
At about the same time, Gell-Mann and Low discovered that,
when the initial state of the system is nondegenerate, the
expectation value of a Heisenberg operator can be obtained by
a relatively simple formula \cite{GellMann}. The Gell-Mann and Low
formula has been immensely successful and is a key element of the
many-body theory of condensed matter \cite{Fetter,Gross}.
Its main advantage over the formalism developed by
Dyson is that all the standard tools of QFT can be
used without change.
However, it was soon realized that the assumption of a nondegenerate
initial state is not always valid. As a matter of fact, the problem
of what happens when the initial state is not trivial
is so natural that it was discussed in many fields of physics:
statistical physics \cite{Fujita}, many-body physics \cite{Hall},
solid-state physics \cite{Esterling},
atomic physics \cite{Lindgren1},
quantum field theory and nuclear physics \cite{Henning,FauserWolter}.
As a consequence, the theory developed to solve this problem
received several names such as nonequilibrium quantum field
theory (or quantum statistical mechanics) with initial correlations
(or with cumulants, or for open shells, or for degenerate systems).
It is also called the
closed-time path or the (Schwinger-)Keldysh approach
for an arbitrary initial density matrix.
It should be stressed that the problem of the quantum field theory
of a degenerate system is not only of academic interest.
For instance, many strongly-correlated systems contain open-shell
transition metal ions which are degenerate by symmetry.
This degeneracy makes the system very sensitive to external
perturbations and, therefore, quite useful for the design of
functional materials.
The elaboration of a QFT for degenerate systems took a long
time. It started with Symanzik \cite{Symanzik} and Schwinger
\cite{SchwingerJMP} and made slow progress because the
combinatorial complexity is much higher than with standard QFT.
To illustrate this crucial point, it is important to consider an example.
According to Wick's theorem, the time-ordered product
of free fields can be written in terms of normal-ordered
products:
\begin{eqnarray*}
T\varphi(x_1)\dots\varphi(x_4) &=&
{:}\varphi(x_1)\dots\varphi(x_4){:}
+\sum_{ijkl} {:}\varphi(x_i)\varphi(x_j){:}\,G_0(x_k,x_l)
\\&&
+\sum_{ijkl} {:}\varphi(x_k)\varphi(x_l){:}\,G_0(x_i,x_j)
+\sum_{ijkl} G_0(x_i,x_j)G_0(x_k,x_l),
\end{eqnarray*}
where the quadruplet of indices $(i,j,k,l)$ runs over
$(1,2,3,4)$, $(1,3,2,4)$ and $(1,4,2,3)$.
The expectation value of this expression over the vacuum
gives the familiar result
$\sum_{ijkl} G_0(x_i,x_j)G_0(x_k,x_l)$.
However, when the initial state $| \psi\rangle$
is not the vacuum
(as in solid-state physics), we obtain
\begin{eqnarray*}
\langle \psi| T\varphi(x_1)\dots\varphi(x_4) | \psi\rangle
&=&
\langle \psi|{:}\varphi(x_1)\dots\varphi(x_4){:}| \psi\rangle
+\sum_{ijkl} \rho_2(x_i,x_j)G_0(x_k,x_l)
\\&&
+\sum_{ijkl} \rho_2(x_k,x_l)G_0(x_i,x_j)
+\sum_{ijkl} G_0(x_i,x_j)G_0(x_k,x_l),
\end{eqnarray*}
where $ \rho_2(x,y)=\langle\psi|{:}\varphi(x)\varphi(y){:}| \psi\rangle$.
If we assume, for notational convenience, that the expectation value of
the normal product of an odd number of field operators is zero,
the fourth cumulant $\rho_4(x_1,\dots,x_4)$ is defined by the equation
\begin{eqnarray*}
\langle \psi|{:}\varphi(x_1)\dots\varphi(x_4){:}| \psi\rangle
&=&
\rho_4(x_1,\dots,x_4)
+\sum_{ijkl} \rho_2(x_k,x_l)\rho_2(x_i,x_j).
\end{eqnarray*}
If we put $g=G_0+\rho_2$, the free four-point Green function
becomes
\begin{eqnarray*}
\langle \psi| T\varphi(x_1)\dots\varphi(x_4) | \psi\rangle
&=&
\rho_4(x_1,\dots,x_4)+ \sum_{ijkl} g(x_i,x_j)g(x_k,x_l).
\end{eqnarray*}
When $\rho_4=0$, the expression is the same
as over the vacuum,
except for the fact that the free Feynman propagator
$G_0$ is replaced by $g$.
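For instance, for two fields Wick's theorem gives directly
\begin{eqnarray*}
\langle \psi| T\varphi(x_1)\varphi(x_2) | \psi\rangle &=&
\langle \psi|{:}\varphi(x_1)\varphi(x_2){:}| \psi\rangle + G_0(x_1,x_2)
= \rho_2(x_1,x_2)+G_0(x_1,x_2)
\\&=& g(x_1,x_2),
\end{eqnarray*}
so that the two-point function over $|\psi\rangle$ is obtained from the vacuum
one by the mere replacement $G_0\to g$.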
When this substitution is valid, standard QFT can be applied
without major change and
the structure of the interacting Green functions is not
modified. For fermionic systems described by a
quadratic Hamiltonian $H_0$, this happens when
the ground state is nondegenerate, so that
$| \psi\rangle$ is a Slater determinant.
When $\rho_4\not=0$, the expression becomes essentially
different because the cumulant $\rho_4$ appears as a sort
of free Feynman propagator with four legs.
In general, the expectation value of a time-ordered product of
$n$ free fields involves $\rho_k$ with $k\le n$.
In other words, the perturbative
expansion of the Green functions can no longer be written
as a sum of standard Feynman diagrams.
Generalized Feynman diagrams have to be used,
involving free Feynman propagators with any number
of legs \cite{Fujita,Hall,Kukharenko}.
Because of this additional complexity, the structure of the
Green functions for degenerate systems is almost completely
unknown. The only result available is the equivalent of the
Dyson equation for the one-body Green function
$G(x,y)$ \cite{Hall}
\begin{eqnarray*}
G &=& (1-A)^{-1}(G_0+C)(1-B)^{-1}(1+\Sigma G),
\end{eqnarray*}
where $A$, $B$, $C$ and $\Sigma$ are sums of one-particle
irreducible diagrams. When the initial state is nondegenerate,
$A=B=C=0$ and the Dyson equation
$G=G_0+G_0 \Sigma G$ is recovered.
In the present paper, a formal method is presented to determine
the structure of Green functions for degenerate systems.
The main idea is to use external sources that
transform the additional propagators $\rho_n$
into \emph{interaction terms}. This brings the
problem back into the standard QFT scheme, where
many structural results are available.
\section{Expectation value of Heisenberg operators}
Let us consider a physical observable $A(t)$, for instance
the charge density or the local magnetic field.
In the Heisenberg picture, this observable is
represented by the operator $A_H(t)$,
and its value when the system is
in the state $|\Phi_H\rangle$ is given by the
expectation value
$\langle A (t)\rangle = \langle \Phi_H | A_H(t)|\Phi_H\rangle$.
Going over to the interaction picture, we write the
Hamiltonian of the system as the sum of a free part and an
interaction part: $H(t)=H_0+H_I(t)$, we define the
evolution operator
$U(t,t')=T\big(\exp(-i\int_{t'}^t H_I(\tau)\mathrm{d} \tau)\big)$
and we assume that the state $|\Phi_H\rangle$
can be obtained as the adiabatic evolution of an
eigenstate $|\Phi_0\rangle$ of $H_0$.
The expectation value of $A$ becomes
\begin{eqnarray*}
\langle A(t) \rangle &=&
\langle \Phi_0|U(-\infty,t) A(t) U(t,-\infty)|\Phi_0\rangle,
\end{eqnarray*}
where $A(t)$ on the right hand side is the operator representing
the observable in the interaction picture.
The identity $1=U(t,\infty)U(\infty,t)$ and the definition
$S=U(\infty,-\infty)$ enable us to derive the basic
expression for the expectation value of an observable
in the interaction picture:
\begin{eqnarray}
\langle A(t)\rangle &=&
\langle \Phi_0|S^\dagger T(A(t)S)|\Phi_0\rangle.
\end{eqnarray}
When $|\Phi_0\rangle$ is nondegenerate, this expression can be further
simplified into the Gell-Mann and Low formula
\begin{eqnarray*}
\langle \Phi|A(t)|\Phi \rangle &=&
\frac{ \langle\Phi_0 | T(A(t)S) | \Phi_0\rangle}
{ \langle\Phi_0 | S | \Phi_0\rangle}.
\end{eqnarray*}
If the system is in a mixed state, as is the case for a
degenerate system by L\"uders' principle, the expectation
value becomes
\begin{eqnarray*}
\langle A(t)\rangle &=&
\sum_n p_n \langle \Phi_n|S^\dagger T(A(t)S)|\Phi_n\rangle,
\end{eqnarray*}
where $p_n$ is the probability to find the system in the
eigenstate $|\Phi_n\rangle$.
It will be convenient to use more general mixed states
$\sum_{mn} \omega_{mn} |\Phi_m\rangle \langle \Phi_n|$,
where $\omega_{mn}$ is a density matrix (i.e. a
nonnegative Hermitian matrix with unit trace).
Such a mixed state corresponds to a linear form
$\omega$ defined by its value over an operator $O$:
\begin{eqnarray*}
\omega(O) &=&
\sum_{mn} \omega_{mn} \langle \Phi_n|O|\Phi_m\rangle.
\end{eqnarray*}
Then, the expectation value of $A(t)$ becomes
\begin{eqnarray}
\langle A(t)\rangle &=&
\omega\big(S^\dagger T(A(t)S)\big).
\label{evAomega}
\end{eqnarray}
\section{QFT with a general state}
In all practical cases, the operator representing the observable
$A(t)$ in the interaction picture is a polynomial in $\varphi$
and its derivatives. Its expectation value \eqref{evAomega}
can be expressed in terms of Green functions that
are conveniently calculated by a formal
trick due to Symanzik \cite{Symanzik}
and Schwinger \cite{SchwingerJMP}, and reinterpreted by
Keldysh \cite{Keldysh}.
The first step is to define an S-matrix in the presence of an
external current $j$ as
$S(j) = T\big(\mathrm{e}^{-i\int H^\mathrm{int}(t) \mathrm{d} t+i\int j(x)\varphi(x)\mathrm{d} x} \big)$,
where $H^\mathrm{int}$ is the interaction Hamiltonian in the interaction
picture.
The interaction Hamiltonian is then written in terms of a
Hamiltonian density $V(x)$, so that
$\int H^\mathrm{int}(t) \mathrm{d} t=\int V(x) \mathrm{d} x$ and
the generating function of the interacting Green functions is
defined by $Z(j_+,j_-)=\omega\big(S^\dagger(j_-)S(j_+)\big)$.
The interacting Green functions can then be obtained
as functional derivatives of $Z$ with respect to the
external currents $j_+$ and $j_-$.
For example
\begin{eqnarray*}
\langle T(\varphi(x)\varphi(y))\rangle =
- \frac{\delta^2 Z(j_+,j_-)}{\delta j_+(x)\delta j_+(y)},\quad
\mathrm{and}\quad
\langle \varphi(x)\varphi(y)\rangle =
\frac{\delta^2 Z(j_+,j_-)}{\delta j_-(x)\delta j_+(y)}.
\end{eqnarray*}
As in standard QFT, the connected Green functions are generated
by $\log Z$.
In the functional method \cite{Schwinger,Chou}, the generating
function $Z$ of the interacting system is written as
$Z=\mathrm{e}^{-iD}Z_0$, where $D$ is the interaction in terms of
functional derivatives
\begin{eqnarray*}
D &=& \int V\Big(\frac{-i\delta}{\delta j_+(x)}\Big)
-V\Big(\frac{i\delta}{\delta j_-(x)}\Big) \mathrm{d} x,
\end{eqnarray*}
and where
$Z_0(j_+,j_-)=\omega\big(S_0^\dagger(j_-)S_0(j_+)\big)$,
with $S_0(j) = T\big(\mathrm{e}^{i\int j(x)\varphi(x)\mathrm{d} x} \big)$.
Note that $Z_0(j_+,j_-)$ is the generating function of the
free Green functions.
A straightforward calculation \cite{Chou} leads to
\begin{eqnarray*}
Z_0(j_+,j_-) &=&
\mathrm{e}^{-1/2 \int \mathbf{j}(x) G_0'(x,y) \mathbf{j}(y) \mathrm{d} x \mathrm{d} y}
\mathrm{e}^{\rho'(j_+-j_-)},
\end{eqnarray*}
where $\mathbf{j}=(j_+,j_-)$ is the source vector,
\begin{eqnarray}
G_0'(x,y) &=& \left( \begin{array}{cc}
\langle 0 | T\big(\varphi(x)\varphi(y)\big)|0\rangle
& -\langle 0 | \varphi(y)\varphi(x)|0\rangle \\
-\langle 0 | \varphi(x)\varphi(y)|0\rangle
& \langle 0 |
\bar{T}\big(\varphi(x)\varphi(y)\big)|0\rangle
\end{array}\right),
\label{defG0}
\end{eqnarray}
is a free Green function (with $\bar{T}$ the anti-time
ordering operator) and
\begin{eqnarray}
\mathrm{e}^{\rho'(j)} &=& \omega \big( {:}\mathrm{e}^{i\int j(x)\varphi(x)\mathrm{d} x}{:}\big)
\label{defrho}
\end{eqnarray}
defines the generating function $\rho'(j)$ of the cumulants
of the initial state $\omega$.
The free Green function $G_0'$ describes the dynamics generated
by the free Hamiltonian $H_0$. It can also be written in terms
of advanced and retarded Green functions \cite{SchwingerJMP}.
The idea of describing a state by its cumulants was introduced
in QFT by Fujita \cite{Fujita} and Hall \cite{Hall}.
It was recently rediscovered in nuclear
physics \cite{Henning,FauserWolter} and
in quantum chemistry \cite{KutzMukh}.
The next step is to modify the definition of the free
Green function.
The cumulant function is Taylor expanded
\begin{eqnarray*}
\rho'(j) &=& \sum_{n=2}^\infty \frac{1}{n!}
\int \mathrm{d} x_1\dots\mathrm{d} x_n \rho_n(x_1,\dots,x_n) j(x_1)\dots j(x_n).
\end{eqnarray*}
The expansion starts at $n=2$ because $\omega(1)=1$
and the linear term can be removed by shifting the
field $\varphi$.
The bilinear term $\rho_2(x,y)$ is included into
the free Green function by defining
\begin{eqnarray*}
G_0(x,y) &=& G_0'(x,y) + \rho_2(x,y) \left( \begin{array}{cc}
1 & -1 \\ -1 & 1 \end{array}\right),
\end{eqnarray*}
and the corresponding cumulant function becomes
\begin{eqnarray*}
\rho(j) &=& \rho'(j)-(1/2) \int \mathrm{d} x \mathrm{d} y j(x) \rho_2(x,y) j(y)\\
&=& \sum_{n=3}^\infty \frac{1}{n!}
\int \mathrm{d} x_1\dots\mathrm{d} x_n \rho_n(x_1,\dots,x_n) j(x_1)\dots j(x_n).
\end{eqnarray*}
\begin{rem}
There are several good reasons to use
$G_0$ and $\rho$ instead of $G_0'$ and $\rho'$:
(i) This modification is exactly what is done in solid-state physics
when the free Green function includes a sum over occupied states
\cite{BrouderPRA};
(ii) At a fundamental level, $G_0$ and $\rho$
have a more intrinsic meaning than $G_0'$ and $\rho'$
because they do not depend on the state $|0\rangle$
chosen as the vacuum; (iii) An important theorem of quantum field
theory \cite{Hollands3}
states that, under quite general conditions, $\rho_n(x_1,\dots,x_n)$
is a smooth function of its arguments when $n>2$, so that
$G_0$ gathers all possible singular terms (a related result was
obtained by Tikhodeev \cite{TikhodeevCor}); (iv) A state
for which $\rho(j)=0$ is called a quasi-free state \cite{Kay1};
quasi-free states are very convenient in practice because
the rules of standard QFT can be used without basic changes.
Thus, the additional complications arise precisely when $\rho$
(and not $\rho'$) is not zero.
\end{rem}
\section{Nonperturbative equations}
To size up the combinatorial complexity due to the presence
of a non-zero $\rho$, we present the diagrammatic expansion
of the one-body Green function $G(x,y)$ for the $\varphi^3$
theory to second order in perturbation theory. For this
illustrative purpose, it will be enough to say that
the cumulant $\rho_n(x_1,\dots,x_n)$ is pictured as a white
vertex with $n$ edges attached to it; the other vertex
of each edge is associated with one of the points
$x_1,\dots,x_n$. For example, $\rho_4(x_1,\dots,x_4)$
is represented by the diagram
\begin{figure}[!ht]
\includegraphics[width=5.2cm]{figrho.eps}
\end{figure}
In this diagram, the white dot does not stand for a
spacetime point, it just indicates that the points
$x_1$ to $x_4$ are arguments of a common
cumulant.
If we restrict the calculation to the
case when $\rho_n=0$ if $n$ is odd, we obtain the following
expansion
\begin{figure}[!ht]
\includegraphics[width=12cm]{figrose.eps}
\end{figure}
In standard QFT, only the first and last diagrams of the right hand
side are present. In the general case when all $\rho_n\not=0$,
the number of diagrams is still much larger.
\subsection{Generalized Dyson equation}
As mentioned in the introduction, the only known result concerning
the structure of Green functions with a general state was derived
by Hall for the one-body Green function $G(x,y)$ \cite{Hall}
\begin{eqnarray*}
G &=& (1-A)^{-1}(G_0+C)(1-B)^{-1}(1+\Sigma G).
\end{eqnarray*}
In diagrammatic terms the quantities $A$, $B$,
$C$ and $\Sigma$ are sums of one-particle irreducible
diagrams. If we take our example of the Green function
of $\varphi^3$ theory up to second order, we find
\begin{figure}[!ht]
\includegraphics*[width=7cm]{diagA.eps}
\end{figure}
\begin{figure}[!ht]
\includegraphics*[width=7cm]{diagB.eps}
\end{figure}
\begin{figure}[!ht]
\includegraphics*[width=12cm]{diagCuneligne.eps}
\end{figure}
\begin{figure}[!ht]
\includegraphics*[width=7cm]{diagsigma.eps}
\end{figure}
In standard QFT, we have $A=B=C=0$ and the diagrammatic
representation of $\Sigma$ contains many fewer terms.
However, the difference with standard QFT is not limited only to
the number of diagrams. The definition \eqref{defrho} of the
cumulant function, and the fact that the free field $\varphi$
is a solution of the Klein-Gordon equation imply that
$\rho_n$ is a solution of the Klein-Gordon equation
in each of its variables. Thus, $A(x,y)$, $B(x,y)$
and $C(x,y)$ are solutions of the Klein-Gordon equation
for $x$ and $y$. As a consequence,
applying the Klein-Gordon operator to the Green function
gives us $(\Box+m^2)G=(1-B)^{-1}(1+\Sigma G)$.
In other words, applying the Klein-Gordon operator kills
a large number of terms of $G$. This is in stark contrast
with standard QFT, where $(\Box+m^2)G=1+\Sigma G$ and
amputating a Green function does not modify its structure.
This important difference makes some tools of standard QFT
(e.g. amputated diagrams or Legendre transformation)
invalid in the presence of a general state.
All those difficulties explain the scarcity of results
available in non-perturbative QFT with a general state.
Apart from Hall's work \cite{Hall}, the only non-perturbative
results are Tikhodeev's cancellation theorems
\cite{Tikhodeev,Danielewicz}
and the equation of motion for the Green
functions \cite{BrouderEuroLett}.
In the next section, we present a simple trick to derive
the structure of Green functions with a general state.
\subsection{Quadrupling the sources}
We first determine the main formal
difference between standard QFT and QFT with a general state.
In both cases, the generating function of the Green functions can be
written $Z=\mathrm{e}^{-iD}Z_0$, where $D$ describes the interaction
and $Z_0$ the initial state. In the presence of a general state,
the interaction $D$ is simple but $Z_0$ is
made nonstandard by the cumulant factor $\mathrm{e}^{\rho}$.
The idea of the solution is to
transfer the cumulant function $\rho$ from $Z_0$ to $D$,
because powerful functional methods were
developed to deal with general interactions $D$. These methods
were first proposed by Dominicis and Englert \cite{DominicisEnglert}
and greatly expanded by the
Soviet school \cite{Vasilev1,Vasilev2,Vasilev3,Vasilev4,%
Vasilev,Pismak1,Pismak2,Pismak3,Pismak4}.
This transfer from the initial state to the interaction
can be done easily by introducing two additional external
sources $k_+$ and $k_-$ and using the identity
\begin{eqnarray*}
\mathrm{e}^{\rho(j_+-j_-)} =
\mathrm{e}^{\rho(-i\frac{\delta}{\delta k_+}-i\frac{\delta}{\delta k_-})}
\mathrm{e}^{i\int (j_+(x)k_+(x)-j_-(x)k_-(x))\mathrm{d} x}\big|_{k_+=k_-=0}.
\end{eqnarray*}
The term involving $\rho$ can now be transferred from
$Z_0$ to $D$ by defining the new generating function
\begin{eqnarray*}
\bar{Z}(j_\pm,k_\pm) &=& \mathrm{e}^{-i\bar{D}} \bar{Z}_0(j_\pm,k_\pm),
\end{eqnarray*}
where the modified interaction is
\begin{eqnarray*}
\bar{D} &=& \int V\Big(\frac{-i\delta}{\delta j_+(x)}\Big)
-V\Big(\frac{i\delta}{\delta j_-(x)}\Big) \mathrm{d} x
-i\rho(-i\frac{\delta}{\delta k_+}-i\frac{\delta}{\delta k_-}),
\end{eqnarray*}
and the modified free generating function is
\begin{eqnarray*}
\bar{Z}_0(j_\pm,k_\pm) &=&
\mathrm{e}^{-1/2 \int \mathbf{J}(x) {\bar{G}}_0(x,y) \mathbf{J}(y)\mathrm{d} x \mathrm{d} y},
\end{eqnarray*}
with $\mathbf{J}=(j_+,j_-,k_+,k_-)$.
The modified free Green function ${\bar{G}}_0$ is now a $4\times4$ matrix
that can be written as a $2\times2$ matrix of $2\times2$ matrices
\begin{eqnarray*}
{\bar{G}}_0 &=& \left( \begin{array}{cc}
G_0
& -i\mathbf{1} \\
-i\mathbf{1}
& 0 \end{array}\right).
\end{eqnarray*}
In contrast to the standard case, the free Green function
${\bar{G}}_0$ is invertible
\begin{eqnarray*}
{{\bar{G}}_0}^{-1} &=& \left( \begin{array}{cc}
0
& i\mathbf{1} \\
i\mathbf{1}
& G_0 \end{array}\right),
\end{eqnarray*}
and it is again possible to use amputated diagrams and Legendre
transformations. The free generating function $\bar{Z}_0$ is
the exponential of a function that is bilinear in the sources,
and all the standard structural tools of QFT are available again.
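Indeed, the invertibility quoted above is checked by a one-line block
multiplication:
\begin{eqnarray*}
{\bar{G}}_0\,{{\bar{G}}_0}^{-1} &=& \left( \begin{array}{cc}
G_0 & -i\mathbf{1} \\
-i\mathbf{1} & 0 \end{array}\right)
\left( \begin{array}{cc}
0 & i\mathbf{1} \\
i\mathbf{1} & G_0 \end{array}\right)
= \left( \begin{array}{cc}
\mathbf{1} & iG_0-iG_0 \\
0 & \mathbf{1} \end{array}\right) = \mathbf{1}.
\end{eqnarray*}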
We illustrate this by recovering Hall's analogue of the Dyson equation.
\subsection{An algebraic proof of Hall's equation}
The free generating function $\bar{Z}_0$ has a standard form and
the Dyson equation holds again:
$\bar{G}={\bar{G}}_0+{\bar{G}}_0{\bar{\Sigma}}\bar{G}$, where
$\bar{G}$ is the $4\times4$ one-body Green function obtained from
the generating function $\bar{Z}$ and ${\bar{\Sigma}}$ is
the corresponding self-energy.
Each $4\times4$ matrix is written as a $2\times2$ matrix of $2\times2$ matrices.
For example
\begin{eqnarray*}
\bar{G} &=& \left( \begin{array}{cc}
\bar{G}_{11}
& \bar{G}_{12} \\
\bar{G}_{21}
& \bar{G}_{22} \end{array}\right).
\end{eqnarray*}
We want to determine the structure of the
$2\times2$ Green function $G$, which is equal to
$\bar{G}_{11}$ when $k_+=k_-=0$.
The upper-left component of the Dyson equation for $\bar{G}$ is
\begin{eqnarray}
\bar{G}_{11} &=& G_0 + (G_0{\bar{\Sigma}}_{11}-i{\bar{\Sigma}}_{21})
\bar{G}_{11}+ (G_0{\bar{\Sigma}}_{12}-i{\bar{\Sigma}}_{22})\bar{G}_{21}.
\label{upperleft}
\end{eqnarray}
The lower-left component gives us
$\bar{G}_{21}=-i(1+i{\bar{\Sigma}}_{12})^{-1}(1+{\bar{\Sigma}}_{11}\bar{G}_{11})$.
If we introduce this expression for $\bar{G}_{21}$ into
equation \eqref{upperleft}, rearrange a bit and
use the operator identity
$1+O(1-O)^{-1}=(1-O)^{-1}$, we obtain
\begin{eqnarray*}
(1+i{\bar{\Sigma}}_{21})\bar{G}_{11} &=& (G_0-{\bar{\Sigma}}_{22})
(1+i{\bar{\Sigma}}_{12})^{-1}(1+{\bar{\Sigma}}_{11}\bar{G}_{11}).
\end{eqnarray*}
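In detail, inserting $\bar{G}_{21}$ into Eq.~\eqref{upperleft} and using
$G_0+G_0{\bar{\Sigma}}_{11}\bar{G}_{11}=G_0(1+{\bar{\Sigma}}_{11}\bar{G}_{11})$
gives
\begin{eqnarray*}
(1+i{\bar{\Sigma}}_{21})\bar{G}_{11} &=&
\Big(G_0\big[1-i{\bar{\Sigma}}_{12}(1+i{\bar{\Sigma}}_{12})^{-1}\big]
-{\bar{\Sigma}}_{22}(1+i{\bar{\Sigma}}_{12})^{-1}\Big)
(1+{\bar{\Sigma}}_{11}\bar{G}_{11}),
\end{eqnarray*}
and the quoted operator identity with $O=-i{\bar{\Sigma}}_{12}$ reduces the
square bracket to $(1+i{\bar{\Sigma}}_{12})^{-1}$, yielding the equation above.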
Hall's equation is recovered by identifying
$A=-i{\bar{\Sigma}}_{21}$,
$B=-i{\bar{\Sigma}}_{12}$ and $C=-{\bar{\Sigma}}_{22}$, where the right
hand side is taken at $k_+=k_-=0$.
Note that Hall's equation is now obtained after a few lines
of algebra instead of a subtle analysis of the graphical
structure of the diagrams.
With the same approach, all the nonperturbative methods
used in solid-state physics, such as the
GW approximation \cite{GW} and the Bethe-Salpeter
equation \cite{Albrecht,Benedict}, can
be transposed to the case of a general
initial state. This will be presented in
a forthcoming publication.
\section{Determination of the ground state}
QFT with a general state was studied because the
initial eigenstate of a quantum system is sometimes
degenerate. However, it remains to determine
which density matrix $\omega_{mn}$ of the
free Hamiltonian leads to the ground state
of the interacting system.
A solution to this problem was inspired by quantum
chemistry methods \cite{BrouderICDIM}.
A number of eigenstates $|\Phi_n\rangle$ of $H_0$
are chosen, for example the complete list of degenerate
eigenstates corresponding to a given energy.
These eigenstates span the so-called \emph{model space} and
the ground state of the interacting system is assumed
to belong to the adiabatic evolution of the model space.
This model space generates, for each density matrix,
a linear form $\omega$
as described in equation \eqref{evAomega}. The problem
boils down to the determination of the density matrix
$\omega_{mn}$ that minimizes the energy of the interacting
system.
This minimization leads to an effective Hamiltonian
and the proper density matrix is obtained by
diagonalizing the effective Hamiltonian.
This type of method is typical
of atomic and molecular physics \cite{LindgrenMorrison}.
However, the effective Hamiltonian can now be determined
by powerful non-perturbative Green function methods.
Therefore, the present approach leads to a sort of unification
of quantum chemistry and QFT: it contains
standard QFT when the dimension of the model space is one, and
it contains standard quantum chemistry (more precisely
many-body perturbation theory) when the Green functions
are expanded perturbatively.
Therefore, the present approach might help develop
some new nonperturbative methods in quantum chemistry.
On the other hand, quantum chemistry has accumulated
an impressive body of results. The physics Nobel-prize
winner Kenneth Wilson stated that \cite{KWilson}
``Ab initio quantum chemistry is an emerging computational
area that is fifty years ahead of lattice gauge theory.''
Therefore, the experience gained in quantum chemistry
can be used to solve some of the remaining problems of the present approach,
such as the removal of the secular terms~\cite{Kukharenko} to all
orders.
\section{Conclusion}
The present paper sketched a new method to determine the
Green functions of quantum field theory with a general state.
The main idea is to transform the cumulant function describing
the initial state into an interaction term.
As a consequence, the cumulants become dressed by
the interaction, providing a much better description of the
correlations in the system.
An alternative method would be to work at the operator level,
as was done recently by D\"utsch and Fredenhagen \cite{Dutsch04},
and to take the expectation value at the end of the calculation.
This would have the obvious advantage of dealing with a
fully rigorous theory. However, we would lose the non-perturbative
aspects of the present approach.
Although this approach seems promising, much remains to be
done before it can be applied to realistic systems:
(i) our description is purely formal; (ii) the
degenerate initial eigenstates lead to secular terms that must
be removed \cite{Kukharenko}; (iii) renormalization
must be included, although this will probably not be
very different from the standard case, because all the
singularities of the free system are restricted to $G_0$.
Interesting connections can be made with other problems.
For example, the cancellation theorem \cite{Tikhodeev}
seems to be interpretable as a consequence of
the unitarity of the S-matrix. It would
extend Veltman's largest time equation \cite{Veltmancut}
to the case of spacetime points with equal time.
Another exciting track would be a connection with noncommutative
geometry. Keldysh \cite{Keldysh} noticed that the doubling of sources could
be replaced by a doubling of spacetime points. In other words,
$j_\pm(x)$ becomes $j(x_\pm)$, where $x_\pm$ are two copies of the
spacetime point $x$: time travels from the past
to the future for $x_+$ and in the other direction for $x_-$.
Sivasubramanian and collaborators \cite{Sivasubramanian} have
proposed interpreting this doubling of spacetime points
in terms of noncommutative geometry. It would be interesting
to follow this track for our quadrupling of spacetime points.
From the practical point of view, the main applications of
our scheme will be the
calculation of strongly correlated systems, in particular
for the optical response of some materials, such as gemstones,
that remain beyond the reach of the standard tools of contemporary
solid-state physics.
After the completion of this work, we came across a little-known article
by Sergey Fanchenko, in which cumulants are used to define
an effective action \cite{Fanchenko}. His paper is also
interesting because it gives a path integral formulation
of quantum field theory with a general state.
His approach and the one of the present paper provide
complementary tools to attack nonperturbative problems
of quantum field theory with a general state.
\subsection*{Acknowledgment}
I thank Alessandra Frabetti, Fr\'ed\'eric Patras, Sergey Fanchenko and Pierre
Cartier for very useful discussions.
\section{Introduction}
The semitauonic decays $B\to D^{(\ast)}\tau\nu_{\tau}$ have drawn a lot of attention in recent years as sensitive probes of new physics (NP)
\cite{Hou,Chen:2006nua,Nierste,Tanaka:2010,Fajfer:2012vx,Bhattacharya:2015}. The present experimental status is summarized in
Fig. \ref{fig_belle} \cite{belle_talk}.
\begin{figure}[!hbt]
\centering
\includegraphics[scale=0.4]{belle_2016.png}
\caption{Current experimental status in the measurements of $R(D)$ and $R(D^*)$.}
\label{fig_belle}
\end{figure}
Here, $R(D)$ and $R(D^*)$ are defined as
\begin{align}
\nonumber \mathcal{R}(D) &= \frac{\mathcal{B}\left(\overline{B} \to D \tau^- \overline{\nu}_{\tau}\right)}
{\mathcal{B}\left(\overline{B} \to D l^- \overline{\nu}_{l}\right)}~~ {\rm and}\\
\mathcal{R}(D^*) &=
\frac{\mathcal{B}\left(\overline{B} \to D^* \tau^- \overline{\nu}_{\tau}\right)}{\mathcal{B}
\left(\overline{B} \to D^* l^- \overline{\nu}_{l}\right)}\,.
\end{align}
The Standard Model (SM) predictions for $R(D^{(*)})$ are taken from \cite{Fajfer:2012vx} and \cite{Na:2015kha},
respectively. The theory uncertainties in these observables are only a few percent, and
independent of the CKM element $|V_{cb}|$. In the figure, the contours show the correlation
between the measured values of $R(D)$ and $R(D^{*})$ from different experimental collaborations.
We note that the contour obtained after averaging the Belle measurements \cite{Huschle:2015rga,bellexp,Abdesselam:2016xqt},
which is more than 3$\sigma$ away from the SM prediction, lies in between the SM expectation and the
{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~measurement \cite{babarexp}. The LHCb result on $\mathcal{R}(D^{*})$ \cite{lhcbexp} is $2.1 \sigma$ larger than the value
expected in the SM. Although the Belle average is slightly smaller than the LHCb and {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~results, it is still considerably larger
than the SM prediction.
One can explain this excess by considering the contribution from some NP model of one's choice, {\it e.g.}
\cite{model,Sakai:2013}. On the other hand, one may write down the most general set of relevant effective NP operators, which may include new
scalar, vector and tensor currents beyond those of the SM, and then try to estimate the size of the NP Wilson coefficients from the
excess in a model-independent analysis \cite{modelind}.
We observe that with passing time and increasing statistics, the measured value of $R(D^*)$ is moving closer to the SM value.
However, we have to wait for more precise measurements of $R(D)$. This is important, since the NP sensitivities of $R(D)$ and $R(D^*)$
are not the same \cite{Bhattacharya:2015}.
Also, the sensitivity to a particular type of interaction is more apparent
in the binned data than in integrated observables like $R(D^{(*)})$ \cite{Bhattacharya:2015}.
On the other hand, as the measured values of $R(D^{(*)})$ are highly model sensitive due to the model dependence of
the kinematic distributions, one may get different signal yields per bin from fits using different models.
Consequently, the measured values obtained from fits assuming only the SM background should not be used to fit the NP parameters.
Although we use the background-subtracted and normalized binned data for most of our analysis, we compensate for any
systematic errors coming from such an assumption by doing a separate study with over-estimated errors and their correlations.
In this article, we systematically divide our analysis into two parts. In the first part of our analysis (section \ref{sec:form-factors}),
we will assume that there is no NP in $B\to D^{(\ast)}\tau\nu_{\tau}$, just as in $B\to D^{(\ast)}\ell\nu_{\ell}$ ($\ell = e$ or $\mu$), and will fit the form-factors.
Different experimental collaborations have already fitted the form-factor parameters \cite{CLN} from the data
collected for the decays $B\to D^{(\ast)}\ell\nu_{\ell}$, {\it e.g.} \cite{Aubert:2009,Dungel:2010}. Using the present data on $B\to D^{(\ast)}\tau\nu_{\tau}$,
we can check whether the fitted form-factors are in good agreement with those obtained from the decay $B\to D^{(\ast)}\ell\nu_{\ell}$.
Any discrepancy between the two will indicate a possible new effect in $B\to D^{(\ast)}\tau\nu_{\tau}$, which is absent in $B\to D^{(\ast)}\ell\nu_{\ell}$.
It will help us to pinpoint the possible type(s) of new interaction which could be responsible for such deviations.
In the second part of the analysis (section \ref{sec:np}), we will consider the contributions from different NP interactions in $B\to D^{(\ast)}\tau\nu_{\tau}$,
but not in $B\to D^{(\ast)}\ell\nu_{\ell}$.
Our goal will be to search for the new interactions that are most compatible with, and best explain, the present data. Throughout our analysis we
will use the $q^2$-binned data on the decay rate as well as the data on $R(D^{(*)})$.
Detailed discussion on our methodology can be found
in sections \ref{latparfit}, \ref{sec:goodfit}, \ref{ffresults} and \ref{sec:npmethod}.
\section{Form-factors from $B\to D^{(\ast)}\tau\nu_{\tau}$ }\label{sec:form-factors}
\subsection{Formalism}\label{fftheory}
The amplitudes of semileptonic $B$ meson decays can be factorized into the product of the matrix elements of leptonic and
hadronic currents. The matrix elements of the hadronic currents are non-perturbative objects called form-factors.
For a precise determination of the form-factors, we have to rely either on lattice QCD calculations or on light-cone sum
rule (LCSR) approaches. The uncertainties in the form-factors are one of the major sources of uncertainty in the predictions
of the decay rates.
In the SM, the differential decay rates for the decay $B \to D^{(*)} \ell \nu_{\ell}$, where $\ell = e,~\mu$ or $\tau$,
are given by \cite{Korner:1989}
\begin{align}
\nonumber &\frac{d\Gamma \left(\overline{B} \rightarrow D \ell \overline{\nu}_{\ell}\right)}{d q^2} = \frac{G^2_F
\left|V_{cb}\right|^2}{192 \pi^3 m^3_B} q^2 \sqrt{\lambda_D(q^2)} \\
& \left(1 - \frac{m^2_{\ell}}{q^2}\right)^2 \left[ \left(1 + \frac{m^2_{\ell}}{2 q^2}\right) H^{s 2}_{V,0} +
\frac{3}{2} \frac{m^2_{\ell}}{q^2} H^{s 2}_{V,t}\right] \,,
\label{dgamd}
\end{align}
\begin{align}
\nonumber &\frac{d\Gamma \left(\overline{B} \rightarrow D^* \ell \overline{\nu}_{\ell}\right)}{d q^2} =
\frac{G^2_F \left|V_{cb}\right|^2}{192 \pi^3 m^3_B} q^2 \sqrt{\lambda_{D^*}(q^2)} \left(1 - \frac{m^2_{\ell}}{q^2}\right)^2 \\
&\left[\left(1 + \frac{m^2_{\ell}}{2 q^2}\right)
\left(H^2_{V,+} + H^2_{V,-} + H^2_{V,0}\right) + \frac{3}{2} \frac{m^2_{\ell}}{q^2} H^{2}_{V,t}\right] \,,
\label{dgamdst}
\end{align}
where $\lambda_{D^{(*)}}(q^2) = ((m_B - m_{D^{(*)}})^2 - q^2)((m_B + m_{D^{(*)}})^2 - q^2)$.
Here, the helicity amplitudes $H^{\lambda_M}_{i,\lambda }$'s are defined through the hadronic matrix elements
\begin{equation}
H^{\lambda_M}_{i,\lambda } = \epsilon^*_{\mu} \langle M (\lambda_M) |{ \bar c}\gamma^{\mu}(1 - \gamma_5) b |{\bar B} \rangle,
\end{equation}
where $\lambda_M$ and $\lambda$ are the helicities of the final
state meson $M$ and of the virtual intermediate boson in the $B$ meson rest frame, respectively. Note that for the $D$ meson
$\lambda_M = s$, while for the $D^*$ meson $\lambda_M = \pm 1,~0$, and $\lambda = 0,~\pm 1$ or $t$. These helicity amplitudes are related
to the form-factors as follows:
\begin{align}
H^s_{V,0}(q^2) &= \sqrt{\frac{\lambda_D(q^2)}{q^2}}F_1(q^2), \nonumber \\
H^s_{V,t}(q^2) &= {\frac{m_B^2 - m_D^2}{\sqrt{q^2}}}F_0(q^2), \nonumber \\
H_{V,\pm}(q^2) &= (m_B + m_{D^*})A_1(q^2)\mp \frac{\sqrt{\lambda_{D^*}}}{m_B + m_{D^*}}V(q^2), \nonumber \\
H_{V,0}(q^2) &= \frac{(m_B + m_{D^*})}{2m_{D^*}\sqrt{q^2}} \nonumber \\
& \Big[(m_B^2-m_{D^*}^2-q^2)A_1(q^2) \nonumber \\
& + \frac{\lambda_{D^*}}{(m_B + m_{D^*})^2}A_2(q^2)\Big] \nonumber \\
H_{V,t}(q^2) &= \sqrt{\frac{\lambda_{D^*}(q^2)}{q^2}}A_0(q^2)\,.
\end{align}
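To illustrate how these helicity amplitudes enter eq.~(\ref{dgamd}) numerically, here is a minimal Python sketch of the SM differential rate for $\overline{B} \rightarrow D \tau \overline{\nu}_{\tau}$; the constant form-factors and the value of $|V_{cb}|$ are illustrative placeholders only:
\begin{verbatim}
import numpy as np

GF, Vcb = 1.1663787e-5, 0.041   # GeV^-2; |V_cb| value illustrative
mB, mD, mtau = 5.27958, 1.86484, 1.77682  # GeV

def lam(q2, mM):
    # lambda_M(q^2) as defined in the text
    return ((mB - mM)**2 - q2) * ((mB + mM)**2 - q2)

def dGamma_dq2(q2, F1, F0, ml=mtau):
    # Helicity amplitudes H^s_{V,0}, H^s_{V,t} from the relations above
    Hv0 = np.sqrt(lam(q2, mD) / q2) * F1(q2)
    Hvt = (mB**2 - mD**2) / np.sqrt(q2) * F0(q2)
    pref = GF**2 * Vcb**2 / (192 * np.pi**3 * mB**3)
    return (pref * q2 * np.sqrt(lam(q2, mD)) * (1 - ml**2 / q2)**2
            * ((1 + ml**2 / (2 * q2)) * Hv0**2
               + 1.5 * ml**2 / q2 * Hvt**2))

# Toy constant form-factors, for illustration only
print(dGamma_dq2(8.0, lambda q2: 0.8, lambda q2: 0.7))
\end{verbatim}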
The form-factors are defined as the matrix elements of various currents,
\begin{align}
\nonumber \langle D(k)| \bar{c}\gamma_\mu b | \bar{B}(p)\rangle &= \Big[(p+k)_\mu -\frac{m_B^2-m_D^2}{q^2}q_\mu\Big] \\
& F_1(q^2)+ q_\mu \frac{m_B^2-m_D^2}{q^2} F_0(q^2)\,,
\label{matrixD}
\end{align}
and
\begin{align}
\nonumber \langle D^*(k,\varepsilon)|\bar{c}\gamma_\mu b &|\bar{B}(p)\rangle = i \epsilon_{\mu \nu\rho \sigma}
\varepsilon^{\nu *}p^{\rho}k^{\sigma} \frac{2 V(q^2)}{m_B+m_{D^*}} \\
\langle D^*(k,\varepsilon)|\bar{c}\gamma_\mu \gamma_5 b &|\bar{B}(p)\rangle = \varepsilon_{\mu}^{*}(m_B + m_{D^*})A_1(q^2) \nonumber \\
& -(p+k)_\mu (\varepsilon^* q)\frac{A_2(q^2)}{m_B+m_{D^*}} \nonumber \\
& -q_\mu(\varepsilon^* q)\frac{2 m_{D^*}}{q^2}[A_3(q^2) - A_0(q^2)]\,,
\label{matrixDst}
\end{align}
where
\begin{equation}
A_3(q^2) = \frac{m_B + m_{D^*}}{2 m_{D^*}} A_1(q^2) - \frac{m_B - m_{D^*}}{2 m_{D^*}} A_2(q^2)\,.
\end{equation}
A direct comparison of the matrix elements in eq.(\ref{matrixD}) with those in heavy quark effective theory
(HQET) gives us the relations
\begin{eqnarray}
F_1(q^2) &=& \frac{1}{2\sqrt{m_B m_D}}\Big[(m_B+m_D)h_+(w(q^2)) \nonumber \\
& {}& -(m_B-m_D)h_-(w(q^2))\Big] \nonumber \\
F_0(q^2)&=& \frac{1}{2\sqrt{m_B m_D}}\Big[ \frac{(m_B + m_D)^2 - q^2}{m_B + m_D}h_+(w(q^2)) \nonumber \\
&{}& - \frac{(m_B - m_D)^2 - q^2}{m_B - m_D}h_-(w(q^2))\Big]\,,
\end{eqnarray}
where $h_{\pm}(w(q^2))$ are the HQET form-factors, with $w = v_B \cdot v_{D^{(*)}} = \frac{m_B^2 + m_{D^{(*)}}^2 -q^2}{2 m_{D^{(*)}} m_B}$.
Following the parametrization given in \cite{CLN}, the HQET form-factors can be expressed as
\begin{eqnarray}
h_+(w) &=& \frac{1}{2(1+r_D^2-2 r_D w)}\Big[-(1+r_D)^2(w-1) V_1(w) \nonumber \\
&{}& + (1-r_D)^2(w+1) S_1(w)\Big] \nonumber \\
h_-(w) &=& \frac{(1-r_D^2)(w+1)}{2(1+r_D^2-2 r_D w)}[S_1(w) - V_1(w)] \,,
\end{eqnarray}
where $r_D = m_D / m_B$. The hadronic form-factors $V_1(w)$ and $S_1(w)$ coincide with the Isgur-Wise function $\xi(w)$ in the
infinite-mass limit of the heavy quark ($m_Q = m_b$ or $m_c$). This function is normalized
to unity at zero recoil, i.e. at $w=1$. In Ref. \cite{CLN}, the $w$ dependence is parameterized as in eq.(\ref{ffV1}),
the idea being to expand $V_1(w)$ around the zero-recoil point $w=1$:
\begin{align}
\nonumber V_1(w) = V_1(1) \times &\left[ 1 - 8 \rho_D^2 z(w) + (51 \rho_D^2 - 10) z(w)^2 \right.\\
&\left. - (252 \rho_D^2 - 84) z(w)^3\right]
\label{ffV1}
\end{align}
where $z(w) = (\sqrt{w + 1} - \sqrt{2}) / (\sqrt{w + 1} + \sqrt{2})$. $V_1(1)$ includes
corrections of order $\alpha_s(m_Q)$ and $\Lambda_{QCD}/m_Q$ in HQET. Although $V_1(1)$ cancels in the ratio
$R(D)$, it is worth noting that lattice QCD predicts $V_1(1) = 1.053 \pm 0.008$ \cite{Lattice:2015}.
On the other hand, $\rho_D^2$ can be fitted directly from the data on $\Gamma(B\to D \ell\nu_{\ell})$, where
$\ell = e,~\mu$\footnote{From here on, $\ell$ will denote the light leptons, i.e. $e$ and $\mu$, unless specified otherwise.}.
The current value, as determined by the Heavy Flavor Averaging Group (HFAG), is $\rho_D^2 = 1.186 \pm 0.054$ \cite{hfag}.
Following \cite{Tanaka:2010}, we parameterize the $w$ dependence of $S_1(w)$ as
\begin{align}
\nonumber S_1(w) = V_1(w) \times &\left\{1 + \Delta \left[ -0.019 + 0.041 \left(w - 1\right) \right.\right.\\
&\left.\left. -0.015\left(w -1 \right)^2 \right]\right\}\,.
\label{form-factors1}
\end{align}
Here, $\Delta$ parameterizes the unknown higher-order corrections in HQET. In earlier analyses, $\Delta$ was assumed
to have a 100\% uncertainty in the prediction of $R(D)$.
The decay rate $\Gamma(B\to D \ell\nu_{\ell})$ is not useful for fitting the parameters of $S_1(w)$, since the rate is
insensitive to $S_1(w)$ because of the negligible lepton masses. However, in our analysis, we fit $\Delta$ from the existing data on $R(D)$
along with the other parameters defined earlier.
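The CLN parametrization of eqs.~(\ref{ffV1}) and (\ref{form-factors1}) is straightforward to code; a minimal Python sketch, with the HFAG central value of $\rho_D^2$ used purely as an example input, reads:
\begin{verbatim}
import numpy as np

def z(w):
    # Conformal variable of the CLN expansion
    return (np.sqrt(w + 1) - np.sqrt(2)) / (np.sqrt(w + 1) + np.sqrt(2))

def V1(w, rhoD2, V1_1=1.053):
    # Expansion of V_1(w) above; V_1(1) from lattice QCD
    return V1_1 * (1 - 8*rhoD2*z(w) + (51*rhoD2 - 10)*z(w)**2
                   - (252*rhoD2 - 84)*z(w)**3)

def S1(w, rhoD2, Delta):
    # S_1(w); Delta parameterizes higher-order HQET corrections
    return V1(w, rhoD2) * (1 + Delta*(-0.019 + 0.041*(w - 1)
                                      - 0.015*(w - 1)**2))

print(V1(1.2, 1.186), S1(1.2, 1.186, 1.0))
\end{verbatim}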
As shown in eq. (\ref{matrixDst}), the $B \to D^* \tau \nu$ decays are described by four independent hadronic form-factors:
$V$, $A_0$, $A_1$ and $A_2$, which are related to HQET form-factors by the following relations \cite{Fajfer:2012vx}:
\begin{align}
\nonumber V(w) &= \frac{R_1(w)}{r_{D^*}} h_{A_1}(w)\,, \\
\nonumber A_1(w) &= \frac{1}{2} r_{D^*}(w+1)h_{A_1}(w)\,, \\
\nonumber A_2(w) &= \frac{R_2(w)}{r_{D^*}}h_{A_1}(w)\,, \\
A_0(w) &= \frac{R_0(w)}{r_{D^*}}h_{A_1}(w)\,,
\label{dstFFparam1}
\end{align}
where $r_{D^*} = 2\sqrt{m_B m_{D^*}} / (m_B + m_{D^*})$.
The $w$ dependencies of the HQET form-factors are parameterized following
Ref. \cite{CLN}:
\begin{align}
\nonumber h_{A_1}(w) =& h_{A_1}(1) \left[ 1 - 8\rho_{D^*}^2 z(w) + (53\rho_{D^*}^2-15) z(w)^2 \right. \\
\nonumber &\left. - (231\rho_{D^*}^2-91) z(w)^3 \right] \,, \\
\nonumber R_1(w) =& R_1(1) - 0.12(w-1) + 0.05(w-1)^2 \,, \\
\nonumber R_2(w) =& R_2(1) + 0.11(w-1) - 0.06(w-1)^2 \,, \\
R_0(w) =& R_0(1) - 0.11(w-1) + 0.01(w-1)^2 \,.
\label{dstFFparam3}
\end{align}
Here, the current lattice prediction is $h_{A_1}(1) = 0.906 \pm 0.013$ \cite{bailey14}, while the remaining three parameters,
$\rho^2_{D^*}$, $R_1(1)$ and $R_2(1)$, are fitted directly from the decay rate $\Gamma(B\to D^\ast\ell\nu_{\ell})$ \cite{hfag},
\begin{align}
\nonumber \rho^2_{D^*} &= 1.207 \pm 0.026, ~~~ C\left(\rho^2_{D^*},~R_1(1)\right) = 0.568,\\
\nonumber R_1(1) &= 1.406 \pm 0.033, ~~~ C\left(\rho^2_{D^*},~R_2(1)\right) = -0.809,\\
R_2(1) &= 0.853 \pm 0.020, ~ C\left(R_1(1),~R_2(1)\right) = -0.758,
\label{dstFFparam4}
\end{align}
where the second column lists the correlations between the parameters. As $B\to D^* \ell \nu$ decays are not sensitive to $R_0(w)$,
only a theoretical estimate, $R_0(1) = 1.14 \pm 0.07$, based on HQET, is available \cite{Fajfer:2012vx}. We therefore
treat it as a free parameter in our analysis of the $B\to D^* \tau \nu$ data.
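For completeness, a similar Python sketch of the $D^*$ form-factor parametrization above, evaluated at the HFAG central values and at the HQET estimate of $R_0(1)$ (used here only as example inputs):
\begin{verbatim}
import numpy as np

def z(w):
    return (np.sqrt(w + 1) - np.sqrt(2)) / (np.sqrt(w + 1) + np.sqrt(2))

def hA1(w, rhoDst2, hA1_1=0.906):
    # Expansion of h_{A_1}(w) above; h_{A_1}(1) from lattice QCD
    return hA1_1 * (1 - 8*rhoDst2*z(w) + (53*rhoDst2 - 15)*z(w)**2
                    - (231*rhoDst2 - 91)*z(w)**3)

def R1(w, R1_1): return R1_1 - 0.12*(w - 1) + 0.05*(w - 1)**2
def R2(w, R2_1): return R2_1 + 0.11*(w - 1) - 0.06*(w - 1)**2
def R0(w, R0_1): return R0_1 - 0.11*(w - 1) + 0.01*(w - 1)**2

# HFAG central values and the HQET estimate of R_0(1), as examples
print(hA1(1.3, 1.207), R1(1.3, 1.406), R2(1.3, 0.853), R0(1.3, 1.14))
\end{verbatim}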
\subsection{$\chi^2$ analysis}\label{latparfit}
\begin{figure*}\centering
\subfloat[$B \rightarrow D \tau \nu$({\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}})]{
\includegraphics[scale=0.27]{BABAR_dataD.pdf}
\label{fig:BABAR_dataD}}
\subfloat[$B \rightarrow D^* \tau \nu$({\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}})]{
\includegraphics[scale=0.27]{BABAR_dataDst.pdf}
\label{fig:BABAR_dataDst}}
\subfloat[$B \rightarrow D^* \tau \nu$(Belle)]{
\includegraphics[scale=0.27]{Belle_dataDst.pdf}
\label{fig:Belle_dataDst}}
\caption{Figs. \ref{fig:BABAR_dataD} and \ref{fig:BABAR_dataDst} show the measured background-subtracted $q^2$-distributions
for $\overline{B} \rightarrow D \tau \overline{\nu}_{\tau}$ and $\overline{B} \rightarrow D^* \tau \overline{\nu}_{\tau}$
events, extracted from the {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~data \cite{babarexp}. Fig. \ref{fig:Belle_dataDst} shows the background-subtracted and
normalized momentum distribution of the $D^*$ extracted from the Belle data \cite{bellexp}.}
\end{figure*}
Several parameters parameterizing the form-factors, otherwise not accessible in $\bar{B} \to D^{(*)} \ell^- \bar{\nu}_{\ell}$ decays,
appear in $\bar{B} \to D^{(*)} \tau^- \bar{\nu}_{\tau}$ decays. By taking the binned data from the $q^2$-distribution of the decay rates
in $\bar{B} \to D^{(*)} \tau^- \bar{\nu}_{\tau}$, normalized by $d\Gamma(B\to D^{(\ast)}\ell\nu_{\ell})/dq^2$, we fit all the parameters given
in section \ref{fftheory}. The only exceptions are $V_1(1)$ and $h_{A_1}(1)$, which cancel in the ratios.
Figs. \ref{fig:BABAR_dataD} and \ref{fig:BABAR_dataDst} show efficiency-corrected $q^2$-distributions for
$\overline{B} \rightarrow D \tau^- \overline{\nu}_{\tau}$ and $\overline{B} \rightarrow D^* \tau^- \overline{\nu}_{\tau}$
events with $m^2_{miss} > 1.5 ~\text{GeV}^2$, scaled to the results of the isospin-constrained fit, extracted from the
{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~data \cite{babarexp}. The $B^0$ and $B^+$ samples are combined, and the normalization and background events are
subtracted. The uncertainties on the data points include the statistical uncertainties of the data and the simulation.
Fig. \ref{fig:Belle_dataDst} is the background-subtracted and normalized momentum distribution of the $D^*$ for
$\overline{B} \rightarrow D^* \tau^- \overline{\nu}_{\tau}$ events extracted from the Belle \cite{bellexp} data.
Here also, the $B^0$ and $B^+$ samples are combined and the normalization and background events are subtracted.
The light blue histograms represent the SM prediction in each individual bin. We note that both the
Belle and {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~binned data show deviations from the SM predictions.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Experiment & Channel & Input & Value\\
\hline
& $\overline{B} \rightarrow D \tau^- \overline{\nu}_{\tau}$ & $N_{sig}$ & $489 \pm 63$ \\
{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~ & & $N_{norm}$ & $ 2981 \pm 65$ \\
\cite{babarexp} & & $\epsilon_{sig} / \epsilon_{norm}$ & $ 0.372 \pm 0.010$ \\
\cline{2-4}
& $\overline{B} \rightarrow D^* \tau^- \overline{\nu}_{\tau}$ & $N^*_{sig}$ & $888 \pm 63$ \\
& & $N^*_{norm}$ & $ 11953 \pm 122 $ \\
& & $\epsilon^*_{sig} / \epsilon^*_{norm}$ & $ 0.224 \pm 0.004$ \\
\hline
& $\overline{B} \rightarrow D^* \tau^- \overline{\nu}_{\tau}$ & $N^*_{sig}$ & $231 \pm 23$ \\
Belle(2016) & & $N^*_{norm}$ & $ 2800 \pm 57 $ \\
\cite{bellexp} & & $\epsilon^*_{norm} / \epsilon^*_{sig}$ & $ 1.289 \pm 0.015$ \\
\hline
LHCb & $\overline{B} \rightarrow D^* \tau^- \overline{\nu}_{\tau}$ &
$R(D^*)$ & $0.336\pm0.027$ \\
\cite{lhcbexp} & & & $\pm 0.030$ \\
\hline
& $\overline{B} \rightarrow D \tau^- \overline{\nu}_{\tau}$ & $R(D)$ & $0.375\pm0.064$ \\
Belle(2015) & & & $\pm 0.026$ \\
\cline{2-4}
\cite{Huschle:2015rga} & $\overline{B} \rightarrow D^* \tau^- \overline{\nu}_{\tau}$ & $R(D^*)$ & $0.293\pm0.038$ \\
& & & $\pm 0.015$ \\
\hline
Belle(Latest) & $\overline{B} \rightarrow D^* \tau^- \overline{\nu}_{\tau}$ &
$R(D^*)$ & $0.276\pm0.034$ \\
\cite{Abdesselam:2016xqt} & & & $^{+0.029}_{-0.026}$ \\
\hline
\end{tabular}
\end{center}
\caption{Experimental inputs for fits. Only statistical uncertainties are supplied for $N^{(*)}_{norm(sig)}$. Whenever two
uncertainties are quoted, they are the statistical and systematic ones respectively.}
\label{tab:expinput}
\end{table}
To fit the parameters of the form-factors, we perform a goodness-of-fit test by
defining a $\chi^2$ statistic as a function of the parameters parameterizing the form-factors:
\begin{align}
\chi^2_{Lat} =
\sum^{{\rm bins}}_{i,j = 1} &\left(R(D^{(*)})^{exp}_i - R(D^{(*)})^{th}_i\right) V^{-1}_{i j} \nonumber \\
&\times \left(R(D^{(*)})^{exp}_j - R(D^{(*)})^{th}_j\right),
\label{chi2lat}
\end{align}
where
\begin{eqnarray}
R(D^{(*)})^{th}_{bin} &= \frac{\int^{q^2_{{\rm max}}}_{q^2_{{\rm min}}} \left(d\Gamma\left(\overline{B} \rightarrow D^{(*)}
\tau^- \overline{\nu}_{\tau}\right)/d q^2\right) d q^2}{\int_{{\rm full } ~q^2} \left(d\Gamma\left(\overline{B} \rightarrow
D^{(*)} \ell \overline{\nu}_{\ell}\right) / d q^2 \right) d q^2},
\label{Rth}
\end{eqnarray}
\begin{equation}
R(D^{(*)})^{exp}_{bin} = \begin{cases} \frac{N^{(*)}_{bin}}{N^{(*)}_{norm}} \times \frac{\epsilon^{(*)}_{norm}}
{\epsilon^{(*)}_{sig}} & \text{{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}} \\
\frac{1}{2 \mathcal{B}(\tau^- \to \ell^- \bar{\nu}_{\ell} \nu_{\tau})}
~\frac{N^{(*)}_{bin}}{N^{(*)}_{norm}} \times \frac{\epsilon^{(*)}_{norm}}{\epsilon^{(*)}_{sig}}
& \text{Belle}.
\end{cases}
\label{RexpBaBe}
\end{equation}
$V_{i j}$ is the covariance matrix. It comprises $\sigma^2_{exp,~bin}$, the experimental uncertainties obtained by
propagating the uncertainties of the individual terms on the r.h.s. of eq.(\ref{RexpBaBe}).
As input, we consider the central values of the number of events $N^{(*)}_{bin}$, along with their errors, for each $q^2$ or $p_{D^*}$ bin,
depending on whether we are analyzing the {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~or the Belle data. The
total signal yields $N_{sig}^{(*)}$, along with their errors, are given in table \ref{tab:expinput}.
For simplicity, and due to the lack of knowledge of the $q^2$-dependence of the efficiencies, we take the
ratio of efficiencies $\epsilon_{sig}^{(*)} / \epsilon_{norm}^{(*)}$ to be constant over the whole
$q^2$ region and equal to the value shown in table \ref{tab:expinput}.
In eqs. (\ref{Rth}) and (\ref{RexpBaBe}), $q^2_{max(min)}$ are the end points of a particular bin. For the denominator
in eq.(\ref{Rth}), we integrate over the whole allowed
phase space (from $q^2 = m^2_{\ell}$ to $q^2 = \left(m_B - m_{D^{(*)}}\right)^2$).
In defining $V_{i j}$, we follow these procedures:
\begin{enumerate}
\item Our $V$ comprises two parts: the statistical covariance matrix $V^{stat}$ and the systematic one,
$V^{syst}$, so that $V^{exp} = V^{stat} + V^{syst}$. As no information is available
to us about the systematic uncertainties and their correlations in the binned data, we perform two separate analyses.
\item The first analysis is done using only the data available to us, i.e. $V^{syst}$ is set to zero and
$V^{stat}_{i j} = \delta_{i j} ~\delta R^{exp}_{i} ~\delta R^{exp}_{j}$ (here $\delta_{i j}$ is the Kronecker delta). We will
call this ``Fit-1'' from here on.
\item The second analysis is done assuming the systematic uncertainties to be the same as the statistical ones,
with $100\%$ systematic correlation, i.e. $V^{syst}_{i j} = \delta R^{exp}_{i} ~\delta R^{exp}_{j}$, with
$V^{stat}_{i j}$ defined as before. We will call this ``Fit-2'' from here on.
\end{enumerate}
The utility of taking the systematic uncertainties to be the same as the statistical ones, with
$100\%$ systematic correlations, in the second analysis is multi-pronged. First, as the statistical uncertainties
on the binned data are quite large, this makes the systematic errors similarly large, which in turn conservatively accounts
for the possible systematic errors coming from $a)$ the `model-dependence' of the `background-subtracted' binned data mentioned in
section \ref{sec:npmethod} and $b)$ the dependence of the shape of the $q^2$-distribution on the experimental cuts on the leptons
and hadrons. Second, analyzing the data separately in both under-correlated and over-correlated ways and comparing the results gives
us an idea of the dependence of the analysis on these unknown systematic bin-by-bin correlations.
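The two covariance choices can be summarized in a few lines of Python; the residuals and uncertainties below are toy placeholders, not experimental numbers:
\begin{verbatim}
import numpy as np

# Toy binned residuals r_i = R_exp - R_th and statistical errors dR_i
r  = np.array([0.02, -0.01, 0.03, 0.00])
dR = np.array([0.015, 0.012, 0.020, 0.018])

V_stat = np.diag(dR**2)      # `Fit-1': V = V_stat only
V_syst = np.outer(dR, dR)    # `Fit-2': fully correlated systematics
                             # of the same size as the statistical ones

def chi2(r, V):
    return float(r @ np.linalg.solve(V, r))

print("Fit-1 chi2:", chi2(r, V_stat))
print("Fit-2 chi2:", chi2(r, V_stat + V_syst))
\end{verbatim}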
The Belle results \cite{bellexp} used here constitute the first measurement of $R(D^*)$
using a semileptonic tagging method for the ``other $B$'', referred to as $B_{tag}$; instead of a $q^2$-distribution,
the momentum distributions of the $D^*$ and $\ell$ are given. For our analysis, we note that
$p^2_{D^*} = \left(\frac{m^2_{B} + m^2_{D^*} - q^2}{2 m_{B}}\right)^2 -m^2_{D^*}$, and using this, eq.(\ref{Rth}) can
be calculated for each bin in the $p_{D^*}$-distribution by converting the limits of integration appropriately.
For $R(D^{*})^{exp}_{bin}$, we use eq.(\ref{RexpBaBe}). We do not use those bins for which central values of $ N^{(*)}_{bin} \leq 0$.
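Inverting the relation above gives $q^2 = m_B^2 + m_{D^*}^2 - 2 m_B \sqrt{p_{D^*}^2 + m_{D^*}^2}$, which maps each $p_{D^*}$ bin edge to a $q^2$ integration limit; a minimal sketch, with hypothetical bin edges:
\begin{verbatim}
import numpy as np

mB, mDst = 5.27958, 2.01026   # GeV

def q2_from_pDst(p):
    # The D* energy in the B rest frame is
    # E = (mB^2 + mDst^2 - q^2)/(2 mB) = sqrt(p^2 + mDst^2),
    # hence q^2 = mB^2 + mDst^2 - 2 mB E.
    E = np.sqrt(p**2 + mDst**2)
    return mB**2 + mDst**2 - 2*mB*E

# Hypothetical p_{D*} bin edges in GeV; note that q^2 decreases as
# p_{D*} increases, so the integration limits must be swapped.
print(q2_from_pDst(np.array([0.0, 0.5, 1.0, 1.5])))
\end{verbatim}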
To utilize the fact that $V_1(1)$ and $h_{A_1}(1)$ cancel in $R(D)$ and $R(D^*)$, respectively,
we use $R(D^{(*)})_{bin}$ instead of $N^{(*)}_{bin}$. Thus $\chi^2_{Lat}$ is a function of $\rho^2_D$ and
$\Delta$ for $R(D)_{bin}$, and a function of $\rho^2_{D^*}$, $R_1(1)$, $R_2(1)$ and $R_0(1)$ for $R(D^{*})_{bin}$.
\subsection{Goodness of Fit}\label{sec:goodfit}
A true model with true parameter values will, on average, yield $\chi^2 = d.o.f$, i.e. $\chi^2_{red} = 1$, as there is no fit involved.
Due to the noise present in the data, however, this is not sufficient information to assess convergence or to compare different models.
The obligatory step for assessing the goodness-of-fit of an analysis after optimization is then to inspect the distribution of
the residuals. For the true model, with a-priori known measurement errors, the distribution of normalized residuals
(in our case, $\frac{R^{th}_{bin} - R^{exp}_{bin}}{\delta R_{bin}}$) is by definition a Gaussian with mean $\mu = 0$ and
variance $\sigma^2 = 1$ \cite{dosdonts}. We utilize this fact to test the significance of the fit by objectively
quantifying how well the distribution of the residuals fits this Gaussian. For this,
we use the Shapiro-Wilk (S-W) test of normality \cite{shapiro}. The reasons for choosing
S-W over other competing normality tests are the following: $a)$ though we have used the algorithm $AS~R94$ by Royston \cite{royston},
which was developed for any sample size $n$ between $3$ and $5000$, the original S-W test was specifically designed for $n<50$, which is precisely
our case; $b)$ it was the first test able to detect departures from normality due to skewness and/or kurtosis, and it has since
been regularly corrected and developed; $c)$ it has repeatedly been shown \cite{comptest} that for low to medium sample sizes, where
degenerate values occur less often, S-W is the most powerful parametric test of normality among popular contenders such as
`Kolmogorov-Smirnov', `Anderson-Darling', `Cram\'{e}r-von Mises', `Jarque-Bera' etc.; as this applies directly to our case,
we choose the S-W test throughout this analysis. In all such tests,
the validity of a hypothesis depends on whether the $p$-value of the goodness-of-fit test is above or below
the significance level, which in our case is set at $5\%$.
Across all the fitted models, the ones with a $p$-value of the residual distribution above $5\%$
will be considered to fit the data well; all the rest can be discarded.
Therefore, if a particular model-fitting analysis passes our normality test, we consider that model
a plausible explanation of the data.
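In practice this test takes only a few lines; the sketch below, with toy residuals, uses the S-W implementation available in SciPy:
\begin{verbatim}
import numpy as np
from scipy import stats

# Toy normalized residuals (R_th - R_exp)/dR from a hypothetical fit
residuals = np.array([0.3, -1.1, 0.8, -0.2, 1.4, -0.6, 0.1, -0.9])

# The S-W test checks normality (it does not fix mu and sigma, so
# the mean and variance of the residuals should also be inspected).
stat, p_value = stats.shapiro(residuals)
print("S-W statistic:", stat, " p-value:", p_value)
print("fit passes at the 5% level:", p_value > 0.05)
\end{verbatim}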
\subsection{Fit Results}\label{ffresults}
\begin{figure*}[!htbp]
\centering
\subfloat[$R(D)_{bin}$]{
\includegraphics[scale=0.68]{V1Dplot.pdf}
\label{fig:V1D}}
\subfloat[$R(D)_{bin}$]{
\includegraphics[scale=0.68]{S1Dplot.pdf}
\label{fig:S1D}}\\
\subfloat[$R(D^{*})_{bin}$]{
\includegraphics[scale=0.68]{VDstplot.pdf}
\label{fig:VDst}}
\subfloat[$R(D^{*})_{bin}$]{
\includegraphics[scale=0.68]{A0Dstplot.pdf}
\label{fig:A0Dst}}\\
\subfloat[$R(D^{*})_{bin}$]{
\includegraphics[scale=0.68]{A1Dstplot.pdf}
\label{fig:A1Dst}}
\subfloat[$R(D^{*})_{bin}$]{
\includegraphics[scale=0.68]{A2Dstplot.pdf}
\label{fig:A2Dst}}
\caption{Results obtained from `Fit-1'. Figs. \ref{fig:V1D} and \ref{fig:S1D} show the $q^2$ dependence of the form-factors for semileptonic $b \to c$ transitions. Red (dotted) and blue (dot-dashed) lines enclose $\pm 1 \sigma$ regions for the form-factors with parameters fitted
from $\overline{B} \rightarrow D \ell \overline{\nu}_{\ell}$ (world average) and
$\overline{B} \rightarrow D \tau \overline{\nu}_{\tau}$ decays ({\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}) respectively.
The rest of the figures are for form-factors for $\overline{B} \rightarrow D^* \ell \overline{\nu}_{\ell}$ and
$\overline{B} \rightarrow D^* \tau \overline{\nu}_{\tau}$ decays. Here green (solid) lines enclose the region for
$\overline{B} \rightarrow D^* \tau \overline{\nu}_{\tau}$ decays (Belle).}
\label{fig:form-factors}
\end{figure*}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\, Obs. & Par.s & Value & $\chi^2_{min}$ & $d.o.f $ & Normality\\
& & & & & (S-W) \\
\hline
{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~ & & & & & \\
$R(D)_{bin}$ & $\Delta$ & $-0.04 \pm 2.00$ & $9.6 $ & $ 12$ & 0.20 \\
& $\rho^2_D$ & $ 1.43 \pm 0.18 $ & & &\\
\cline{2-6}
$R(D^*)_{bin}$ & $\rho^2_{D^*}$ & $-0.55 \pm 0.66 $ & $5.4$ & $ 8$ & 0.25 \\
& $R_1(1)$ & $ 0.04 \pm 2.96 $ & & &\\
& $R_2(1)$ & $ 3.79 \pm 0.20 $ & & &\\
& $R_0(1)$ & $ 0.02 \pm 1.37 $ & & & \\
\hline
Belle~ & & & & & \\
$R(D^*)_{bin}$ & $\rho^2_{D^*}$ & $ - 1.52 \pm 1.61 $ & $ 8.7 $ & $ 13 $ & 0.91 \\
& $R_1(1)$ & $ 0.04 \pm 2.86 $ & & & \\
& $R_2(1)$ & $ 3.58 \pm 0.53 $ & & & \\
& $R_0(1)$ & $ -0.84 \pm 0.71 $ & & & \\
\hline
\end{tabular}
\end{center}
\caption{`Fit-1' results for the parameters parameterizing the form-factors in HQET. The last column lists the results of the
hypothesis test (Shapiro-Wilk) for the assessment of goodness-of-fit.}
\label{tab:latfitres}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Channel & Correlation & {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}} & Belle (2016) \\
\hline
& $C\left(\rho^2_{D^*},~R_1(1)\right)$ & 0.057 & 0.023 \\
\cline{2-4}
$\overline{B} \rightarrow D^* \tau \overline{\nu}_{\tau}$ & $C\left(\rho^2_{D^*},~R_2(1)\right)$ & 0.907 & 0.928 \\
\cline{2-4}
& $C\left(\rho^2_{D^*},~R_0(1)\right)$ & -0.004 & -0.741 \\
\cline{2-4}
& $C\left(R_1(1),~R_2(1)\right)$ & 0.082 & 0.024 \\
\cline{2-4}
& $C\left(R_1(1),~R_0(1)\right)$ & 0.000 & -0.008 \\
\cline{2-4}
& $C\left(R_2(1),~R_0(1)\right)$ & 0.007 & -0.861 \\
\hline
$\overline{B} \rightarrow D \tau \overline{\nu}_{\tau}$ & $C\left(\Delta,~\rho^2_{D}\right)$ & 0.146 & - \\
\hline
\end{tabular}
\end{center}
\caption{Correlations between the fitted form-factor parameters from `Fit-1'.}
\label{tab:corrections}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\, Obs. & Par.s & Value & $\chi^2_{min}$ & $d.o.f $ & Normality\\
& & & & & (S-W) \\
\hline
{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~ & & & & & \\
$R(D)_{bin}$ & $\Delta$ & $-0.03 \pm 2.25$ & $8.71 $ & $ 12$ & 0.14 \\
& $\rho^2_D$ & $ 0.92 \pm 0.60 $ & & &\\
\cline{2-6}
$R(D^*)_{bin}$ & $\rho^2_{D^*}$ & $-0.54 \pm 0.73 $ & $5.13$ & $ 8$ & 0.55 \\
& $R_1(1)$ & $ 0.04 \pm 1.99 $ & & &\\
& $R_2(1)$ & $ 3.93 \pm 0.31 $ & & &\\
& $R_0(1)$ & $ 0.03 \pm 0.76 $ & & & \\
\hline
Belle~ & & & & & \\
$R(D^*)_{bin}$ & $\rho^2_{D^*}$ & $ -3.03 \pm 2.24 $ & $ 6.62 $ & $ 13 $ & 0.68 \\
& $R_1(1)$ & $ 0.04 \pm 2.31 $ & & & \\
& $R_2(1)$ & $ 3.78 \pm 0.45 $ & & & \\
& $R_0(1)$ & $ 0.03 \pm 0.93 $ & & & \\
\hline
\end{tabular}
\end{center}
\caption{`Fit-2' results for the parameters parameterizing the form-factors in HQET.}
\label{tab:latfitres2}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Channel & Correlation & {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}} & Belle (2016) \\
\hline
& $C\left(\rho^2_{D^*},~R_1(1)\right)$ & 0.031 & 0.015 \\
\cline{2-4}
$\overline{B} \rightarrow D^* \tau \overline{\nu}_{\tau}$ & $C\left(\rho^2_{D^*},~R_2(1)\right)$ & 0.698 & 0.563 \\
\cline{2-4}
& $C\left(\rho^2_{D^*},~R_0(1)\right)$ & 0.011 & 0.004 \\
\cline{2-4}
& $C\left(R_1(1),~R_2(1)\right)$ & 0.035 & 0.021 \\
\cline{2-4}
& $C\left(R_1(1),~R_0(1)\right)$ & 0.000 & 0.000 \\
\cline{2-4}
& $C\left(R_2(1),~R_0(1)\right)$ & 0.018 & 0.012 \\
\hline
$\overline{B} \rightarrow D \tau \overline{\nu}_{\tau}$ & $C\left(\Delta,~\rho^2_{D}\right)$ & 0.07 & - \\
\hline
\end{tabular}
\end{center}
\caption{Correlations between the fitted form-factor parameters from `Fit-2'.}
\label{tab:corrections2}
\end{table}
The fit results for the parameters of the form-factors are listed in tables \ref{tab:latfitres} and \ref{tab:latfitres2}
for `Fit-1' and `Fit-2' respectively.
We obtain the distribution of the residuals for all these fits and check whether that
distribution is consistent with a normal distribution with mean $0$ and variance $1$ (the null hypothesis $H_0$
being that it is). The $p$-values obtained in our chosen normality test (S-W) quantify the
probability of $H_0$ being true.
After the minimization, we determine the uncertainties of, and the correlations between, the parameters around their best-fit points.
A general approach is to construct the `Hessian matrix' $H$, the matrix of second-order
partial derivatives of the test statistic with respect to the parameters, which describes the local curvature of
a function of many variables, and to invert it. The inverse constitutes the `error matrix': the square roots of its
diagonal elements give the `standard errors' of the parameters, and the matrix normalized with respect to these errors
gives the `correlation matrix'. We list these errors in tables \ref{tab:latfitres} and \ref{tab:latfitres2} and the relevant correlations in tables \ref{tab:corrections} and \ref{tab:corrections2}.
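A minimal numerical sketch of this procedure (the Hessian entries are hypothetical, and we assume the test statistic is a $\chi^2$, for which the covariance is $2H^{-1}$):
\begin{verbatim}
import numpy as np

# Hypothetical Hessian of chi^2 at its minimum for two parameters
H = np.array([[40.0, 12.0],
              [12.0, 25.0]])

cov = 2.0 * np.linalg.inv(H)            # error matrix for a chi^2
errors = np.sqrt(np.diag(cov))          # standard errors
corr = cov / np.outer(errors, errors)   # correlation matrix

print("errors:", errors)
print("correlation matrix:\n", corr)
\end{verbatim}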
In the following we discuss the outcome of our analysis, and compare our fit results with those determined by HFAG
\cite{hfag} (also given in eq. (\ref{dstFFparam4})):
\begin{itemize}
\item We fit $\rho^2_D$ using only the {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~data; the values obtained are consistent
with that determined by the HFAG at 1$\sigma$. Our fitted values of $\Delta$ are so far consistent with
$\Delta = 1 \pm 1$, the value used in the prediction of $R(D)$ by {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~\cite{babarexp}.
\item The analysis of the {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~binned data on $R(D^*)$ from both `Fit-1' and `Fit-2' shows that fitted parameters such as $\rho^2_{D^*}$ and $R_1(1)$
are consistent with HFAG within 2$\sigma$. However, $R_2(1)$ shows a
large deviation (more than 10$\sigma$ away). It is important to note that we can extract
$R_2(1)$ with a relatively small error.
\item Analyzing the Belle data on $R(D^*)$ with `Fit-1', we obtain large errors on $\rho^2_{D^*}$ and $R_1(1)$,
and they are consistent with the values fitted by HFAG at 1$\sigma$. `Fit-2' shifts the best-fit value
of $\rho^2_{D^*}$ even further and inflates its error.
In this case too, $R_2(1)$ is fitted with a small error and shows a large deviation from the value determined
by HFAG.
\item Whereas the `Fit-1' results for $R(D^*)$ obtained using the {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~and Belle binned data (table
\ref{tab:latfitres}) are only roughly consistent with each other, including the best-fit values of $R_0(1)$, the same analysis with `Fit-2' (table \ref{tab:latfitres2}) actually makes the results compatible, so much so that the
best-fit values of $R_0(1)$ become almost identical. This inclines one to think that the Belle binned data are more correlated than
is assumed.
\end{itemize}
\begin{figure*}[!htbp]
\centering
\subfloat[$R(D)_{bin}$]{
\includegraphics[scale=0.68]{V1Dplotfit2.pdf}
\label{fig:V1Dfit2}}
\subfloat[$R(D)_{bin}$]{
\includegraphics[scale=0.68]{S1Dplotfit2.pdf}
\label{fig:S1Dfit2}}\\
\subfloat[$R(D^{*})_{bin}$]{
\includegraphics[scale=0.68]{VDstplotfit2.pdf}
\label{fig:VDstfit2}}
\subfloat[$R(D^{*})_{bin}$]{
\includegraphics[scale=0.68]{A0Dstplotfit2.pdf}
\label{fig:A0Dstfit2}}\\
\subfloat[$R(D^{*})_{bin}$]{
\includegraphics[scale=0.68]{A1Dstplotfit2.pdf}
\label{fig:A1Dstfit2}}
\subfloat[$R(D^{*})_{bin}$]{
\includegraphics[scale=0.68]{A2Dstplotfit2.pdf}
\label{fig:A2Dstfit2}}
\caption{Same as figure \ref{fig:form-factors}, but obtained from `Fit-2'.}
\label{fig:form-factors2}
\end{figure*}
We note that across all the cases listed in tables \ref{tab:latfitres} and \ref{tab:latfitres2}, $R_2(1)$ can be fitted with
a small error and has large deviations from the value obtained from the analysis of $B\to D^\ast\ell\nu_{\ell}$ (eq. (\ref{dstFFparam4})).
As the treatments of uncertainties in `Fit-1' and `Fit-2' are vastly different, we conclude that this large deviation does not depend on the fitting procedure but is rather a consequence of the data distribution.
All other parameters are extracted with relatively larger errors and are consistent with the fit results obtained by HFAG
within the $68\%$ or $90\%$ confidence levels (C.L.).
The consequences of these results are reflected in the $q^2$ dependences of the various form-factors, as shown
in figures \ref{fig:form-factors} and \ref{fig:form-factors2}. In these figures we compare the $q^2$-distributions of the
form-factors obtained from
our fit results with those obtained using the values given in and around eq. (\ref{dstFFparam4}).
As there is reasonable agreement between the $\rho^2_D$ fitted from $B\to D \tau\nu_{\tau}$ and from $B\to D \ell\nu_{\ell}$,
the $q^2$-distributions of $V_1(q^2)$ and $S_1(q^2)$, shown
in figs. \ref{fig:V1D} and \ref{fig:S1D} respectively, do not show any considerable deviation.
In the analysis of $R(D^*)$, $V(q^2)$ depends on $R_1(1)$ and $\rho^2_{D^*}$; its $q^2$-distribution has large errors
and is consistent with that fitted from $B\to D^\ast\ell\nu_{\ell}$. $A_1(q^2)$ depends on $\rho^2_{D^*}$, and its $q^2$-distribution does not show
any considerable deviation from that obtained from the $B\to D^\ast\ell\nu_{\ell}$ fit. As the $q^2$-distributions of both these form-factors obtained from our
analysis have large errors, it is hard to conclude anything at the moment, and we have to wait
for more precise data. On the other hand, among the form-factors associated with $B\to D^{\ast}\tau\nu_{\tau}$, $A_2(q^2)$
depends on $R_2(1)$ and hence shows a large deviation (in all $q^2$ regions) from the analysis of the $B\to D^\ast\ell\nu_{\ell}$ decay.
If we assume that the $B\to D^{(\ast)}\ell\nu_{\ell}$ decays are free from any kind of NP effects,
which may be a natural assumption, then our results allow the possibility of a new contribution beyond the SM
in $B\to D^{\ast}\tau\nu_{\tau}$ decay. In particular, it could be a beyond-the-SM (BSM) contribution from a pseudo-vector or
a pseudo-tensor\footnote{The pseudo-scalar and pseudo-tensor currents are related to the pseudo-vector current through
the equations of motion,
$i \partial_{\mu} (\bar{c}\gamma^{\mu} \gamma^5 b) = - (m_b + m_c) \bar{c}\gamma^{5} b$ and
$\partial_{\mu} (\bar{c}\sigma^{\mu\nu}\gamma^5 b) = (m_b - m_c) {\bar c}\gamma^{\nu}\gamma^{5} b - (i \partial^{\nu} {\bar c})\gamma^{5}b +
{\bar c}\gamma^{5}(i\partial^{\nu}b)$, respectively. Hence, the form-factors associated with the pseudo-scalar and
pseudo-tensor currents are related to $A_0(q^2)$ and/or $A_2(q^2)$. Therefore, a large
deviation in $A_2(q^2)$ can also be compensated by adding pseudo-tensor
current contributions, proportional to these form-factors, to the decay width.} current. On a similar note,
we can comment that the SM contributions in $B\to D \tau\nu_{\tau}$ can explain the observed data.
\section{New Physics Analysis}\label{sec:np}
\subsection{Formalism: Theory}\label{sec:npform}
We follow a model-independent approach in the search for the types of NP interactions that can best explain the present data
on $B\to D^{(\ast)}\tau\nu_{\tau}$. The most general effective Hamiltonian describing the
$b\to c\ell \nu_{\ell}$ transitions (where $\ell = e$, $\mu$ or $\tau$) with all possible four-fermion operators
in the lowest dimension is given by \cite{Bhattacharya:2015},
\begin{align}
\nonumber {\cal H}_{eff} &= \frac{4 G_F}{\sqrt{2}} V_{cb} \Big[( \delta_{\ell\tau} + C_{V_1}^{\ell}) {\cal O}_{V_1}^{\ell} +
C_{V_2}^{\ell} {\cal O}_{V_2}^{\ell} \\
&+ C_{S_1}^{\ell} {\cal O}_{S_1}^{\ell} + C_{S_2}^{\ell} {\cal O}_{S_2}^{\ell}
+ C_{T}^{\ell} {\cal O}_{T}^{\ell}\Big],
\label{eq1}
\end{align}
where the operator basis is defined as
\begin{eqnarray}
{\cal O}_{V_1}^{\ell} &=& ({\bar c}_L \gamma^\mu b_L)({\bar \tau}_L \gamma_\mu \nu_{\ell L}) \nonumber, \\
{\cal O}_{V_2}^{\ell} &=& ({\bar c}_R \gamma^\mu b_R)({\bar \tau}_L \gamma_\mu \nu_{\ell L}) \nonumber, \\
{\cal O}_{S_1}^{\ell} &=& ({\bar c}_L b_R)({\bar \tau}_R \nu_{\ell L}) \nonumber, \\
{\cal O}_{S_2}^{\ell} &=& ({\bar c}_R b_L)({\bar \tau}_R \nu_{\ell L}) \nonumber, \\
{\cal O}_{T}^{\ell} &=& ({\bar c}_R \sigma^{\mu\nu} b_L)({\bar \tau}_R \sigma_{\mu\nu} \nu_{\ell L}),
\label{eq2}
\end{eqnarray}
and the corresponding Wilson coefficients are given by $C_W^{\ell}$ ($W = V_1, V_2, S_1, S_2, T$). In this basis, the
neutrinos are assumed to be left-handed.
The complete expressions for the $q^2$-distributions of the differential decay rates $d\Gamma /{dq^2}$ in $B\to D^{(\ast)}\tau\nu_{\tau}$ decays,
obtained using the effective Hamiltonian in eq.(\ref{eq1}), are given by \cite{Sakai:2013}
\begin{widetext}
\begin{align}
\nonumber &\frac{d\Gamma \left(\overline{B} \rightarrow D \tau \overline{\nu}_{\tau}\right)}{d q^2} =
\frac{G^2_F \left|V_{cb}\right|^2}{192 \pi^3 m^3_B} q^2 \sqrt{\lambda_D(q^2)} \left(1 - \frac{m^2_{\tau}}{q^2}\right)^2
\left\{ \left|1+ C_{V_1}+ C_{V_2}\right|^2 \left[ \left(1 + \frac{m^2_{\tau}}{2 q^2}\right) H^{s 2}_{V,0} +
\frac{3}{2} \frac{m^2_{\tau}}{q^2} H^{s 2}_{V,t}\right] \right.\\
\nonumber &~~~\left. +\frac{3}{2} \left|C_{S_1} + C_{S_2}\right|^2 H^{s 2}_S + 8 \left|C_T \right|^2 \left(1 +
\frac{2 m^2_{\tau}}{q^2}\right) H^{s 2}_T +3 \mathcal{R}e\left[\left(1+ C_{V_1}+ C_{V_2}\right) \left(C^*_{S_1} +
C^*_{S_2}\right)\right] \frac{m_{\tau}}{\sqrt{q^2}} H^s_S H^s_{V,t} \right.\\
&~~~\left. -12 \mathcal{R}e\left[\left(1+ C_{V_1}+ C_{V_2}\right) C^*_{T}\right] \frac{m_{\tau}}{\sqrt{q^2}} H^s_T H^s_{V,0}
\right\}\,,
\label{dgambd}
\end{align}
and
\begin{align}
\nonumber &\frac{d\Gamma \left(\overline{B} \rightarrow D^* \tau \overline{\nu}_{\tau}\right)}{d q^2} = \frac{G^2_F
\left|V_{cb}\right|^2}{192 \pi^3 m^3_B} q^2 \sqrt{\lambda_{D^*}(q^2)} \left(1 - \frac{m^2_{\tau}}{q^2}\right)^2
\left\{ \left(\left|1 + C_{V_1}\right|^2 + \left|C_{V_2}\right|^2\right) \left[\left(1 + \frac{m^2_{\tau}}{2 q^2}\right)
\left(H^2_{V,+} + H^2_{V,-} + H^2_{V,0}\right) \right.\right.\\
\nonumber &~~~ \left.\left.+ \frac{3}{2} \frac{m^2_{\tau}}{q^2} H^{2}_{V,t}\right] - 2 \mathcal{R}e \left[\left(1+ C_{V_1}\right)
C^*_{V_2}\right] \left[\left(1 + \frac{m^2_{\tau}}{2 q^2}\right) \left(H^2_{V,0} + 2 H_{V,+} H_{V,-} \right) + \frac{3}{2}
\frac{m^2_{\tau}}{q^2} H^{2}_{V,t}\right] + \frac{3}{2} \left|C_{S_1} - C_{S_2}\right|^2 H^2_S \right. \\
\nonumber &~~~ \left. + 8 \left|C_T\right|^2 \left(1 + \frac{2 m^2_{\tau}}{q^2}\right) \left(H^2_{T,+} + H^2_{T,-} +
H^2_{T,0}\right) + 3 \mathcal{R}e\left[ \left(1 + C_{V_1} - C_{V_2}\right) \left(C^*_{S_1} - C^*_{S_2}\right)\right]
\frac{m_{\tau}}{\sqrt{q^2}} H_S H_{V,t} \right.\\
\nonumber & \left. -12 \mathcal{R}e\left[\left(1 + C_{V_1}\right) C^*_{T}\right] \frac{m_{\tau}}{\sqrt{q^2}}
\left(H_{T,0} H_{V,0} + H_{T,+} H_{V,+} - H_{T,-} H_{V,-}\right) \right.\\
&~~~\left. + 12 \mathcal{R}e\left[C_{V_2} C^*_{T}\right] \frac{m_{\tau}}{\sqrt{q^2}} \left(H_{T,0} H_{V,0} +
H_{T,+} H_{V,-} - H_{T,-} H_{V,+}\right) \right\}\,.
\label{dgambdst}
\end{align}
\end{widetext}
The $q^2$-distributions of the decay rates of $B\to D^{(\ast)}\ell\nu_{\ell}$ are obtained from equations (\ref{dgambd}) and (\ref{dgambdst})
by setting $C_W = 0$ and $m_{\tau} = 0$. We define our observables as given in equations (\ref{Rth}) and (\ref{RexpBaBe}).
\subsection{Methodology}\label{sec:npmethod}
We know that the yield in each bin depends on the probability density functions (PDFs) of the different
(56 in the case of {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}) signal and background sources. Considering any NP contribution changes these PDFs, which
in turn change the two-dimensional $m^2_{miss} - |\mathbf{p}^*_l|$ distributions. This change is reflected in
the $q^2$-distribution as well, because of the relation $m^2_{miss} = (q - p_l)^2$. A complete and simultaneous
fit to all PDFs can only be done for each specific NP model separately, and the dependence of the shape and normalization
of the PDFs on the NP parameters would have to be extracted rigorously from the raw experimental data. Without the aid of
simulation, we do not attempt such an analysis. Instead, we use the background-subtracted and normalized
binned data for the $q^2$- and $p_{D^*}$-distributions, as depicted in Figs. \ref{fig:BABAR_dataD},
\ref{fig:BABAR_dataDst} and \ref{fig:Belle_dataDst}, to perform a phenomenological analysis in a
model-independent way. This assumption can become a source of systematic errors in our analysis; the way we
deal with it is discussed in section \ref{sec:npnumanalys}.
In addition to the binned data from {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~and Belle, we also have the total $R(D^{(*)})$ data from various
experiments (see table \ref{tab:expinput}). Keeping in mind that the binned data is going to
dominate the fit results, we take different combinations of these separate data points and do the whole
analysis separately for them.
At the beginning of our analysis, we define the most general scenario, with contributions from all possible dimension-six effective
operators present simultaneously (with 10 parameters, i.e. the real and imaginary parts of all the $C^{\ell}_W$'s), as the global
scenario. We define various sub-scenarios as different possible combinations of these operators. Including the global scenario,
there are in total 31 such scenarios, which we call ``cases'' from here on.
One of the main motivations of this paper is to perform a multi-scenario analysis on the experimentally available binned
data, to obtain a data-based selection of a `best' case and a ranking and weighting of the remaining cases in
the predefined set of 31. To that end, we make use of information-theoretic approaches, especially the second-order
Akaike information criterion (AIC$_c$), in the analysis of the empirical data. Such procedures lead to
more robust inferences in the simultaneous comparative analysis of multiple competing scenarios.
Traditional statistical inference (e.g. confidence levels, errors on fit parameters, bias etc.) can then be obtained
based on the selected best models.\footnote{One of the most powerful and most reliable methods for model comparison
(though computationally expensive) is `cross-validation' \cite{dosdonts}. The most straightforward (and also most expensive)
flavor of cross-validation is ``leave-one-out cross-validation'' (LOOCV). It simultaneously tests the predictive power of
the model and minimizes the bias and variance together. In LOOCV, one data point is left out and the rest of
the sample (the ``training set'') is optimized. The result is then used to find the predicted residual for the left-out data point.
This process is repeated for all data points and a mean-squared error (MSE) is obtained. For model selection, this
MSE is minimized. It has been shown that this method is asymptotically equivalent to minimizing the AIC \cite{shibata}.}
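The LOOCV procedure described in the footnote can be illustrated with a toy one-parameter least-squares model (all data below are invented for illustration):
\begin{verbatim}
import numpy as np

# Toy data for a one-parameter model y = a*x fitted by least squares
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

sq_errors = []
for i in range(len(x)):
    mask = np.arange(len(x)) != i                     # leave point i out
    a = np.sum(x[mask]*y[mask]) / np.sum(x[mask]**2)  # fit training set
    sq_errors.append((y[i] - a*x[i])**2)              # predicted residual

print("LOOCV mean-squared error:", np.mean(sq_errors))
\end{verbatim}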
\subsubsection{A Short Introduction to AIC$_c$}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$\Delta^{AIC}_i$ & Level of Empirical Support for Model $i$\\
\hline
$0 - 2$ & Substantial \\
$4 - 7$ & Considerably Less \\
$> 10$ & Essentially None \\
\hline
\end{tabular}
\end{center}
\caption{Rough rule-of-thumb values of $\Delta^{AIC}_i$ for analysis of nested models.}
\label{tab:delAICrule}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Input & Value\\
\hline
$\Delta$ & $1 \pm 1$ \cite{Tanaka:2010}\\
$\rho^2_D$ & $1.186 \pm 0.054$ \cite{hfag}\\
$\rho^2_{D^*}$ & $1.207 \pm 0.026$ \cite{hfag}\\
$R_1(1)$ & $1.406 \pm 0.033$ \cite{hfag}\\
$R_2(1)$ & $0.853 \pm 0.020$ \cite{hfag}\\
$R_0(1)$ & $1.14 \pm 0.07$ \cite{Fajfer:2012vx}\\
$V_1(1)$ & $1.053 \pm 0.008$ \cite{Lattice:2015}\\
$h_{A_1}(1)$ & $0.906 \pm 0.013$ \cite{bailey14}\\
$m_{B_0}$ & $5.27958 \pm 0.00015 \pm 0.00028$ \cite{Aaij:2011}\\
$m_{D_0}$ & $1.86484 \pm 0.00005$ \cite{Agashe:2014}\\
$m_{b}$ & $4.18 \pm 0.03$ \cite{Agashe:2014}\\
$m_{c}$ & $1.275 \pm 0.025$ \cite{Agashe:2014}\\
$m_{\tau}$& $1.77682 \pm 0.00012$ \cite{Agashe:2014}\\
\hline
\end{tabular}
\end{center}
\caption{Inputs used in the fitting of the new Wilson coefficients. All masses are in GeV. Correlations between a few
form-factor parameters are listed in eq. (\ref{dstFFparam4}).}
\label{tab:thinput}
\end{table}
\begin{figure*}[htbp]
\centering
\includegraphics[scale=0.5]{qqplt_11_2}
\includegraphics[scale=0.5]{qqplt_10_6}
\caption{Q-Q plots of the residuals of the best fits. Each plot compares the quantiles of the distribution of
the residuals with a Gaussian with $\mu = 0$ and $\sigma^2 = 1$. The closer the distribution of the points is to the corresponding
dotted line, the better they fit the Gaussian. Here we show the best NP cases for dataset `3'
from table \ref{tab:Result1}.}
\label{fig:hypotestnp}
\end{figure*}
\begin{table*}
\begin{center}
{\renewcommand{\arraystretch}{1.1}%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
~Experiment~ & ~Dataset~ & ~Observables~ & ~Cases~ & $~\chi^2_{min}~$ & ~$d.o.f$~ & ~Parameters~ & ~Akaike Wgt.s~ & ~Normality~ & ~$\chi^2$ (SM)~\\
& Index. & & & & & & ($w_i$) & (S-W) &\\
\hline
$\text{}$ & $\text{}$ & $\text{}$ & $5$ & $7.41$ & $12$ & $C_T$ & $0.26$ & $0.38$ & \\
$\text{}$ & $\text{}$ & $\text{}$ & $1$ & $7.79$ & $12$ & $C_{V_1}$ & $0.22$ & $0.29$ & \\
$\text{}$ & $\text{1}$ & $\text{$R(D)_{bin}$}$ & $2$ & $7.79$ & $12$ & $C_{V_2}$ & $0.22$ & $0.29$ & 10.31\\
$\text{}$ & $\text{}$ & $\text{}$ & $3$ & $9.17$ & $12$ & $C_{S_1}$ & $0.11$ & $0.18$ & \\
$\text{}$ & $\text{}$ & $\text{}$ & $4$ & $9.17$ & $12$ & $C_{S_2}$ & $0.11$ & $0.18$ & \\
\cline{2-10}
$\text{}$ & $\text{}$ & $\text{}$ & $1$ & $6.3$ & $10$ & $C_{V_1}$ & $0.56$ & $0.11$ & \\
$\text{{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}}$ & $\text{2}$ & $\text{$R(D^*)_{bin}$}$ & $2$ & $7.18$ & $10$ & $C_{V_2}$ & $0.36$ & $0.12$ & 79.85\\
\cline{2-10}
$\text{}$ & $\text{}$ & $\text{}$ & $8$ & $13.01$ & $22$ & $C_{V_2}$, $C_{S_2}$ & $0.32$ & $0.86$ & \\
$\text{}$ & $\text{3}$ & $\text{Combined}$ & $2$ & $19.12$ & $24$ & $C_{V_2}$ & $0.22$ & $0.59$ & 90.16\\
$\text{}$ & $\text{}$ & $\text{}$ & $7$ & $14.23$ & $22$ & $C_{V_2}$, $C_{S_1}$ & $0.17$ & $0.79$ & \\
$\text{}$ & $\text{}$ & $\text{}$ & $6$ & $14.61$ & $22$ & $C_{V_1}$, $C_{V_2}$ & $0.14$ & $0.19$ & \\
\hline
$\text{}$ & $\text{}$ & $\text{}$ & $2$ & $9.07$ & $15$ & $C_{V_2}$ & $0.47$ & $0.95$ & \\
$\text{Belle(2016)}$ & $\text{4}$ & $\text{$R(D^*)_{bin}$}$ & $1$ & $9.43$ & $15$ & $C_{V_1}$ & $0.39$ & $1.00$ & 26.20\\
\hline
$\text{}$ & $\text{}$ & $\text{}$ & $8$ & $22.59$ & $39$ & $C_{V_2}$, $C_{S_2}$ & $0.31$ & $0.95$ & \\
$\text{{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}+}$ & $\text{}$ & $\text{}$ & $2$ & $28.33$ & $41$ & $C_{V_2}$ & $0.19$ & $0.94$ & \\
$\text{Belle(2016)}$ & $\text{5}$ & $\text{Combined}$ & $7$ & $23.69$ & $39$ & $C_{V_2}$, $C_{S_1}$ & $0.18$ & $0.89$ & 116.36\\
$\text{}$ & $\text{}$ & $\text{}$ & $6$ & $24.16$ & $39$ & $C_{V_1}$, $C_{V_2}$ & $0.14$ & $0.69$ & \\
\hline
$\text{{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}+}$ & $\text{}$ & $\text{}$ & $2$ & $48.54$ & $28$ & $C_{V_2}$ & $0.52$ & $0.02$ & \\
$\text{Belle(2015)+}$ & $\text{}$ & $\text{}$ & $8$ & $45.71$ & $26$ & $C_{V_2}$, $C_{S_2}$ & $0.16$ & $0.01$ & \\
$\text{LHCb+}$ & $\text{6}$ & $\text{Combined}$ & $7$ & $46.87$ & $26$ & $C_{V_2}$, $C_{S_1}$ & $0.09$ & $0.02$ & 96.68\\
$\text{Belle(Latest)}$ & $\text{}$ & $\text{}$ & $6$ & $47.24$ & $26$ & $C_{V_1}$, $C_{V_2}$ & $0.08$ & $0.04$ & \\
\hline
$\text{Belle(2016)+}$ & $\text{}$ & $\text{}$ & $2$ & $28.81$ & $19$ & $C_{V_2}$ & $0.34$ & $0.64$ & \\
$\text{Belle(2015)+}$ & $\text{}$ & $\text{}$ & $1$ & $30.81$ & $19$ & $C_{V_1}$ & $0.13$ & $0.77$ & \\
$\text{LHCb+}$ & $\text{7}$ & $\text{Combined}$ & $4$ & $31.29$ & $19$ & $C_{S_2}$ & $0.1$ & $0.83$ & 32.72\\
$\text{Belle New}$ & $\text{}$ & $\text{}$ & $3$ & $31.48$ & $19$ & $C_{S_1}$ & $0.09$ & $0.91$ & \\
$\text{}$ & $\text{}$ & $\text{}$ & $5$ & $31.52$ & $19$ & $C_{T}$ & $0.09$ & $0.82$ & \\
\hline
\end{tabular}}
\end{center}
\caption{The best selected scenarios for ``Fit-1'' (section \ref{sec:resultfit1}).
The cases listed in order in the fourth column for each dataset have passed the selection criterion
$0 \le\Delta^{AIC}_i \le 4$, where $\Delta^{AIC}_1 = 0$ in each dataset.
Note that the case-index values represent a specific set of parameters, and each parameter listed here is considered to be complex, so
the number of parameters is actually doubled. $w_i$ in the eighth column is defined in eq.(\ref{omegai}).
The next column lists the results of the S-W normality test
for the assessment of goodness-of-fit. The last column lists the $\chi^2$ value corresponding to the SM for each dataset. Note that the AIC$_c$ value for the SM is the same as its $\chi^2$, since the number of fit parameters is $K=0$ for the SM.}
\label{tab:Result1}
\end{table*}
The `concept of parsimony' \cite{boxjenkins} dictates that a model representing the truth should be obtained with
``... the smallest possible number of parameters for adequate representation of the data.''
In general, bias decreases and variance increases as the
dimension of the model increases. Often, the number of parameters in a model is used as a measure of the degree of
structure inferred from the data. The fit of any model can be improved by increasing the number of parameters.
Parsimonious models achieve a proper trade-off between bias and variance. All model selection methods are based to
some extent on the principle of parsimony \cite{breiman}.
In information theory, the Kullback-Leibler (K-L) information $I(f,g)$ denotes the information lost when
$g$ is used to approximate $f$. Here, $f$ stands for full reality or truth, and $g$ denotes an approximating
model in terms of a probability distribution. $I(f,g)$ can also be defined between the `best' approximating model
and a competing one. Akaike, in his seminal paper \cite{akaike73}, proposed the use of the K-L information as a fundamental
basis for model selection. However, the K-L distance cannot be computed without full knowledge of both $f$ (full reality)
and the parameters ($\Theta$) in each of the candidate models $g_i(x|\Theta)$ (a model $g_i$ with parameter set $\Theta$
explaining the data $x$). Akaike found a rigorous way to estimate the
K-L information, based on the empirical log-likelihood function at its maximum point.
`Akaike's information criterion' (AIC), in the context of our analysis, can be defined as
\begin{equation}
{\rm AIC} = \chi^2_{min} + 2 K\,
\label{aic}
\end{equation}
where $K$ is the number of estimable parameters. In application, one computes AIC for each of the candidate models and
selects the model with the smallest value of AIC. It is this model that is estimated to be ``closest'' to the unknown
reality that generated the data, from among the candidate models considered.
While Akaike derived an estimator of the K-L information, AIC may perform poorly if there are too many parameters in relation
to the size of the sample. Sugiura \cite{sugiura78} derived a second-order variant of AIC,
\begin{equation}
{\rm AIC}_c = \chi^2_{min} + 2 K + \frac{2 K (K+1)}{n - K -1}\,
\label{aicc}
\end{equation}
where $n$ is the sample size. As a rule of thumb, the use of AIC$_c$ is preferred in the literature when $n/K < 40$. Various
other such information criteria have been defined since, e.g. QAIC, QAIC$_c$, TIC etc. In this analysis, we consistently
use AIC$_c$.
Whereas AIC$_c$ values are on a relative (or interval) scale and are strongly dependent on sample size, simple differences of
AIC$_c$ values ($\Delta^{AIC}_i = {\rm AIC}^i_c - {\rm AIC}^{min}_c$) allow estimates of the relative expected K-L
differences between $f$ and $g_i(x|\Theta)$. This allows a quick comparison and ranking of the candidate models.
The model estimated to be best has $\Delta^{AIC}_i \equiv \Delta^{AIC}_{min} = 0$. The larger $\Delta^{AIC}_i$ is,
the less plausible it is that the fitted model $g_i(x|\Theta)$ is the
K-L best model, given the data $x$. Table \ref{tab:delAICrule} lists rough rule-of-thumb values of $\Delta^{AIC}_i$
for the analysis of nested models.
While the $\Delta^{AIC}_i$ are useful in ranking the models, it is possible to quantify the plausibility of each model as
being the actual K-L best model. This can be done by extending the concept of the likelihood of the parameters given both
the data and model, i.e. $\mathcal{L}(\Theta|x, g_i)$, to the concept of the likelihood of the model given the data,
hence $\mathcal{L}(g_i|x)$;
\begin{equation}
\mathcal{L}(g_i|x) \propto e^{(-\Delta^{AIC}_i / 2)}\,.
\end{equation}
Such likelihoods represent the relative strength of evidence for each model \cite{akaike83a}.
To better interpret the relative likelihood of a model, given the data and the set of $R$ models, we normalize the
$\mathcal{L}(g_i|x)$ to a set of positive ``Akaike weights'', $w_i$, adding up to $1$:
\begin{equation}\label{omegai}
w_i = \frac{e^{(-\Delta^{AIC}_i / 2)}}{\sum_{r = 1}^R e^{(-\Delta^{AIC}_r / 2)}}
\end{equation}
A given $w_i$ is considered as the weight of evidence in favor of model $i$ being the actual K-L best model for the
situation at hand, given that one of the $R$ models must be the K-L best model of that set.
The $w_i$ depend on the entire set; therefore, if a model is added or dropped during a post hoc analysis,
the $w_i$ must be recomputed for all the models in the newly defined set.
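As an illustration of eqs. (\ref{aicc}) and (\ref{omegai}), a minimal Python sketch follows; the case names and numbers are taken from dataset-1 of table \ref{tab:Result3} purely for illustration, and the variable names are ours:
\begin{verbatim}
import numpy as np

def aic_c(chi2_min, K, n):
    # Second-order AIC, eq. (aicc)
    return chi2_min + 2.0 * K + 2.0 * K * (K + 1) / (n - K - 1)

# Illustrative candidates: name -> (chi2_min, no. of real parameters K)
cases = {"C_T": (7.28, 2), "C_V1": (7.65, 2), "C_V2": (7.65, 2),
         "C_S1": (8.56, 2), "C_S2": (8.56, 2)}
n = 14  # sample size (number of observables)

aicc = {m: aic_c(c2, K, n) for m, (c2, K) in cases.items()}
delta = {m: a - min(aicc.values()) for m, a in aicc.items()}
norm = sum(np.exp(-d / 2.0) for d in delta.values())
w = {m: float(np.exp(-d / 2.0) / norm) for m, d in delta.items()}
\end{verbatim}
Since the $w_i$ are normalized over the entire candidate set, the weights returned by this toy computation differ from those in table \ref{tab:Result3}, which are normalized over all $31$ cases.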
\subsubsection{Numerical Multi-parameter Optimization}\label{sec:npnumanalys}
To compare the latest {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~and Belle binned data with a specific model, we devise a $\chi^2$ defined as:
\begin{equation}
\chi^2_{NP} = \sum^{n_b}_{i,j = 1} \left(R^{exp}_i - R^{th}_i\right) \left(V^{exp} + V^{th}\right)^{-1}_{i j}
\left(R^{exp}_j - R^{th}_j\right)\,,
\end{equation}
where $R^{th}_{bin}$ and $R^{exp}_{bin}$ are defined in eqs. (\ref{Rth}) and (\ref{RexpBaBe}), and $i$ and
$j$ run over the $n_b$ bins included in the analysis. For the calculation of $R^{th}_{bin}$,
central values of the HQET hadronic form-factors and the quark masses are used (listed in table \ref{tab:thinput}).
The standard bin-width for the {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~analysis is $0.5~(\text{GeV}^2 / c^4)$, and because of this the last bin exceeds
the allowed phase space ($q^2_{max} = \left(m_B - m_{D^{(*)}}\right)^2$) in both channels. Instead of changing the
bin width for those last bins, we drop them from our analysis. We follow the same approach for the Belle bins.
$V^{th}_{i j}$ and $V^{exp}_{i j}$ are the theoretical and experimental covariance matrices respectively.
For the analysis of any specific NP model, the uncertainties of the HQET hadronic form-factors and the
quark masses (table \ref{tab:thinput}) are taken into account in the calculation of $V^{th}_{i j}$.
To calculate the errors $\delta R^{exp}_{bin}$, we use eq. (\ref{RexpBaBe}) according to the
case and propagate the errors listed in table \ref{tab:expinput}. Following the reasoning stated in section \ref{latparfit},
we break the NP analysis into two parts: `Fit-1' and `Fit-2'. In addition to $V^{th}_{ij}$, we treat $V^{exp}_{ij}$ exactly
as the $V_{ij}$ in section \ref{latparfit}.
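As a concrete sketch of this construction (a minimal Python illustration; the function and argument names are ours and not tied to any particular library):
\begin{verbatim}
import numpy as np

def chi2_np(R_exp, R_th, V_exp, V_th):
    # chi^2_NP = r^T (V_exp + V_th)^{-1} r over the q^2 bins
    r = np.asarray(R_exp) - np.asarray(R_th)
    V = np.asarray(V_exp) + np.asarray(V_th)
    return float(r @ np.linalg.solve(V, r))  # solve() avoids an explicit inverse
\end{verbatim}
In practice $R^{th}_{bin}$ depends on the NP Wilson coefficients, so this function is wrapped in a function of those coefficients and handed to a numerical minimizer.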
We define the $\chi^2$ statistic for each of the 31 cases as a function of the NP Wilson coefficients.
The definition and usage of the observables closely follow the fitting process in section \ref{latparfit}.
Here, we take the existing world-averages of the parameters of the form-factors \cite{hfag}.
If we include all the NP interactions, we have a total of 10 unknown NP parameters, with 26 observables
for {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~(14 bins for $B \rightarrow D \tau \nu$ and 12 bins for $B \rightarrow D^* \tau \nu$) and 17 observables for
Belle. We then minimize the $\chi^2$ for the different cases and different sets of observables.
Although we have experimented with various global optimization methods for the minimization, the choice of method
is not important for the present analysis, given the large uncertainties involved.
To glean information on the goodness-of-fit from $\chi^2_{min}$, we need to know the number of degrees of freedom
($d.o.f = N_{Obs} - N_{Params}$). A reduced statistic $\chi^2_{red} = \chi^2_{min} / d.o.f$ can thus be defined.
In many cases in our optimization problem, the minimum is not an isolated point but rather a contour
in the parameter space. For these cases the Hessian is not positive definite and the errors thus obtained
are meaningless. In those cases, the $1 \sigma$ uncertainties have to be found from the contours in the parameter
space, and we have done so for all cases with $2$--$3$ parameters. As contours cannot be drawn when the number of
parameters exceeds $3$, we have devised a numerical method to obtain the range of a parameter.
In this method, we sequentially minimize or maximize each parameter by scanning along the enclosing
$1 \sigma$ $\chi^2_{NP}$ hyper-contour surface (the method generalizes to any $n \sigma$ contour).
These extrema give the range of each parameter while taking the correlations into account throughout.
The resulting errors are, for obvious reasons, asymmetric. We have systematically determined these uncertainties
for all cases and will in general quote them in our results.
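One possible realization of this contour-scanning method, sketched with constrained optimization in Python (scipy assumed; the actual computation may differ in detail, and the argument here is a callable returning $\chi^2_{NP}(\theta)$ for the case at hand):
\begin{verbatim}
from scipy.optimize import minimize
from scipy.stats import chi2

def param_range(chi2_fun, theta_best, chi2_min, k, n_params, n_sigma=1):
    # Delta chi^2 defining the joint n-sigma region with d.o.f = n_params
    level = chi2.ppf(chi2.cdf(n_sigma**2, 1), n_params)
    region = [{"type": "ineq",
               "fun": lambda th: chi2_min + level - chi2_fun(th)}]
    lo = minimize(lambda th: th[k], theta_best,
                  constraints=region, method="SLSQP")
    hi = minimize(lambda th: -th[k], theta_best,
                  constraints=region, method="SLSQP")
    return lo.x[k], hi.x[k]  # asymmetric range of parameter k
\end{verbatim}
In practice one would restart the optimizer from several points on the contour to guard against local extrema.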
In our present analysis, after optimizing the $\chi^2_{NP}$ for all $31$ cases, we make use of $\Delta^{AIC}_i$ and
$w_i$ to find the `best' set of cases, i.e. those more favorable than the others, and analyze them further.
After selecting with AIC$_c$ a class of models that describe the data with an optimal trade-off between bias and
variance, we check their significance to find the model best suited to describe the data.
\subsubsection{Note on Model Selection Criteria}\label{sec:result3}
Unlike the AIC$_c$ or the Schwarz-Bayesian criterion (BIC) \cite{bic}, which incorporate the concept of
parsimony and can be applied to nested as well as non-nested models, the likelihood-ratio test, more commonly known as the
$\Delta \chi^2$ test, can only be applied to nested models. When the model with the fewer free parameters (the null,
in this case) is true, and when certain conditions are satisfied, Wilks' theorem \cite{Wilks:1938dza} says that this
difference ($\Delta\chi^2$) should follow a
$\chi^2$ distribution with the number of degrees of freedom equal to the difference in the number of free parameters of the
two models. This lets one compute a $p$-value and compare it to a critical value to decide whether to reject the null
model in favor of the alternative model.
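As a worked example of this prescription (using the numbers of table \ref{tab:modselnote1} below and scipy's $\chi^2$ survival function):
\begin{verbatim}
from scipy.stats import chi2

# Cases A (chi2 = 27.35, dof = 41) vs B (chi2 = 19.11, dof = 39)
delta_chi2 = 27.35 - 19.11   # = 8.24
delta_dof = 41 - 39          # = 2
p = chi2.sf(delta_chi2, delta_dof)
print(round(p, 2))           # 0.02, as in the (A, B) row of the table
\end{verbatim}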
\begin{table}[!hbt]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Case Index & Parameters & $\chi^2_{min}$ & $d.o.f$\\
\hline
$A$ & $C_{V_2}$ & $27.35$ & $41$ \\
$B$ & $C_{V_2}$, $C_{S_2}$ & $19.11$ & $39$ \\
$C$ & $C_{V_2}$, $C_{S_1}$, $C_{S_2}$ & $18.68$ & $37$ \\
$D$ & $C_{V_2}$, $C_{S_1}$, $C_{S_2}$, $C_{T}$ & $18.39$ & $35$\\
\hline
\end{tabular}
\end{center}
\caption{The cases with the lowest $\chi^2_{min}$ values in sets grouped by number of parameters, for dataset-5 of
`Fit-2' (e.g. scenario $A$ is the case with the lowest $\chi^2$ among all the 2-parameter cases analyzed in `Fit-2',
$B$ is the best among all 4-parameter cases, and so on).}
\label{tab:modselnote1}
\end{table}
For a demonstration of this method, as an example, we have taken dataset-5 from `Fit-2' (table \ref{tab:Result1}) as our
experimental input and separated all the cases into different sets according to their number of parameters. All the cases
in such a set thus have the same number of parameters, and the best among them has the lowest
$\chi^2$ at its best-fit point. Only the best cases, with their $\chi^2$ values and $d.o.f$s, are listed in table
\ref{tab:modselnote1}. Whereas the AIC$_c$ analysis picked up a group of best possible scenarios, here
we have used all the cases for comparison. Then, in table \ref{tab:modselnote2}, we have compared different combinations of
these best cases from table \ref{tab:modselnote1} in a $\Delta\chi^2$ test. From table \ref{tab:modselnote2} it can be seen
that case $A$ (i.e. with only $C_{V_2}$) is disfavored in comparison to $B$ and $C$ (though it cannot be discarded at
a significance of $5\%$ in comparison with $D$, the $p$-value obtained is rather small), whereas $B$ is favored with very
high $p$-values when compared to cases with a larger number of parameters. This analysis thus picks out the
case with both $C_{V_2}$ and $C_{S_2}$ as the winning model.
\begin{table}[!hbt]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Cases Compared & $\Delta\chi^2$ & $\Delta d.o.f$ & $p$-value\\
\hline
$A$, $B$ & 8.24 & 2 & 0.02 \\
$A$, $C$ & 8.67 & 4 & 0.07 \\
$A$, $D$ & 8.96 & 6 & 0.18 \\
$B$, $C$ & 0.42 & 2 & 0.81 \\
$B$, $D$ & 0.72 & 4 & 0.95 \\
$C$, $D$ & 0.30 & 2 & 0.86 \\
\hline
\end{tabular}
\end{center}
\caption{$\Delta\chi^2$ analysis of the best models obtained from table \ref{tab:modselnote1}.}
\label{tab:modselnote2}
\end{table}
Though all competing models in our analysis are nested, merely being able to reject one model relative to another is clearly not enough.
On the other hand, the BIC (also defined with the help of the likelihood function) is given by:
\begin{align}
{\rm BIC} = \chi^2 + (\log{n})~p
\label{bic}
\end{align}
where $n$ is the sample size and $p$ is the number of parameters. We can then define $\Delta$BIC in a similar manner as
$\Delta$AIC. In \cite{kass}, the authors have shown that $0 < \Delta\text{BIC} < 2$ selects the best models.
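The corresponding computation for eq. (\ref{bic}) is equally short; as an illustration (with the $\chi^2_{min}$ values and parameter counts of table \ref{tab:modselnote1}, and assuming the $n=43$ observables of dataset-5 as the BIC sample size):
\begin{verbatim}
import numpy as np

def bic(chi2_min, p, n):
    # Schwarz-Bayesian criterion, eq. (bic)
    return chi2_min + np.log(n) * p

candidates = {"A": (27.35, 2), "B": (19.11, 4),
              "C": (18.68, 6), "D": (18.39, 8)}
n = 43  # number of observables in dataset-5
bics = {m: bic(c2, p, n) for m, (c2, p) in candidates.items()}
dbic = {m: b - min(bics.values()) for m, b in bics.items()}
\end{verbatim}
With these inputs, case $B$ comes out with the lowest BIC, in line with the $\Delta\chi^2$ discussion above.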
AIC$_c$ and BIC were originally derived under different assumptions and are useful in different settings.
AIC$_c$ was derived under the assumption that the true model requires an infinite number of parameters, and it attempts to
minimize the information lost by using a given finite-dimensional model to approximate it. BIC was derived as a
large-sample approximation to Bayesian selection among a fixed set of finite-dimensional models. Formally, the two
criteria differ only in the penalty term, which in the BIC is extended to take the sample size into account.
As can be seen from eqs. (\ref{aic}), (\ref{aicc}) and (\ref{bic}), the two criteria may therefore produce quite different
results for large $n$.
The reasons we prefer AIC$_c$ over BIC are as follows:
\begin{enumerate}
\item BIC applies a much larger penalty to complex models and hence may lead to a simpler model than AIC$_c$.
In general, BIC penalizes models with more parameters more heavily than AIC$_c$ does and thus leads to choosing more parsimonious models.
\item While AIC compares the cases as approximations of some true model, BIC tries to identify the best model as the true model.
This is one of the prevalent arguments against BIC.
\item For realistic sample sizes, BIC-selected models may underfit the data.
\end{enumerate}
For a comparative study, we have included table \ref{tab:Result4}, which lists the best scenarios obtained from
``Fit-2'' using both AIC$_c$ and BIC. To make the BIC selection on par with AIC$_c$, i.e. more lenient, we have chosen the
range $0-4$, the same as for $\Delta$AIC. We note that, in our case, the same sets of scenarios/models are
selected by both criteria.
Both AIC and BIC (and similar criteria) fail when the models being compared have the same number of independent parameters and
comparable likelihoods. In such cases, a `parametric bootstrap' \cite{paraboot} can be used, but such an
analysis is beyond the scope of the present work.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset Index & Case & Parameter & Best-fit value & $\pm$ Error \\
\hline
$\text{}$ & $5$ & $\text{$Re(C_T)$}$ & $0.27$ & $0.10$ \\
$\text{}$ & & $\text{$Im(C_T)$}$ & $0.00$ & $1.06$ \\
\cline{2-5}
$\text{1}$ & $3$ & $\text{$Re(C_{S_1})$}$ & $0.09$ & $0.06$ \\
$\text{}$ & & $\text{$Im(C_{S_1})$}$ & $0.00$ & $0.30$ \\
\cline{2-5}
$\text{}$ & $4$ & $\text{$Re(C_{S_2})$}$ & $0.09$ & $0.06$ \\
$\text{}$ & & $\text{$Im(C_{S_2})$}$ & $0.00$ & $0.30$ \\
\hline
$\text{}$ & & $\text{$Re(C_{V_2})$}$ & $0.27$ & $0.03$ \\
$\text{2}$ & $2$ & $\text{$Im(C_{V_2})$}$ & $0.00$ & $0.40$ \\
\hline
$\text{}$ & & $\text{$Re(C_{V_2})$}$ & $0.34$ & $0.12$ \\
$\text{}$ & & $\text{$Im(C_{V_2})$}$ & $-0.33$ & $0.20$ \\
$\text{}$ & $8$ & $\text{$Re(C_{S_2})$}$ & $-0.52$ & $0.43$ \\
$\text{}$ & & $\text{$Im(C_{S_2})$}$ & $-0.16$ & $0.21$ \\
\cline{2-5}
$\text{3}$ & $2$ & $\text{$Re(C_{V_2})$}$ & $0.23$ & $0.02$ \\
$\text{}$ & & $\text{$Im(C_{V_2})$}$ & $0.00$ & $0.09$ \\
\cline{2-5}
$\text{}$ & & $\text{$Re(C_{V_2})$}$ & $0.29$ & $0.03$ \\
$\text{}$ & & $\text{$Im(C_{V_2})$}$ & $0.00$ & $0.54$ \\
$\text{}$ & $7$ & $\text{$Re(C_{S_1})$}$ & $-0.23$ & $0.09$ \\
$\text{}$ & & $\text{$Im(C_{S_1})$}$ & $0.00$ & $0.48$ \\
\hline
$\text{}$ & & $\text{$Re(C_{V_2})$}$ & $0.58$ & $0.69$ \\
$\text{4}$ & $2$ & $\text{$Im(C_{V_2})$}$ & $-0.59$ & $0.37$ \\
\hline
$\text{}$ & & $\text{$Re(C_{V_2})$}$ & $0.34$ & $0.12$ \\
$\text{}$ & & $\text{$Im(C_{V_2})$}$ & $-0.35$ & $0.21$ \\
$\text{}$ & $8$ & $\text{$Re(C_{S_2})$}$ & $-0.51$ & $0.46$ \\
$\text{}$ & & $\text{$Im(C_{S_2})$}$ & $-0.14$ & $0.22$ \\
\cline{2-5}
$\text{5}$ & & $\text{$Re(C_{V_2})$}$ & $0.23$ & $0.02$ \\
$\text{}$ & $2$ & $\text{$Im(C_{V_2})$}$ & $0.00$ & $0.09$ \\
\cline{2-5}
$\text{}$ & & $\text{$Re(C_{V_2})$}$ & $0.28$ & $0.03$ \\
$\text{}$ & & $\text{$Im(C_{V_2})$}$ & $0.00$ & $0.70$ \\
$\text{}$ & $7$ & $\text{$Re(C_{S_1})$}$ & $-0.22$ & $0.08$ \\
$\text{}$ & & $\text{$Im(C_{S_1})$}$ & $0.00$ & $0.55$ \\
\hline
& $2$ & $Re(C_{V_2})$ & $0.10$ & $0.05$ \\
& & $Im(C_{V_2})$ & $-0.23$ & $0.17$ \\
\cline{2-5}
& $4$ & $Re(C_{S_2})$ & $-0.93$ & $0.73$ \\
& & $Im(C_{S_2})$ & $-0.69$ & $0.32$ \\
\cline{2-5}
7 & $3$ & $Re(C_{S_1})$ & $0.14$ & $0.08$ \\
& & $Im(C_{S_1})$ & $0.00$ & $0.43$ \\
\cline{2-5}
& $5$ & $Re(C_{T})$ & $0.04$ & $0.03$ \\
& & $Im(C_{T})$ & $0.00$ & $0.03$ \\
\hline
\end{tabular}
\end{center}
\caption{Best-fit values and Gaussian errors of all parameters for the selected `best' cases for `Fit-1', listed in table
\ref{tab:Result1}. Some cases are omitted for the reason explained in section \ref{sec:goodfit}; the corresponding plots are
shown in fig. \ref{fig:contours}.}
\label{tab:Result2}
\end{table}
\begin{table}
\begin{center}
{\renewcommand{\arraystretch}{1.1}%
\begin{tabular}{|c|c|c|}
\hline
~Dataset~ & ~Cases with~ & ~Cases with~ \\
Index. & $0 < \Delta\text{AIC}_c < 4$ & $0 < \Delta\text{BIC} < 4$\\
\hline
& $C_{T}$ & $C_{T}$ \\
& $C_{V_1}$ & $C_{V_1}$ \\
1 & $C_{V_2}$ & $C_{V_2}$ \\
& $C_{S_1}$ & $C_{S_1}$ \\
& $C_{S_2}$ & $C_{S_2}$ \\
\hline
2 & $C_{V_1}$ & $C_{V_1}$ \\
& $C_{V_2}$ & $C_{V_2}$ \\
\hline
& $C_{V_2}$ & $C_{V_2}$ \\
3 & $C_{V_2}$, $C_{S_2}$ & $C_{V_2}$, $C_{S_2}$ \\
& $C_{V_1}$ & $C_{V_1}$ \\
& $C_{V_2}$, $C_{S_1}$ & $-$ \\
\hline
4 & $C_{V_2}$ & $C_{V_2}$ \\
& $C_{V_1}$ & $C_{V_1}$ \\
\hline
& $C_{V_2}$, $C_{S_2}$ & $C_{V_2}$, $C_{S_2}$ \\
& $C_{V_2}$, $C_{S_1}$ & $C_{V_2}$ \\
5 & $C_{V_1}$, $C_{V_2}$ & $C_{V_1}$ \\
& $C_{V_2}$ & $C_{V_2}$, $C_{S_1}$ \\
& $C_{V_1}$ & $C_{V_1}$, $C_{V_2}$ \\
\hline
& $C_{V_2}$ & $C_{V_2}$ \\
6 & $C_{V_2}$, $C_{S_2}$ & $C_{V_2}$, $C_{S_2}$ \\
& $C_{V_2}$, $C_{S_1}$ & $-$ \\
\hline
& $C_{S_1}$ & $C_{S_1}$ \\
& $C_{T}$ & $C_{T}$ \\
7 & $C_{S_2}$ & $C_{S_2}$ \\
& $C_{V_2}$ & $C_{V_2}$ \\
& $C_{V_1}$ & $C_{V_1}$ \\
\hline
\end{tabular}}
\end{center}
\caption{The best selected scenarios for ``Fit-2'' (section \ref{sec:resultfit2}). Here we compare the performance of AIC$_c$
with BIC in model selection.}
\label{tab:Result4}
\end{table}
\begin{table*}
\begin{center}
{\renewcommand{\arraystretch}{1.1}%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
~Experiment~ & ~Dataset~ & ~Observables~ & ~Cases~ & $~\chi^2_{min}~$ & ~$d.o.f$~ & ~Parameters~ & ~Akaike weights~ & ~Normality~ & ~$\chi^2$ (SM)~ \\
& Index. & & & & & & ($w_i$) & (S-W) & \\
\hline
$\text{}$ & $\text{}$ & $\text{}$ & $5$ & $7.28$ & $12$ & $C_T$ & $0.25$ & $0.54$ & \\
$\text{}$ & $\text{}$ & $\text{}$ & $1$ & $7.65$ & $12$ & $C_{V_1}$ & $0.20$ & $0.38$ & \\
$\text{}$ & $\text{1}$ & $\text{$R(D)_{bin}$}$ & $2$ & $7.65$ & $12$ & $C_{V_2}$ & $0.20$ & $0.38$ & 8.63\\
$\text{}$ & $\text{}$ & $\text{}$ & $3$ & $8.56$ & $12$ & $C_{S_1}$ & $0.13$ & $0.15$ & \\
$\text{}$ & $\text{}$ & $\text{}$ & $4$ & $8.56$ & $12$ & $C_{S_2}$ & $0.13$ & $0.15$ & \\
\cline{2-10}
$\text{{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}}$ & $\text{2}$ & $\text{$R(D^*)_{bin}$}$ & $1$ & $5.71$ & $10$ & $C_{V_1}$ & $0.57$ & $0.17$ & 20.20\\
$\text{}$ & $\text{}$ & $\text{}$ & $2$ & $6.75$ & $10$ & $C_{V_2}$ & $0.34$ & $0.09$ & \\
\cline{2-10}
$\text{}$ & $\text{}$ & $\text{}$ & $2$ & $15.68$ & $24$ & $C_{V_2}$ & $0.48$ & $0.53$ & \\
$\text{}$ & $\text{3}$ & $\text{Combined}$ & $8$ & $12.32$ & $22$ & $C_{V_2}$, $C_{S_2}$ & $0.18$ & $0.73$ & 70.44\\
$\text{}$ & $\text{}$ & $\text{}$ & $1$ & $19.03$ & $24$ & $C_{V_1}$ & $0.09$ & $0.3$ & \\
$\text{}$ & $\text{}$ & $\text{}$ & $7$ & $14.13$ & $22$ & $C_{V_2}$, $C_{S_1}$ & $0.07$ & $0.81$ & \\
\hline
$\text{Belle(2016)}$ & $\text{4}$ & $\text{$R(D^*)_{bin}$}$ & $2$ & $6.44$ & $15$ & $C_{V_2}$ & $0.47$ & $0.74$ & 17.76\\
$\text{}$ & $\text{}$ & $\text{}$ & $1$ & $6.92$ & $15$ & $C_{V_1}$ & $0.37$ & $0.86$ & \\
\hline
$\text{}$ & $\text{}$ & $\text{}$ & $8$ & $19.11$ & $39$ & $C_{V_2}$, $C_{S_2}$ & $0.41$ & $0.72$ & \\
$\text{{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}+}$ & $\text{}$ & $\text{}$ & $7$ & $21.54$ & $39$ & $C_{V_2}$, $C_{S_1}$ & $0.12$ & $0.72$ & \\
$\text{Belle(2016)}$ & $\text{5}$ & $\text{Combined}$ & $6$ & $21.75$ & $39$ & $C_{V_1}$, $C_{V_2}$ & $0.11$ & $0.12$ & 87.91\\
$\text{}$ & $\text{}$ & $\text{}$ & $2$ & $27.35$ & $41$ & $C_{V_2}$ & $0.07$ & $0.96$ & \\
$\text{}$ & $\text{}$ & $\text{}$ & $1$ & $27.42$ & $41$ & $C_{V_1}$ & $0.07$ & $0.65$ & \\
\hline
$\text{{\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}+}$ & $\text{}$ & $\text{}$ & $2$ & $47.88$ & $28$ & $C_{V_2}$ & $0.49$ & $0.03$ & \\
$\text{Belle(2015)+}$ & $\text{6}$ & $\text{Combined}$ & $8$ & $44.97$ & $26$ & $C_{V_2}$, $C_{S_2}$ & $0.16$ & $0.01$ & 85.47\\
$\text{LHCb+}$ & $\text{}$ & $\text{}$ & $7$ & $46.47$ & $26$ & $C_{V_2}$, $C_{S_1}$ & $0.08$ & $0.01$ & \\
Belle(Latest) & & & & & & & & & \\
\hline
$\text{Belle(2016)+}$ & $\text{}$ & $\text{}$ & $3$ & $27.68$ & $19$ & $C_{S_1}$ & $0.21$ & $0.56$ & \\
$\text{Belle(2015)+}$ & $\text{}$ & $\text{}$ & $5$ & $27.83$ & $19$ & $C_{T}$ & $0.19$ & $0.44$ & \\
$\text{LHCb+}$ & $\text{7}$ & $\text{Combined}$ & $4$ & $27.93$ & $19$ & $C_{S_2}$ & $0.18$ & $0.84$ & 29.85\\
$\text{Belle(Latest)}$ & $\text{}$ & $\text{}$ & $2$ & $28.00$ & $19$ & $C_{V_2}$ & $0.18$ & $0.86$ & \\
$\text{}$ & $\text{}$ & $\text{}$ & $1$ & $29.82$ & $19$ & $C_{V_1}$ & $0.07$ & $0.75$ & \\
\hline
\end{tabular}}
\end{center}
\caption{The best selected scenarios for ``Fit-2'' (section \ref{sec:resultfit2}). For clarification of the columns, please see
the caption of table \ref{tab:Result1}.}
\label{tab:Result3}
\end{table*}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset Index & Case & Parameter & Best-fit value & $\pm$ Error \\
\hline
& $5$ & $Re(C_T)$ & $0.36$ & $0.19$ \\
& & $Im(C_T)$ & $0.00$ & $1.14$ \\
\cline{2-5}
1 & $3$ & $Re(C_{S_1})$ & $-0.05$ & $0.16$ \\
& & $Im(C_{S_1})$ & $0.00$ & $0.28$ \\
\cline{2-5}
& $4$ & $Re(C_{S_2})$ & $-0.05$ & $0.16$ \\
& & $Im(C_{S_2})$ & $0.00$ & $0.28$ \\
\hline
2 & $2$ & $Re(C_{V_2})$ & $0.34$ & $0.08$ \\
& & $Im(C_{V_2})$ & $0.00$ & $0.38$ \\
\hline
& $2$ & $Re(C_{V_2})$ & $0.22$ & $0.02$ \\
& & $Im(C_{V_2})$ & $0.00$ & $0.20$ \\
\cline{2-5}
& & $Re(C_{V_2})$ & $0.31$ & $0.11$ \\
& & $Im(C_{V_2})$ & $0.04$ & $2.16$ \\
3 & $8$ & $Re(C_{S_2})$ & $-0.52$ & $0.37$ \\
& & $Im(C_{S_2})$ & $0.004$ & $0.46$ \\
\cline{2-5}
& & $Re(C_{V_2})$ & $0.31$ & $0.06$ \\
& $7$ & $Im(C_{V_2})$ & $0.00$ & $0.36$ \\
& & $Re(C_{S_1})$ & $-0.31$ & $0.21$ \\
& & $Im(C_{S_1})$ & $0.00$ & $0.40$ \\
\hline
4 & $2$ & $Re(C_{V_2})$ & $0.67$ & $0.69$ \\
& & $Im(C_{V_2})$ & $-0.36$ & $0.47$ \\
\hline
& & $Re(C_{V_2})$ & $0.34$ & $0.04$ \\
& & $Im(C_{V_2})$ & $0.00$ & $0.62$ \\
& $8$ & $Re(C_{S_2})$ & $-0.61$ & $0.22$ \\
& & $Im(C_{S_2})$ & $0.00$ & $0.34$ \\
\cline{2-5}
& & $Re(C_{V_2})$ & $0.37$ & $0.06$ \\
5 & & $Im(C_{V_2})$ & $0.00$ & $0.23$ \\
& $7$ & $Re(C_{S_1})$ & $-0.51$ & $0.20$ \\
& & $Im(C_{S_1})$ & $0.00$ & $0.30$ \\
\cline{2-5}
& $2$ & $Re(C_{V_2})$ & $0.22$ & $0.02$ \\
& & $Im(C_{V_2})$ & $0.00$ & $0.11$ \\
\hline
& $3$ & $Re(C_{S_1})$ & $-0.45$ & $0.47$ \\
& & $Im(C_{S_1})$ & $0.80$ & $0.15$ \\
\cline{2-5}
& $5$ & $Re(C_T)$ & $0.09$ & $0.03$ \\
& & $Im(C_T)$ & $0.00$ & $0.06$ \\
\cline{2-5}
7 & $4$ & $Re(C_{S_2})$ & $0.18$ & $0.08$ \\
& & $Im(C_{S_2})$ & $0.00$ & $0.87$ \\
\cline{2-5}
& $2$ & $Re(C_{V_2})$ & $0.09$ & $0.05$ \\
& & $Im(C_{V_2})$ & $0.39$ & $0.14$ \\
\hline
\end{tabular}
\end{center}
\caption{Best-fit values and Gaussian errors of all parameters for the selected `best' cases for `Fit-2', listed in table
\ref{tab:Result3}. Some cases are omitted for the reason explained in section \ref{sec:goodfit}.}
\label{tab:Result3a}
\end{table}
\begin{figure*}[htbp]
\centering
\subfloat[Dataset $1$, Case $1$]{
\includegraphics[scale=0.25]{BaBD1}
\label{fig:BaBD1}}
\subfloat[Dataset $1$, Case $2$]{
\includegraphics[scale=0.25]{BaBD2}
\label{fig:BaBD2}}
\subfloat[Dataset $2$, Case $1$]{
\includegraphics[scale=0.25]{BaBDst1}
\label{fig:BaBDst1}}\\
\subfloat[Dataset $3$, Case $6$]{
\includegraphics[scale=0.25]{BaTot6a}
\label{fig:BaTot6a}}
\subfloat[Dataset $3$, Case $6$]{
\includegraphics[scale=0.25]{BaTot6b}
\label{fig:BaTot6b}}
\subfloat[Dataset $4$, Case $1$]{
\includegraphics[scale=0.25]{BeBDst1}
\label{fig:BeBDst1}}\\
\subfloat[Dataset $5$, Case $6$]{
\includegraphics[scale=0.25]{BaBem6a}
\label{fig:BaBem6a}}
\subfloat[Dataset $5$, Case $6$]{
\includegraphics[scale=0.25]{BaBem6b}
\label{fig:BaBem6b}}
\subfloat[Dataset $7$, Case $1$]{
\includegraphics[scale=0.25]{BeLBeNewm1}
\label{fig:BeLBeNewm1}}
\caption{The `cases' for different datasets listed in table \ref{tab:Result1}, which pass the goodness-of-fit hypothesis tests but
could not be listed in table \ref{tab:Result2} because, for these cases, the minimum, instead of being an isolated point, is actually
a contour in the parameter space. Though this is true for all plots listed here, some cases have four parameters and we are only
able to show two-parameter cross-sections of these (e.g. plots \ref{fig:BaTot6a} and \ref{fig:BaTot6b} are actually cross-sections of
a single four-dimensional plot; the same is true for \ref{fig:BaBem6a} and \ref{fig:BaBem6b}).}
\label{fig:contours}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Dataset $1$, Case $5$]{
\includegraphics[scale=0.3]{D1C5}
\label{fig:D1C5}}
\subfloat[Dataset $1$, Case $3$]{
\includegraphics[scale=0.3]{D1C3}
\label{fig:D1C3}}
\subfloat[Dataset $1$, Case $4$]{
\includegraphics[scale=0.3]{D1C4}
\label{fig:D1C4}}
\subfloat[Dataset $2$, Case $2$]{
\includegraphics[scale=0.3]{D2C2}
\label{fig:D2C2}}\\
\subfloat[Dataset $3$, Case $8$]{
\includegraphics[scale=0.3]{D3C8}
\label{fig:D3C8}}
\subfloat[Dataset $3$, Case $2$]{
\includegraphics[scale=0.3]{D3C2}
\label{fig:D3C2}}
\subfloat[Dataset $3$, Case $7$]{
\includegraphics[scale=0.3]{D3C7}
\label{fig:D3C7}}
\subfloat[Dataset $4$, Case $2$]{
\includegraphics[scale=0.3]{D4C2}
\label{fig:D4C2}}\\
\subfloat[Dataset $5$, Case $8$]{
\includegraphics[scale=0.3]{D5C8}
\label{fig:D5C8}}
\subfloat[Dataset $5$, Case $2$]{
\includegraphics[scale=0.3]{D5C2}
\label{fig:D5C2}}
\subfloat[Dataset $5$, Case $7$]{
\includegraphics[scale=0.3]{D5C7}
\label{fig:D5C7}}
\subfloat[Dataset $11$, Case $2$]{
\includegraphics[scale=0.3]{D11C2}
\label{fig:D11C2}}\\
\includegraphics[scale=0.8]{legend}
\caption{Array-plots showcasing the correlations between the fitted parameters of separate `cases' for different datasets
listed in table \ref{tab:Result2}. The color-coding is explained in the horizontal legend. As can be seen, for the cases with only
two independent parameters, the parameters are more strongly (negatively) correlated than in the other cases, as expected.}
\label{fig:corrplt}
\end{figure*}
\subsection{Results}\label{sec:resultsnp}
\subsubsection{Fit-1}\label{sec:resultfit1}
In this fit, as mentioned in the previous section, we do not consider the systematic errors or their correlations.
The most probable NP cases (scenarios), obtained after minimizing the $\chi^2_{NP}$ and ranking with $w_i$ (eq. (\ref{omegai})),
are listed in table \ref{tab:Result1}. Then, using the formalism defined in section \ref{sec:goodfit}, we find the distribution
of the residuals for all those fits and check whether that distribution is consistent with a normal distribution with mean
$0$ and variance $1$. As was mentioned and justified in section \ref{sec:goodfit}, we use the Shapiro-Wilk normality test
for this. In addition, to check the normality of the residuals, we use the graphical method
known as the quantile-quantile ($Q$-$Q$) plot; in general, $Q$-$Q$ plots are used to compare two probability distributions.
In fig. \ref{fig:hypotestnp}, we show the residual distributions compared with the reference Gaussian
($\mu = 0$, $\sigma = 1$). The $p$-value obtained from the normality test quantifies the degree to which the data are
consistent with $H_0$. In table \ref{tab:Result1}, the normality column lists the $p$-values of the performed S-W test.
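A sketch of this residual-diagnostics step in Python (scipy and matplotlib assumed; the residual array below is a random stand-in for the normalized fit residuals):
\begin{verbatim}
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Stand-in for the normalized residuals (R_exp - R_th)/sigma at the best fit
residuals = np.random.default_rng(0).standard_normal(24)

# Shapiro-Wilk test; the test is location/scale invariant, so it probes
# normality, while agreement with (mu, sigma) = (0, 1) is judged from the
# Q-Q plot against the standard normal reference.
W, p_value = stats.shapiro(residuals)

stats.probplot(residuals, dist="norm", plot=plt)
plt.title("Q-Q plot, S-W p = %.2f" % p_value)
plt.show()
\end{verbatim}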
Only those NP scenarios which pass the normality test are listed in table \ref{tab:Result2}, with the best-fit
values and $1 \sigma$ uncertainties of their parameters. In addition, some cases are not shown in the table
because the minimum, instead of being an isolated point,
is actually a contour in the parameter space. For such cases, we have plotted the best-fit contours in the parameter space;
these are shown in fig. \ref{fig:contours}. We have prepared these plots in terms of the goodness-of-fit contours for
the joint estimation of multiple NP parameters at a time. The $1\sigma$
and $4\sigma$ contours, equivalent to $p$-values of $0.3173$
and $0.0001$, correspond to confidence levels of $68.27\%$
and $99.99\%$, respectively.
For our purpose, each confidence interval corresponds to a particular value of
$X = \Delta\chi^2$ (i.e.\ $\chi^2 - \chi^2_{min}$) for a particular model with $d.o.f = N_{params}$, where the SM is
considered to be the model with no free parameters. For cases with up to 3 parameters, the errors on the parameters can be
estimated from the edges of the 2- or 3-dimensional contours, as these properly reflect the correlations between
the involved parameters.
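These thresholds follow directly from the $\chi^2$ quantile with $d.o.f = N_{params}$; for a two-parameter case, for instance:
\begin{verbatim}
from scipy.stats import chi2

for p_val in (0.3173, 0.0001):               # 1-sigma and 4-sigma p-values
    print(round(chi2.ppf(1 - p_val, 2), 2))  # Delta chi^2: 2.3 and 18.42
\end{verbatim}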
From table \ref{tab:Result1}, we note that all types of new interactions considered
in our analysis can individually explain the data on $R(D)_{bin}$ published by {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}.
However, when it comes to the $q^2$-distribution of the decay rate of $B\to D^{\ast}\tau\nu_{\tau}$, the {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~and Belle data
independently allow a contribution from a new left- or right-handed vector current effective operator (cases 1 and 2)
as a plausible explanation. Moreover, when the data ($q^2$-bins) from both {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~and Belle are combined, the most
likely scenarios are the cases with a new right-handed vector current, either alone or along with other new right- or
left-handed scalar current effective operators. In addition to the binned data, we have performed the analysis taking into account
the Belle and LHCb measurements of the $q^2$-integrated $R(D^{(\ast)})$ (see table \ref{tab:expinput} for numerical values).
The outcomes of these analyses are shown for datasets 6 and 7 in table \ref{tab:Result1}. No scenario
passes the normality test for dataset-6. In dataset-7, the
most likely scenarios are the new left- or right-handed scalar or vector current operators, though across all the
cases the reduced $\chi^2$ values are $>1$.
Across all the datasets discussed above, we note that wherever measurements of $R(D)$ are
included in our fit, the effective operators associated with the scalar current become relevant, either alone (less
preferable) or along with the right-handed vector current operator. This could be considered an indication that the
current data on $R(D)$ still allow a scalar current contribution as a possible explanation of the observed deviations.
Also, across all the scenarios which pass our predefined test criteria, a common NP explanation is case 2,
i.e.\ the presence of a new $(V + A)$ type interaction. Here, we cannot distinguish whether the new contribution
is a vector, a pseudo-vector, or both. However, if we combine the information obtained from the parametric fit of the form
factors, it is reasonable to conclude that the most favorable explanation of the present data on the decay $B\to D^{\ast}\tau\nu_{\tau}$
could be the presence of a pseudo-vector current.
\subsubsection{Fit-2}\label{sec:resultfit2}
In this fit, as mentioned earlier, we take the systematic error-sizes to be the same as the statistical ones and
assume 100\% correlation among them. The best cases according to their Akaike weights are listed in table \ref{tab:Result3}.
The results are obtained and analyzed in the same manner as for `Fit-1'. Here too, no fit result for dataset-6
passes the normality criteria; hence we drop that set from further analysis. The outcomes of the analyses of the remaining
datasets are similar to those obtained in `Fit-1', i.e.\ both fits lead to almost identical conclusions.
The only exception is that, here, the left-handed vector current becomes as important as the right-handed vector
current, i.e.\ apart from a new $(V+A)$ type interaction, the presence of a new $(V-A)$ type interaction can also be considered
a common NP explanation of the current data. The best-fit values of the fitted parameters, along with the corresponding
errors, are shown in table \ref{tab:Result3a}.
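The covariance treatment described above (systematic errors equal in size to the statistical ones and fully correlated) can be sketched as follows, with a placeholder error vector:
\begin{verbatim}
import numpy as np

sigma_stat = np.array([0.10, 0.08, 0.12])  # placeholder statistical errors
V_stat = np.diag(sigma_stat**2)
V_sys = np.outer(sigma_stat, sigma_stat)   # 100% correlated, same size
V_exp = V_stat + V_sys
\end{verbatim}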
\section{Summary}
We look for possible new physics effects in the decays $B\to D^{(\ast)}\tau\nu_{\tau}$ in the light of the recently available data from
Belle, {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}~and LHCb. At first, the form factors relevant to these decays are fitted assuming the absence of any
contribution from operators other than those of the SM. The fitted results are then compared with those obtained by
HFAG from a fit to the available data on $B\to D^{(\ast)}\ell\nu_{\ell}$. We note that the fit results for the parameter $R_2(1)$ largely
disagree with each other, while the rest are more or less consistent within errors.
The effects are prominent in all regions of the $q^2$ distribution of the form factor $A_2(q^2)$,
which is associated with a pseudo-vector current. Therefore, assuming the decays $B\to D^{(\ast)}\ell\nu_{\ell}$ are free from
any new physics effects, such a difference in the $q^2$ distribution of $A_2$ (obtained from $B\to D^{\ast}\tau\nu_{\tau}$ and $B\to D^\ast\ell\nu_{\ell}$)
can be compensated by adding a contribution from new pseudo-vector and/or pseudo-tensor currents.
In the next part of our analysis, we consider the new physics contributions to the decays $B\to D^{(\ast)}\tau\nu_{\tau}$ which come from
new vector, scalar or tensor type operators. In this case, we take the relevant form factors as obtained using
the fit results by HFAG. We define different
possible NP scenarios, obtained by combining contributions from the new operators in many different ways.
Our goal is to select the best possible NP scenarios (new interactions) that can accommodate all the available data.
In doing so, we use the AIC$_c$ in the analysis of the empirical data.
Such procedures lead to more robust inferences in a simultaneous comparative analysis of multiple competing scenarios.
In order to check whether the NP scenarios selected by the AIC$_c$ test can fit the data well,
we have performed a Shapiro-Wilk normality test for each selected model. For a comparative study, we have also analyzed the data
using the Schwarz-Bayesian criterion (BIC) for model selection. For our different
datasets, the best selected models are identical for both selection criteria.
Our analysis of the available data on $R(D^{\ast})$ from {\mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}}, Belle, and LHCb shows that the most plausible explanation
of the data can be obtained from the presence of new effective operators with a left- or right-handed charged vector current.
In addition, if we include $R(D)$ in our fit, then apart from the vector currents the contributions from charged scalar currents
might become relevant, either alone (though less preferable) or along with right-handed vector current operators.
Overall, our analysis of $B\to D^{\ast}\tau\nu_{\tau}$ shows that it is the
contribution from a left- or right-handed charged vector current effective operator that, as well as accommodating all
the available data, passes all the selection criteria for being the best possible NP scenario.
Here, we would like to point out that we have made use of the available data on the $q^2$ (binned) distributions of the decays
$B\to D^{(\ast)}\tau\nu_{\tau}$, which have large errors. This, in turn, gives our fitted results large errors.
Once more precise data on the $q^2$ bins become available, one may and should repeat the
analysis to check the robustness of the above conclusions.
\section*{Acknowledgements}
We thank Gabriele Simi (Univ. of Padova) and Devdatta Majumder (Univ. of Kansas) for really helpful discussions on binned data and
residual-distributions.
\section{Introduction}
\label{sec:intro}
Separated flows over lifting surfaces have been studied extensively due to their critical importance in aerodynamics and hydrodynamics.
Over the past few decades, substantial studies have been dedicated to the understanding of post-stall flows over finite-aspect-ratio wings \citep{winkelmann1980effects,taira2009three,mulleners2017flow,eldredge2019leading,zhang2020formation}.
For a steadily translating wing, the vortices generated from the leading and trailing edges exhibit complex nonlinear evolution under the influence of the tip vortices. The resulting wake is highly three-dimensional in nature.
The introduction of sweep to the finite-aspect-ratio wings further enriches the wake dynamics.
\citet{harper1964review} asserted that once local stall appears on the swept wing, the spanwise boundary layer flow alters the stall characteristics of the attached sections of the span. Thus the separated flows over swept wings bear little resemblance to the two-dimensional flows.
The stalled flow over a swept wing usually features a ``ram's horn'' vortex, which stems from the inboard leading edge and grows in size as it trails into the wake behind the tip. Such a unique flow pattern is also referred to as ``tip stall'' \citep{black1956flow,zhang2019experimental}.
Focusing on the detailed flow structures,
\citet{yen2007flow} and \citet{yen2009flow} experimentally studied the effects of sweep angle, Reynolds number and angle of attack on the wake vortices, boundary layer flow patterns and aerodynamic performance of the swept wings. The wake features were found to be significantly dependent on these parameters.
\citet{visbal2019effect} conducted large eddy simulations of flows over swept wings with a semi aspect ratio of 4 at $Re=2\times 10^{5}$. A region of laminar flow is observed near the midspan, and it grows in extent with increasing sweep angle. The authors attributed this observation to the relief effect provided by the sweep-induced spanwise flow toward the tip.
Furthermore, dynamic stall was numerically simulated for a swept wing undergoing pitching and plunging maneuvers \citep{visbal2019effect,garmann2020examination}. Under these wing motions, the release of the arch-shaped vortices during downstroke occurs near both tip regions, whereas only a single arch vortex is seen to detach from the midspan in the unswept case.
As a special case, the delta wing is well known for harnessing separated flow physics to enhance its aerodynamic performance.
Unlike the unswept wing, for which the leading-edge vortex grows and inevitably sheds away, the LEVs on the delta wing are stable due to the balance between spanwise vorticity transport and local vorticity generation \citep{polhamus1966concept,rockwell1993three,gursul2005unsteady}.
As the LEVs trail downstream, they create low pressure on the suction side of the wing, producing a sizable vortical lift.
Such favorable effect grants delta wing the ability to operate at much higher angle of attack than a conventional wing.
The LEVs may also contribute to enhanced lift for sustained avian flight.
\citet{videler2004leading} identified LEVs in the swept wings of common swifts (\emph{Apus apus}) during gliding.
\citet{ben2019lift} proposed a pseudo-three-dimensional flow model for investigating the stationary LEV mechanism over swept back wings. The model revealed that wing geometry has a major role in the localization of the stationary LEVs over high aspect-ratio wings.
In addition to the above studies, LEVs on translating swept wings have been further examined by \citet{lentink2009rotational} and \citet{beem2012stabilization}, who showed that the spanwise flow alone cannot sustain stationary LEVs.
Moreover, the effectiveness of the LEVs in augmenting lift is still arguable for low-speed flight \citep{lentink2007swifts}.
These mixed findings regarding the role of separated flows in avian flight with swept wings warrant further investigation.
The detailed features of separated flows over swept wings depend on various parameters, including the aspect ratio, angle of attack, and sweep angle.
The interplay between these effects generates complex wake dynamics, which are not thoroughly understood thus far.
In this work, we present an extensive numerical study on the laminar separated flows over swept wings.
The objective is to characterize the different wake structures observed over a large range of parameters, and to identify their formation mechanisms.
The rest of the paper is organized as follows. We present the computational setup in \S \ref{sec:setup}. The variety of wakes observed in this study is reported in \S \ref{sec:results}. We conclude by summarizing our findings in \S \ref{sec:conclusion}.
\section{Computational setup}
\label{sec:setup}
We consider three-dimensional incompressible flows over swept finite-aspect-ratio wings with the NACA 0015 cross-section profile.
The setup of the wing geometry is shown in figure \ref{fig:scheme}.
The wings are subjected to uniform flow with velocity $U_{\infty}$ in the $x$ direction. The $z$ axis aligns with the spanwise direction of an unswept wing, and the $y$ axis points at the lift direction.
For the swept wings, the sweep angle $\Lambda$ is defined as the angle between the $z$ axis and leading edge of the wing.
We consider a range of sweep angles from $0^{\circ}$ to $45^{\circ}$, at an interval of $7.5^{\circ}$.
Half of the swept wing model is simulated by prescribing a symmetry boundary condition along the midspan.
Denoting the half wing span as $b$ and the chord length as $c$, the semi aspect ratio is defined as $sAR=b/c$, which is varied from 0.5 to 4.
We focus on flows that develop behind wings at high angles of attack, $\alpha=16^{\circ}$, $20^{\circ}$, $26^{\circ}$ and $30^{\circ}$, as we are particularly interested in separated flows.
The Reynolds number, which is defined as $Re= U_{\infty}c/\nu$ ($\nu$ is the kinematic viscosity), is kept fixed at 400.
In what follows, all spatial variables are scaled by the chord length $c$, velocity by freestream velocity $U_{\infty}$, and time by $c/U_{\infty}$.
The flows over swept wings are simulated by numerically solving the three-dimensional Navier-Stokes equations. An incompressible solver \emph{Cliff} (in \emph{CharLES} software package, Cascade Technologies, Inc.) is used for the direct numerical simulations. The solver employs a collocated, node-based finite-volume method to compute the solutions to the governing equations with second-order accuracy in both space and time \citep{ham2004energy,ham2006accurate}. The computational domain and the spatial discretization setup in this study follow our previous work with extensive validation \citep{zhang2020formation}. For the swept cases, the straight wing mesh system is sheared in the $x$ direction along with the wing.
\begin{figure}
\centering
\includegraphics[scale=0.45]{Fig1.png}
\caption{Schematic of setup for $(a)$ unswept wing and $(b)$ swept wing. $(c)$ shows the cross-sectional slice along the broken lines in $(a)$ and $(b)$.}
\label{fig:scheme}
\end{figure}
\section{Results}
\label{sec:results}
The wakes of swept wings exhibit a rich variety of features depending on the aspect ratio, angle of attack, and sweep angle.
We show representative wakes in figure \ref{fig:VorticalStructures}, with their distributions over the $\Lambda$-$\alpha$ space in figure \ref{fig:regimes} for different aspect ratios.
The wakes are broadly divided into two categories: steady flows and unsteady flows.
Steady flows take different forms, including those with tip vortices (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.45, regular polygon, regular polygon sides=3, fill={rgb,255:red,126; green,47; blue,142}](){};}}), those with midspan structures (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.45,regular polygon, regular polygon sides=3,fill={rgb,255:red,119; green,172; blue,48},rotate=-90](){};}}), and those with streamwise vortices (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.45,regular polygon, regular polygon sides=3,fill={rgb,255:red,77; green,190; blue,238},rotate=-180](){};}}). The steady flow regime appears over a wide range of the parameter space.
Unsteady flows are characterized by vortex shedding near the midspan ($\MyDiamond[draw={rgb,255:red,217; green,83; blue,25},line width=0.3mm, fill=white]$) or near the tip (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.55,regular polygon, regular polygon sides=4, draw = {rgb,255:red,237; green,177; blue,32}, line width=0.3mm, fill={rgb,255:red,255; green,255; blue,255},rotate=0](){};}}).
In what follows, we describe key mechanisms that are responsible for the formation of the different wakes mentioned above.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{Fig2.png}
\caption{Representative wakes of swept wings. $(a)$ Steady flow with tip vortex \protect\raisebox{0.0pt}{\tikz{\node[scale=0.45, regular polygon, regular polygon sides=3, fill={rgb,255:red,126; green,47; blue,142}](){};}}; $(b)$ unsteady shedding near midspan $\MyDiamond[draw={rgb,255:red,217; green,83; blue,25},line width=0.3mm, fill=white]$; $(c)$ steady flow with midspan structures \protect\raisebox{0.0pt}{\tikz{\node[scale=0.45,regular polygon, regular polygon sides=3,fill={rgb,255:red,119; green,172; blue,48},rotate=-90](){};}}
; $(d)$ and $(e)$ unsteady shedding near wing tip \protect\raisebox{0.0pt}{\tikz{\node[scale=0.55,regular polygon, regular polygon sides=4, draw = {rgb,255:red,237; green,177; blue,32}, line width=0.3mm, fill={rgb,255:red,255; green,255; blue,255},rotate=0](){};}}; $(f)$ steady flow with streamwise vortices \protect\raisebox{0.0pt}{\tikz{\node[scale=0.45,regular polygon, regular polygon sides=3,fill={rgb,255:red,77; green,190; blue,238},rotate=-180](){};}}. The figures are scaled for visual clarity.
}
\label{fig:VorticalStructures}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\textwidth]{Fig3.png}
\caption{Classification of vortical structures behind finite-aspect-ratio swept wings. $(a)$ $sAR=0.5$; $(b)$ $sAR=1$; $(c)$ $sAR=2$; and $(d)$ $sAR=4$. The dashed lines denote the approximate boundaries between steady (filled symbols) and unsteady (empty symbols) flows over the $\Lambda$-$\alpha$ space. Vortical structures are visualized by iso-surfaces of $Q=1$ for select cases.}
\label{fig:regimes}
\end{figure}
\subsection{Tip effects}
\label{subsec:tipvortex}
Tip effects play an important role in the development of wakes behind low-aspect-ratio wings.
At $sAR=0.5$, steady flows are observed up to $\alpha=26^{\circ}$, as shown in figure \ref{fig:regimes}$(a)$.
These steady flows typically feature a pair of tip vortices, which induce strong downwash over the wing span, suppressing the formation of large leading-edge vortices.
Such a mechanism is responsible for the stability of the wake, particularly for low-aspect-ratio unswept wings \citep{taira2009three,devoria2017mechanism,zhang2020formation}.
At higher sweep angles, the vortical structures emanating from the leading edge grow in size, covering a significant portion of the suction surface of the wing.
These vortical structures are also beneficial to the stability of the wake, as will be discussed in detail in \S \ref{subsec:midspan}.
Unsteady flows are only observed at $\alpha\gtrsim 30^{\circ}$ for $\Lambda=0^{\circ}-37.5^{\circ}$.
These unsteady wakes are characterized by the periodic shedding of hairpin vortices \citep{taira2009three}.
As the aspect ratio is increased, the downwash induced by the tip vortices weakens along the midspan, allowing the roll-up of the vortex sheet at the leading edge.
As a result, unsteady vortex shedding develops near the midspan, as visualized in figure \ref{fig:VorticalStructures}$(b)$.
The stability boundary (dashed line in figure \ref{fig:regimes}) shifts towards lower angles of attack at $sAR=1$ and 2 for low-swept wings.
Additional details on the three-dimensional unsteady wake dynamics of wings under tip effects are reported in \citet{taira2009three} and \citet{zhang2020formation}.
\subsection{Midspan effects}
\label{subsec:midspan}
For wings with larger aspect ratio ($sAR\gtrsim 1$), in place of the weakened tip effects, the midspan symmetry introduces another type of three-dimensionality that dominates the wake.
Let us present the skin-friction lines for cases of $(\alpha,sAR)=(20^{\circ},1)$ with varying $\Lambda$ as shown in figure \ref{fig:SkinFrictionLine}.
With the increase in sweep angle, the boundary layer separation point near the midspan gradually shifts towards the trailing edge.
This is caused by the growth of the vortical structures emanating from the leading edge, as observed in figure \ref{fig:regimes}$(b)$.
The increase in sweep angle for $sAR\gtrsim 1$ also leads to the attenuation of the tip vortices, as reflected by the diminishing three-dimensional skin-friction line pattern near the wing tip in figure \ref{fig:SkinFrictionLine}.
The above observations suggest a switch-over of the source of three-dimensionality from the wing tip in low-$\Lambda$ cases to the midspan in high-$\Lambda$ cases.
\begin{figure}
\centering
\includegraphics[scale=0.47]{Fig4.png}
\caption{Skin-friction lines on the suction side of wings with $(\alpha,sAR)=(20^{\circ},1)$.}
\label{fig:SkinFrictionLine}
\end{figure}
Let us take a closer look at the vortical structures near the midspan. A representative case of $(\alpha,sAR,\Lambda)=(20^{\circ}, 2, 45^{\circ})$ is shown in figure \ref{fig:MidSpanDownwash}.
For this case, the vortex sheet rolls up along the spanwise direction, covering the entire chord over the inboard section of the wing.
Due to the symmetry condition, the identical vortical structures on the two sides of the mid-plane are oriented at an angle of $180^{\circ} - 2\Lambda$ to each other, which in the current case is $90^{\circ}$.
As a result, each of the vortical structures is subjected to the downward velocity (pointing at $-y$ direction) induced by its symmetric peer on the other side of the midspan.
This is clearly manifested in figure \ref{fig:MidSpanDownwash}$(b)$ by the strong negative crossflow velocity $u_y$ on the suction side near the $z=0$ plane.
Due to such three-dimensional midspan effects, the vortex sheet emanating from the leading edge is pinned to the suction side of the wing, forming steady vortical structures.
For cases with smaller $\Lambda$, the angle between the symmetric vortical structures tends towards $180^{\circ}$, and the mutually induced velocity becomes smaller.
The three-dimensionality developed from the midspan is able to stabilize the wakes of a considerable number of cases, as labeled by the green triangle (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.45,regular polygon, regular polygon sides=3,fill={rgb,255:red,119; green,172; blue,48},rotate=-90](){};}}) in figure \ref{fig:regimes}.
The midspan effects are also observed for swept wings with $sAR=0.5$, in which cases the tip vortices also stabilize the wake.
In fact, the strengthening of the tip vortices with increasing $\Lambda$ at $sAR=0.5$ (see figure \ref{fig:regimes}$(a)$, $\alpha=26^{\circ}$ for example) is likely related to the three-dimensional midspan effects. As the vortex sheets emanating from the leading edge are less likely to roll up due to the mutual downwash, they interfere less with the formation of the tip vortices.
We note that spanwise vorticity transport, which is considered the key mechanism for the formation of steady LEVs on delta wings \citep{polhamus1966concept,gursul2005review}, does not play an important role in the formation of the midspan vortical structures in the current case. This is manifested by the fact that the strong outboard velocity $u_z$ does not coincide with the vortex core, as shown in figure \ref{fig:MidSpanDownwash}$(c)$.
The downward induced velocity described above is strong near the midspan, as shown in figure \ref{fig:MidSpanDownwash}$(b)$.
With the gradual weakening of the midspan effects towards the outboard sections, unsteadiness develops locally near the tip region, while the midspan region still remains steady. The resulting flows resemble the ``tip stall" phenomenon, as described in \citet{black1956flow,zhang2019experimental,visbal2019effect}.
This type of flow, as indicated by yellow squares (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.55,regular polygon, regular polygon sides=4, draw = {rgb,255:red,237; green,177; blue,32}, line width=0.3mm, fill={rgb,255:red,255; green,255; blue,255},rotate=0](){};}}) in figure \ref{fig:regimes}, prevails for swept wings with large aspect ratios (e.g., $sAR=4$) and high angles of attack (e.g., $\alpha=26^{\circ}-30^{\circ}$).
For wings with large $sAR$, the unsteady shedding develops from the leading edge and features mostly spanwise vortices; see figure \ref{fig:VorticalStructures}$(e)$ for example.
In contrast, for wings with low aspect ratio but large angles of attack, the unsteady vortices are shed from the wing tip, and they feature hairpin structures with legs originating from the pressure and suction sides of the wing, as shown in figure \ref{fig:VorticalStructures}$(d)$.
\begin{figure}
\centering
\includegraphics[width=0.97\textwidth]{Fig5.png}
\caption{Wake for the case $(sAR, \alpha, \Lambda)=(2, 20^{\circ}, 45^{\circ})$. $(a)$ Vortical structures visualized by the isosurface of $Q=1$. The thick solid lines represent the approximate location of vortical structures near the midspan, with the curved arrows showing their directions of rotation. The dashed lines indicate the locations of the visualizations shown on the right. $(b)$: $u_y$ and $(c)$: $u_z$ fields at different $z$ locations.}
\label{fig:MidSpanDownwash}
\end{figure}
\subsection{Formation of streamwise finger-like structures}
\label{subsec:streamwise}
For high sweep angles of $\Lambda \gtrsim 37.5^{\circ}$, the flows over high-aspect-ratio wings ($sAR \gtrsim 2$) transition from the unsteady tip shedding to steady wakes, through the formation of the streamwise finger-like structures.
As shown in figure \ref{fig:VorticalStructures}$(f)$, the finger-like structures are oriented at an angle higher than $\Lambda$ with respect to the $z$ axis.
These structures bend towards the streamwise direction away from the wing.
Wakes of swept wings with $sAR$ even as large as 10 are stabilized by the alternating formation of the streamwise structures along the wing span, as shown in figure \ref{fig:StreamwiseVortices}$(a)$.
These streamwise vortices observed in the present work are formed through the same mechanism as those in the wakes of axisymmetric slender bodies at high incidence \citep{sarpkaya1966separated,thomson1971spacing}.
The impulse flow analogy has long been used to understand these vortical structures.
According to this analogy, the progressive development of the wake along the wing span when viewed in cross-flow planes is similar to the temporal growth of the flow behind a two-dimensional wing translated impulsively from rest.
Three slices at different spanwise locations in figure \ref{fig:StreamwiseVortices}($b$) show a temporal-like evolution of wake vortices along the wing span.
In addition, the spacing between neighbouring streamwise vortices is fixed at $g = 2.4$, which approximately follows $g \approx U_{\infty}\sin\Lambda\cdot T_{2D}$, in which $U_{\infty}\sin\Lambda$ represents the speed of the spanwise flow and $T_{2D}=3.2$ is the nondimensional period of vortex shedding in the analogous two-dimensional case (an infinite swept wing).
This relationship suggests a close analogy between the two-dimensional unsteady flow and the present three-dimensional steady flow, where the time dependence of the former is replaced by the spatial dependence of the latter.
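A quick arithmetic check makes this explicit: with the nondimensionalization $U_{\infty}=1$,
\[
U_{\infty}\sin\Lambda\cdot T_{2D} = \sin 45^{\circ} \times 3.2 \approx 2.26,
\]
within roughly $6\%$ of the measured spacing $g=2.4$.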
From another perspective, when viewed from the wake on a plane parallel to the wing span, as shown in figure \ref{fig:StreamwiseVortices}$(c)$, the steady streamwise vortices are positioned at two sides of strong outboard velocity $u_z$, resembling the configuration where inverse K\'arm\'an vortices are formed in the jet profile.
This observation suggests the important role of the spanwise velocity in the formation of steady three-dimensional vortices.
Similar streamwise structures have also been reported for rotating large-$AR$ wings \citep{jardin2017coriolis}, in which spanwise flow is promoted by artificially adding a Coriolis force.
\begin{figure}
\centering
\includegraphics[width=0.97\textwidth]{Fig6.png}
\caption{Streamwise vortices along high-aspect-ratio wings with $\Lambda=45^{\circ}$. $(a)$ Vortical structures visualized by isosurface of $Q=1$ for $sAR=4$, 6 and 10. $(b)$ $u_x$ fields at indicated slices as shown in $(a)$. The red lines represent contours of $Q=0.5$ with the curved arrows indicating the direction of rotation. $(c)$ $u_z$ field on the slice IV. Black circles show the isosurfaces of $Q=0.5$.}
\label{fig:StreamwiseVortices}
\end{figure}
\subsection{Aerodynamic forces}
\label{subsec:forces}
The vortical structures described above influence the aerodynamic forces on the swept wings. We examine the effects of sweep angle on the distribution of the time-averaged sectional lift coefficients $\overline{C_l}$ for the representative cases of $(\alpha,sAR)=(20^{\circ},2)$ in figure \ref{fig:CLCD}$(a)$.
For the unswept wing, the sectional lift increases slowly from the midspan towards the outboard, reaching a maximum at $z\approx 1.5$ before decreasing drastically at the tip.
The slight swell-up of the sectional lift at the outboard location is attributed to the downwash from the tip vortices \citep{devoria2017mechanism,zhang2020formation}.
For swept wings, the sectional lift coefficients are significantly higher at the inboard locations, due to the additional circulation maintained by the midspan effects.
We note that the lift distribution of swept wings in high-$Re$ flows exhibits the opposite pattern, with the sectional lift low near the midspan and high at the outboard sections \citep{schlichting1959aerodynamik,nickel1994tailless}. Moreover, for the same angle of attack, one sees lift loss everywhere along the span of a swept wing compared with an unswept wing \citep{visbal2019effect}.
These differences highlight the role of the midspan effects in augmenting the lift of swept wings in low-Reynolds-number separated flows.
The effects of aspect ratio on the distribution of sectional lift coefficients of swept wings are examined for the cases $(\alpha,\Lambda)=(20^{\circ},45^{\circ})$ in figure \ref{fig:CLCD}$(b)$.
A monotonic decrease of the sectional lift along the span is observed for all cases from $sAR=0.5$ to 6.
The maximum sectional lift at midspan $z=0$ increases with the aspect ratio, and eventually saturates as $sAR$ reaches 3.
This observation suggests that the tip effect is still noticeable
for low-$AR$ swept wings even though distinct tip vortices are not observed.
For high-$AR$ wings, the $C_l$-$z$ curves feature a steep slope over the inboard region, reflecting the weakening of the midspan effects. The slope becomes much gentler towards the outboard, where the midspan effects have diminished.
\begin{figure}
\centering
\includegraphics[width=0.99\textwidth]{Fig7.png}
\caption{Force coefficients for $\alpha=20^{\circ}$. $(a)$ Sectional lift coefficients for varied sweep angles at a fixed aspect ratio $sAR=2$. $(b)$ Sectional lift coefficients for cases of $\Lambda=45^{\circ}$ with different aspect ratios. $(c)$ and $(d)$: lift coefficients $\overline{C_L}$ and drag coefficients $\overline{C_D}$ versus sweep angle $\Lambda$ for different aspect ratios.}
\label{fig:CLCD}
\end{figure}
Finally, we present the total lift $\overline{C_L}$ and drag $\overline{C_D}$ coefficients of the swept wings at $\alpha=20^{\circ}$ with different aspect ratios and varying sweep angles in figure \ref{fig:CLCD}$(c)$ and $(d)$.
For swept wings with $sAR=0.5$ and 1, the lift coefficients increase with sweep angle, due to the dominance of the midspan effects over the entire wing span.
The drag coefficients of the low-aspect-ratio wings, on the other hand, do not vary significantly with increasing sweep angle.
It is interesting to note that $\overline{C_D}$ for $sAR=0.5$ is higher than that for $sAR=1$, an observation also reported by \citet{taira2009three}.
For wings with $sAR\gtrsim 3$, the contribution of the high sectional lift at the inboard region to the total lift force is overshadowed by the lower sectional lift at the outboard region.
As a result, the lift coefficients of wings with high aspect ratio decrease with increasing sweep angle.
A similar trend is also observed for the drag coefficients of high-aspect-ratio wings.
While for wings with small $\Lambda$ the lift coefficients $\overline{C_L}$ increase with $sAR$, at the highest sweep angle of $\Lambda=45^{\circ}$ the maximum lift coefficient is achieved at $sAR=2$ and even exceeds that of the analogous two-dimensional case.
The midspan effects, as a lift enhancement mechanism for low-aspect-ratio swept wings, could potentially inspire designs of high-lift devices.
\section{Conclusion}
\label{sec:conclusion}
We have studied laminar separated flows over finite-aspect-ratio swept wings with direct numerical simulations at a chord-based Reynolds number of 400.
Due to the complex interplay between the effects of aspect ratio, sweep angle and angle of attack, the wakes of finite swept wings exhibit a variety of vortical features, which are not observed behind the unswept wings.
We have described key mechanisms that are responsible for the emergence of different types of flows over swept wings.
For wings with low aspect ratios and low sweep angles, the downwash by the tip vortices stabilizes the wake.
With the increase in aspect ratio, the downwash weakens along the midspan, allowing the formation of unsteady vortex shedding.
For higher sweep angles, the source of three dimensionality in the wake transitions from the wing tip to the midspan.
A pair of symmetric vortical structures forms near the midspan; their mutually induced downward velocity stabilizes the wake of higher-aspect-ratio swept wings.
Such midspan effects also act as a lift enhancement mechanism for wings with low to medium aspect ratios.
At high aspect ratios, the midspan effects diminish near the outboard of the wing, and unsteady vortex shedding occurs near the wing tip region.
For wings with high aspect ratios, steady wakes are again achieved at high sweep angles, where a transposition occurs from two-dimensional unsteady flow to three-dimensional steady flow.
The resulting steady wake features the repetitive formation of the streamwise finger-like structures along the span.
This study has provided a detailed look into the effects of sweep on the wake dynamics of finite-aspect-ratio wings. The insights obtained from this study, particularly those regarding the midspan effects, could potentially be used for designing high-lift devices. In addition, the knowledge gained here also forms a stepping stone towards understanding the complex wake dynamics at higher Reynolds numbers and those generated by unsteady wing maneuvers.
\section*{Declaration of interest}
The authors report no conflict of interest.
\section*{Acknowledgement}
We acknowledge the generous support from the US Air Force Office of Scientific Research (FA9550-17-1-0222) monitored by Dr. Gregg Abate.
\section{Introduction}
\label{intro} To date, all gaseous quantum condensates have been
produced by evaporative cooling of confined atoms. Confinement is
necessary to thermally isolate the particles from the warmer
environment and long confinement times are necessary because the
evaporative cooling process can take tens of seconds.
Strong magnetic field gradients have been used to confine neutral
paramagnetic molecules \cite{weinstein} and electric-field gradients
have been used to confine neutral polar molecules in electrostatic
traps \cite{bethlem00} and in toroidal storage rings
\cite{crompvoets01,crompvoets04}. In addition, polar molecule
confinement in a synchrotron storage ring has been modeled
\cite{nishimura03}.
All of these methods use molecules or atoms in
weak-field-seeking states, whose binding energy decreases in the field.
These states are not the lowest energy state and are therefore
subject to collisional relaxation.
In alkali atoms, the relaxation rates from the stretched hyperfine levels
($m_F = F$) are small. But in magnetically trapped
paramagnetic molecules \cite{volpi02} and in electrically confined
polar molecules \cite{bohn01,kajita01,kajita02,avdeenkov02}, the relaxation
rate can be large enough to
prevent achieving the confinement time needed for evaporative cooling.
Collisional relaxation will be absent for polar molecules in their
lowest rotational state. This ground state is strong-field-seeking,
as are all rotational states in the limit of strong electric field.
The technical challenges of storing molecules in a
strong-field-seeking state have not been previously addressed.
The major challenge is focusing these molecules because
electrostatic lenses can focus strong-field-seeking molecules in
only one transverse plane while defocusing in the other.
Therefore alternating-gradient focusing is required.
For experiments on molecules in strong-field-seeking states, a
storage ring has some useful features not generally found in traps.
The ring has a beam geometry with field-free regions accessible to
experiments, and it can simultaneously store many bunches of
particles, producing a large flux of molecules.
\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{LatticeCH3F}
} \caption{Layout of the storage ring. Each octant contains a buncher and
a pair of alternating-gradient focusing triplets to match the beam
traversing from the straight sections to the bend sections.
A bend section contains combined bend and
alternating gradient focusing elements.
The focusing and bend elements have time-independent electric fields.
An injection line is located in one of the straight sections.}
\label{fig:RingLattice}
\end{center}
\end{figure}
In this paper we show, by modeling and simulation, that it is
feasible to construct a storage ring (Fig.\ref{fig:RingLattice})
that will store a symmetric-top molecule (methyl fluoride) in the
$J = 0$ state, at a kinetic energy of 2 K (30 m/s), and by extension
other molecules and velocities. In the storage ring, bunching
electrodes hold the molecules in a string of short bunches.
The molecules are calculated to be
stable against losses due to defocusing, oscillations, and diffusion
for over two minutes. We also model a decelerator for slowing the
molecules to 30 m/s, and an injector for loading the storage
ring.
A storage ring in which the density of the molecules
in a bunch is allowed to vary around the ring, can provide a
mechanism for evaporative cooling. Regions of high density speed
the thermalization of the molecules. In regions of low density
the molecules can become spatially separated due to their velocity spread,
allowing the hottest molecules to be removed.
\section{Forces Due to Electric Field Gradients\label{sec:1}}
\subsection{Focusing and Deflection Using Multipole Fields}
A brief description of focusing and deflecting a beam of molecules
using electrostatic multipole fields is given below.
Additional details of beam transport and focusing of
molecules in strong-field-seeking states, with specific
application to methyl fluoride in the $J = 0$ state, may be found
in Kalnins et al. \cite{kalnins02}.
The guide field in a storage ring for molecules in
strong-field-seeking states must provide all the functions,
such as focusing,
bending, and bunching, that are used in a ring for charged
particles but with forces that arise from gradients of the
magnitude of the electric field.
In a pure quadrupole or sextupole field, the total electric field
increases radially and the force on a molecule, in a
strong-field-seeking state, is away from the centerline in all
transverse directions. Therefore a dipole component must be added to
remove the double-defocusing, and obtain focusing in one transverse
direction while still defocusing in the other. The force on a
molecule is given by the gradient of its Stark potential energy,
$W(E)$:
\begin{eqnarray}
\emph{\textbf{F}}=-\nabla W(E)=-\frac{dW}{dE}\nabla E\label{eq:Force}
\end{eqnarray}
where $E$ is the magnitude of an external field.
The Stark energy of the molecular level is in general a nonlinear
function and is described for methyl fluoride in the $J = 0$
rotational state in Ref. \cite{kalnins02}. In the limit of large $E$,
$W(E)\rightarrow-d_eE$ where $d_e$ is the molecule's electric dipole
moment.
The transverse ($x$ horizontal, $y$ vertical) electric
multipole potential used to bend and focus a molecule is:
\begin{eqnarray}
\Psi=E_0[y+A_2xy+A_3(x^2y-\frac{1}{3}y^3)] \label{eq:PotPsi}
\end{eqnarray}
where $E_0$ is the dipole field strength, and $A_2$ and $A_3$ are
the relative quadrupole and sextupole component strengths.
For the Stark energy in the high-field limit, the forces to second
order are:
\begin{eqnarray}
F_x&\rightarrow&d_eE_0[A_2+2A_3x-\frac{1}{2}(A_2^3-4A_2A_3)y^2]\nonumber \\
F_y&\rightarrow&d_eE_0[(A_2^2-2A_3)y-(A_2^3-4A_2A_3)xy ]\label{eq:PhiXYQ}
\end{eqnarray}
We see that a combined dipole and sextupole ($A_3$) field
lens will focus in one plane, while defocusing in the other.
To deflect the molecule we must add a quadrupole ($A_2$) component.
This also defocuses the beam in the $y$ direction and stronger
sextupole ($A_3$) strengths are needed \cite{OurPAC2003paper}.
To obtain net focusing in both transverse planes, the lenses are
arranged in a sequence with gradients alternating in sign ($A_3 <0$
for $x$-focusing and $A_3>$0 for $y$-focusing).
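The force expansion of Eq.~(\ref{eq:PhiXYQ}) can be checked by numerically differentiating the field magnitude derived from the potential of Eq.~(\ref{eq:PotPsi}). Below is a minimal sketch in the high-field limit $W(E)\rightarrow -d_eE$, using the BF values of $A_2$ and $A_3$ and setting $d_eE_0=1$ purely for illustration:
\begin{verbatim}
import numpy as np

E0, A2, A3, de = 1.0, -10.55, -2296.0, 1.0   # d_e*E0 scaled to 1

def Emag(x, y):
    """|grad Psi| for Psi = E0*[y + A2*x*y + A3*(x^2*y - y^3/3)]."""
    Ex = E0 * y * (A2 + 2*A3*x)
    Ey = E0 * (1 + A2*x + A3*(x*x - y*y))
    return np.hypot(Ex, Ey)

def force(x, y, h=1e-7):   # F = -grad W = +d_e grad E in the high-field limit
    return (de*(Emag(x+h, y) - Emag(x-h, y))/(2*h),
            de*(Emag(x, y+h) - Emag(x, y-h))/(2*h))

x, y = 1e-3, 1e-3          # 1 mm off axis
print(force(x, y))         # numerical gradient of the exact field magnitude
print(de*E0*(A2 + 2*A3*x - 0.5*(A2**3 - 4*A2*A3)*y**2),   # series F_x
      de*E0*((A2**2 - 2*A3)*y - (A2**3 - 4*A2*A3)*x*y))   # series F_y
\end{verbatim}
The two printouts agree to the order retained in the expansion.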
\subsection{Other Effects}
When a molecule in a strong-field-seeking state
enters the field of an electrode pair
it is accelerated longitudinally, and upon exiting the field it is decelerated.
Also, the
fringing field is stronger away from the midplane and this causes
a net defocusing force in the direction of the electric field. Between
successive sets of electrodes, this unwanted defocusing is reduced
if the dipole fields are of the same polarity and strength.
Longitudinal bunching, as in a charged-particle ring, requires a
pulsed field. The field is ramped in a sawtooth or sine-wave form
and the time-dependent acceleration is the net difference between
the fields when entering and when exiting.
The effect of gravity is small but not negligible
for 30 m/s molecules in this ring. The vertical orbit will be
distorted and an orbit correction must be applied.
\subsection{Equations of Motion}
The equations of motion of a molecule in the ring are obtained
from the Hamiltonian:
\begin{eqnarray}
H = H_0 + W(E) - mgy
\end{eqnarray}
where $W(E)$ is the Stark energy, $m$ is the molecular mass, $g$ is the acceleration due to gravity,
and $H_0$ is the kinetic energy, which in a bend region is:
\begin{eqnarray}
H_{0}=\frac{1}{2m}(P_{x}^2+P_{y}^2+\frac{P_\theta^2}{(\rho+x)^{2}})\label{eq:H0bend2}
\end{eqnarray}
where $P_x$ and $P_y$ are the transverse momenta,
$P_{\theta}$ is the angular momentum and $\rho$ is the
bend radius. In straight sections the last term is replaced by
the square of the longitudinal momentum, $P_z^2$.
The longitudinal variation of the Stark energy at the ends of
electrodes (treated here as a step function) adds or subtracts from the
kinetic energy, the change in longitudinal velocity being about
$\pm$10$\%$.
Vertical defocusing in a fringe field is derived from the
longitudinal variation of the field on the midplane and to lowest
order is:
\begin{eqnarray}
(F_y)_{fringe}=-\frac{dW}{dE}
\left[\frac{1}{E_y}\left( \frac{\partial E_y}{\partial z }\right)^2
-\frac{\partial ^2 E_y}{\partial z^2}\right]y \label{eq:Edge}
\end{eqnarray}
\section{Storage Ring Design}
\label{sec:4}
\subsection{Molecule and Energy}
The principles and techniques we use apply to all polar molecules in
strong-field-seeking states. We choose methyl fluoride (CH$_3$F) as
our reference molecule because it is a nearly symmetric rotor with a
large electric dipole moment of $d_e$ = $6.2 \times 10^{-30}$ C-m
(1.84 D).
It has a moderate rotational
constant of $B = 0.88$ cm$^{-1}$ and a simple level structure
with a $J = K = 0$
rotational ground state. The rotational constant is large enough to
limit the number of rotational levels populated in the beam from a
jet-source but still small enough to allow for a large Stark effect
at moderate electric fields. Methyl fluoride is also a gas at room
temperature.
The velocity of 30 m/s (kinetic energy of about 2K) is low enough to
make for a compact ring, yet keep small the effects of gravity.
\subsection{Ring Lattice}
\label{ring}
Long straight regions free of focusing electrodes make the stored
beam accessible for experiments and give space for injection and
extraction. Molecules, in order to drift through the straight
section without loss, must have only small divergences and
therefore a large beam width.
In a bending region, we need strong deflecting forces
to minimize the bend radius for overall compactness.
These strong forces call for a small beam width to avoid
nonlinearities. To make the transition (match) from straight sections to
arc sections, triplets (Q1, Q2, Q3) of focusing lenses
are placed at the ends of the
straight sections, as shown in Fig. \ref{fig:RingLattice}.
In each of the eight bend regions, there are five electrode pairs;
each has a
combined dipole and quadrupole field to provide the strong
deflecting force. To this is added a sextupole component, the
gradient of which alternates in sign.
The electrode parameters are given in Table
\ref{Table:QuadParam} where Q are focusing elements
and BF and BD are combined bend and focusing elements.
Each arc is a series of BF and BD elements:
$\frac{1}{2}$BF+BD+BF+BD+$\frac{1}{2}$BF.
In this sequence of lenses with alternating gradients, the molecules
execute oscillatory transverse motions. The parameters of BF and BD
are chosen such that the phases of these horizontal and vertical
motions each advance through an angle of 2$\pi$ in each octant of
arc. The parameters of Q1, Q2 and Q3 are varied to
find values that produce large
dynamic aperture and momentum acceptance.
The decapole coefficient $A_5$ of Q2, which adds the term
$E_0A_5(x^4y-2x^2y^3+\frac{1}{5}y^5)$ to the potential of
Eq.~(\ref{eq:PotPsi}), is introduced to reduce the nonlinearity
of the Q2 focusing where the beam is at its largest.
For longitudinal confinement with many short bunches,
we use eight bunchers in the ring; each has
a short uniform field that is pulsed in time as illustrated in Fig.
\ref{fig:dect}.
\begin{figure}
\begin{center}
\resizebox{0.35\textwidth}{!}{%
\includegraphics{BunchingField}
} \caption{A molecule at the bunch center
enters and exits the buncher when the field is the same
and receives no net acceleration. For a molecule that arrives later,
the entering
field is stronger than at its exit; it is accelerated and it then
drifts downstream toward the bunch center.}
\label{fig:dect}
\end{center}
\end{figure}
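The restoring action can be illustrated with a toy longitudinal map in which each buncher applies a velocity kick proportional to a molecule's offset from the bunch center. This is a sketch only; the kick constant $k$ and the initial spreads are assumed illustrative values, not design parameters:
\begin{verbatim}
import numpy as np

v0 = 30.0          # synchronous velocity (m/s)
Lb = 9.85 / 8      # distance between successive bunchers (m)
k  = 20.0          # assumed linear buncher kick strength, dv = -k*dz (1/s)

rng = np.random.default_rng(0)
dz = rng.uniform(-5e-3, 5e-3, 1000)   # position offsets along the orbit (m)
dv = rng.uniform(-0.3, 0.3, 1000)     # velocity offsets (m/s)

for _ in range(800):                  # pass through 800 bunchers
    dz += dv * (Lb / v0)              # drift to the next buncher
    dv += -k * dz                     # trailing molecules (dz < 0) are sped up

print("rms bunch length (mm):", 1e3 * dz.std())  # stays bounded: stable
\end{verbatim}
The linearized one-buncher map has $|\mathrm{trace}| < 2$ for $0 < kL_b/v_0 < 4$, so the molecules execute stable longitudinal oscillations about the bunch center.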
Molecules with different energies have their closed orbits radially
separated in the arcs and perhaps elsewhere in the ring. If this
dispersion of orbits is present at a buncher, the energy change from
the buncher produces a shift in the orbit and an increment in the
radial oscillation. This is called synchro-betatron coupling and to
avoid growth of radial oscillation amplitude, the dispersion of
orbits must be made zero at the bunchers. With the phases of the
vertical and horizontal motions advancing through an angle of $2\pi$
in each octant, as noted above, the
dispersion becomes zero at all eight buncher locations.
\begin{table}[htbp]
\begin{center}
\caption{Parameters of Storage Ring Electrodes}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
&$E_0$&$L$&$A_2$&$A_3$&$A_5$\\
&(MV/m)&(cm)&(m$^{-2}$)&(m$^{-3}$)&(m$^{-5}$)\\
\hline
Q1 & 3.0 & 3.34 & 0 & 2000& 0\\
Q2 & 4.0 & 3.71 & 0 &-2000&-1.28$\times$10$^6$\\
Q3 & 4.0 & 2.85 & 0 & 2000& 0\\
BF & 7.85 & 4.00 &-10.55&-2296&0\\
BD & 7.85 & 4.00 &-10.55& 2343&0\\
\hline
\end{tabular}
\label{Table:QuadParam}
\end{center}
\end{table}
\subsection{Numerical Modeling and Simulation}
The lattice parameters (Table \ref{Table:QuadParam})
are found by
numerical calculations using a newly-developed simulation code that
tracks the particles in time (rather than in longitudinal position)
to account for the longitudinal velocity changes as a function of
the external field. The tracking code includes the effects of
nonlinearities, gravity and the longitudinal kick at the bunchers.
The effect of each fringe field (Eq. \ref{eq:Edge}) in every element
has been integrated and replaced by a vertically defocusing thin lens.
The parameters in Table \ref{Table:QuadParam} result in the
ring performance listed in Table~\ref{Table:MainParam} and shown in Figures~\ref{fig.TwissCH3F} and
\ref{fig:DynapCH3F}.
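As an illustration of this thin-lens replacement, Eq.~(\ref{eq:Edge}) can be integrated in closed form for a smooth model fringe profile $E_y(z)=E_0/(1+e^{-z/a})$: the bracketed quantity integrates over $z$ to exactly $E_0/2a$. A sketch verifying this numerically (the logistic profile and the scale length $a$ are illustrative assumptions, not the actual electrode fields):
\begin{verbatim}
import numpy as np

E0, a = 7.85e6, 2e-3                  # field (V/m), assumed fringe length (m)
z  = np.linspace(-20*a, 20*a, 100001)
Ey = E0 / (1.0 + np.exp(-z/a))        # model fringe profile

dEy  = np.gradient(Ey, z)
d2Ey = np.gradient(dEy, z)
bracket = dEy**2 / Ey - d2Ey          # bracketed term of the fringe force

print(np.trapz(bracket, z), E0/(2*a)) # both ~ 1.96e9 V/m^2
\end{verbatim}
Multiplying by $-dW/dE$ and by $y$, and dividing by the longitudinal velocity, gives the integrated thin-lens kick used in the tracking.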
\begin{table}[hbt]
\begin{center}
\caption{Ring Parameters}
\begin{tabular}{|l|c|}
\hline
Parameter&Value\\
\hline
Circumference (m) & 9.850 \\
Circulation period (s) & 0.3121\\
Velocity in free space (m/s) & 30.0\\
Symmetry of the ring & 8 \\
Bending radius (m) & 0.60 \\
Long straight section (m) & 0.40 \\
Beta function$^*$ $\beta_x$ (m)& 0.274 \\
\phantom{Beta function$^*$ }$\beta_y$ (m)& 0.596 \\
Dispersion$^*$ $\eta_x$ (m)& 0.0\\
Betatron tune $\nu_x$ & 13.368 \\
\phantom{Betatron tune }$\nu_y$ & 10.398 \\
Dynamic aperture$^*$ $a_x$ (mm) & $\pm$1.75\\
\phantom{Dynamic aperture$^*$ }$a_y$ (mm) & $\pm$3.50\\
Acceptance $\epsilon_x$ (mm - mr) & 11\\
\phantom{Acceptance }$\epsilon_y$ (mm - mr) & 21\\
Momentum acceptance ($\%$) & $\pm$1.2\\
Number of longitudinal buckets & 203 \\
\hline
\end{tabular}
\begin{tabular}{l}
$^*$At the center of straight sections
\end{tabular}
\label{Table:MainParam}
\end{center}
\end{table}
\begin{figure}[htbp]
\begin{center}
\resizebox{0.50\textwidth}{!}{%
\includegraphics{TwissAndWidth}
}
\caption{Beam half-widths (a) and the beta functions and
horizontal dispersion (b) in
the storage ring.
Beta is the distance in which the transverse
(betatron) oscillation advances in phase by one radian. A schematic of the
lattice is shown for location reference.}
\label{fig.TwissCH3F}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\resizebox{0.35\textwidth}{!}{%
\includegraphics{Dynap}
}
\caption{Starting coordinates in the center of the straight
section for the molecules that survive 400 turns.
This defines the dynamic aperture.}
\label{fig:DynapCH3F}
\end{center}
\end{figure}
The beta functions and the horizontal dispersion are shown in Fig.
\ref{fig.TwissCH3F}b.
Small beta functions in the bends produce a smaller beam profile,
allowing the bend elements to be stronger and the beam to occupy
the most linear region of the elements.
The straight sections are designed to be free of horizontal dispersion
to avoid synchro-beta coupling at the bunchers.
If uncorrected, the vertical closed orbit displacement caused
by gravity is 2.6 mm
and is large enough to cause loss of the circulating beam.
The orbit is corrected by displacing Q2 by 0.24 mm downward to
produce upward kicks. The resulting vertical orbit
distortion shrinks to 0.26 mm as shown in Fig. \ref{fig:CODCH3FWC} and is
no longer a problem.
\begin{figure}[htbp]
\begin{center}
\resizebox{0.40\textwidth}{!}{%
\includegraphics{CODCH3FWC}
} \caption{Corrected vertical closed orbit displacement
of the beam in the storage ring.}
\label{fig:CODCH3FWC}
\end{center}
\end{figure}
With this orbit correction, the dynamic aperture for 400 turns,
at the center of a straight section, is about 2 mm by 3 mm
half-width as shown in Fig. \ref{fig:DynapCH3F}.
This dynamic aperture corresponds to acceptances
of 11 mm-mr horizontal and 21 mm-mr vertical, as listed in Table
\ref{Table:MainParam}. The resulting beam size is shown in Fig.
\ref{fig.TwissCH3F}a. The momentum acceptance, calculated by the
multi-particle tracking simulation, is $\pm 1.2\%$ which is
equivalent to an energy acceptance of $\pm$ 45 mK.
\section{Decelerated Beam}
\label{sec:6}
\subsection{Decelerator}
To reduce the velocity from the 310 m/s at the source
to 30 m/s requires many
stages of deceleration by pulsed electric fields in a long linear
array. At each of the 139 decelerating stages, a bunch of molecules
enters a set of parallel electrodes when the field is zero;
the field pulses on and the
molecules lose kinetic energy equal to $|W(E)|$ as they exit the electrodes.
Our decelerator design differs in almost every way from previous
designs \cite{bethlem02a,tarbutt04}. A decrease in the strength of
the electric field while the bunch exits the electrodes provides
longitudinal restoring action that prevents the bunch from lengthening
due to its velocity spread \cite{maddi99}. The lengths of successive
electrodes decrease as the velocity and spacing of the bunches
decrease.
Interspersed between the pulsed parallel electrodes
are alternating-gradient lenses to confine the
molecules transversely. Their overall focusing action must be
stronger in the plane of the electric fields to counter the
defocusing from fringe fields.
The major parameters of the decelerator are summarized in
Table \ref{table:ParamDecelInjection}.
Details of decelerator design will be published later.
\begin{table}
\caption{Parameters of the decelerator for injected beam}
\label{table:ParamDecelInjection}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Parameter&Value\\
\hline
Velocity at source (m/s)&310\\
Velocity at exit (m/s)&30\\
Velocity spread at exit (\%)&$\pm$2\\
Length of bunch at exit (mm)&10\\
Emittances at exit, x and y (mm-mr)&30\\
Electrode gap (mm)&7\\
Decelerating field at entrance (MV/m)&9\\
Decelerating field at exit (MV/m)&4.5\\
Length of last decel. electrode (mm)&24\\
Length of decelerator (m)&19.6\\
Number of decel. electrodes&139\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Injector}
To inject the beam, we need a bend electrode that can pulse on or
off in the time between buckets in the ring.
This allows us to store multiple (up to 203) bunches in the ring.
The deflecting electrode (Fig.\ref{fig:RingLattice}) is part of a
transport line that transforms the pulse leaving the decelerator to match the
orientation of the transverse acceptances of the ring at the point
of entry onto the closed orbit of the ring. The deflecting
electrode is actually an array of bend electrodes with radius 0.6 m,
similar to a bend section in the ring. A horizontal phase advance of
$2\pi$ in this bend avoids a net dispersal of molecules that are within the
$\pm 2\%$ velocity spread.
In passage along the line, the velocity
spread of $\pm 2\%$ lengthens the bunch, and a debuncher at the point
of injection (Fig.~\ref{fig:RingLattice})
brings 90\% of the bunch within the $\pm 1.2\%$ longitudinal momentum
acceptance of the ring.
\subsection{Source and Intensity}
We calculate the intensity based upon a pulsed jet source with 1\% methyl
fluoride seeded in xenon carrier gas, using the equations in Miller
\cite{miller88} and verified against seeded xenon jet source
performance reported in the literature
\cite{crompvoets01,crompvoets04,gupta99}. Xenon's high mass (131)
produces much slower beams (310 m/s from a room-temperature
reservoir) than do light carrier gases, resulting in a shorter (19.6
m) decelerator.
The bunch intensity is determined by the source flow rate, the $J = 0$ state
population, the velocity distribution and the acceptances.
A source orifice of 1 mm diameter and reservoir pressure
of $6.67 \times 10^4$ Pa (500 Torr)
will produce an intense cold beam with a peak intensity of $3 \times
10^{19}$ molecules sr$^{-1}$ s$^{-1}$, a longitudinal velocity
spread of 7.2 m/s FWHM, and less than 1\% clusters. We estimate the
methyl fluoride $J = 0$ rotational state fraction to be 30\%. In an
apparatus with a finite pumping speed, this peak intensity is
possible only by using a pulsed jet source operating with a small duty
cycle. The short widely-spaced beam pulses entering the decelerator
(which become more closely spaced after deceleration) require a
duty cycle of less than one percent for a 100 Hz pulse rate.
This would allow all 203 buckets in the ring to be filled in 6.4
turns.
The transverse and the longitudinal emittances (units of m$^2$ s$^{-1}$) of a
bunch of molecules are unchanged in passing through the deceleration
process \cite{lambertson04} from the source to their injection into the storage ring.
Therefore the
fraction of molecules from the source that enters the ring is the
product of the ratios of ring acceptances to source emittances.
In the transverse directions, the beam from the source has $\pm 0.5$ mm spatial
extents and $\pm 1000$ mr angular divergences; then the horizontal and
vertical acceptances of the storage ring (Table \ref{Table:MainParam})
of 11 mm-mr and 21
mm-mr respectively, result in $8.66 \times 10^{-6}$ of the molecules being
transversely accepted. Longitudinally, one second of beam from the
source is 310 m long and has a velocity spread of $\pm 3.6$ m/s. The storage
ring will accept $\pm 0.6$ m/s in a 10-mm long bunch, which is
$5.4 \times 10^{-6}$ of the source longitudinal emittance.
Combining these numbers and accounting for the 90\% acceptance of the
storage ring from the injector yields an intensity of $3.8 \times
10^8$ molecules/bunch. Bunches could be injected into the storage
ring singly or in large numbers. With a maximum of 203 stored
bunches there would be nearly 10$^{11}$ molecules
circulating in the storage ring and a flux of 2.5 $\times$ 10$^{11}$
molecules/s. Each bunch would have a density of about 3 $\times
$10$^9$ molecules/cm$^3$ in the long straight sections, and higher
in the bends.
\section{Acknowledgments}
The authors acknowledge and thank Richard Gough and
David Robin for their enthusiastic encouragement,
and Swapan Chattopadhyay and Ying Wu
for early contributions to the storage ring work.
Work supported by the Director,
Office of Science; Office of Basic Energy Sciences, and
Office of High Energy and Nuclear Physics, U.S. Department of
Energy, under Contract No. DE-AC03-76SF00098.
\section{INTRODUCTION}
Improvements of the Kogut-Susskind quark action may allow extraction
of continuum physics from coarser lattices than would be required with
the simplest formulation. This is especially important for full QCD
simulations, which are much more time consuming than quenched
calculations. Improvement of the rotational symmetry and the
dispersion relation can be achieved by introducing the Naik term, a
coupling to third nearest neighbors. A more severe problem is flavor
symmetry breaking, which is large for the currently accessible lattice
spacings. It has been shown that smearing of the gauge links reduces
flavor symmetry breaking. This is because smearing reduces the
coupling to high transverse momentum gluons which cause transitions
among the corners of the Brillouin zone. Various smearings have been
proposed so far. The simplest, which has been studied by the MILC
collaboration~\cite{MILC_FATLINKS}, is the introduction of the 3-link
staple to the coupling of the nearest neighbors. A more extensive
smearing has been studied by Sinclair and Lagae~\cite{SL}, who
have shown that flavor symmetry breaking is further reduced. Finally,
extensive APE smearing has been studied in SU(2) quenched
spectroscopy~\cite{TD_AH_TK}, showing degeneracy within statistical
errors between the Goldstone pion and the local non-Goldstone pion
($\pi_2$).
In this paper, we study the effects of several types of smearing on the
flavor symmetry breaking, extending the results reported
in Ref.~\cite{OT}. Our goal is to find an action that achieves
small flavor symmetry breaking, yet is localized enough to
allow for a relatively cheap force computation when used in dynamical
simulations. As a measure of flavor symmetry breaking we use the mass
splittings of all the sixteen pions (in eight separate representations
of the lattice symmetry group)~\cite{GOLTERMAN_MESONS} in
hadron spectroscopy on a common set of stored lattices.
\section{ACTIONS TESTED}
As a guide in the construction of actions with improved flavor
symmetry breaking we require that the coupling of the
quarks to high transverse momentum gluons is minimized. Consider
an action which has links smeared by a 3-link staple $S^{(3)}$, a
5-link staple $S^{(5)}$ and a 7-link staple $S^{(7)}$:
\begin{eqnarray}
U_\mu(x)\!\!\!\!&\rightarrow&\!\!\!\! c_1U_\mu(x)+\sum_\nu \Big[ w_3S^{(3)}_{\mu\nu}(x)+\nonumber\\
\!\!\!\!&+&\!\!\!\!\sum_\rho \Big( w_5 S^{(5)}_{\mu\nu\rho}(x) +
\sum_\sigma w_7 S^{(7)}_{\mu\nu\rho\sigma}(x)\Big)\Big]
\label{smearing}
\end{eqnarray}
\begin{eqnarray}
\lefteqn{S^{(3)}_{\mu\nu}(x) = U_\nu(x)
U_\mu(x+\hat\nu)U^\dagger_\nu(x+\hat\mu)}\nonumber\\
\lefteqn{S^{(5)}_{\mu\nu\rho}(x) = U_\nu(x)
S^{(3)}_{\mu\rho}(x+\hat\nu)U^\dagger_\nu(x+\hat\mu)}\nonumber\\
\lefteqn{S^{(7)}_{\mu\nu\rho\sigma}(x) = U_\nu(x)
S^{(5)}_{\mu\rho\sigma}(x+\hat\nu)U^\dagger_\nu(x+\hat\mu)}
\label{staples}
\end{eqnarray}
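For concreteness, the nested staples of Eq.~(\ref{staples}) can be built recursively from a stored link field. The following numpy sketch uses random complex matrices as stand-ins for SU(3) links on an arbitrary $4^4$ lattice (both are assumptions for illustration only):
\begin{verbatim}
import numpy as np

Ls, Nc = 4, 3
rng = np.random.default_rng(1)
# U[mu] is the link field U_mu(x): a site-indexed array of Nc x Nc matrices
U = (rng.normal(size=(4, Ls, Ls, Ls, Ls, Nc, Nc))
     + 1j * rng.normal(size=(4, Ls, Ls, Ls, Ls, Nc, Nc)))

def shift(f, mu):                  # field evaluated at x + mu_hat (periodic)
    return np.roll(f, -1, axis=mu)

def dag(f):                        # Hermitian conjugate, site by site
    return f.conj().swapaxes(-1, -2)

def S3(mu, nu):                    # U_nu(x) U_mu(x+nu) U_nu(x+mu)^dag
    return U[nu] @ shift(U[mu], nu) @ dag(shift(U[nu], mu))

def S5(mu, nu, rho):               # U_nu(x) S3_{mu rho}(x+nu) U_nu(x+mu)^dag
    return U[nu] @ shift(S3(mu, rho), nu) @ dag(shift(U[nu], mu))

def S7(mu, nu, rho, sig):          # U_nu(x) S5_{mu rho sig}(x+nu) U_nu(x+mu)^dag
    return U[nu] @ shift(S5(mu, rho, sig), nu) @ dag(shift(U[nu], mu))
\end{verbatim}
The smeared link of Eq.~(\ref{smearing}) is then a weighted sum of $U_\mu(x)$ and these staples over the transverse directions.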
In the weak coupling limit the couplings $V_1,V_2$, and $V_3$ to the
gauge field with one, two or three
of the transverse momentum components $\pm\pi/a$
can be written as functions of the staple couplings $w_3,w_5,w_7$, and
the single link coupling $c_1$:
\begin{eqnarray}
V_1 &=& c_1 + 2 w_3 - 8 w_5 - 48 w_7 \nonumber\\
V_2 &=& c_1 - 2 w_3 - 8 w_5 + 48 w_7 \nonumber\\
V_3 &=& c_1 - 6 w_3 + 24 w_5 - 48 w_7
\label{vertex}
\end{eqnarray}
The overall normalization condition
\begin{equation}
c_1 + 6 w_3 + 24 w_5 + 48 w_7 = 1
\label{normalization}
\end{equation}
is used to ensure that the total coupling to the nearest neighbor in
the free field limit is one. For $c_1 = 2 w_3 = 8 w_5 = 48 w_7 = 1/8
$, all the couplings to gluons with any of the transverse momenta
equal to $\pm\pi/a$ are zero. This set of parameters defines our
``Fat7'' action. The ``Fat5'' action is constructed by requiring that
the magnitude of all the couplings $V$ is minimized. The ``Fat5''
couplings are $c_1 = 2 w_3 = 8 w_5 = 1/7$, $w_7=0$, which give
$|V_1|=|V_2|=|V_3| = 1/7 $.
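These parameter choices are straightforward to verify from Eqs.~(\ref{vertex}) and (\ref{normalization}); a short check in exact rational arithmetic:
\begin{verbatim}
from fractions import Fraction as F

def couplings(c1, w3, w5, w7):
    V1 = c1 + 2*w3 - 8*w5 - 48*w7
    V2 = c1 - 2*w3 - 8*w5 + 48*w7
    V3 = c1 - 6*w3 + 24*w5 - 48*w7
    norm = c1 + 6*w3 + 24*w5 + 48*w7
    return V1, V2, V3, norm

# "Fat7": c1 = 2*w3 = 8*w5 = 48*w7 = 1/8
print(couplings(F(1, 8), F(1, 16), F(1, 64), F(1, 384)))  # (0, 0, 0, 1)
# "Fat5": c1 = 2*w3 = 8*w5 = 1/7, w7 = 0
print(couplings(F(1, 7), F(1, 14), F(1, 56), 0))          # (1/7, -1/7, 1/7, 1)
\end{verbatim}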
We have also tested an action (``All5'') which contains all the
non-self-intersecting length-5 paths that connect nearest neighbors and
third nearest neighbors. In the free-field limit, such an action is
the same as the ``Fat5'' action with a Naik term. The paths connecting
nearest neighbors divide into three classes: the planar paths that
displace the fundamental link by 0 or 2 sites, with total weight $c_1$;
the planar paths that displace the fundamental link by 1 site, with
total weight $w_3$; and the non-planar paths, with total weight $w_5$.
If we use the ``Fat5'' parameters, appropriately scaled to accommodate
the Naik term, and distribute the weight equally among the members of
each class of paths, the couplings $V$ are minimized.
Together with the MILC collaboration, Anna Hasenfratz and Chet Nieter,
this work is now being extended to include ``APE smeared'' actions,
where the fattened link is projected back on to SU(3). Here we present
two preliminary results. ``Ape1'' has one level of APE smearing with
APE parameter $\alpha=0.75$~\cite{TD_AH_TK}, which was chosen to match
the MILC fat action. The MILC fat action and the ``Ape1'' action
differ only by the projection to SU(3). The
second variation of APE smeared action we tested is ``Ape4'', which
has four APE smearings with $\alpha=0.5$.
\section{SIMULATIONS AND RESULTS}
For our spectroscopy, we used lattices with dynamical
Kogut-Susskind quarks. These lattices were produced with the Symanzik
improved gauge action (same lattices as those in~\cite{OT}).
The dynamical fermion action used was the MILC
``fat Naik action'' with Dirac matrix $2m + \mathop{\not\!\! D}$, where
\begin{eqnarray}
\mathop{\not\!\! D}(x,y)&&\!\!\!\!\!\!\!\!=\sum_{\mu=-4,4} \eta_\mu(x)\,sign(\mu)\times\bigg[
\nonumber\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\bigg(c_1 U_\mu(x)+
w_3 \sum_{\nu\neq\mu}S^{(3)}_{\mu\nu}(x)\bigg)\delta_{y,x+\hat\mu}\nonumber\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+c_3 U_\mu(x)U_\mu(x+\hat\mu)U_\mu(x+2\hat\mu) \delta_{y,x+3\hat\mu}
\bigg].
\end{eqnarray}
Here, $c_1=(9/8)(1/4)$ is the coefficient of the conventional
single-link term, $c_3=-1/24$ is the coefficient of the third nearest
neighbor (Naik) term, and $w_3=(9/8)(1/8)$ is the coefficient of the
staple term.
Spectroscopy was done on $12^3\times 32$ lattices with
$\beta_{imp}=7.3$ and $16^3\times 48$ lattices with $\beta_{imp}=7.5$.
At $\beta_{imp}=7.3$ we used $m=0.02$ and $0.04$, while at
$\beta_{imp}=7.5$ we used $m=0.015$ and $0.030$. In order to have a
fair comparison of the actions tested, for each valence action we did
spectroscopy on the same set of lattices and interpolated the spectrum
of all the sixteen pions to the quark mass where $M_G/M_\rho=0.55$
($M_G$ is the Goldstone pion mass). Successive lattices were
separated by five molecular dynamics time units, and sample sizes
ranged from 48 to 60 lattices.
In Fig.~\ref{fig:beta73} and Fig.~\ref{fig:beta75} we present the
spectrum of all the sixteen pions interpolated to $M_G/M_\rho=0.55$,
for $\beta=7.3$ and $\beta=7.5$ respectively. We present data for the
actions we tested and for comparison we also plot the data for the
standard one link (OL) Kogut-Susskind action and the ``fat Naik
action''(OFN). The lowest level is always the Goldstone pion at
$0.55$. The highest is the 3-link pion $\gamma_0\gamma_5\otimes{\bf
1}$. The intermediate levels come in nearly degenerate pairs. In order to
distinguish them, they are plotted shifted left and right from the
center. The second level is the $\pi_2$,
$\gamma_0\gamma_5\otimes\gamma_0\gamma_5$ (left) and the 1-link
$\gamma_5\otimes\gamma_i\gamma_5$ (right). The third is the 2-link
$\gamma_5\otimes\gamma_i\gamma_0$ (left) and the 1-link
$\gamma_0\gamma_5\otimes\gamma_i\gamma_j$ (right). The fourth is the
3-link $\gamma_5\otimes\gamma_0$ (left) and the 2-link
$\gamma_0\gamma_5\otimes\gamma_i$ (right). The above degeneracies are
predicted in the contribution~\cite{Sharpe} by Lee and Sharpe at this
conference.
\begin{figure}[t]
\epsfxsize=7.5cm
\epsfbox{spect73.ps}
\vspace{-.2cm}
\caption{Interpolated masses for $\beta_{imp}=7.3$.}
\label{fig:beta73}
\end{figure}
\begin{figure}[t]
\epsfxsize=7.5cm
\epsfbox{spect75.ps}
\vspace{-.2cm}
\caption{Interpolated masses for $\beta_{imp}=7.5$.}
\label{fig:beta75}
\end{figure}
From our data it can be seen that, in general, smearing reduces
flavor symmetry breaking. Of the actions tested here, flavor symmetry
breaking is smallest with the ``Ape4'' action, which is also the one
with the most extensive smearing. Unfortunately, such an action is
very costly for dynamical simulations. ``Ape1'' seems better than the
OFN action. This leads us to conclude that the projection to SU(3),
which APE smearing does, contributes to the improvement of the flavor
symmetry breaking. The ``Fat5'' action gives some additional
improvement over the ``Ape1''. It would be interesting to check if a
projection of the ``Fat5'' link to SU(3) would give a further
improvement. The ``Fat7'', which has all the couplings to gluons with
transverse momenta $\pm\pi$, does not improve significantly over the
``Fat5'' action. Finally, ``All5'', which is the same as ``Fat5'' from
the weak coupling point of view, does improve significantly over
``Fat5''. However, when the cost of the force computation is taken
into account, the best of the above actions for full QCD calculations
may be the ``Fat5'' (with the Naik term added) or even the ``OFN''
action. The decrease in flavor symmetry breaking as $\beta_{imp}$
increases from $7.3$ to $7.5$ is consistent with the expected $a^2$
dependence of lattice artifacts with Kogut-Susskind quarks.
Clearly, our data show that one has to look at the spectrum of all
the sixteen pions before drawing any conclusions for the quality of
the action. In particular, for studies of QCD thermodynamics with the
strange quark~\cite{ADD_STRANGE}, one would like to have all the pions
light compared to the kaons.
To achieve this, we have to go to lattice spacings significantly
smaller~\cite{OT} than those suggested by looking just at the local
non-Goldstone pion.
\section{Introduction}
Information shared on social networks is ever increasing and users are often overwhelmed by the number of posts (e.g., tweets) they receive. Many of the incoming posts are of marginal or no interest to their recipients. Consequently, interesting posts may be ignored or overlooked by time-constrained users, who may also give up reading their timelines. Filters that estimate the interest of each incoming post can alleviate this problem, for example by allowing users to sort incoming posts by predicted interest (e.g., `top stories' vs.\ `most recent' in Facebook) or by mixing recent posts with predicted interesting ones (e.g., `in case you missed it' in Twitter).
There have been two main approaches to detect interesting posts in social networks: \emph{global} filters \cite{alonso_o_1,alonso_o_2,yang_m} and \emph{personal} filters \cite{waldner_w,vougioukas_m,chen_j}. Global filters try to predict how interesting a post is for the entire social network or at least a broad audience. A single global filter is typically trained on a large collection of posts and the reactions of all users to each post (e.g., total number of retweets per post). The trained global filter is then used to assign a single, user-independent interest score to each new post. By contrast, personal filters are typically trained on posts received by a particular user and the reactions of the particular user (e.g., whether or not the user retweeted each post). A separate filter is trained per user and is then employed to provide user-specific interest scores for each tweet or, generally, social post. Personal filters can, at least in principle, provide recommendations tailored to a particular user's own interests, which may not coincide with the interests of the majority of users that global filters are trained to predict. On the other hand, global filters are typically trained on much larger datasets compared to personal filters. Hence, global filters may work better in practice, especially with new users, for which personal filters may have very few training instances (the `cold start' problem).
Following Uysal and Croft \cite{uysal_i} and Zhang et al.\ \cite{zhang_q}, in this paper we investigate a hybrid approach that attempts to combine the strengths of both global and personal filters. As in global filters, we train a \emph{single} system on a large collection of tweets received by multiple users. Each tweet, however, is represented as a feature vector that includes \emph{user-specific features} (Fig.~\ref{fig:system}), for example indicating the extent to which the incoming tweet is similar to tweets previously posted or retweeted by the recipient, or how often the recipient has retweeted posts of the sender of the tweet. If the same tweet is received by two different users, it will be represented by two different feature vectors. This allows the system to take into account user preferences and produce different predictions per recipient, even for the same incoming tweet, as in personal filters, while still being able to generalize over different users (e.g., learn that users are in general more likely to retweet posts that are similar to their own posts). We train a single shared logistic regression model for all users, in order to predict if a tweet received by a particular user will be retweeted by that user or not. We examine the effect of several types of features that examine the content of each incoming tweet, the similarity of the incoming tweet to tweets previously posted or retweeted by the recipient or the sender, the network influence of the sender and recipient, the interaction between them (e.g., if they have mentioned each other in previous tweets), the novelty of the incoming tweet (e.g., its similarity to tweets recently seen by the recipient). On a dataset of approx.\ 130K tweets received by 122 journalists, our system obtains $F_1 \approx 0.9$ using only 10 features and approximately 5K training instances.
\begin{figure}
\includegraphics[scale=0.4]{system.png}
\caption{Architecture of our system.}
\label{fig:system}
\end{figure}
Using previous retweet (and non-retweet) actions as gold labels has the advantage that no extra human labeling is required to construct training and test data, as opposed to asking users to label their incoming tweets with interest scores. On the other hand, retweeting is only an approximate signal of interest, as users do not retweet all the posts they find interesting. Nevertheless, retweeting is usually an indication of great interest in a post and, hence, our system can be used to detect tweets that a particular user would find very interesting (interesting enough to retweet), which could then be ranked higher or mixed with recent tweets.
The main contributions of this paper are: (a) a lightweight prediction model, which attains high F1 score with a small number of features and training instances; (b) investigation of most candidate features mentioned in related literature and variants thereof, grouped into feature types for further research; (c) a large dataset of tweets and associated user information, which we plan to make publicly available in an encoded form.\footnote{Instructions to obtain the dataset will be made available at \url{http://nlp.cs.aueb.gr/}.}
Section~\ref{sec:systemDescription} below describes our system. Section~\ref{sec:experiments} presents the experiments we performed. Section~\ref{sec:related} discusses related work. Section~\ref{sec:conclusions} concludes and proposes future work. A summary of the work of this paper has also been published \cite{Vougioukas2017}.
\section{System description} \label{sec:systemDescription}
\subsection{System overview}
Our system predicts how likely it is that a particular user (the \emph{recipient} of Fig.~\ref{fig:system}) will retweet a particular incoming tweet. The system also has access to the history of the recipient (e.g., tweets the recipient has previously received or posted), the history of the sender of the tweet, as well as background information about the recipient and the sender (e.g., number of followers).\footnote{We use Twitter's API (\url{https://dev.twitter.com/rest/public}) to obtain this information.} By \emph{sender} we mean the user that caused the recipient to receive the tweet, either by authoring it directly (if the recipient follows the author) or by retweeting it (if the recipient does not follow the author). The tweet is represented as a feature vector, which includes features that depend on the particular recipient; hence, the same tweet will be represented by a different feature vector when the system tries to estimate if another recipient will retweet it or not. The feature vector is passed on to a (binary) logistic regression classifier that predicts if the recipient will retweet the incoming tweet or not. The classifier (one model for all recipients) is trained on tweets received by Twitter users and the users' reactions (whether they retweeted the incoming tweets or not).\footnote{We used Weka's implementation of logistic regression (\url{http://www.cs.waikato.ac.nz/ml/weka/}), with default hyper-parameter values. Modifying the defaults had no significant effect in preliminary experiments.}
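A minimal sketch of this setup, using scikit-learn's logistic regression instead of the Weka implementation we actually used; the random matrix below merely stands in for the 50 features of Section~\ref{subsection:datarep_feat}:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in: 1,000 (tweet, recipient) pairs with 50 features each.
# The same tweet shown to two recipients yields two different rows.
X = rng.random((1000, 50))
y = (X[:, 0] + X[:, 1] + 0.1*rng.standard_normal(1000) > 1.0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)  # one shared model
x_new = rng.random((1, 50))        # features of a new (tweet, recipient) pair
print("P(retweet) =", clf.predict_proba(x_new)[0, 1])
\end{verbatim}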
\subsection{Preprocessing of the tweet text}
Before further processing, the text of each tweet is normalized as follows to allow the classifier to generalize (e.g., over different URLs, different numbers, smileys that express the same sentiment).
\begin{enumerate}
\small
\item All URLs are replaced by the same pseudo-token (e.g., `\texttt{\_url\_}'), which denotes a generic URL.
\item All numbers are replaced by a pseudo-token (e.g., `\texttt{\_num\_}').
\item Each type of smiley is replaced by a different pseudo-token:
\begin{enumerate}
\small
\item Love/like smileys (e.g., `\texttt{<3}').
\item Positive sentiment smileys (e.g., `\texttt{:-)}').
\item Negative sentiment smileys (e.g., `\texttt{:-(}').
\item Neutral sentiment smileys (e.g., `\texttt{:-|}').
\end{enumerate}
\item All tokens are converted to lower case.
\end{enumerate}
These steps are based on the preprocessing used in GloVe \cite{pennington_j} to turn words into embeddings \cite{mikolov_t}. Hence, in a future extension of our system one could easily use GloVe embeddings.
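A minimal sketch of the normalization steps above (the regular expressions, especially the smiley patterns, are simplified illustrative approximations of the actual patterns used):
\begin{verbatim}
import re

SMILEYS = [
    (r"<3", " _love_ "),
    (r"[:;=]-?[)dp]", " _pos_ "),   # assumed, simplified patterns
    (r"[:;=]-?\(", " _neg_ "),
    (r"[:;=]-?\|", " _neu_ "),
]

def normalize(text):
    text = re.sub(r"https?://\S+", " _url_ ", text)       # step 1
    text = re.sub(r"\d+(?:[.,]\d+)*", " _num_ ", text)    # step 2
    for pat, tok in SMILEYS:                              # step 3
        text = re.sub(pat, tok, text, flags=re.IGNORECASE)
    return text.lower()                                   # step 4

print(normalize("Great read :-) http://t.co/abc 2017!"))
# -> "great read _pos_ _url_ _num_ !"   (modulo whitespace)
\end{verbatim}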
\subsection{Features used by the classifier} \label{subsection:datarep_feat}
The feature vector of each incoming tweet contains up to 50 features, each corresponding to a factor that we suspect may help predict if the tweet will be retweeted or not. The features were constructed by taking into account previous related work (Section~\ref{sec:related}), the information provided by Twitter's API, and our own experience as Twitter users. The 50 features are divided into 7 groups.
\begin{figure}
\includegraphics[scale=0.4]{feature_visual.png}
\caption{Groups of features used by our system and how they relate to the tweet itself, the sender, the recipient etc.}
\label{fig:featurespace}
\end{figure}
Group 1 (Fig.~\ref{fig:featurespace}, Table~\ref{tab:feat1}) contains features that examine the tweet itself (e.g., length, if it contains a URL or not, if it mentions a Twitter account). Longer tweets, or tweets that contain URLs of longer posts (e.g., news articles) or photographs may be more informative and, thus, more interesting. Tweets that mention other user accounts may be parts of dialogues, which may be uninteresting to recipients, unless they interact frequently with the sender (see also Group 4). Hashtags may indicate trending topics. Tweets that have already been retweeted or favoured by many users are more likely to be important. Exclamation marks indicate surprise or strong feelings.
Group 2 (Fig.~\ref{fig:featurespace}, Table~\ref{tab:feat2}) contains features that examine how similar the incoming tweet is to particular collections of tweets (e.g., all tweets previously posted by the sender). The similarity between the incoming tweet $t$ and a collection of tweets $C = \{c_1, \dots, c_n\}$ is computed as the average TF-IDF cosine similarity between $t$ and each $c_i$. The intuition in Group 2 is that recipients may prefer tweets that are similar or dissimilar (if they prefer surprising posts) to the posts of the particular sender, or their own posts, or the posts they usually see or retweet.
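A sketch of this computation (for illustration we use scikit-learn; in our system the IDF scores are estimated on the full collection of incoming tweets, not on the small history shown here):
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def avg_similarity(tweet, collection, vectorizer):
    """Average TF-IDF cosine similarity of `tweet` to each tweet in
    `collection`."""
    return cosine_similarity(vectorizer.transform([tweet]),
                             vectorizer.transform(collection)).mean()

history = ["breaking news on the election results",
           "our latest report on climate policy"]
vec = TfidfVectorizer().fit(history)   # IDF would be fit on the full corpus
print(avg_similarity("new climate policy report published", history, vec))
\end{verbatim}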
Group 3 (Fig.~\ref{fig:featurespace}, Table~\ref{tab:feat3}) contains features modeling the network influence, popularity, and authority of the sender and the recipient. These features include Twitter account statistics (number of followers, number of posts, days active for, list subscriptions), features that may indicate authority (verified accounts, URLs in the description fields of their profiles), as well as scores obtained from Klout, a service that estimates a user's social influence by taking into account their activity in various social networks.\footnote{See \url{http://klout.com/}. All the features are normalized to $[0, 1]$.}
Group 4 (Fig.~\ref{fig:featurespace}, Table~\ref{tab:feat4}) contains features that capture the interaction between the sender and the recipient (e.g., whether or not tweets of the sender mention the recipient). The intuition is that recipients are more likely to be interested in posts of senders they interact more closely with.
Group 5 (Fig.~\ref{fig:featurespace}, Table~\ref{tab:feat5}) contains features that attempt to estimate the timeliness of the incoming tweet. A tweet that is very similar to other recently received or retweeted tweets may be old news. The similarity scores of these features are again averaged TF-IDF cosine similarities.
Group 6 (Fig.~\ref{fig:featurespace}, Table~\ref{tab:feat6}) contains features related to the users the recipient follows (the user's {\em neighbours}). The neighbours presumably have common interests with the recipient. Hence, if the original author of the incoming tweet is a neighbour of the recipient or if the incoming tweet has been retweeted by many neighbours of the recipient, this may be an indication that the recipient will also find the incoming tweet interesting.
Group 7 (Fig.~\ref{fig:featurespace}, Table~\ref{tab:feat7}) complements the features of Group 1 by looking for particular keywords and parts of speech (nouns, verbs, articles) in the incoming tweet.\footnote{We use CMU ARK Twitter tagger \cite{gimpel_k} (\url{http://www.cs.cmu.edu/~ark/TweetNLP/}).} The features of Group 7 are based on the work of Tan et al. \cite{tan_c}, who found that the wording of a post significantly affects its propagation, compared to other posts that express the same information using different wordings. Tan et al.\ provide a list of 20 `good' keywords, believed to increase the propagation probability of a post, and 20 `bad' keywords.
\begin{table}
\caption{Features of Group 1 (the tweet itself).}
\label{tab:feat1}
\begin{tabular}{{p{0.17\linewidth}p{0.77\linewidth}}}
\hline
Feature ID & Feature Description\\\hline
FT1 & Tweet length in characters.\\
FT2 & Does the tweet contain a URL?\\
FT3 & Does it mention a Twitter account ($@$username)?\\
FT4 & Does it contain a hashtag?\\
FT5 & Global retweet count (times it has been retweeted). \\
FT6 & Global favourite count.\\
FT7 & Does the tweet contain an exclamation mark?\\
FT8 & Does it contain a photo?\\
FT9 & Number of Twitter accounts it mentions.\\\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Features of Group 2 (average TF-IDF cosine similarity of the tweet to other tweet collections).}
\label{tab:feat2}
\begin{tabular}{{p{0.16\linewidth}p{0.78\linewidth}}}
\hline
Feature ID & Feature Description\\\hline
FT10 & Similarity to tweets previously posted (authored or retweeted) by the sender.\\
FT11 & Similarity to tweets previously posted (authored or retweeted) by the recipient.\\
FT12 & Similarity to tweets previously seen by the recipient (excluding `easy' negative tweets and tweets from recently inactive neighbours -- see Section~\ref{sec:dataset}). \\
FT13 & Similarity to previous retweets of the recipient.\\\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Features of Group 3 (influence, popularity, authority of the sender and recipient).}
\label{tab:feat3}
\begin{tabular}{{p{0.16\linewidth}p{0.78\linewidth}}}
\hline
Feature ID & Feature Description\\\hline
FT14 & Number of users that follow the sender.\\
FT15 & Number of users the sender follows.\\
FT16 & Number of tweets the sender has posted (authored or retweeted).\\
FT17 & Number of curated lists the sender subscribes to.\\
FT18 & Is the sender a verified account?\\
FT19 & Days the sender's account has been active for.\\
FT20 & Does the sender have a URL in their description?\\
FT21 & The Klout score (influence) of the sender.\\
FT22 & Delta of FT21 from the previous 24 hours.\\
FT23 & Delta of FT21 from the previous 7 days.\\
FT24 & Delta of FT21 from the previous 30 days.\\
FT25 & Number of users that follow the recipient.\\
FT26 & Number of users the recipient follows.\\
FT27 & Number of tweets the recipient has posted.\\
FT28 & Number of curated lists the recipient subscribes to.\\
FT29 & Is the recipient a verified account?\\
FT30 & Days the recipient's account has been active for.\\
FT31 & Does the recipient have a URL in their description?\\
FT32 & The Klout score of the recipient.\\
FT33 & Delta of FT32 from the previous 24 hours.\\
FT34 & Delta of FT32 from the previous 7 days.\\
FT35 & Delta of FT32 from the previous 30 days.\\\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Features of Group 4 (sender-recipient interaction).}
\label{tab:feat4}
\begin{tabular}{{p{0.16\linewidth}p{0.78\linewidth}}}
\hline
Feature ID & Feature Description\\\hline
FT36 & Is the recipient mentioned ($@$username) in the incoming tweet?\\
FT37 & Has the sender ever mentioned the recipient?\\
FT38 & Has the recipient ever mentioned the sender?\\
FT39 & Has the sender ever retweeted the recipient?\\
FT40 & Has the recipient ever retweeted the sender?\\
FT41 & No.\ of times the recipient has retweeted the sender.\\\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Features of Group 5 (timeliness of incoming tweet).}
\label{tab:feat5}
\begin{tabular}{{p{0.16\linewidth}p{0.78\linewidth}}}
\hline
Feature ID & Feature Description\\\hline
FT42 & Similarity to tweets seen by the recipient during the previous week (excluding `easy' negative tweets and tweets from recently inactive neighbours). \\
FT43 & Similarity to tweets retweeted by the recipient during the previous week.\\\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Features of Group 6 (neighbours of the recipient).}
\label{tab:feat6}
\begin{tabular}{{p{0.16\linewidth}p{0.78\linewidth}}}
\hline
Feature ID & Feature description\\\hline
FT44 & Is the author of the incoming tweet a neighbour of the recipient? (The sender may be the author of the tweet or a neighbour that retweeted it. In the latter case, the original author may not be a neighbour.) \\
FT45 & Number of times the incoming tweet has been retweeted by the neighbours of the recipient.\\\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Features of Group 7 (wording of the tweet).}
\label{tab:feat7}
\begin{tabular}{{p{0.16\linewidth}p{0.78\linewidth}}}
\hline
Feature ID & Feature Description\\\hline
FT46 & Number of keywords in the incoming tweet explicitly asking to retweet/share (e.g., `RT', `spread', `share').\\
FT47 & Number of nouns and verbs in the incoming tweet. \\
FT48 & Number of definite articles in the incoming tweet. \\
FT49 & Number of indefinite articles in the incoming tweet.\\
FT50 & Number of `good' keywords minus number of `bad' keywords in the tweet, using the keywords of \cite{tan_c}. \\\hline
\end{tabular}
\end{table}
\section{Experiments} \label{sec:experiments}
\subsection{Dataset} \label{sec:dataset}
In our experiments, the recipients (Fig.~\ref{fig:system} and \ref{fig:featurespace}) were 122 journalists. We started with a list of 262 journalists, available from previous work \cite{zamani_k}, but we retained only journalists that write in English.\footnote{We used a flag in Twitter's API to detect the language.} We also discarded journalists for which we could not collect at least 500 retweets, ending up with 122 journalists. The dataset of our experiments consists of 122 subsets, one for each journalist. Each subset comprises the most recent retweets of the corresponding journalist that we could collect through Twitter's API. The number of retweets in each subset was at least 500 and at most 2,500.\footnote{We could not collect more, due to restrictions of Twitter's API.} In each subset, the journalist's retweets are treated as \emph{positive instances}.
Each subset also contains \emph{negative instances}, meaning incoming tweets that the journalist did not retweet. To obtain the negative instances for each journalist we crawled the timelines of the users the journalist follows (neighbours) and collected their most recent posts (tweets authored or retweeted by the neighbour) that were not included in the positive instances of the journalist. To make the dataset more challenging, we excluded \emph{`easy' negative instances}, meaning incoming tweets from neighbours that the journalist has never retweeted in the past, assuming that the journalist does not really care about posts from such neighbours. We also excluded negative instances from \emph{recently inactive neighbours} (neighbours without any posts in the last seven days).
Our dataset was collected in late September 2015. To avoid using very old tweets, we discarded instances that were posted before January 2014. Hence, the dataset covers a period of approximately 19 months and contains approximately 12 million instances in total, involving 63,800 users (senders or recipients). Since the collected negative instances were many more than the positive ones, we randomly downsampled the negative instances of each journalist to obtain an equal number of positive and negative instances in each subset. This left a total of 133,000 instances (66,500 positive, 66,500 negative) in the 122 subsets.\footnote{IDF scores were estimated on the 12 million instances.} To create training, development, and test sets, we first merged the 122 subsets and temporally ordered (by time posted) all the positive instances and, separately, all the negative instances. We removed all incoming duplicates per receiver (e.g., same tweet reaching the same receiver at different times via retweets of different senders the receiver follows), keeping only the earliest among duplicates.
We then formed 140 temporally ordered \emph{batches}. Batch 1 contains the earliest 475 positive and the earliest 475 negative of the 133,000 instances. Batch 2 contains the next 475 positive and the next 475 negative instances etc.\footnote{The incoming tweets of the 122 journalists are distributed almost uniformly across the batches.} The first 120 batches were used as the \emph{training set} (57,000 positive and 57,000 negative instances), the next 10 batches were used as the \emph{balanced development set} (4,750 positive and 4,750 negative instances), and the last 10 batches were used as the \emph{balanced test set} (4,750 positive and 4,750 negative instances). We also constructed alternative, \emph{unbalanced development and test sets} by randomly downsampling the positive (retweeted) instances in each batch of the balanced development and test sets, leaving 25 positive (5\%) and 475 negative instances (95\%) in each batch (250 positive and 4,750 negative instances in each unbalanced set).
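For concreteness, the batching scheme can be sketched in a few lines of Python (an illustration only; the instance tuples and names below are hypothetical stand-ins for our data):
\begin{verbatim}
# Sketch of the temporal batching scheme (hypothetical names).
# Instances are (timestamp, features, label) tuples, duplicates
# per receiver already removed, earliest first after sorting.
BATCH_SIZE = 475   # positives (and negatives) per batch
N_BATCHES = 140

def make_batches(positives, negatives):
    """Split temporally ordered instances into 140 balanced batches."""
    positives, negatives = sorted(positives), sorted(negatives)
    batches = []
    for b in range(N_BATCHES):
        lo, hi = b * BATCH_SIZE, (b + 1) * BATCH_SIZE
        batches.append(positives[lo:hi] + negatives[lo:hi])
    return batches

# toy usage with fake timestamps
pos = [(t, "feats", 1) for t in range(N_BATCHES * BATCH_SIZE)]
neg = [(t, "feats", 0) for t in range(N_BATCHES * BATCH_SIZE)]
batches = make_batches(pos, neg)
train, dev, test = batches[:120], batches[120:130], batches[130:]
\end{verbatim}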
We always train the logistic regression classifier of our system (Fig.~\ref{fig:system}) on the balanced training set. Using a balanced training set is common practice for discriminative supervised learning algorithms. Previous experiments \cite{vougioukas_m2} also indicated that training the logistic regression classifier on a balanced set leads to better performance on the development set, compared to using an unbalanced training set, even when the classifier is evaluated on an unbalanced development set with the same positive-to-negative ratio as the unbalanced training set. For a classifier trained on a balanced set, the balanced development and test sets are expected to be easier than their unbalanced counterparts, since all the balanced sets have the same priors; this is also confirmed by our experimental results. The balanced development and test sets, however, are unrealistic, because they assume that receivers retweet on average half of their incoming tweets. The unbalanced development and test sets are intended to evaluate our system in a more realistic scenario, where receivers retweet only 5\% of their incoming tweets.
To bypass privacy issues, the training, development, and test sets (balanced and unbalanced) of our experiments will be made publicly available in an encoded form, where words will be replaced by unique integer identifiers, as in previous spam filtering and legal text analytics datasets we have made available \cite{androutsopoulos_i,Chalkidis2017}. We also plan to provide pre-trained word embeddings (e.g., generated by word2vec \cite{mikolov_t} or GloVe \cite{pennington_j}) for each encoded word (integer identifier).
\subsection{Incremental training and evaluation} \label{sec:incremental}
To study the effect of the size of the training set, each experiment was repeated 120 times, each time training the logistic regression classifier on the first (earliest) $k$ batches of the training set ($1 \leq k \leq 120$), always using the same development or test set (10 batches each) to evaluate the performance of the classifier for each $k$ value. We used \emph{precision} ($P$), \emph{recall} ($R$), and \emph{F1 score} to evaluate the performance of the classifier, each defined in the usual way.
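As an illustration of this protocol, the sketch below uses scikit-learn's logistic regression on hypothetical feature arrays (the batch contents and dimensions are placeholders, not our actual data):
\begin{verbatim}
# Sketch of the incremental-training protocol (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X_batches = [rng.normal(size=(950, 10)) for _ in range(120)]
y_batches = [rng.integers(0, 2, size=950) for _ in range(120)]
X_dev = rng.normal(size=(9500, 10))
y_dev = rng.integers(0, 2, size=9500)

scores = []
for k in range(1, 121):                  # first k training batches
    X = np.vstack(X_batches[:k])
    y = np.concatenate(y_batches[:k])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    pred = clf.predict(X_dev)            # same dev set for every k
    scores.append((k, precision_score(y_dev, pred),
                   recall_score(y_dev, pred), f1_score(y_dev, pred)))
\end{verbatim}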
\subsection{Experiments on the development set} \label{sec:devExperiments}
To get a first view of the usefulness of the features of Section~\ref{subsection:datarep_feat}, we ranked them by decreasing Pearson correlation \cite{benesty_j} to the class label, using a 10-fold cross-validation on the training set (Section~\ref{sec:dataset}). The Pearson correlations of the top 10 features are shown in Table~\ref{tab:correl}. Interestingly, the seven feature groups of Section~\ref{subsection:datarep_feat} are not equally represented in the top 10 (Table~\ref{tab:correl}). Only Group 2 (content similarity), Group 3 (influence, authority, popularity, but mostly of the sender), Group 4 (sender-recipient interaction), and Group 6 (neighbours) have features among the top 10.
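The ranking itself is straightforward to reproduce; the Python sketch below (an illustration with synthetic data, not our journalist dataset) computes the per-feature Pearson correlations and sorts them:
\begin{verbatim}
# Sketch of the feature ranking by Pearson correlation.
import numpy as np

def rank_features(X, y, top_m=10):
    """Return (feature index, Pearson r) pairs, strongest first."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    order = np.argsort(-np.abs(r))
    return [(int(i), float(r[i])) for i in order[:top_m]]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))          # 50 features (FT1..FT50)
y = (X[:, 42] + 0.5 * rng.normal(size=1000) > 0).astype(float)
print(rank_features(X, y))               # feature 42 ranks first
\end{verbatim}
In the paper the correlations are additionally averaged over the 10 folds of the cross-validation.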
\begin{table}
\caption{Pearson correlation of the top 10 features to the class label (10-fold cross-validation on the training set).}
\label{tab:correl}
\small
\begin{tabular}{{p{0.11\linewidth}p{0.08\linewidth}p{0.64\linewidth}}}
\hline
Feature & Pearson & Feature Description\\\hline
FT43 & 0.60 & Similarity to tweets retweeted by the recipient during the previous week.\\
FT10 & 0.57 & Similarity to tweets previously posted (authored or retweeted) by the sender.\\
FT21 & 0.49 & The Klout score (influence) of the sender.\\
FT16 & 0.47 & Number of tweets the sender has posted.\\
FT13 & 0.44 & Similarity to tweets previously retweeted by the recipient.\\
FT45 & 0.42 & Number of times the tweet has been retweeted by the recipient's neighbours.\\
FT44 & 0.40 & Is the author of the incoming tweet a neighbour of the recipient?\\
FT40 & 0.40 & Recipient ever retweeted the sender?\\
FT11 & 0.40 & Similarity to tweets previously posted (authored or retweeted) by the recipient.\\
FT38 & 0.36 & Recipient ever mentioned the sender?\\\hline
\end{tabular}
\end{table}
We then evaluated the system with respect to its F1 score on the unbalanced development set, using an increasing number $k$ of training batches ($1 \leq k \leq 120$), with different numbers of top-$m$ features ($m \in \{1, 2, 10, 20, 35, 50\}$). The results of these experiments are shown in Fig.~\ref{fig:exp1}. A first observation is that the learning curves are steep for the first few training batches, but flatten out after approximately the first 12 batches (11,400 examples). This is a general trend for all of our experiments and suggests that a larger training set would not improve the system's performance.
\begin{figure}
\includegraphics[scale=0.35]{exp1.png}
\caption{F1 on the unbalanced development set, for different numbers of top features.}
\label{fig:exp1}
\end{figure}
A second observation is that the best results are obtained with the top 10 features (Fig.~\ref{fig:exp1}). Adding more features leads to increasingly worse results, possibly because the additional features add noise. Indeed, after the first 15-20 top features, the Pearson correlation of the features to the class label is quite low (\textless 0.13). The performance of a `lightweight' system with only the top two features (F1 $\approx$ 0.87) is comparable to that of the top 10 features (Fig.~\ref{fig:exp1}).
We investigated further the notable change in F1 when the second top feature is added to the top one (Fig.~\ref{fig:exp1}, curves Top-1 and Top-2). Figure~\ref{fig:exp2} shows the F1 score, again on the unbalanced development set, using only the top feature (FT10), only the second-top (FT43), or both. The second-top feature alone is not a good predictor, but the combination of the two features increases F1.
\begin{figure}
\includegraphics[scale=0.35]{exp2.png}
\caption{F1 on the unbalanced development set, using only the top feature (FT10), only the 2nd-top (FT43), or both.}
\label{fig:exp2}
\end{figure}
Figure~\ref{fig:exp3} sheds more light on the role of the top two features (FT10, FT43). It plots the positive and negative instances of a random subset (251 positive instances, 4,494 negatives) of the unbalanced development set. The straight line is the separator the logistic regression learned on the training set. In most cases, the line correctly separates the negative (stars) from the positive (crosses) instances, which agrees with the high F1 score in Figures~\ref{fig:exp1} and \ref{fig:exp2}.
\begin{figure}
\includegraphics[scale=0.35]{exp3.png}
\caption{Sample positive and negative instances from the unbalanced development set and the linear separator the logistic regression classifier learned on the training set.}
\label{fig:exp3}
\end{figure}
As one might expect, most negative instances (stars) have low similarity (small values on the horizontal axis of Fig.~\ref{fig:exp3}) to the tweets the recipient retweeted during the previous week (FT43). This suggests that recent retweets of the recipients are good indicators of their current interests. Perhaps more unexpectedly, most positive examples (crosses) have very \emph{low} similarities to the previous posts of the sender (FT10). Intuitively, recipients tend to prefer (or at least retweet) posts that are \emph{unusual} for the particular sender (posts that are surprisingly not about the usual topics of the sender, to the extent that TF-IDF cosine similarity captures topic similarity).
Figure~\ref{fig:exp3} also illustrates the effect of combining the two features. Negative instances tend to have small values on the horizontal axis (FT43), but a non-negligible subset of positive instances also have small FT43 values. Most of those positive instances, however, have near-zero values on the vertical axis (FT10), unlike most negative instances and, hence, the combination of the two features improves classification accuracy. However, a non-linear classifier might manage to separate better the instances near the origin, where an S-shaped separator seems to be needed.
\subsection{Experiments on the test set} \label{sec:testExperiments}
In a final set of experiments, we evaluated our system on the (previously unseen) test set (10 fresh batches), using both the balanced (50\% positives, 50\% negatives) and the unbalanced (5\% positives, 95\% negatives) versions of the test set (Section~\ref{sec:dataset}). We used the top 10 features in these experiments, which had led to the best results on the development set (Section~\ref{sec:devExperiments}). The training set was the same as in the previous experiments (balanced). Fig.~\ref{fig:exp4} shows the F1 scores on the two versions of the test set, along with the F1 scores on the batches of the training set the classifier has been trained on. The performance of a supervised classifier is typically better on the training data it has encountered, compared to its performance on unseen test data. Hence, the performance on the encountered training data is an upper bound on the performance on unseen test data. A large gap between the two is often due to overfitting the training data. The performance on the training data typically deteriorates as more training data are added, due to reduced overfitting.
\begin{figure}
\includegraphics[scale=0.35]{exp4.png}
\caption{F1 on the balanced and unbalanced test set vs.\ F1 on the (always balanced) training set, using the top 10 features.}
\label{fig:exp4}
\end{figure}
Figure~\ref{fig:exp4} shows that the system performs better on the unbalanced test set (F1 $\approx$ 0.92) than on the unbalanced development set (cf.\ Fig.~\ref{fig:exp1}). As expected, the system performs better on the balanced test set, which has the same positive-to-negative ratio (50\% positives) as the training set, and worse on the unbalanced test set (5\% positives). The gap between the performance on the training and balanced test data is small, indicating that the system does not significantly overfit the training data. The larger gap between the performance on the training and unbalanced test data is due to the change of ratio from the training to the test data, which makes the problem more difficult for the classifier. Again, both test curves flatten out after very few training batches ($\sim$5 for the unbalanced test set, $\sim$12 for the balanced), though the balanced test F1 score continues to improve slowly, whereas the unbalanced test F1 does not.
\section{Related Work} \label{sec:related}
\subsection{Global filters for social media}
Global filters aim to identify content which is interesting for a large audience. Yang et al.\ \cite{yang_m} used Latent Dirichlet Allocation (LDA) in a filter aiming to detect globally interesting tweets, as opposed to tweets that are only interesting to their direct recipients.
Hurlock and Wilson \cite{hurlock_j} investigated qualitative factors (e.g., reporting personal experience or not, providing specific information, timeliness, trusted author) that affect the perceived usefulness of the tweets returned by a search engine. Although they considered a different task (search) than the one we considered (predicting retweets) and their factors are not always easy to map to computable features (e.g., reporting personal experience, usefulness of a link), their work influenced our choice of features.
Duan et al.\ \cite{duan_y} used a learning-to-rank algorithm, experimenting with several types of features. They found that features related to the authority of senders (e.g., number of lists the author is included in) along with tweet length and presence of URL were particularly useful. These findings influenced our choice of features.
Alonso et al.\ \cite{alonso_o_1,alonso_o_2} considered several types of features and in their early work reported that a single feature (presence of URL) was enough to obtain 80\% accuracy. Their later work \cite{alonso_o_1}, however, showed that human annotators did not agree on which tweets were interesting (inter-annotator agreement was as low as for random choices), concluding that interest is a subjective, not global notion.
\subsection{Personal filters for social media}
In previous work \cite{vougioukas_m}, we developed personal filters for Twitter, using the incoming tweets of six recipients, annotated with interest scores by the recipients themselves. Each filter was trained and tested on incoming tweets of a particular recipient, using the same learning algorithm and features. Manual annotation turned out to be a bottleneck and we could not obtain more than 1,000 annotated incoming tweets per recipient. Thus, we concluded that training a separate filter per user is not realistic and does not address the cold start problem, where a filter must be provided to a new user (recipient), with no training data available for this user.
Waldner and Vassileva \cite{waldner_w} trained a different filter per Twitter user, using Naive Bayes. They classified incoming tweets in three classes (interesting, neutral, uninteresting) and studied user interface designs to emphasize `interesting' tweets in timelines.
\subsection{Hybrid personalized global filters}
Uysal and Croft \cite{uysal_i} consider two tasks: (a) predicting if an incoming tweet will be retweeted by a particular recipient or not and (b) ranking the potential recipients of a particular tweet so that recipients more likely to retweet it will be higher. We considered only the former task, but the same system could be used for the latter task too. The system of Uysal and Croft is hybrid, in the sense that it is global (a single filter for all users), but the feature vectors that represent the tweets include recipient-specific features, as in our own work. The features of Uysal and Croft are also similar to the ones we used. They consider the incoming tweet, the author, the recipient, their previous interaction etc. In fact, our feature set was largely based on that of Uysal and Croft, though we strived for engineering simplicity (e.g., we do not use personal language models), we included additional features (e.g., Klout scores, more similarity scores), and we studied the predictive power (Pearson correlation) of each individual feature, whereas Uysal and Croft assessed the predictive power of entire groups of features only.
Uysal and Croft found that features roughly corresponding to our Group 1 (the tweet itself) were the most useful, whereas in our experiments (Section~\ref{sec:devExperiments}) only Group 2 (content text similarity), Group 3 (influence, authority, popularity), Group 4 (sender-recipient interaction), and Group 6 (neighbours) had features in the top 10. This difference may be due to the different datasets and learning algorithms that we used. Uysal and Croft used a decision tree classifier, whereas we used logistic regression. Also, we used 122 journalists as recipients, whereas Uysal and Croft used 242 random (but reasonably active) Twitter users. On the other hand, the dataset of Uysal and Croft was smaller (24,200 instances in total) compared to ours (133,000 instances), Uysal and Croft did not examine the effect of the size of the training set, and the tweets of their dataset were not temporally ordered.
Hong et al.\ \cite{Hong2012} use types of features that are similar to the ones we used, but rely on Factorization Machines. We use a much simpler logistic regression classifier, still obtaining very promising results.
Zhang et al.\ \cite{zhang_q} also developed a hybrid personalized global filter (a single filter for all recipients, with recipient-sensitive feature vectors) to predict retweets. They used word embeddings to represent the words of the tweets and a convolutional neural network (CNN) to construct a single embedding for each tweet. The senders and recipients are also represented by (user) embeddings, and their embeddings influence the behaviour of a second version of the CNN that produces an alternative embedding of each tweet, in effect making the second CNN sensitive to the interests of the senders and recipients. The output tweet embeddings of the two versions of the CNN, concatenated with the embeddings of the recipient and sender and the similarity of the outputs of the two CNN versions, are then used as a feature vector by a logistic regression classifier layer. The work of Zhang et al.\ is an interesting attempt to avoid manual feature engineering. The embeddings that they use, however, in effect encode information only about the words of the tweet and the previous tweets of the sender and recipient. Our experiments showed that features that consider the influence, authority, and popularity of the sender, the previous interaction between the sender and the recipient, and the neighbours of the recipient are also useful. Their experiments were conducted on a collection of 37,515 incoming tweets from 1,000 random recipients.
\section{Conclusions and future work} \label{sec:conclusions}
We presented a personalized global filter that aims to identify incoming tweets a particular recipient would find interesting enough to retweet. The filter is global in the sense that it is common for all the recipients. It is also personalized in the sense that the incoming tweets are represented as feature vectors that include user-specific features. Thus, the system can produce different predictions per recipient, even for the same incoming tweet, as in personal filters, while still being able to generalize over different users. We experimented with features that examined the content of each tweet, its novelty and its similarity to tweets previously posted or retweeted by the recipient or sender. Furthermore, features describing the network influence and authority of the author and sender, their past interactions and neighbours were used. In experiments with a collection of approximately 130K tweets received by 122 journalists, our system achieved very high accuracy ($F_1 \approx 0.9$) using only 10 features and only 5K training instances.
Future work could incorporate the features we used (e.g., by turning them into embeddings) in convolutional or recursive neural networks, possibly building upon the work of Zhang et al.\ (\citeyear{zhang_q}). Benchmark datasets are also needed to compare methods proposed by different researchers. The (encoded) dataset of our experiments, which will be made available, is a step towards this direction, but the recipients of its tweets were all journalists.
\section{Introduction}
It is impossible to completely isolate a quantum system from its surroundings, which leads to information loss in the form of dissipation and decoherence. The theory of open quantum systems offers the necessary tools for describing and analyzing the interactions of a system with its surroundings \cite{Breuer}. In the theory of open quantum systems, various methods have been proposed to model the environment and its effects on the dynamics of the system of interest \cite{Breuer,Weiss}. If the coupling strength between the system and the environment is weak and the relaxation time of the system is longer than the correlation time of the environment, then there exists a one-way flow of information from the system to the environment. Such a quantum evolution is called Markovian \cite{Gorini} and it can be described by a master equation in Lindblad form \cite{Lindblad,Zhang1,Carmichael}. In a more realistic situation, the coupling strength between the system and the environment is strong and the relaxation time of the system is shorter than the correlation time of the environment. In this case there exists a back-flow of information from the environment to the system. This type of quantum evolution is called non-Markovian \cite{Wolf,Rivas,Rivas1,Haseli,Haseli1,Haseli2,Fanchini}. In the theory of open quantum systems, dynamical memory effects play a fundamental role in various physical phenomena such as quantum biology \cite{Lambert,Thorwart,Huelga}, quantum cryptography \cite{Vasile}, quantum metrology \cite{Chin} and quantum control \cite{Hwang}. \textbf{According to the type of dynamical memory effects, quantum evolutions can be divided into two categories: memoryless evolution (Markovian evolution) and quantum processes with memory (non-Markovian evolution).} \textbf{In the case of non-Markovian evolution, the future states of a system can depend on its past because of the back-flow of information. So it is natural to conclude that the back-flow of information from the environment to the system is directly related to the existence of memory.}
This view about dynamical memory effects and non-Markovianity as a typical part of the theory of open quantum systems is completely different from the concept of quantum channels with memory. In order to distinguish between these two, the term ``correlated quantum channel'' is used to describe quantum channels with memory. The memory of a quantum channel manifests itself in successive uses of the channel on a sequence of quantum systems \cite{Macchiavello,Caruso,Kretschmann}. \textbf{In this sense, memory channels and memoryless channels represent the cases in which the successive uses of the channel are correlated or independent, respectively}. In the case of a correlated quantum channel, memory is not due to the correlations created during the time evolution but due to the correlated action of the channel on a system consisting of a set of individual quantum systems. Addis et al.\ have studied the connection between these two insights about memory in Ref. \cite{Addis}.
The dynamics of quantum correlations under correlated quantum channels has been studied previously. In Ref. \cite{Ramzan}, the effects of correlated quantum channels have been investigated on the entanglement of $X$-type state of the Dirac fields in the non-inertial frame. In Ref. \cite{Guo}, the authors have shown that how the correlated channel affects the dynamics of quantum correlations. The behavior of memory-assisted entropic uncertainty relation under the effects of the correlated quantum channels has been investigated in Refs. \cite{Karpat,Guo1}.
We study the quantum speed limit (QSL) time for correlated and uncorrelated quantum channels. The QSL time is a bound on the minimal time which is needed for a quantum system to evolve from an initial state at time $\tau$ to a desired state at time $\tau+\tau_D$, where $\tau_D$ is the driving time. In Ref. \cite{Mandelstam}, Mandelstam and Tamm have provided a bound for closed quantum systems which is given by
\begin{equation}\label{MT1}
\tau \geq \tau_{QSL}=\frac{\pi \hbar}{2 \Delta E},
\end{equation}
where $\Delta E = \sqrt{\langle \hat{H}^{2} \rangle - \langle \hat{H} \rangle^{2}}$ is the standard deviation of the energy of the initial state and $\hat{H}$ is the time-independent Hamiltonian describing the dynamics of the quantum system. This bound is known as the MT bound. Margolus and Levitin have presented a bound for closed quantum systems based on the mean energy $E=\langle \hat{H} \rangle$ as \cite{Margolus}
\begin{equation}\label{ML1}
\tau \geq \tau_{QSL}=\frac{\pi \hbar}{2 E},
\end{equation}
which is called the ML bound. Combining the MT and ML bounds in Eqs. (\ref{MT1}) and (\ref{ML1}) provides a unified bound for the QSL time for the dynamics of closed quantum system as \cite{Giovannetti}
\begin{equation}
\tau \geq \tau_{QSL}=\max \lbrace \frac{\pi \hbar}{2 \Delta E} , \frac{\pi \hbar}{2 E} \rbrace.
\end{equation}
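As a quick numerical illustration (our own, not from the cited works), the Python sketch below evaluates both bounds, with $\hbar=1$, for a single qubit driven by $H=\frac{\omega}{2}\sigma_x$; following the usual convention for the ML bound, the mean energy is measured from the ground state:
\begin{verbatim}
# Sketch: MT and ML bounds for a driven qubit (hbar = 1).
import numpy as np

omega = 1.0
H = 0.5 * omega * np.array([[0, 1], [1, 0]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)       # initial state |0>

Eg = np.linalg.eigvalsh(H).min()             # ground-state energy
Emean = np.real(psi0.conj() @ H @ psi0)
E = Emean - Eg                               # mean energy above ground
E2 = np.real(psi0.conj() @ (H @ H) @ psi0)
dE = np.sqrt(E2 - Emean ** 2)                # energy standard deviation

tau_MT = np.pi / (2 * dE)                    # Mandelstam-Tamm
tau_ML = np.pi / (2 * E)                     # Margolus-Levitin
print(max(tau_MT, tau_ML))                   # pi/omega for this example
\end{verbatim}
Here both bounds equal $\pi/\omega$, which is exactly the time this Hamiltonian needs to rotate $\vert 0\rangle$ into the orthogonal state $\vert 1\rangle$.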
Recently, the QSL time has also been studied for the dynamics of open quantum systems, which are described by positive non-unitary maps. For open quantum systems the QSL time has been quantified based on quantum Fisher information \cite{Taddei,Escher}, the Bures angle \cite{Deffner1}, relative purity \cite{del,Zhang} and other suitable geometric approaches \cite{Xu,Mondal,Levitin,Xu1,Meng,Mirkin,Campaioli,Uzdin}.
In this work, we will show how correlations in the application of quantum channels can affect the QSL time. We provide results for some unital and non-unital correlated channels. \textbf{We will consider correlated dephasing colored noise as an example of a unital correlated quantum channel, and correlated amplitude damping and correlated squeezed generalized amplitude damping (SGAD) channels as examples of non-unital correlated quantum channels.}
\textbf{In this work, we investigate the effects of correlations in the quantum channel on the QSL time, so we are not restricted to a particular measure of the QSL time. We use the bound based on relative purity for the QSL time which was introduced in \cite{Zhang}}. \textbf{We choose this bound because it can be used for arbitrary initial pure and mixed states}.
This work is organized as follows. In Sec. \ref{Sec.2} we review the geometric approach based on relative purity for deriving the QSL bounds. In Sec. \ref{Sec3}, the QSL time for correlated and uncorrelated quantum channels is investigated. We will consider correlated pure dephasing colored noise as an example of a correlated unital channel, and correlated amplitude damping and squeezed generalized amplitude damping (SGAD) channels as examples of non-unital correlated quantum channels. In Sec. \ref{Sec4}, we summarize the results.
\section{The quantum speed limit time for open quantum system}\label{Sec.2}
\textbf{The state of an open quantum system at time $t$ is characterized by the density matrix $\rho_t$. The time evolution of an open quantum system is described by the time-dependent master equation
\begin{equation}
\dot{\rho}_{t}=L_{t}(\rho_{t}),
\end{equation}
where $L_{t}$ is the positive generator \cite{Breuer}. The goal is to find the minimum time for evolving from the state $ \rho_{\tau}$ at time $\tau$ to the desired state $\rho_{\tau + \tau_D}$ at time $\tau + \tau_D$. Here, $\tau_D$ is the driving time of the open quantum system. A QSL time based on relative purity has been introduced in Refs. \cite{del,Zhang}. Zhang et al. have shown that this QSL time is applicable for arbitrary initial mixed and pure states. The relative purity $f(\tau+\tau_D)$ between the initial state $\rho_{\tau}$ and the desired state $\rho_{\tau+\tau_D}$ is given by \cite{Audenaert}
\begin{equation}\label{relative purity}
f(\tau + \tau_D)=\frac{tr(\rho_{\tau}\rho_{\tau + \tau_D})}{tr(\rho_{\tau}^{2})}.
\end{equation}
The ML bound of QSL time for open quantum systems is given by (see Ref. \cite{Zhang} for details)
\begin{equation}\label{ML}
\tau \geq \frac{\vert f( \tau + \tau_D ) -1 \vert tr (\rho_{\tau}^{2})}{\overline{ \sum_{i=1}^{n} \Lambda_{i} \beta_{i}}},
\end{equation}
where $\Lambda_{i}$ and $\beta_{i}$ are the singular values of $L_{t}(\rho_{t})$ and $\rho_{\tau}$, respectively, and in the denominator of the bound $\overline{\square}=\frac{1}{\tau_{D}} \int_{\tau}^{\tau + \tau_{D}} \square \, dt$ denotes a time average. Following the same procedure, the MT bound of the QSL time for open quantum systems can be written as
\begin{equation}\label{MT}
\tau \geq \frac{\vert f( \tau + \tau_D ) -1 \vert tr (\rho_{\tau}^{2})}{\overline{ \sqrt{\sum_{i=1}^{n} \Lambda_{i}^{2}}}}.
\end{equation}
Combining Eqs. (\ref{ML}) and (\ref{MT}) leads to a unified bound for QSL time as
\begin{equation}\label{(QSL)T}
\tau_{QSL}=\max \lbrace \frac{1}{\overline{ \sum_{i=1}^{n} \Lambda_{i} \beta_{i}}}, \frac{1}{\overline{ \sqrt{\sum_{i=1}^{n} \Lambda_{i}^{2}}}} \rbrace \times \vert f( \tau + \tau_D ) -1 \vert tr (\rho_{\tau}^{2}).
\end{equation}
Zhang et al. have shown that the QSL time is associated with the quantum coherence of an arbitrary initial state $\rho_{\tau}$ \cite{Zhang}. They have also shown that for open quantum systems the ML bound of the QSL time in Eq. (\ref{ML}) is tighter than the MT bound. The QSL time is never longer than the driving time $\tau_D$. It is worth noting that the QSL time $\tau_{QSL}$ can be interpreted as the potential capacity for further acceleration of the evolution. If $\tau_{QSL}=\tau_{D}$ then the evolution is already proceeding at the highest speed, and it has no potential capacity for further acceleration. However, when $\tau_{QSL} < \tau_D$, the potential capacity for further acceleration will be greater. Another important point to note is that when the coupling strength between the system and the environment is weak, $\tau_{QSL}$ tends to the actual driving time $\tau_{D}$. On the contrary, in the strong coupling limit between the system and the environment, $\tau_{QSL}$ can fall below the actual driving time $\tau_D$ \cite{Deffner1}.}
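To make the bound concrete, the following Python sketch (our illustration; the inputs are hypothetical density-matrix snapshots) approximates $L_t(\rho_t)$ by finite differences and evaluates Eq. (\ref{(QSL)T}):
\begin{verbatim}
# Sketch: unified QSL bound from density-matrix snapshots.
import numpy as np

def qsl_time(rhos, dt, i0, i1):
    """rhos has shape (T, d, d) on a uniform time grid; the driving
    window is [i0*dt, i1*dt], so tau = i0*dt, tau_D = (i1-i0)*dt."""
    rho_tau, rho_end = rhos[i0], rhos[i1]
    drho = np.gradient(rhos, dt, axis=0)   # approximates L_t(rho_t)
    beta = np.linalg.svd(rho_tau, compute_uv=False)
    ml = mt = 0.0
    for k in range(i0, i1 + 1):
        s = np.linalg.svd(drho[k], compute_uv=False)
        ml += (s * beta).sum()
        mt += np.sqrt((s ** 2).sum())
    ml /= (i1 - i0 + 1)                    # time averages (overline)
    mt /= (i1 - i0 + 1)
    # |f(tau + tau_D) - 1| * tr(rho_tau^2)
    num = abs(np.trace(rho_tau @ rho_end).real
              - np.trace(rho_tau @ rho_tau).real)
    return max(num / ml, num / mt)

# toy usage: pure exponential dephasing of one qubit
ts = np.linspace(0.0, 2.0, 201)
rhos = np.array([[[0.5, 0.5 * np.exp(-t)],
                  [0.5 * np.exp(-t), 0.5]] for t in ts])
print(qsl_time(rhos, ts[1] - ts[0], i0=100, i1=200))
\end{verbatim}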
\section{Correlated quantum channels}\label{Sec3}
We provide a brief review of quantum channels with correlated noise \cite{Macchiavello,Caruso,Kretschmann,Addis,Ramzan,Guo,Yeo,Awasthi}. Quantum channels are divided into two categories: memory channels and memoryless channels. If the correlation time of the environment is shorter than the time between successive applications of the channel, then there is no correlation between consecutive uses of the channel, i.e. a quantum channel $\varepsilon$ for $N$ consecutive uses obeys $\varepsilon_{N}=\varepsilon^{\otimes N}$. These kinds of channels are known as channels without memory (uncorrelated channels). However, in real physical quantum noise, it is logical to have correlations between consecutive applications of the channel. In this case, the correlation time of the environment is longer than the time between the successive applications of the channel, i.e. a quantum channel $\varepsilon$ for $N$ successive uses obeys $\varepsilon_{N} \neq \varepsilon^{\otimes N}$. These kinds of channels are known as memory channels (correlated channels). For correlated channels, the channel acts dependently on each input. Here we consider $N$ consecutive uses of the quantum channel. A quantum channel $\varepsilon$ can be represented as a completely positive, trace-preserving map from an input state $\rho$ to an output $\varepsilon(\rho)$ in the Kraus form
\begin{equation}
\varepsilon(\rho)=\sum_{i_{1}...i_{N}}E_{i_{1}...i_{N}}\rho E_{i_{1}...i_{N}}^{\dag},
\end{equation}
where $E_{i_{1}...i_{N}}$'s are Kraus operators which are defined as
\begin{equation}
E_{i_{1}...i_{N}}=\sqrt{P_{i_{1}...i_{N}}}A_{i_{1}} \otimes ... \otimes A_{i_{N}} , \quad \sum_{i_{1},\ldots,i_{N}}P_{i_{1}...i_{N}}=1.
\end{equation}
Here $P_{i_{1}...i_{N}}$ is the probability that a given sequence of operations is applied to the $N$ input qubits transmitted through the channel. In general, the Kraus operators for two consecutive uses of a two-qubit quantum channel can be represented as
\begin{equation}
E_{ij}=\sqrt{P_{ij}} A_{i}\otimes A_j.
\end{equation}
For an uncorrelated channel we have $P_{ij} = P_iP_j$ and the Kraus
operators are independent of each other. For a correlated channel, by Bayes' rule, we have $P_{ij}=P_iP_{j \vert i}$, where $P_{j \vert i}$ is the conditional probability. Thus, for two consecutive uses of a two-qubit quantum channel with partial correlation the Kraus operators can be represented as
\begin{equation}
E_{ij}=\sqrt{P_{i}[(1-\mu)P_{j}+\mu \delta_{ij}]}\, A_{i}\otimes A_{j},
\end{equation}
where $\mu \in [0,1]$ defines the correlation of the quantum channel.
According to the Kraus operator formalism the final state is given by
\begin{eqnarray}\label{dynamicsfinal}
\varepsilon(\rho)&=&(1-\mu)\sum_{i,j} E_{ij}\rho E_{ij}^{\dag}+ \mu \sum_{k}E_{kk}\rho E_{kk}^{\dag} \nonumber \\
&=&(1-\mu)\varepsilon_{un}(\rho)+\mu \varepsilon_{co}(\rho),
\end{eqnarray}
where $\varepsilon_{un}$ represents the uncorrelated channel and $\varepsilon_{co}$ stands for the correlated channel. In the case $\mu=0$, there is no correlation between two consecutive uses of the channel, and when $\mu=1$, the channel is fully correlated. In other words, $\mu=0$ represents the channel without memory and $\mu=1$ implies the channel with memory. Note that, in all parts of this work, we will consider the following initial state
\begin{equation}
\rho_0=r \vert \psi \rangle\langle \psi \vert + \frac{1-r}{4}\mathrm{I},
\end{equation}
where $\vert \psi \rangle = \sqrt{1-\alpha^{2}}\vert 01 \rangle + \alpha \vert 10 \rangle$, $0 \leq \alpha \leq 1$ and $r$ represents the purity of the initial state.
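A minimal Python sketch of this construction is given below (our illustration; it assumes a Pauli-type channel, where each single-qubit Kraus operator $A_i$ is unitary and applied with probability $P_i$, as in the dephasing channel of the following subsection):
\begin{verbatim}
# Sketch: two correlated uses of a Pauli-type single-qubit channel.
import numpy as np

def correlated_channel(A, P, rho, mu):
    """A: unitary Kraus operators, P: their probabilities, mu in [0,1]."""
    out = np.zeros_like(rho)
    for i, Ai in enumerate(A):
        for j, Aj in enumerate(A):
            p = P[i] * ((1 - mu) * P[j] + mu * (i == j))
            E = np.sqrt(p) * np.kron(Ai, Aj)
            out = out + E @ rho @ E.conj().T
    return out

# initial state rho_0 above with alpha = 1/sqrt(2), r = 1/2
alpha, r = 1 / np.sqrt(2), 0.5
psi = np.zeros(4, dtype=complex)
psi[1], psi[2] = np.sqrt(1 - alpha ** 2), alpha   # |01>, |10>
rho0 = r * np.outer(psi, psi.conj()) + (1 - r) / 4 * np.eye(4)

# e.g. dephasing: A = {sigma_0, sigma_3}, P = {1 - z_t, z_t}
A = [np.eye(2, dtype=complex), np.diag([1.0, -1.0]).astype(complex)]
rho_out = correlated_channel(A, [0.7, 0.3], rho0, mu=0.5)
print(np.trace(rho_out).real)                     # 1.0, trace preserved
\end{verbatim}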
\subsection{Unital correlated channel }
The completely positive trace preserving channel $\varepsilon$ is unital if it maps the identity operator $\sigma_{0}=\mathcal{I}$ to itself in the same space, i.e. $\varepsilon(\sigma_{0})=\sigma_{0}$. For single-qubit systems, a unital channel can be represented in terms of a convex combination of the Pauli operators \cite{Nielsen,Imre,King}. Geometrically, unital channels map the center of the Bloch sphere to itself. Here we consider dephasing colored noise, in the category of Pauli channels, as an example of a unital channel \cite{Daffer}. We will study the QSL time for correlated dephasing colored noise. \\
\textbf{Pure dephasing colored noise}: Let us consider the interaction between a single-qubit system and an environment which has the property of random telegraph signal noise \cite{Daffer}. The dynamics of a single-qubit is described by the time-dependent Hamiltonian
\begin{equation}
H(t)=\sum_{k=1}^{3} \Gamma_k(t) \sigma_k,
\end{equation}
where $\sigma_k$'s are the Pauli operators in the $x$, $y$ and $z$ directions, and $\Gamma_k(t)$'s are random variables
which follow the statistics of a random telegraph signal. $\Gamma_{k}(t)$ depends on the random variable $n_{k}(t)$ as $\Gamma_{k} (t)=\alpha_k n_k(t)$, where $n_{k}(t)$ has a Poisson distribution with an average value equal to $t/2\tau_k$ and the $\alpha_k$'s are coin-flip random variables that randomly take the values $\pm \alpha_k$. We have a dephasing model with colored noise if $\alpha_1=\alpha_2= 0$ and $\alpha_3=\alpha$. In this case, the dynamics of the single-qubit system can be described by the following Kraus operators
\begin{equation}
E_{0} = \sqrt{P_0}\sigma_{0} , \quad E_{3} = \sqrt{P_3}\sigma_{3}, \\
\end{equation}
with $P_0=1-z_t$ and $P_3=z_t$, where $z_t=\frac{1-\Lambda(t)}{2}$ and $\Lambda(t)=e^{-t/2\nu}[\cos(\kappa t/2\nu)+\sin(\kappa t/2\nu)/\kappa]$, with $\kappa=\sqrt{(4 \nu )^{2}-1}$ (we write $\kappa$ here to avoid confusion with the correlation parameter $\mu$). The value of $\nu$ determines whether the channel is Markovian or non-Markovian; based on the results presented in Ref. \cite{Haseli1}, the quantum evolution is non-Markovian if $\nu \geq 1/4$.
\begin{figure}[!]
\centerline{\includegraphics[scale=0.7]{Fig1.eps}}
\caption{QSL time for the correlated pure dephasing colored noise channel as a function of the driving time $\tau_D$, when the initial state parameters are $r=1/2$ and $\alpha=1/\sqrt{2}$ and $\tau=1$. (a) The dynamics is Markovian ($\nu=0.1$). (b) The dynamics is non-Markovian ($\nu=1$). The inset represents the QSL time as a function of the initial time $\tau$ when $\tau_D=1$. }\label{Fig1}
\end{figure}
When two qubits are transmitted through the colored pure dephasing channel, the channel with uncorrelated noise can be defined by the following Kraus operators
\begin{equation}
E_{ij}=\sqrt{P_i P_j}\sigma_{i} \otimes \sigma_{j}, \quad i,j \in \lbrace 0,3 \rbrace.
\end{equation}
In the presence of correlation between two successive uses of the colored pure dephasing channel on a two-qubit system, the Kraus operators $E_{kk}$ are given by
\begin{equation}
E_{kk}=\sqrt{P_{k}}\sigma_{k}\otimes\sigma_{k}, \quad k \in \lbrace 0,3 \rbrace.
\end{equation}
\textbf{From Eq. (\ref{dynamicsfinal}), the elements of the time-dependent density matrix of a two-qubit system under correlated dephasing colored noise can be written as
\begin{eqnarray}
\rho^{t}_{11}&=&\rho^{t}_{44}=\frac{1-r}{4}, \nonumber \\
\rho^{t}_{22}&=&\frac{1}{4} \left(1+\left(3-4 \alpha ^2\right) r\right), \nonumber \\
\rho^{t}_{33}&=&\frac{1}{4} \left(1-\left(1-4 \alpha ^2\right) r\right), \nonumber \\
\rho^{t}_{23}&=&\rho^{t\star}_{32}=\alpha \sqrt{1-\alpha ^2} r \left(\mu +(1-\mu ) (1-z_t)^2\right).
\end{eqnarray}
So, the QSL time of Eq. (\ref{(QSL)T}) is derived as
\begin{equation}
\tau_{QSL}=\frac{2 \alpha ^2\vert \left(1-\alpha ^2\right) r^2 \left(\mu +( 1-\mu) (z_{\tau} -1)^2\right) \left(\left(z_{\tau+\tau_D} -1\right){}^2-(z_{\tau}-1)^2\right)\vert}{\frac{2 \sqrt{2} \alpha \sqrt{1-\alpha ^2} r}{\tau_D} \int_{\tau}^{\tau+\tau_D}(1-z_t) \dot{z}_t dt}
\end{equation}}
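This expression can be checked numerically; the sketch below (our illustration) builds the two-qubit state above on a time grid and reuses the hypothetical \texttt{qsl\_time} helper from the sketch in Sec. \ref{Sec.2}:
\begin{verbatim}
# Sketch: QSL time of the correlated dephasing channel vs. mu.
import numpy as np

def z(t, nu):
    """Memory kernel z_t; complex kappa covers nu < 1/4 and nu > 1/4."""
    kappa = np.sqrt(complex((4 * nu) ** 2 - 1))
    lam = np.exp(-t / (2 * nu)) * (np.cos(kappa * t / (2 * nu))
          + np.sin(kappa * t / (2 * nu)) / kappa)
    return (1 - lam.real) / 2

def rho_corr_dephasing(zt, alpha, r, mu):
    """Two-qubit state with the matrix elements given above."""
    rho = np.diag([(1 - r) / 4,
                   (1 + (3 - 4 * alpha ** 2) * r) / 4,
                   (1 - (1 - 4 * alpha ** 2) * r) / 4,
                   (1 - r) / 4]).astype(complex)
    c = alpha * np.sqrt(1 - alpha ** 2) * r \
        * (mu + (1 - mu) * (1 - zt) ** 2)
    rho[1, 2] = rho[2, 1] = c
    return rho

nu, alpha, r = 1.0, 1 / np.sqrt(2), 0.5     # non-Markovian regime
ts = np.linspace(1.0, 2.0, 401)             # tau = 1, tau_D = 1
for mu in (0.0, 0.5, 1.0):
    rhos = np.array([rho_corr_dephasing(z(t, nu), alpha, r, mu)
                     for t in ts])
    print(mu, qsl_time(rhos, ts[1] - ts[0], 0, len(ts) - 1))
\end{verbatim}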
\textbf{In Fig. \ref{Fig1}, the QSL time is plotted as a function of the driving time $\tau_D$ for the correlated colored pure dephasing channel for different values of the correlation parameter $\mu$ when $\tau=1$. The insets represent the QSL time in terms of the initial time $\tau$ for different values of the correlation parameter $\mu$ when $\tau_D=1$. In Fig. \ref{Fig1}(a) the environmental parameter is chosen such that the evolution is Markovian ($\nu=0.1$). As can be seen from Fig. \ref{Fig1}(a), the QSL time is increased by increasing the correlation parameter $\mu$. In Fig. \ref{Fig1}(b) the evolution is non-Markovian ($\nu=1$). As can be seen from Fig. \ref{Fig1}(b), the QSL time is also increased by increasing the correlation parameter. From Figs. \ref{Fig1}(a) and \ref{Fig1}(b) one can find that for both Markovian and non-Markovian evolution the QSL time for the correlated channel ($\mu=1$) is greater than for the uncorrelated channel ($\mu=0$). In other words, in the presence of correlation between two successive uses of the colored pure dephasing channel on a two-qubit system the quantum evolution will be slower than in the case in which the correlation does not exist. }
\subsection{Non-unital correlated channel}
In this section we will study the QSL time for non-unital correlated channels. Here, we consider correlated amplitude damping and correlated squeezed generalized amplitude damping channels as examples of non-unital correlated channels.\\
\textbf{Correlated amplitude damping channel:}\\
Let us consider a two-level quantum system that interacts with a zero temperature environment described by a collection of bosonic oscillators. In this model the corresponding interaction Hamiltonian is given by
\begin{equation}
H=\omega_0 \sigma_+ \sigma_- + \sum_{k}\omega_{k}a_{k}^{\dagger}a_k +(\sigma_+ B + \sigma_- B^{\dagger}),
\end{equation}
where $\sigma_\pm$ are the raising and lowering operators of the two-level quantum system having the transition frequency $\omega_0$ and $B=\sum_{k}g_{k}a_{k}$. $a_{k}$ and $a^{\dagger}_{k}$ are the annihilation and creation operators of the environment with
the frequencies $\omega_k$, respectively. Let us assume that the environment has a Lorentzian spectral density of the form $J(\omega)=\frac{\gamma_{0}\lambda^{2}}{2\pi\left[ (\omega_{0}-\omega)^{2}+\lambda^{2} \right]}$, where the spectral width $\lambda$ is connected to the correlation time of the environment $\tau_{B}$ by $\tau_B \sim 1/\lambda$, and $\gamma_{0}$ is related to the relaxation time of the system $\tau_{R}$ by $\tau_{R}\sim 1/\gamma_0$. The dynamics of the two-level quantum system with this spectral density can be described by a master equation of the form
\begin{equation}
\dot{\rho}_{t}=L_t\rho_t=\gamma_{t}\left(\sigma_- \rho_t \sigma_+ - \frac{1}{2} \left\lbrace \sigma_+\sigma_-,\rho_{t} \right\rbrace \right) ,
\end{equation}
where time-dependent decay rate is given by
\begin{equation}
\gamma_t=\frac{2 \gamma_{0} \lambda \sinh (dt/2)}{d \cosh(dt/2)+\lambda \sinh(dt/2)}, \quad d=\sqrt{\lambda^2-2\gamma_0\lambda}.
\end{equation}
The dynamics of such a two-level quantum system can be expressed by the following Kraus operators as
\begin{equation}
A_{0}=\left(
\begin{array}{cc}
\sqrt{1-p_t} & 0 \\
0 & 1 \\
\end{array}
\right), \quad A_1=\left(
\begin{array}{cc}
0 & 0 \\
\sqrt{p_t} & 0 \\
\end{array}
\right),
\end{equation}
where the parameter $p_t$ is given by
\begin{equation}
p_t=1-e^{- \lambda t}\left[ \cosh(\frac{d t }{2})+\frac{\lambda}{d}\sinh(\frac{d t }{2}) \right]^{2}.
\end{equation}
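The decay functions are easy to evaluate numerically; the sketch below (our illustration, with hypothetical parameter values) uses complex arithmetic so that the same code covers the Markovian regime ($\lambda > 2\gamma_0$, $d$ real) and the non-Markovian regime ($\lambda < 2\gamma_0$, $d$ imaginary):
\begin{verbatim}
# Sketch: gamma_t and p_t of the amplitude damping model.
import numpy as np

def d_param(lam, g0):
    return np.sqrt(complex(lam * (lam - 2 * g0)))

def gamma_t(t, lam, g0):
    d = d_param(lam, g0)
    num = 2 * g0 * lam * np.sinh(d * t / 2)
    den = d * np.cosh(d * t / 2) + lam * np.sinh(d * t / 2)
    return (num / den).real

def p_t(t, lam, g0):
    d = d_param(lam, g0)
    g = np.cosh(d * t / 2) + (lam / d) * np.sinh(d * t / 2)
    return 1 - (np.exp(-lam * t) * g ** 2).real

ts = np.linspace(0.1, 5.0, 5)
print([round(p_t(t, 3.0, 1.0), 4) for t in ts])   # Markovian
print([round(p_t(t, 0.2, 1.0), 4) for t in ts])   # non-Markovian
\end{verbatim}
(The code avoids $\lambda = 2\gamma_0$ exactly, where $d=0$ and the limiting forms of the expressions must be used instead.)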
So, the quantum amplitude damping channel with uncorrelated noise can be defined as
\begin{equation}
E_{ij}=A_{i} \otimes A_{j}, \quad (i,j=0,1).
\end{equation}
The Kraus operators $E_{kk}$ for the correlated part of the amplitude damping channel acting on a two-qubit system can be represented as
\begin{equation}
A_{00}=\left(
\begin{array}{cccc}
\sqrt{1-p_t} & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{array}
\right), \quad A_{11}=\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\sqrt{p_{t}} & 0 & 0 & 0 \\
\end{array}
\right)
\end{equation}
From Eq. (\ref{dynamicsfinal}), the evolving density matrix elements of a two-qubit system under the correlated amplitude damping channel can be written as
\begin{eqnarray}
\rho^{t}_{11}&=&\frac{1}{4} (1-r) (1-p_t) (1-(1-\mu ) p_t), \nonumber \\
\rho^{t}_{22}&=&\frac{1}{4} \left(-4 \left(1-\alpha ^2\right) (1-\mu ) r p_t-(1-\mu ) (1-r) p_t^2+\left(3-4 \alpha ^2\right) r+1\right), \nonumber \\
\rho^{t}_{33}&=&\frac{1}{4} \left(-4 \alpha ^2 (1-\mu ) r p_t-(1-\mu ) (1-r) p_t^2-\left(1-4 \alpha ^2\right) r+1\right), \nonumber \\
\rho^{t}_{44}&=&\frac{1}{4} ((2-3 \mu ) r p_t+(1-\mu ) (1-r) p_t^2-\mu p_t+2 p_t-r+1), \nonumber \\
\rho^{t}_{23}&=&\rho^{t\star}_{32}=\alpha \sqrt{1-\alpha ^2} r (1-(1-\mu ) p_t)
\end{eqnarray}
One can obtain the singular values of $\dot{\rho}_t$ as
\begin{eqnarray}
\Lambda_1&=&\frac{1}{2} (\mu -1) (r-1) p_t \dot{p}_t, \nonumber \\
\Lambda_2 &=& \frac{1}{2} (\mu -1) (r p_t-p_t-2 r) \dot{p}_t, \nonumber \\
\Lambda_3&=&\frac{1}{4} (-\mu +2 \mu r p_t-2 r p_t-2 \mu p_t+2 p_t-3 \mu r+2 r+2)\dot{p}_t, \nonumber \\
\Lambda_4&=& \frac{1}{4} (\mu +2 \mu r p_t-2 r p_t-2 \mu p_t+2 p_t-\mu r+2 r-2)\dot{p}_t.
\end{eqnarray}
Now, from Eq. (\ref{(QSL)T}), one can obtain the QSL time for the correlated amplitude damping quantum channel.
\begin{figure}[!]
\centerline{\includegraphics[scale=0.7]{Fig2.eps}}
\caption{QSL time for the correlated amplitude damping channel as a function of the driving time $\tau_D$, when the initial state parameters are $r=1/2$ and $\alpha=1/\sqrt{2}$ and $\tau=1$. (a) The dynamics is Markovian ($\lambda/\gamma_{0}=2$). (b) The dynamics is non-Markovian ($\lambda/\gamma_0=0.2$). The inset represents the QSL time as a function of the initial time $\tau$ when $\tau_D=1$. }\label{Fig2}
\end{figure}
\textbf{In Fig. \ref{Fig2}, the QSL time is plotted as a function of the driving time $\tau_D$ for the correlated amplitude damping channel with different values of the correlation parameter $\mu$ when $\tau=1$. The inset shows the QSL time in terms of the initial time $\tau$ for different values of the correlation parameter $\mu$ when $\tau_D=1$. In Fig. \ref{Fig2}(a) we choose $\lambda / \gamma_0 =2$ and the evolution is Markovian. As can be seen from Fig. \ref{Fig2}(a), the QSL time is decreased by increasing the correlation parameter $\mu$. In Fig. \ref{Fig2}(b) we have $\lambda /\gamma_0 =0.2$ and the evolution is non-Markovian. As can be seen from Fig. \ref{Fig2}(b), the QSL time is also decreased by increasing the correlation parameter. From Figs. \ref{Fig2}(a) and \ref{Fig2}(b) one can find that for both Markovian and non-Markovian evolution the QSL time for the correlated channel ($\mu=1$) is smaller than for the uncorrelated channel ($\mu=0$). In other words, in the presence of correlation between two successive uses of the amplitude damping channel on a two-qubit system the quantum evolution will be faster than in the case in which the correlation does not exist. }\\
\textbf{Correlated Squeezed Generalized Amplitude Damping Channels}: \\
An amplitude damping channel represents a physical process such as the energy dissipation of a two-level quantum system due to the spontaneous emission of a photon into the vacuum at zero temperature \cite{Nielsen}. The generalized amplitude damping (GAD) channel describes the relaxation of a quantum system when the surrounding environment is initially at finite temperature, i.e., when the environment starts from a mixed state \cite{Fujiwara}. The generalized amplitude damping channel is extended to the squeezed generalized amplitude damping (SGAD) channel by considering a squeezed thermal environment \cite{Srikanth}. The SGAD channel combines the effects of dissipation at finite temperature and environment squeezing \cite{Daffer1,Banerjee,Wilson,Banerjee1}.
The SGAD channel describes the quantum noise in which the quantum system interacts with an environment that is initially in a squeezed thermal state, under the Markov and Born approximations. The dynamics of such a quantum system can be described by the following Lindblad master equation
\begin{eqnarray}\label{master}
L(\rho_{t})&=&-\frac{\Omega (n+1)}{2}(\sigma_{+}\sigma_{-}\rho_{t} + \rho_{t} \sigma_{+}\sigma_{-} - 2 \sigma_{-} \rho_{t} \sigma_{+})\nonumber \\
&-&\frac{\Omega n}{2}(\sigma_{-}\sigma_{+}\rho_{t} + \rho_{t} \sigma_{-}\sigma_{+} - 2 \sigma_{+} \rho_{t} \sigma_{-}) \nonumber \\
&-&\Omega m (\sigma_{+}\rho_t \sigma_{+} + \sigma_{-}\rho_t \sigma_{-}),
\end{eqnarray}
where $\sigma_{+}=\frac{1}{2}(\sigma_{1}+i \sigma_{2})$ and $\sigma_{-}=\frac{1}{2}(\sigma_{1}-i \sigma_{2})$ are the raising and lowering operators, respectively, $n$ is associated with the number of thermal photons, $m$ is the squeezing parameter ($m$ and $n$ satisfy $m<n+1/2$) and $\Omega$ is the dissipation rate related to spontaneous emission at zero temperature \cite{Breuer,Nielsen,Daffer1}. If $m=0$ then the SGAD channel reduces to the GAD channel. When $m=n=0$ the SGAD channel reduces to the amplitude damping channel.
\begin{figure}[!]
\centerline{\includegraphics[scale=0.7]{Fig3.eps}}
\caption{QSL time as a function of the driving time $\tau_D$ when the initial state parameters are $r=1/2$ and $\alpha=1/\sqrt{2}$ and $\tau=1$, for (a) the generalized amplitude damping channel with $m=0,n=1$ and (b) the squeezed generalized amplitude damping channel with $m=1,n=1$. The inset represents the QSL time as a function of the initial time $\tau$ when $\tau_D=1$. }\label{Fig3}
\end{figure}
We first review the simple method for solving the master equation in Eq. (\ref{master}) to find the structure of the uncorrelated SGAD channel. The dynamics of a single-qubit state with initial input $\rho_{0}=\sum_{i,j=0}^{1}\rho_{ij}\vert i \rangle \langle j \vert$, associated with this Lindblad master equation, has the following form \cite{Daffer1}
\begin{eqnarray}\label{30}
\rho_{t} &=& e^{L t} \rho \nonumber \\
&=&\sum_{i}tr(\mathcal{R}_i \rho)e^{\eta_{i}t}\mathcal{L}_{i}=\sum_{i}tr(\mathcal{L}_i \rho)e^{\eta_{i}t}\mathcal{R}_{i},
\end{eqnarray}
where $\mathcal{R}_i$ and $\mathcal{L}_i$ are the right and left eigenoperators of the super-operator $L$ in the Lindblad master equation and the $\eta_{i}$'s are the corresponding eigenvalues, such that
\begin{equation}\label{31}
L \mathcal{R}_{i}=\eta_{i}\mathcal{R}_{i}, \quad \mathcal{L}_{i}L=\eta_{i}\mathcal{L}_{i},
\end{equation}
with $tr(\mathcal{L}_{i}\mathcal{R}_j)=\delta_{ij}$. For the SGAD channel, the $\mathcal{R}_{i}$'s and $\mathcal{L}_{i}$'s can be defined as \cite{Jeong}
\begin{eqnarray}\label{32}
\mathcal{R}_{1}&=&\frac{1}{\sqrt{2}}(\mathbf{I}_{2 \times 2} - \frac{1}{2n+1}\sigma_3), \quad \mathcal{L}_{1}=\frac{1}{\sqrt{2}}\mathbf{I}_{2 \times 2}, \nonumber \\
\mathcal{R}_{2}&=&\mathcal{L}_{2}=\frac{1}{\sqrt{2}}(\sigma_+ + \sigma_-),\nonumber \\
\mathcal{R}_{3}&=&-\mathcal{L}_{3}=\frac{1}{\sqrt{2}}(\sigma_- - \sigma_+), \nonumber \\
\mathcal{R}_{4}&=& \frac{1}{\sqrt{2}} \sigma_{3}, \quad \mathcal{L}_{4}= \frac{1}{\sqrt{2}}(\frac{1}{2n+1} \mathbf{I}_{2 \times 2} + \sigma_3).
\end{eqnarray}
The eigenvalues $\eta_{i}$ are given by
\begin{eqnarray}\label{33}
\eta_{1}&=&0, \quad \eta_{2}=-\Omega(n+m+\frac{1}{2}), \nonumber \\
\eta_{3}&=&-\Omega(n-m+\frac{1}{2}), \quad \eta_{4}=-2\Omega(n+\frac{1}{2}).
\end{eqnarray}
From Eqs. (\ref{30}), (\ref{31}), (\ref{32}) and (\ref{33}), the evolved single-qubit density matrix can be written as
\begin{equation}\label{dynamics}
\rho_{t}=
\left( {\begin{array}{cc}
\frac{n+p_t^{2}(n+1)\rho_{11}-n \rho_{22}}{2n+1} & p_t(q_{t}\rho_{12}-r_{t}\rho_{21}) \\
p_t(q_{t}\rho_{21}-r_{t}\rho_{12}) & 1- \frac{n+p_t^{2}(n+1)\rho_{11}-n \rho_{22}}{2n+1} \\
\end{array} } \right),
\end{equation}
where $p_t=e^{-\Omega(n+1/2)t}$, $q_t=\cosh(\Omega m t)$ and $r_t=\sinh(\Omega m t)$. From Eq. (\ref{dynamics}), the Kraus operators $A_i$ for single-qubit dynamics under SGAD channel can be written as
\begin{eqnarray}\label{kraussingle}
A_{1}&=&
\left( {\begin{array}{cc}
\sqrt{\frac{n}{2n+1}+\frac{n+1}{2n+1} p_{t}^{2}- p_{t}q_{t}} & 0 \\
0 & 0 \\
\end{array} } \right), \nonumber \\
A_{2}&=&
\left( {\begin{array}{cc}
0 & 0 \\
\sqrt{\frac{n+1}{2n+1}(1-p_{t}^2)-p_{t}r_{t}} & 0 \\
\end{array} } \right), \nonumber \\
A_{3}&=&
\left( {\begin{array}{cc}
0 & 0 \\
0 & \sqrt{\frac{n+1}{2n+1}+\frac{n}{2n+1}p_{t}^2-p_{t}r_{t}} \\
\end{array} } \right), \nonumber \\
A_{4}&=&
\left( {\begin{array}{cc}
\sqrt{p_{t} q_{t}} & 0 \\
0 & \sqrt{p_{t}q_{t}} \\ \end{array} }\right), \nonumber \\
A_{5}&=&
\left( {\begin{array}{cc}
0 & \sqrt{p_{t}r_{t}} \\
\sqrt{p_{t}r_{t}} & 0\\ \end{array} }\right), \nonumber \\
A_{6}&=&
\left( {\begin{array}{cc}
0 & \sqrt{\frac{n}{2n+1}(1-p_{t}^{2})-p_t r_t} \\
0 & 0\\ \end{array} }\right).
\end{eqnarray}
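Before turning to the correlated case, a compact Python sketch of this single-qubit SGAD map, written directly from Eq. (\ref{dynamics}) (our illustration; the parameter values are hypothetical and $\hbar=1$), reads:
\begin{verbatim}
# Sketch: single-qubit SGAD map from the evolved density matrix.
import numpy as np

def sgad_map(rho, t, Omega, n, m):
    """Apply the SGAD evolution for time t to a 2x2 density matrix."""
    p = np.exp(-Omega * (n + 0.5) * t)
    q = np.cosh(Omega * m * t)
    r = np.sinh(Omega * m * t)
    r11 = (n + p ** 2 * (n + 1) * rho[0, 0]
           - n * rho[1, 1]) / (2 * n + 1)
    r12 = p * (q * rho[0, 1] - r * rho[1, 0])
    r21 = p * (q * rho[1, 0] - r * rho[0, 1])
    return np.array([[r11, r12], [r21, 1 - r11]])

rho0 = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)
print(sgad_map(rho0, 2.0, Omega=1.0, n=1.0, m=0.5))  # m < n + 1/2
\end{verbatim}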
Now, we consider two consecutive uses of the SGAD channel on a two-qubit quantum system. The uncorrelated SGAD channel $\varepsilon_{un}$ is given by the following Kraus operators \cite{Jeong}
\begin{equation}
E_{ij}= A_{i} \otimes A_{j}, \quad i,j=1,\ldots,6.
\end{equation}
We consider the following correlated Lindblad master equation for the two-qubit system to find the structure of the correlated SGAD channel
\begin{eqnarray}\label{master2}
\tilde{L}(\rho_{t})&=&-\frac{\Omega (n+1)}{2}(\sigma_{+}^{\otimes 2}\sigma_{-}^{\otimes 2}\rho_{t} + \rho_{t} \sigma_{+}^{\otimes 2}\sigma_{-}^{\otimes 2} - 2 \sigma_{-}^{\otimes 2} \rho_{t} \sigma_{+}^{\otimes 2})\nonumber \\
&-&\frac{\Omega n}{2}(\sigma_{-}^{\otimes 2}\sigma_{+}^{\otimes 2}\rho_{t} + \rho_{t} \sigma_{-}^{\otimes 2}\sigma_{+}^{\otimes 2} - 2 \sigma_{+}^{\otimes 2} \rho_{t} \sigma_{-}^{\otimes 2}) \nonumber \\
&-&\Omega m (\sigma_{+}^{\otimes 2}\rho_t \sigma_{+}^{\otimes 2} + \sigma_{-}^{\otimes 2}\rho_t \sigma_{-}^{\otimes 2}),
\end{eqnarray}
where $\sigma_{\pm}^{\otimes 2}=\sigma_{\pm} \otimes \sigma_{\pm}$. The correlated dynamics of a two-qubit state can be found, in a similar way to the single-qubit case, by using Eqs. (\ref{30}) and (\ref{31}). We consider a general two-qubit state $\rho_0=\sum_{\alpha_1,\alpha_2=1}^{4} \rho_{\alpha_1,\alpha_2}\vert \alpha_{1} \rangle \langle \alpha_{2} \vert$ as the initial input state, where $\vert \alpha_{1,2} \rangle \in \left\lbrace \vert 00 \rangle, \vert 01 \rangle, \vert 10 \rangle, \vert 11 \rangle \right\rbrace $. The solution of the correlated master equation in Eq. (\ref{master2}) is derived as
\begin{eqnarray}\label{38}
\rho_{11}(t)&=&\frac{1}{2n+1} \left( \left((n+1) p_t^2-(2 n+1) s_t \left(1-u_t\right)+n\right) \rho _{11} \right. \nonumber \\
&+& \left. \left(n-p_t \left(n p_t+2 (2 n+1) r_t\right)\right) \rho _{44} \right), \nonumber \\
\rho_{12}(t)&=& \sqrt{s_t u_t} \rho _{12} , \nonumber \\
\rho_{13}(t)&=& \sqrt{s_t u_t} \rho _{13} , \nonumber \\
\rho_{14}(t)&=& (\sqrt{s_t} u_t - p_t \left(1-q_t\right)) \rho _{14} -p_t r_t \rho _{41}, \nonumber \\
\rho_{22}(t)&=& \rho_{22}, \nonumber \\
\rho_{23}(t)&=&\rho_{23}, \nonumber \\
\rho_{24}(t)&=& \sqrt{u_t}\rho _{24}, \nonumber \\
\rho_{33}(t)&=&\rho _{33}, \nonumber \\
\rho_{34}(t)&=& \sqrt{u_t}\rho _{34}, \nonumber \\
\rho_{44}(t)&=&1-\rho_{11}(t)-\rho_{22}-\rho_{33}.
\end{eqnarray}
\vfill
From Eq. (\ref{38}), the Kraus operators $E_{kk}$ for correlated part is obtained as
\begin{eqnarray}\label{kraustwo}
E_{11}&=&
\left( {\begin{array}{cccc}
\sqrt{s_{t}} & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & \sqrt{u_t}\\
\end{array} } \right), \nonumber \\
E_{22}&=&
\left( {\begin{array}{cccc}
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
\sqrt{\frac{n+1}{2 n+1}(1-p_{t}^{2})-p_{t}r_{t}} & 0 & 0 & 0\\
\end{array} } \right), \nonumber \\
E_{33}&=&
\left( {\begin{array}{cccc}
0 & 0 & 0 & \sqrt{\frac{n}{2n+1}(1-p_{t}^{2})-p_{t}r_{t}}\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
\end{array} } \right), \nonumber \\
E_{44}&=&
\left( {\begin{array}{cccc}
\sqrt{\frac{n}{2n+1}+\frac{n+1}{2n+1}p_{t}^{2}-p_{t}(q_{t}-1)-s_{t}} & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
\end{array} } \right), \nonumber \\
E_{55}&=&
\left( {\begin{array}{cccc}
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & \sqrt{\frac{n+1}{2n+1}+\frac{n}{2n+1}p_{t}^{2}-p_{t}(q_{t}-1)-u_t}\\
\end{array} } \right), \nonumber \\
E_{66}&=&
\left( {\begin{array}{cccc}
\sqrt{p_{t}(q_{t}-1)} & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & \sqrt{p_{t}(q_{t}-1)}\\
\end{array} } \right), \nonumber \\
E_{77}&=&
\left( {\begin{array}{cccc}
0 & 0 & 0 & i \sqrt{p_{t}r_{t}}\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
i \sqrt{p_{t}r_{t}} & 0 & 0 & 0\\
\end{array} } \right),
\end{eqnarray}
where $u_{t}=e^{-\Omega n t}$ and $s_{t}=e^{-\Omega (n+1)t}$. From Eq. (\ref{(QSL)T}), one can find the QSL time for the correlated SGAD quantum channel after a straightforward but lengthy calculation. \textbf{In Fig. \ref{Fig3}, the QSL time is plotted as a function of the driving time $\tau_D$ for the correlated GAD channel and the correlated SGAD channel with different values of the correlation parameter $\mu$ when $\tau=1$. The inset shows the QSL time in terms of the initial time $\tau$ for different values of the correlation parameter $\mu$ when $\tau_D=1$. In Fig. \ref{Fig3}(a) the QSL time is plotted as a function of the driving time for the correlated generalized amplitude damping channel, i.e. the correlated channel with parameters $m=0, n=1$. As can be seen from Fig. \ref{Fig3}(a), the QSL time is decreased by increasing the correlation parameter $\mu$. In Fig. \ref{Fig3}(b) the QSL time is plotted as a function of the driving time $\tau_D$ for the correlated squeezed generalized amplitude damping channel, i.e. the correlated channel with parameters $m=1, n=1$. As can be seen from Fig. \ref{Fig3}(b), the QSL time is also decreased by increasing the correlation parameter. From Figs. \ref{Fig3}(a) and \ref{Fig3}(b) one can find that for both the GAD and SGAD correlated channels the QSL time is decreased by increasing the correlation parameter $\mu$. In other words, in the presence of correlation between two successive uses of these channels on a two-qubit system the quantum evolution will be faster than in the case in which the correlation does not exist. }\\
\section{Summary and conclusion}\label{Sec4}
We have studied the QSL time for correlated quantum channels, where the term correlated indicates the existence of correlations between two consecutive uses of the quantum channel on a two-qubit quantum system. We have used correlated pure dephasing colored noise as an example of the unital correlated quantum channels, and correlated amplitude damping and SGAD channels as examples of the non-unital quantum channels. \textbf{We found that in the case of correlated dephasing colored noise, for both Markovian and non-Markovian evolution, the QSL time is increased by increasing the correlation parameter $\mu$ of the quantum channel. In other words, in the presence of correlation between two successive uses of the colored pure dephasing channel on a two-qubit quantum system the quantum evolution will be slower than in the case in which the correlation does not exist. In the case of the correlated amplitude damping channel, for both Markovian and non-Markovian evolution, the QSL time is decreased by increasing the correlation parameter $\mu$ of the quantum channel. In the case of the correlated generalized amplitude damping and correlated squeezed generalized amplitude damping channels, the QSL time is also decreased by increasing the correlation parameter of the quantum channel. }
\section*{Acknowledgments}
The authors would like to thank Prof. Masashi Ban for his valuable comments.
\section{Introduction}
\label{s:intro}
As more scientists and engineers use computer simulations,
some begin to harness the versatile power of sensitivity analysis.
It helps them engineer products \cite{Jameson1988,adjoint2},
control processes and systems \cite{adjoint1,bewley01},
solve inverse problems \cite{seismic_adjoint},
estimate simulation errors \cite{Becker_2001_An_Optimal_Control_Approach,
Giles2002, error2010, fidkowski2011review},
assimilate measurement data \cite{QJ:QJ49711750206,weather4}
and quantify uncertainties \cite{qiqiwang_thesis}.
Sensitivity analysis computes the derivative of outputs to inputs of a
simulation. Conventional methods, including the tangent and the adjoint
method, are introduced in Section
\ref{s:conventional}. These methods, however, fail when the dynamical
system is chaotic and the outputs are long time averaged
quantities. They compute derivatives that are
orders of magnitude too large, and that grow exponentially larger
as the simulation runs longer. What causes this failure is
the ``butterfly effect'': the sensitivity of chaotic initial value
problems. This diagnosis was first published by Lea et al \cite{leaclimate},
and is explained in Section \ref{s:breakdown}.
Many researchers have become interested in overcoming this failure,
a challenge in both dynamical systems and numerical methods.
They have recently developed a few methods for
computing \emph{useful} derivatives of long time averaged outputs
in chaotic dynamical systems. Lea et al pioneered the ensemble
adjoint method \cite{leaclimate,eyinkclimate}, which
applies the adjoint method to many random trajectories,
then averages the computed derivatives.
Nevertheless, they need impractically many trajectories,
making the method costly even for small
dynamical systems such as the Lorenz system.
Thuburn introduced an approach that solves the adjoint of the
Fokker-Planck equation, which governs a probability distribution in the phase
space \cite{QJ:QJ200513160505}. However, this approach assumes the
probability distribution to be smooth, a property often achieved
by adding dissipation to the Fokker-Planck equation, which
introduces errors in the result.
In addition, researchers have adopted the Fluctuation-Dissipation Theorem
for sensitivity analysis \cite{0951-7715-20-12-004}. This approach has
several variants, each with its own limitations.
Some assume the dynamical system to have an equilibrium distribution
similar to the Gaussian distribution, an
assumption often violated in dissipative dynamical systems.
Other variants nonparametrically estimate the equilibrium distribution
\cite{cooper2011climate}, but add artificial
noise to the dynamical system to ensure its smoothness.
The first author recently used Lyapunov eigenvector decomposition for
sensitivity analysis \cite{wangLorenz}.
However, this decomposition
is computationally costly when the dynamical system has many positive
Lyapunov exponents. Despite these new methods, no one has applied
sensitivity analysis to long time averaged outputs in turbulent flows,
or other large, dissipative and chaotic systems.
This paper presents the \emph{Least Squares Shadowing method},
a new method for computing derivatives
of long time averaged outputs in chaos.
The method linearizes the \emph{least
squares shadowing problem}, a constrained least squares problem
defined in Section \ref{s:shadowing}.
It then solves the linearized problem with a numerical method
described in Section \ref{s:numerical}.
Demonstrated with three applications
in Sections \ref{s:vdp}, \ref{s:lorenz} and \ref{s:aeroelastic},
the method is concluded in Section \ref{s:conclude} to be
potentially useful in large chaotic dynamical systems.
\section{Conventional method for sensitivity analysis}
\label{s:conventional}
In sensitivity analysis, an output $J$ depends on an input $s$
via a simulation, which solves an ordinary differential equation
\begin{equation} \label{ode}
\frac{du}{dt} = f(u, s)
\end{equation}
starting from an initial condition
\begin{equation} \label{odeiv} u|_{t=0} = u_0(s) \;,\end{equation}
where the input $s$ can
represent control variables, design variables, and
uncertain parameters. This initial value problem
(\ref{ode}-\ref{odeiv})
determines a solution $u_{iv}(t;s)$ that depends on time and the input.
An output $J(u,s)$ is a function of the solution and the input.
It can also be viewed as a function of time and the input by
substituting the solution $u_{iv}(t;s)$.
The time averaged output,
\begin{equation} \label{finiteobj}
\overline{J}^{(T)}_{iv}(s) := \frac1T \int_0^T J(u_{iv}(t;s), s) \,
dt\;, \end{equation}
then depends only on the input $s$.
Its derivative to $s$ can be computed by
the conventional tangent method of sensitivity
analysis \cite{brysonho}.
The conventional tangent method first solves the linearized
governing equation, also known as the \emph{tangent equation},
\begin{equation} \label{linearized}
\frac{dv}{dt} = \frac{\partial f(u_{iv},s)}{\partial u} v
+ \frac{\partial f(u_{iv},s)}{\partial s}
\end{equation}
with the linearized initial condition
\begin{equation}
\quad v|_{t=0} = \frac{d u_0}{ds}\;.
\end{equation}
The solution $v_{iv}(t;s)$ indicates how a small change in $s$ alters the
solution to the initial value problem $u_{iv}(t;s)$:
\begin{equation}
v_{iv}(t;s) = \frac{\partial u_{iv}(t;s)}{\partial s}
\end{equation}
This solution is then used
to compute the derivative of $\overline{J}_{iv}^{(T)}(s)$:
\begin{equation}\label{overlineJder} \frac{d \overline{J}_{iv}^{(T)}}{d s}
= \frac1T \int_0^T \left(\frac{\partial J(u_{iv},s)}{\partial u} v_{iv}
+ \frac{d J(u_{iv},s)}{d s} \right) dt \end{equation}
This method can be transformed into the conventional adjoint method
\cite{brysonho}, which computes the derivative of
one objective function to many inputs simultaneously.
This advantage makes the adjoint method popular in optimal control,
inverse problems and data assimilation applications.
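For concreteness, the tangent method can be sketched in a few lines of
Python with scipy (which we also use for the examples in later sections).
The function below is a minimal illustration for a scalar state; the name
\texttt{tangent\_sensitivity} and the toy problem are our own, not part
of any established package.
\begin{verbatim}
import numpy as np
from scipy.integrate import odeint

def tangent_sensitivity(f, dfdu, dfds, u0, du0ds, s, t, dJdu, dJds):
    # Solve du/dt = f(u,s) together with the tangent equation
    # dv/dt = (df/du) v + df/ds, then average the linearized output.
    def rhs(uv, t):
        u, v = uv
        return [f(u, s), dfdu(u, s) * v + dfds(u, s)]
    sol = odeint(rhs, [u0, du0ds], t)
    u, v = sol[:, 0], sol[:, 1]
    return np.mean(dJdu(u, s) * v + dJds(u, s))

# Toy example: du/dt = -s*u with J = u^2; the time average of J,
# and hence its derivative, tends to zero as T grows.
dJ = tangent_sensitivity(lambda u, s: -s * u, lambda u, s: -s,
                         lambda u, s: -u, 1.0, 0.0, 2.0,
                         np.linspace(0.0, 10.0, 1001),
                         lambda u, s: 2.0 * u, lambda u, s: 0.0 * u)
\end{verbatim}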
\section{Failure of the conventional method for time averaged outputs
in chaos}
\label{s:breakdown}
The conventional method fails when the simulation
(\ref{ode}) is chaotic, and the output (\ref{finiteobj}) is
averaged over a long time $T$. A chaotic dynamical
system is sensitive to its initial condition, causing the solution to the
linearized initial value problem (\ref{linearized}) to grow at a
rate of $e^{\lambda t}$, where $\lambda>0$
is the maximal Lyapunov exponent of the dynamical system.
This exponential growth makes $v_{iv}(t;s)$ large
unless $t$ is small. When substituted into Equation
(\ref{overlineJder}), we expect a large
$\frac{d \overline{J}_{iv}^{(T)}}{d s}$ unless $T$ is small.
The value of $\frac{d \overline{J}_{iv}^{(T)}}{d s}$ can exceed
$10^{100}$ times what scientists would expect.
Lea et al. \cite{leaclimate} documented this in the Lorenz system,
which models heat convecting from a warm horizontal surface to a cooler
one placed above it. Their temperature difference, described by the Rayleigh
number, affects how fast the heat convects; it is therefore chosen by
Lea et al. as the input $s$.
The heat convection rate is chosen as the output $J$;
its time average should increase with $s$ with a slope of about 1.
\footnote{In Lea et al.'s original paper,
the Rayleigh number is denoted
as $\rho$ and the convective heat transfer rate is denoted as $z$.
These notations are conventional in Lorenz system literature.
But in this paper, we denote the Rayleigh number as $s$ and
the heat transfer rate as $J$, so that we are consistent with the
general notation of input and output.}
Lea et al. considered a range of input $s$
and several values of the averaging length $T$. At each $s$ and $T$,
they simulated the Lorenz system and computed
$\overline{J}^{(T)}_{iv}(s)$. They then computed the derivative
$\frac{d \overline{J}_{iv}^{(T)}}{d s}$ using the conventional
adjoint sensitivity analysis method. When $T$ is large, they
found the derivative of
$\overline{J}_{iv}^{(T)}$ to be orders of magnitude larger
than the expected slope of about 1.
By repeating Lea et al.'s procedure, we found that the astronomical values of
$\frac{d \overline{J}_{iv}^{(T)}}{ds}$, plotted in Figure \ref{f:lea01}, are
insensitive to how Equations (\ref{ode}-\ref{overlineJder}) are discretized.
\begin{figure}[htb!] \centering
\subfloat[$\overline{J}_{iv}^{(T)}(s)$ for $T=2.26$
]{\includegraphics[width=2.2in]{figures/z2.eps}}
\hspace{0.1in}
\subfloat[$\left|\frac{d\overline{J}_{iv}^{(T)}}{ds}\right|$ for $T=2.26$
]{\includegraphics[width=2.2in]{figures/dz2.eps}}\\
\subfloat[$\overline{J}_{iv}^{(T)}(s)$ for $T=131.4$
]{\includegraphics[width=2.2in]{figures/z131.eps}}
\hspace{0.1in}
\subfloat[$\left|\frac{d\overline{J}_{iv}^{(T)}}{ds}\right|$ for $T=131.4$
]{\includegraphics[width=2.2in]{figures/dz131.eps}}
\caption{Plots created following the procedure in Lea et al\cite{leaclimate}
(permission granted). Left: time averaged output $\overline{J}_{iv}^{(T)}(s)$
plotted against the input $s$. Right: the derivative of
the time averaged output with respect to $s$. Note the order of
magnitude of the $y$-axes.}
\label{f:lea01}
\end{figure}
The computed derivative $\frac{d \overline{J}_{iv}^{(T)}}{d s}$
is too large to be useful.
The derivative is useful in approximating the slope of the function,
$\frac{\overline{J}_{iv}^{(T)}(s+\delta s) -
\overline{J}_{iv}^{(T)}(s)}{\delta s}$.
The better it approximates this slope, and over a larger interval
size $\delta s$, the more useful it is.
If the derivative is as large as $10^{50}$, the function must
have a correspondingly steep slope when plotted against $s$,
but it can sustain that slope monotonically only over intervals
smaller than $10^{-50}$.
The derivative can approximate the slope of the function well
only within these impractically tiny intervals -- computers
cannot even represent an interval of
$[1, 1+10^{-16}]$ in double precision.
For approximating the slope of the function over a practical interval
$[s,s+\delta s]$, the derivative is useless.
This failure happens not only to the Lorenz system, but to other
chaotic dynamical systems such as chaotic
fluid flows \cite{wanggao}. It is caused by the sensitivity of chaos.
Popularly known as the ``butterfly effect'', this sensitivity
makes the finite time average $\overline{J}_{iv}^{(T)}$ ill-behaved,
its derivative with respect to $s$ fluctuating wildly.
A small change in $s$ almost always causes a large change in the solution
$u_{iv}$, thus a large change in the tangent solution
$v_{iv}$, and thus a large change in the derivative
$\frac{d \overline{J}_{iv}^{(T)}}{d s}$.
As $s$ increases to $s+\delta s$,
the derivative can vary over a wide range of positive and negative
values. These derivative values, by the fundamental
theorem of calculus, must average to the slope of the function
\begin{equation} \label{ftc}
\mbox{slope}:=
\frac{\overline{J}_{iv}^{(T)}(s+\delta s) -
\overline{J}_{iv}^{(T)}(s)}{\delta s}
=
\frac{1}{\delta s} \int_{s}^{s+\delta s}
\frac{d \overline{J}_{iv}^{(T)}}{ds} ds'\;,
\end{equation}
but because the derivative fluctuates rapidly and wildly between extreme
values of either sign, at almost any point within $[s,s+\delta s]$, the
derivative is much larger in magnitude than the slope of the function
over $[s,s+\delta s]$.
How sensitive a solution $u$ is to its
input $s$ can be quantified by the \emph{condition number}, defined as
$\|du/ds\|$. We call a problem ill-conditioned if it has a large
condition number, or well-conditioned if it has a small one.
A chaotic initial value problem has a condition number on the order of
$e^{\lambda T}$, where $\lambda$ is the maximal Lyapunov exponent.
Even moderately long simulations can be ill-conditioned, causing sensitivity
analysis to fail. To overcome this failure,
we must substitute the initial value problem with a
well-conditioned one.
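This exponential growth can be observed directly with a short
computation. The following Python sketch (our own illustration, assuming
the classical Lorenz parameters) integrates the Lorenz system together
with its tangent equation with respect to the Rayleigh number and fits
the growth rate of $\|v_{iv}\|$.
\begin{verbatim}
import numpy as np
from scipy.integrate import odeint

sigma, beta, s = 10.0, 8.0 / 3.0, 28.0   # classical Lorenz parameters

def rhs(q, t):
    x, y, z, vx, vy, vz = q
    # Lorenz equations, followed by the tangent equation;
    # the +x term is the df/ds forcing (s is the Rayleigh number).
    return [sigma * (y - x), x * (s - z) - y, x * y - beta * z,
            sigma * (vy - vx),
            vx * (s - z) - vy - x * vz + x,
            y * vx + x * vy - beta * vz]

t = np.linspace(0.0, 20.0, 2001)
q = odeint(rhs, [1.0, 1.0, 20.0, 0.0, 0.0, 0.0], t)
vnorm = np.linalg.norm(q[:, 3:], axis=1)
lam = np.polyfit(t[200:], np.log(vnorm[200:]), 1)[0]
print(lam)   # close to the maximal Lyapunov exponent, about 0.9
\end{verbatim}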
\section{Sensitivity analysis via Least Squares Shadowing}
\label{s:shadowing}
\subsection{The nonlinear Least Squares Shadowing (LSS) problem}
The initial condition of a simulation can be relaxed
if the following assumptions hold:
\begin{enumerate}
\item {\bf We are interested in infinite time averaged outputs.}
When scientists and engineers compute a long time averaged output, they
often intend to approximate the limit
\begin{equation} \label{infiniteobj}
\overline{J}^{(\infty)}(s)
:= \lim_{T\to\infty}\frac1T \int_0^T J(u(t;s), s) \, dt\;.
\end{equation}
We assume that these infinite time averaged outputs, and functions
thereof, are the only outputs of interest.
\item {\bf The dynamical system is \emph{ergodic}.}
An ergodic dynamical system behaves the same over long
time, independent of its initial condition.
Specifically, the initial condition does not affect the
infinite time averaged outputs defined above.
\end{enumerate}
Under these two assumptions, we can approximate the outputs
using a long solution of the governing equation, regardless
of where the solution starts. We
replace the initial condition with a criterion that makes the problem
better-conditioned. Among all trajectories that satisfy the
governing equation, we choose
one that is closest to a pre-specified reference trajectory $u_r$
in the following metric:
\begin{equation} \label{lsq0} \begin{split}
& \underset{\tau, u}{\mbox{minimize }} \frac1T\int_{0}^{T} \left(\Big\|u(\tau(t)) - u_r(t)\Big\|^2
+ \alpha^2 \left(\frac{d\tau}{dt} - 1\right)^2\right) dt \;,\\
& \quad\mbox{such that} \qquad\frac{du}{dt} = f(u,s)\;.
\end{split}\end{equation}
We choose the reference trajectory $u_r(t)$ to be a solution to the governing
equation at a different $s$, set the constant $\alpha$ so that the two
terms in the integral have similar magnitude, then
minimize this metric among all trajectories $u(t)$ and all
monotonically increasing time transformations $\tau(t)$.
We call this constrained minimization problem (\ref{lsq0}) the Least
Squares Shadowing (LSS) problem. We denote its solution
as $u_{lss}^{(T)}(t;s)$ and $\tau_{lss}^{(T)}(t;s)$.
They are
a solution of the governing equation and a time transformation that makes
this solution close to $u_r$.
Because $u_{lss}^{(T)}(t;s)$
satisfies the governing equation, we use it to approximate
\begin{equation} \label{lssobj}
\overline{J}^{(\infty)}(s)
\approx \overline{J}^{(T)}_{lss}(s)
:= \frac1{\tau(T)-\tau(0)} \int_{\tau(0)}^{\tau(T)}
J(u^{(T)}_{lss}(t;s), s) \, dt\;,
\end{equation}
for sufficiently large $T$.
\subsection{Well-conditioning of the Least Squares Shadowing (LSS) problem}
A chaotic initial value problem is ill-conditioned, causing the failure of
conventional sensitivity analysis methods. We now overcome this failure
by switching to the LSS problem, a well-conditioned problem
whose solution is less sensitive to perturbations in the parameter
value, and whose long time averages have useful derivatives.
\begin{figure}[htb!]
\subfloat[$J(u_{iv}(t;s), s)$]
{\includegraphics[width=0.48\textwidth]{figures/lorenzivp.eps} }
\subfloat[$J(u_{lss}^{(T)}(t;s),s)$ and $\tau_{lss}^{(T)}(t;s)$.]
{\includegraphics[width=0.48\textwidth]{figures/lorenzlsp.eps}}
\hspace{0.02\textwidth}
\caption{Time dependent rate of heat transfer in the Lorenz system
with varying Rayleigh number $s$. This output is computed by solving
initial value problems in the left plot,
and by solving LSS problems in the right plot.
Each vertical slice represents the time dependent output
at an $s$ value.}
\label{f:ivplsp}
\end{figure}
\begin{wrapfigure}{r}{0.5\textwidth}\vspace{-0.04\textwidth}
\includegraphics[width=0.5\textwidth]{figures/cond}\vspace{-0.04\textwidth}
\caption{The condition number increases rapidly in an initial value
problem (dashed line with squares),
but stays relatively constant in an LSS problem (solid line with
circles).}
\label{f:cond}
\end{wrapfigure}
Figure \ref{f:ivplsp} visualizes how sensitive the initial value problem
is, and how robust the LSS problem is\footnote{In Figure \ref{f:ivplsp}(b),
we solve a single initial
value problem at $s=25$, followed by a sequence of least squares
shadowing problems at increasing values of $s$, each using the previous
solution as its reference trajectory $u_r$.}.
The initial value problem produces
solutions that grow more sensitive to the input $s$
as time advances. Its condition number
grows exponentially as the trajectory length increases.
The LSS problem produces solutions that
vary gradually with $s$. As shown in Figure \ref{f:cond},
it stays well-conditioned regardless of how
long the trajectory is.
The LSS problem is well-conditioned, a result that is not
only observed in the Lorenz system, but also
derives from the \emph{shadowing lemma}\cite{pilyugin1999shadowing}.
It guarantees that a trajectory of the governing equation
exists in the proximity of any ``$\delta$-pseudo
trajectory'', defined as an approximate solution that satisfies
the governing equation to $\delta$-precision. The lemma assumes
a set of properties known as \emph{uniform
hyperbolicity}\cite{kuznetsov2012hyperbolic, CambridgeJournals:1825204},
and states that \emph{for any $\epsilon>0$,
there exists $\delta$, such that for all $\delta$-pseudo
trajectory $u_r$ of any length, there exists a true trajectory $u$
within $\epsilon$ distance from $u_r$, in the same distance metric
used in Equation (\ref{lsq0})}.
If $u_r$ is a true trajectory at input value
$s$, and thereby a $\delta = \sup\left\|\frac{\partial f(u;s)}{\partial s}\right\|\delta s$
-pseudo-trajectory at input value $s+\delta s$,
then the shadowing lemma predicts the LSS
solution $u_{lss}$ to be within $\epsilon$ distance from $u_r$.
Perturbing $s$ slightly makes $u_{lss}$ slightly
different from $u_r$, indicating a well-conditioned problem
regardless of how long the trajectory is.
Because the LSS problem is well-conditioned, its
time averaged output $\overline{J}_{lss}^{(T)}(s)$ has a useful
derivative. This LSS derivative $\frac{d \overline{J}_{lss}^{(T)}}{ds}$
can be computed by solving a
linearized LSS problem (detailed in Section \ref{s:method}).
Because of its well-conditioning, perturbing the input between $s$ and
$s+\delta s$ causes a small difference in its solution,
and therefore a small difference in the LSS derivative.
This, and the fundamental theorem of calculus
\begin{equation} \label{ftc1}
\frac{1}{\delta s} \int_{s}^{s+\delta s}
\frac{d \overline{J}_{lss}^{(T)}}{ds} ds
=
\frac{\overline{J}_{lss}^{(T)}(s+\delta s) -
\overline{J}_{lss}^{(T)}(s)}{\delta s}\;,
\end{equation}
make the LSS derivative at any point in $[s,s+\delta s]$ a useful
approximation to the slope.
As $T\to\infty$, this slope
converges to the slope of the infinite time average
$\overline{J}^{(\infty)}$, and the LSS derivative converges to the
derivative of this infinite time average. Such a derivative exists
not only as a derivative of the limit (\ref{infiniteobj})
\cite{springerlink10,CambridgeJournals:1825204},
but also as a limit of the LSS derivative as $T\to\infty$.
The limit and the derivative commute because the slope of
$\overline{J}^{(\infty)}$ between $s$ and $s+\delta s$
uniformly converges to its derivative as $\delta s$ vanishes -- a proven
result made possible by the well-conditioned LSS problem \cite{lsstheory}.
\subsection{Computing derivative from linearized Least Squares Shadowing
(LSS) solution}
\label{s:method}
The linearized LSS problem derives from the nonlinear
problem (\ref{lsq0}). We choose a
reference trajectory $u_r$ that satisfies the governing equation at an
input value $s$, then perturb $s$ by an infinitesimal
$\delta s$. By ignoring $O(\delta s^2)$ terms in Taylor
expansions, we obtain
\begin{equation} \label{lsq2} \begin{split}
& \underset{\eta, v}{\mbox{minimize }} \frac1T\int_{0}^{T}
\left(\|v\|^2 + \alpha^2 \eta^2\right)dt\;, \quad\mbox{such that}\\
& \frac{dv}{dt} = \frac{\partial f}{\partial u} v
+ \frac{\partial f}{\partial s} + \eta f(u_r,s)\;,
\end{split}\end{equation}
where $v(t)$ and $\eta(t)$ are the solution of this linearized LSS
problem. They relate to the solution of the nonlinear
problem $\tau^{(T)}_{lss}$ and $u^{(T)}_{lss}$ via
\begin{equation}
v(t) =
\frac{d}{ds}\bigg(u^{(T)}_{lss}\left(\tau^{(T)}_{lss}(t;s);s\right)\bigg)
\;,\quad
\eta(t) = \frac{d}{ds} \frac{d\tau^{(T)}_{lss}(t;s)}{dt}\;.
\end{equation}
The linearization is detailed in the Appendix.
We also linearize the time averaged output
$\overline{J}_{lss}^{(T)}$ as defined in Equation (\ref{lssobj}),
and obtain a formula for computing the desired derivative from the
solution of the linearized LSS problem
\begin{equation} \label{climatesens}
\frac{d\langle J\rangle}{ds} \approx \frac{
\displaystyle\int_{0}^{T}\left(
\frac{\partial J}{\partial u}v
+ \frac{\partial J}{\partial s}
+ \eta \left( J - \overline{J}\:\right)\right)dt}{T} \;,
\quad\mbox{where}\quad
\overline{J}=\frac{\displaystyle\int_{0}^{T} J\,dt}{T}
\end{equation}
This linearization is also derived in the Appendix.
\section{Numerical solution of the Least Squares Shadowing (LSS) problem}
\label{s:numerical}
The linearized LSS
problem (\ref{lsq2}) can be solved with two numerical approaches.
One approach, detailed in Subsection \ref{s:disckkt},
first discretizes Problem (\ref{lsq2}), then derives
from the discretized minimization problem its optimality condition,
a system of linear equations that are finally solved to obtain the solution
$v$ and $\eta$.
The other approach, detailed in Subsection \ref{s:kktdisc},
applies variational calculus to
Problem (\ref{lsq2}) to derive its variational optimality
condition, a system of linear differential equations that are then discretized
and solved to obtain $v$ and $\eta$.
Both approaches can lead to the same linear
system, whose solution method is described in Subsection
\ref{s:solvekkt}. Section \ref{s:algorithm} provides a short summary
of the numerical procedure. The algorithm admits
an adjoint counterpart, described in
Subsection \ref{s:adjoint}, that can compute derivatives
to many parameters simultaneously.
\subsection{Derivation of the linear system via
the discrete optimization approach}
\label{s:disckkt}
We first convert Problem (\ref{lsq2})
from a variational minimization problem to a finite dimensional
minimization problem. By dividing the time domain $[0,T]$ into
$m=T/\Delta t$ uniform time steps\footnote{
$\Delta t$ is chosen to be uniform for all time steps because it simplifies
the notation. The algorithm can be extended to nonuniform $\Delta t$,
as implemented in the lssode package\cite{lssode_sftw}.
}, denoting $u_{i+\frac12} = u_r\left(\left(i + \frac12\right)\Delta t\right),
v_{i+\frac12} = v\left(\left(i + \frac12\right)\Delta t\right),
i=0,\ldots,m-1$ and $\eta_{i}=\eta(i \Delta t), i=1,\ldots,m-1$,
and approximating the time derivatives of $u$ and $v$ via the
trapezoidal rule\footnote{
We choose the trapezoidal rule because it is single-step and
second-order accurate. Other time discretization can be used, though
the resulting system will be either more complex or less accurate.
}, we discretize the linearized LSS
problem (\ref{lsq2}) into
\begin{equation} \label{dlsq} \begin{split}
&\underset{v_i, \eta_i}{\mbox{minimize }} \sum_{i=0}^{m-1} \frac{\|v_{i+\frac12}\|_2^2}{2}
+ \alpha^2\sum_{i=1}^{m-1} \frac{\eta_{i}^2}{2}\;,\qquad \mbox{such that}\\
& E_i v_{i-\frac12} + f_i \eta_i + G_i v_{i+\frac12} = b_i\;,\quad 1\le i< m
\end{split}\end{equation}
where
\begin{equation} \label{efgdef} \begin{split}
E_i &= -\frac{I}{\Delta t} -
\frac{\partial f}{\partial u}(u_{i-\frac12}, s)\;, \\
f_i &= \frac{u_{i+\frac12} - u_{i-\frac12}}{\Delta t}\;,\\
G_i &= \frac{I}{\Delta t} -
\frac{\partial f}{\partial u}(u_{i+\frac12}, s)\;, \\
b_i &= \frac12 \left(\frac{\partial f(u_{i-\frac12}, s)}{\partial s}
+\frac{\partial f(u_{i+\frac12}, s)}{\partial s}
\right)\;.\\
\end{split}\end{equation}
This linear-constrained least-squares
problem has an optimality condition that forms
the following KKT system\cite{boyd2004convex}
\begin{equation} \label{kkt} \addtolength{\arraycolsep}{-1mm}
{\scriptstyle
\left[\begin{array}{cccccccc|cccc}
I & & & & & & & & E_1^T & & & \\
&\alpha^2 & & & & & & & f_1^T & & & \\
& & I & & & & & & G_1^T & E_2^T & & \\
& & & \alpha^2 & & & & & & f_2^T & & \\
& & & & I & & & & & G_2^T & & \\
& & & & & \ddots & & & & & \ddots & E_{\scriptscriptstyle m-1}^T \\
& & & & & & \alpha^2 & & & & & f_{\scriptscriptstyle m-1}^T \\
& & & & & & & I & & & & G_{\scriptscriptstyle m-1}^T \\
\midrule
E_1 & f_1 & G_1 & & & & & & & & \\
& & E_2 & f_2 & G_2 & & & & & & & \\
& & & & \ddots & \ddots & & & & & & \\
& & & & & E_{\scriptscriptstyle m-1}& f_{\scriptscriptstyle m-1} & G_{\scriptscriptstyle m-1} & & & & \\
\end{array}\right]
\left[\begin{array}{c}
v_{\frac12} \\ \eta_1 \\ v_{1+\frac12} \\ \eta_2 \\ v_{2+\frac12}
\\ \vdots \\ \eta_{\scriptscriptstyle m-1} \\ v_{m-\frac12} \\
\midrule w_1 \\ w_2 \\ \vdots \\ w_{\scriptscriptstyle m-1}
\end{array}\right]
=
\left[\begin{array}{c}
0 \\ 0 \\ 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\
\midrule
-b_1 \\ -b_2 \\ \vdots \\ -b_{\scriptscriptstyle m-1}
\end{array}\right]
}
\end{equation}
This linear system can be solved to obtain the LSS
solution $v_i$ and $\eta_i$.
\subsection{Derivation of the linear system via
the continuous optimization approach}
\label{s:kktdisc}
Problem (\ref{lsq2}) is constrained by a differential equation.
Its optimality condition must be derived using calculus of variation.
Denote $w(t)$ as the Lagrange multiplier
function; the Lagrangian of Problem (\ref{lsq2}) is
\[
\Lambda=
\int_{0}^{T} \left(v^{\top} v + \alpha^2
\eta^2 + 2\,w^{\top}\left(
\frac{dv}{dt} - \frac{\partial f}{\partial u} v
- \frac{\partial f}{\partial s} - \eta f\right)\right)\,dt
\]
The optimality condition requires a zero variation of
$\Lambda$ with respect to arbitrary $\delta w$, $\delta v$ and
$\delta\eta$. This condition, through integration by parts,
transforms into the following differential equations and
boundary conditions
\[\left\{\begin{aligned}
&\frac{dv}{dt} - \frac{\partial f}{\partial u} v - \frac{\partial
f}{\partial s} - \eta f = 0\\
&\frac{dw}{dt} + \frac{\partial f}{\partial u}^{\top} w - v = 0 \\
&w(0) = w(T) = 0 \\
&\alpha^2\eta - w^{\top} f = 0 \;.\\
\end{aligned} \right.\]
These linear differential equations consistently discretize
into the same linear system (\ref{kkt}) derived in the last subsection.
\subsection{Solution of the linear system}
\label{s:solvekkt}
The KKT system (\ref{kkt}) can be solved by using Gauss elimination to
remove the lower-left
block, forming the Schur complement
\begin{equation} \label{SchurSys}
{\bf B} {\bf B}^T {\bf w} = {\bf b}\;,
\end{equation}
where
\begin{equation}\addtolength{\arraycolsep}{-1mm} \label{Bmat}
{\bf B} =
\left[\begin{array}{cccccccc}
E_1 & \frac{f_1}{\alpha} & G_1 & & & & \\
& & E_2 & \frac{f_2}{\alpha} & G_2 & & & \\
& & & & \ddots & \ddots & & \\
& & & & & E_{m-1}& \frac{f_{m-1}}{\alpha} & G_{m-1}
\end{array}\right]\;,\quad
{\bf w} =
\left[\begin{array}{c} w_1 \\ w_2 \\ \vdots \\ w_{m-1}
\end{array}\right]\;,\quad
{\bf b} =
\left[\begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_{m-1}
\end{array}\right]\,.
\end{equation}
This Schur complement matrix ${\bf B} {\bf
B}^T$ is symmetric-positive-definite and block-tri-diagonal;
its block size is the dimension of the
dynamical system $n$. Equation (\ref{SchurSys})
can be solved using a banded direct solver
with $O(m\,n^3)$ floating point operations \cite{Golub1996}.
One can also apply sparse QR factorization to the block-bi-diagonal
${\bf B}^T$,
and then use backward and forward substitution to compute $\bf w$.
The factorization also takes $O(m\,n^3)$ floating point
operations \cite{Golub1996}. Iterative methods can be used when
$n$ is large.
$\bf w$ is substituted into the upper blocks of Equation (\ref{kkt})
to compute $v_i$ and $\eta_i$.
These blocks can be written as
\begin{equation} \label{veta}
v_{i+\frac12} = -G_i^T w_i - E_{i+1}^T w_{i+1} \;, \;\; 0\le i<m\;;\qquad
\eta_i = -\frac{f_i^T w_i}{\alpha^2}\;,\;\; 0<i<m\;.
\end{equation}
with the notation $w_0 = w_m = 0$.
The desired derivative is then computed by discretizing
Equation (\ref{climatesens}) into
\begin{equation}\label{sensdJds}
\frac{d\langle J\rangle}{ds}
\approx
\frac{1}{m} \sum_{i=0}^{m-1}\left(
\frac{\partial J(u_{i+\frac12}, s)}{\partial u}\, v_{i+\frac12} +
\frac{\partial J(u_{i+\frac12}, s)}{\partial s}\right)
+ \frac{1}{m-1}\sum_{i=1}^{m-1} \eta_i \widetilde{J}_i
\end{equation}
where
\begin{equation} \label{gdef} \begin{split}
\widetilde{J}_i &=
\frac{J(u_{i-\frac12},s) + J(u_{i+\frac12},s)}{2} -
\frac{1}{m}\sum_{j=0}^{m-1} J(u_{j+\frac12},s) \;, \quad i=1,\ldots,m-1
\end{split} \end{equation}
\subsection{Summary of the algorithm}
\label{s:algorithm}
\begin{enumerate}
\item Choose a small time step size $\Delta t$ and
a sufficient number of time steps $m$.
\item Compute a solution $u_r$ of Equation (\ref{ode}), and store the
midpoint states $u_{i+\frac12}=u_r\big((i+\frac12)\Delta t\big), i=0,\ldots,m-1$.
\item Compute the vectors and matrices $E_i$, $f_i$, $G_i$ and $b_i$
as defined in Equations (\ref{efgdef}).
\item Form matrix ${\bf B}$. Choose an $\alpha$ so that $f_i/\alpha$
is on the same order of magnitude as $E_i$ and $G_i$.
Solve Equation (\ref{SchurSys}) for $\bf w$.
\item Compute $v_i$ and $\eta_i$ from Equation (\ref{veta}).
\item Compute desired derivative using Equation (\ref{sensdJds}).
\end{enumerate}
The computational cost is $O(m\,n^3)$ if a direct
solver is used for Equation (\ref{SchurSys}), where $m$ is the
number of time steps and $n$ is the dimension of the dynamical system.
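For reference, steps 1--6 can be condensed into the following Python
sketch. It is our own illustration: it discretizes the variational
problem (\ref{lsq2}) with the trapezoidal rule and computes the
minimum-norm solution of the resulting constraint via the Schur
complement, so its sign and scaling conventions are chosen for internal
consistency and need not match the intermediate equations above line by
line. Dense linear algebra is used, which is adequate for the small
systems in this paper; the banded or iterative solvers discussed above
should replace it for long trajectories or large $n$. The lssode
package\cite{lssode_sftw} implements the method in full; we do not
reproduce its API here.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve

def lss_gradient(dfdu, dfds, J, dJdu, dJds, u_r, s, dt, alpha):
    # u_r: (m, n) array of reference states at midpoints (i + 1/2) dt.
    m, n = u_r.shape
    # Trapezoidal blocks of the linearized constraint at nodes i*dt.
    E  = [-np.eye(n) / dt - 0.5 * dfdu(u_r[i - 1], s) for i in range(1, m)]
    G  = [ np.eye(n) / dt - 0.5 * dfdu(u_r[i], s)     for i in range(1, m)]
    fv = [(u_r[i] - u_r[i - 1]) / dt                  for i in range(1, m)]
    b  = np.concatenate([0.5 * (dfds(u_r[i - 1], s) + dfds(u_r[i], s))
                         for i in range(1, m)])
    # Assemble B (v-columns first, then scaled eta-columns) and solve
    # the Schur complement system B B^T w = b.
    B = np.zeros(((m - 1) * n, m * n + m - 1))
    for i in range(m - 1):
        B[i*n:(i+1)*n, i*n:(i+1)*n]     = E[i]
        B[i*n:(i+1)*n, m*n + i]         = -fv[i] / alpha
        B[i*n:(i+1)*n, (i+1)*n:(i+2)*n] = G[i]
    w = solve(B @ B.T, b).reshape(m - 1, n)
    # Minimum-norm solution x = B^T w, with w_0 = w_m = 0.
    v = np.zeros((m, n))
    for i in range(m):
        if i >= 1:
            v[i] += G[i - 1].T @ w[i - 1]
        if i <= m - 2:
            v[i] += E[i].T @ w[i]
    eta = np.array([-(fv[i] @ w[i]) / alpha**2 for i in range(m - 1)])
    # Discrete derivative of the time-averaged output.
    Jm   = np.array([J(u_r[i], s) for i in range(m)])
    Jtil = 0.5 * (Jm[:-1] + Jm[1:]) - Jm.mean()
    grad = np.mean([dJdu(u_r[i], s) @ v[i] + dJds(u_r[i], s)
                    for i in range(m)])
    return grad + np.mean(eta * Jtil)
\end{verbatim}
Note that permuting the columns of ${\bf B}$ (grouping the $v$ and
$\eta$ columns) leaves ${\bf B}{\bf B}^T$ unchanged, so the sketch solves
a Schur complement system of the same structure as Equation (\ref{SchurSys}).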
\subsection{Adjoint formulation of the sensitivity computation method}
\label{s:adjoint}
The discrete adjoint computes the same derivative as in
Equation (\ref{sensdJds}) by first solving the adjoint system
\begin{equation} \label{kktadj} \addtolength{\arraycolsep}{-1mm}
\scriptstyle
\left[\begin{array}{cccccccc|cccc}
I & & & & & & & & E_1^T & & & \\
&\alpha^2 & & & & & & & f_1^T & & & \\
& & I & & & & & & G_1^T & E_2^T & & \\
& & & \alpha^2 & & & & & & f_2^T & & \\
& & & & I & & & & & G_2^T & & \\
& & & & & \ddots & & & & & \ddots & E_{\scriptscriptstyle m-1}^T \\
& & & & & & \alpha^2 & & & & & f_{\scriptscriptstyle m-1}^T \\
& & & & & & & I & & & & G_{\scriptscriptstyle m-1}^T \\
\midrule
E_1 & f_1 & G_1 & & & & & & & & \\
& & E_2 & f_2 & G_2 & & & & & & & \\
& & & & \ddots & \ddots & & & & & & \\
& & & & & E_{\scriptscriptstyle m-1}& f_{\scriptscriptstyle m-1} & G_{\scriptscriptstyle m-1} & & & & \\
\end{array}\right]
\left[\begin{array}{c}
\hat{v}_{\frac12} \\ \hat{\eta}_1 \\ \hat{v}_{1+\frac12} \\ \hat{\eta}_2
\\ \hat{v}_{2+\frac12} \\
\vdots \\ \hat{\eta}_{\scriptscriptstyle m-1} \\ \hat{v}_{m-\frac12} \\
\midrule \hat{w}_1 \\ \hat{w}_2 \\ \vdots \\ \hat{w}_{\scriptscriptstyle m-1}
\end{array}\right]
=
\left[\begin{array}{c}
\frac{1}{m}\frac{\partial J(u_{1/2}, s)}{\partial u} \\
\quad\frac{1}{m-1}\widetilde{J}_1 \\
\frac{1}{m}\frac{\partial J(u_{1+1/2}, s)}{\partial u} \\
\quad\frac{1}{m-1}\widetilde{J}_2 \\
\frac{1}{m}\frac{\partial J(u_{2+1/2}, s)}{\partial u} \\ \vdots \\
\quad\frac{1}{m-1}\widetilde{J}_{m-1}\\
\frac{1}{m}\frac{\partial J(u_{\scriptscriptstyle m-1/2}, s)}{\partial u} \\
\midrule
0 \\ 0 \\ \vdots \\ 0
\end{array}\right]
\end{equation}
The system has the same matrix as
Equation (\ref{kkt}), but a different right hand side. It can
be solved by inverting
\begin{equation} \label{SchurSys2}
{\bf B} {\bf B}^T {\bf \hat{w}} = {\bf B}{\bf g}\;,
\end{equation}
where ${\bf B}$ is defined in Equation (\ref{Bmat}), ${\bf \hat{w}} =
(\hat{w}_1, \ldots, \hat{w}_{m-1})$, and $\bf g$ is the
upper part of Equation (\ref{kktadj})'s right hand side.
Once $\bf \hat{w}$ is computed, $d\langle J\rangle/ds$ can be computed via
\begin{equation}\label{adjdJds}
\frac{d\langle J\rangle}{ds} \approx
\sum_{i=1}^{m-1} b_i^T \hat{w}_i
+ \frac{1}{m} \sum_{i=0}^{m-1}\frac{\partial J(u_{i+\frac12}, s)}{\partial s}\;,
\end{equation}
where $b_i$ is defined in Equation (\ref{efgdef}).
This adjoint derivative
equals the derivative computed in Section \ref{s:algorithm} up to
round-off error. The examples in this paper use the
algorithm in Section \ref{s:algorithm}.
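In the conventions of the sketch in Section \ref{s:algorithm} (where the
second unknown is scaled as $\hat\eta = \alpha\eta$), the adjoint
variant replaces the recovery of $v$ and $\eta$ by a single extra solve
with the same Schur matrix; schematically, reusing the arrays
\texttt{B}, \texttt{b} and \texttt{u\_r} built inside
\texttt{lss\_gradient}:
\begin{verbatim}
# Same Schur matrix, different right-hand side; the ordering of g
# must match the column ordering used when assembling B, and the
# eta entries carry the alpha scaling of the eta columns.
Jm   = np.array([J(u_r[i], s) for i in range(m)])
Jtil = 0.5 * (Jm[:-1] + Jm[1:]) - Jm.mean()
g = np.concatenate([np.concatenate([dJdu(u_r[i], s) / m
                                    for i in range(m)]),
                    Jtil / (alpha * (m - 1))])
w_hat = solve(B @ B.T, B @ g)
grad = b @ w_hat + np.mean([dJds(u_r[i], s) for i in range(m)])
\end{verbatim}
The same $\hat{w}$ serves all input parameters; only the inexpensive
products $b_i^T \hat{w}_i$ must be recomputed for each new input.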
\section{Application to the Van der Pol oscillator}
\label{s:vdp}
We apply our method to the Van der Pol oscillator
\begin{equation} \label{vdp}
\frac{d^2y}{dt^2} = -y + \beta (1 - y^2) \frac{dy}{dt}\;.
\end{equation}
\begin{figure}[htb] \centering
\subfloat[Limit cycle attractors of the Van der Pol oscillator
at $\beta=0.2,0.8,1.6$ and $2.0$.]
{\label{f:vanderpolPhase}
\includegraphics[width=2.2in,trim=0 0.0cm 0 .9cm,clip]{figures/vanderpolTraj}}
\hspace{0.1in}
\subfloat[For each value of $\beta$, $\langle J\rangle^{\frac18}$ is
estimated 20 times by solving initial value problems of length $50$ with
random initial conditions.]{
\includegraphics[width=2.2in,trim=0 0.0cm 0 .9cm,clip]{figures/vanderpolJ}}\\
\subfloat[$d\langle J\rangle^{\frac18}/d\beta$
estimated by finite differencing pairs of trajectories
with $\Delta\beta=0.05$. For each value of $\beta$, the black dots
are computed on 20 pairs of trajectories with length $50$.
The red line is computed on pairs of trajectories with length
$5000$.]{
\includegraphics[width=2.2in,trim=0 0.0cm 0 .9cm,clip]{figures/vanderpoldJfd}}
\hspace{0.1in}
\subfloat[$d\langle J\rangle^{\frac18}/d\beta$ estimated
with Least Squares Shadowing sensitivity analysis.
For each value of $\beta$, the black dots
are computed on 20 trajectories of length $50$.
The red line is computed on trajectories of length
$5000$.]{
\includegraphics[width=2.2in,trim=0 0.0cm 0 .9cm,clip]{figures/vanderpoldJlss}}
\caption{Least Squares Shadowing Sensitivity Analysis
of the van der Pol oscillator.}
\label{f:vdp}
\end{figure}
to compute sensitivity to the parameter $\beta$ in the system.
Figure \ref{f:vanderpolPhase}
shows the limit cycle attractor as $\beta$ varies from
$0.2$ to $2.0$. As $\beta$ increases,
the maximum magnitude of $dy/dt$ significantly increases. We choose the
objective function to be the $L^8$ norm of $dy/dt$, which has a
similar trend to the $L^{\infty}$ norm and reflects the magnitude of the
peak in $dy/dt$.
By denoting $u=(u^{(1)}, u^{(2)})= (y, dy/dt)$ as the state vector, we convert
the second order ODE (\ref{vdp}) into two coupled first order ODEs, and
write the objective function as
\begin{equation}
\langle J\rangle^{\frac18} = \left( \lim_{T\rightarrow\infty}\frac1T \int_0^T
J(u, \beta) \,dt \right)^{\frac18}\;,\quad
J(u, \beta) = \left(u^{(2)}\right)^8
\end{equation}
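In first-order form, the oscillator and the output used here read as
follows (a minimal sketch; the names are our own illustrative choices):
\begin{verbatim}
import numpy as np

def f(u, beta):                # u = (y, dy/dt)
    y, ydot = u
    return np.array([ydot, -y + beta * (1.0 - y**2) * ydot])

def dfdu(u, beta):             # Jacobian of f with respect to u
    y, ydot = u
    return np.array([[0.0, 1.0],
                     [-1.0 - 2.0 * beta * y * ydot,
                      beta * (1.0 - y**2)]])

def dfds(u, beta):             # derivative of f with respect to beta
    y, ydot = u
    return np.array([0.0, (1.0 - y**2) * ydot])

J    = lambda u, beta: u[1]**8
dJdu = lambda u, beta: np.array([0.0, 8.0 * u[1]**7])
dJds = lambda u, beta: 0.0
\end{verbatim}
These callables can be passed directly to the \texttt{lss\_gradient}
sketch of Section \ref{s:algorithm}; the derivative of
$\langle J\rangle^{\frac18}$ then follows from Equation (\ref{dJvanderpol}).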
The method described in Section \ref{s:method} is then applied to
compute $v$: for each $\beta$, we start the simulation by assigning
uniform $[0,1]$ random numbers to $(u^{(1)}, u^{(2)})$ as the initial
condition at $t=-50$. This lead time is chosen to be long enough
that when the ODE is integrated to $t=0$, its state $u(0)$
is on its attractor.
A trajectory $u(t), 0\le t\le 50$ is then computed
using a scipy\cite{scipy} wrapper of lsoda\cite{lsoda},
with time step size
$\Delta t=0.02$. The trajectory is about 50 times
the longest timescale of the system.
The $m=2500$ states along the resulting trajectory
are used to construct the coefficient in Equation (\ref{kkt}).
The solution to Equation (\ref{kkt}) is then substituted into Equation
(\ref{sensdJds}) to estimate the derivative
of the $\langle J\rangle$ to the parameter $\beta$. Finally, the derivative
of the output $\langle J\rangle^{\frac18}$ is computed using
\begin{equation} \label{dJvanderpol}
\frac{d\langle J\rangle^{\frac18}}{d\beta} =
\frac{\langle J\rangle^{-\frac78}}{8} \frac{d\langle J\rangle}{d\beta}\;. \end{equation}
The computed derivative is compared against finite difference in
Figure \ref{f:vdp}. For each value of $\beta$, we repeat both the finite
difference and least squares shadowing 20 times on randomly initialized
trajectories; the spread of the computed derivatives
represents the approximation error due to
insufficient trajectory length. Long trajectories are used to compute
more accurate derivatives.
The results indicate that the least squares shadowing method is
more accurate than finite difference in this problem
with the same trajectory length.
\section{Application to the Lorenz system}
\label{s:lorenz}
We apply our method to the Lorenz system
\begin{equation} \label{lorenz}
\frac{dx}{dt} = \sigma(y-x)\;,\quad
\frac{dy}{dt} = x(\rho-z) - y\;,\quad
\frac{dz}{dt} = xy - \beta z\;,
\end{equation}
and analyze sensitivity to the parameter $\rho$ in the system.
The behavior of the Lorenz system as $\rho$ changes from $0$ to $100$ is
shown in Figure \ref{f:lorenzPhase}, and can be summarized as follows
\cite{lorenzbook}:
\begin{itemize}
\item Stable fixed point attractor at $(0,0,0)$ for $0\le \rho\le 1$.
\item Two stable fixed point attractors at
$x=y=\pm\sqrt{\beta(\rho-1)},z=\rho-1$ for $1<\rho<24.74$.
\item Quasi-hyperbolic strange attractors for $24.06<\rho<31$. This
includes the classic Lorenz attractor at $\rho=28$.
\item Non-hyperbolic quasi-attractors for $31<\rho<99.5$.
\item Periodic limit cycle attractors with
an infinite series of period doubling for $\rho>99.5$.
\end{itemize}
Despite the many transitions in the fundamental nature of the system,
the mean $z$ value
\begin{equation}
\langle z\rangle = \lim_{T\rightarrow\infty}\frac1T \int_0^T z \,dt
\end{equation}
apparently increases as the parameter $\rho$ increases. $\langle
z\rangle$ is chosen to be our time averaged output quantity in this study.
\begin{figure}[htb!] \centering
\subfloat[Attractors of the Lorenz system at $\rho=10$ (open circle),
$\rho=25,50,75$ and $100$ (blue, green, red and black lines,
respectively)]{ \label{f:lorenzPhase}
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/lorenzTraj}}
\hspace{0.1in}
\subfloat[ For each value of $\rho$, $\langle z\rangle$ is
estimated 20 times by solving initial value problems of length $50$ with
random initial conditions.]{
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/lorenzJ}}\\
\subfloat[$d\langle z\rangle/d\rho$
estimated by finite differencing pairs of trajectories
with $\Delta\rho=2$. For each value of $\rho$, the black dots
are computed on 20 pairs of trajectories with length $50$.
The red line is computed on pairs of trajectories with length
$5000$.]{
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/lorenzdJfd}}
\hspace{0.1in}
\subfloat[$d\langle z\rangle/d\rho$ estimated
with Least Squares Shadowing sensitivity analysis.
For each value of $\rho$, the black dots
are computed on 20 trajectories of length $50$.
The red line is computed on trajectories of length
$5000$.]{
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/lorenzdJlss}}
\caption{Least Squares Shadowing Sensitivity Analysis of the Lorenz system.}
\label{f:lorenz}
\end{figure}
By denoting $u= (x,y,z)$,
the method described in Section \ref{s:algorithm} is applied to the
Lorenz system. For each $\rho$, we start the simulation at $t=-50$
with uniform $[0,1]$ random numbers as initial conditions for $x,y$ and $z$.
The Lorenz system is
integrated to $t=0$, so that $u(0)$ is approximately on the
attractor. A trajectory $u(t), 0\le t\le 50$ is then computed
using a scipy\cite{scipy} wrapper of lsoda\cite{lsoda},
with time step size
$\Delta t=0.01$. The resulting $m=5000$ states along the trajectory
are used to construct the linear system (\ref{kkt}), whose solution
is then used to estimate the desired derivative
$d\langle z\rangle/d\rho$ using Equation (\ref{climatesens}).
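In the notation of the \texttt{lss\_gradient} sketch of Section
\ref{s:algorithm}, this computation amounts to the following (an
illustration, assuming the classical values $\sigma=10$ and $\beta=8/3$,
with the trajectory \texttt{u\_r} supplied by the lsoda integration
described above):
\begin{verbatim}
import numpy as np

def dfdu(u, rho):              # Jacobian of the Lorenz system
    x, y, z = u
    return np.array([[-10.0, 10.0, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -8.0 / 3.0]])

dfds = lambda u, rho: np.array([0.0, u[0], 0.0])  # df/d(rho)
J    = lambda u, rho: u[2]                        # output is z
dJdu = lambda u, rho: np.array([0.0, 0.0, 1.0])
dJds = lambda u, rho: 0.0

# u_r: (5000, 3) array of states at midpoints, dt = 0.01
# grad = lss_gradient(dfdu, dfds, J, dJdu, dJds, u_r, 28.0, 0.01, 10.0)
\end{verbatim}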
\begin{figure}[htb!] \centering
\subfloat[For each time length $T$, the Least squares shadowing
algorithm runs on 10 random trajectories, computing 10 different
derivatives.]{
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/lssconv1}}
\hspace{0.1in}
\subfloat[The sample standard deviation of the 10 derivatives at
each trajectory length $T$.]{
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/lssconv2}}
\caption{Convergence of Least Squares Shadowing Sensitivity Analysis
applied to the Lorenz system.}
\label{f:lorenzconv}
\end{figure}
The computed derivative is compared against finite difference values in
Figure \ref{f:lorenz}. The dip in the finite difference value
at around $\rho=22.5$ is due to a bifurcation from fixed
point attractors to strange attractors at $24.0\le \rho\le 24.74$
(the two types of attractors co-exist within this range).
For $24.74<\rho<31$, the Lorenz system is dominated by a
quasi-hyperbolic attractor. Least squares shadowing
sensitivity analysis computes accurate and consistent gradients on
randomly chosen short trajectories on the attractor. The
computed gradients have a random error on the order of $O(T^{-\frac12})$,
a result derived theoretically for discrete-time dynamical systems
\cite{lsstheory} and shown empirically here in Figure \ref{f:lorenzconv}.
As $\rho$ increases beyond $31$, the system is non-hyperbolic and
its trajectories form an object known as a quasi-attractor
\cite{bonatti2010dynamics}.
For $\rho>99.5$, the system
transitions to periodic oscillations, then goes through an infinite
series of period doubling bifurcations.
Despite the complex, non-hyperbolic behavior, our method computes
derivatives that are more accurate than finite difference on the
same trajectory lengths.
\section{Application to an aero-elastic limit cycle oscillator}
\label{s:aeroelastic}
We apply our method to a simple model of aeroelastic limit cycle
oscillation, as shown in Figure \ref{f:airfoil}.
\begin{figure}[htb!] \centering
\includegraphics[width=3.2in]{figures/pitchandplunge}
\caption{Model aero-elastic oscillator}
\label{f:airfoil}
\end{figure}
The model is described in detail by Zhao and Yang\cite{chaotic_lco}.
The governing equations are
\begin{equation}\label{stiff}
\begin{split}
&\frac{d^2h}{dt^2} + 0.25\,\frac{d\alpha}{dt} + 0.1\, \frac{dh}{dt} + 0.2\, h
+ 0.1\, Q\, \alpha = 0 \\
&0.25\, \frac{d^2h}{dt^2} + 0.5\, \frac{d^2\alpha}{dt^2} + 0.1\, \frac{d\alpha}{dt}
+ 0.5\, \alpha + 20\, \alpha^3 - 0.1\, Q\, \alpha = 0
\end{split}
\end{equation}
where $h$ is the plunging degree of freedom, and $\alpha$ is the
pitching degree of freedom.
We analyze sensitivity to the reduced dynamic pressure $Q$.
\begin{figure}[htb!] \centering
\subfloat[Bifurcation diagram in the parameter range considered.]{
\label{f:stiffbif}
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/lcoBif}}
\hspace{0.1in}
\subfloat[Phase plots ($\alpha$ vs $\dot\alpha=d\alpha/dt$)
at $Q=8$ (black), $Q=12$ (green) and $Q=16$ (red).]{\label{f:stiffphase}
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/lcoTraj}}\\
\subfloat[$d\langle J\rangle^{\frac18}/dQ$
estimated by finite differencing pairs of trajectories
with $\Delta Q=0.2$. For each value of $Q$, the black dots
are computed on 20 pairs of trajectories with length $300$.]{
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/1}}
\hspace{0.1in}
\subfloat[$d\langle J\rangle^{\frac18}/dQ$ estimated
with Least Squares Shadowing sensitivity analysis.
For each value of $Q$, the black dots
are computed on 20 trajectories of length $300$.
The red line is computed on trajectories of length
$30000$.]{
\includegraphics[width=2.2in,trim=0 0 0 0,clip]{figures/2}}
\caption{Least Squares Shadowing Sensitivity Analysis on the
aero-elastic oscillator model (\ref{stiff}).}
\label{f:stiff}
\end{figure}
The bifurcation diagram of $\alpha$ as $Q$ increases
from 8 to 16 is shown in Figure \ref{f:stiffbif}.
The behavior of the system as $Q$ varies is complex
\cite{Lee_Price_Wong_1999}:
At low values of $Q$, the system has an asymmetric limit cycle
attractor. As $Q$ increases beyond about 10.25, a series of period
doubling bifurcations occurs, leading to transition into chaos just
beyond $Q=11$. At about $Q=12.5$, the system ceases to be chaotic, and
transitions to symmetric periodic limit cycle oscillation.
When $Q$ increases beyond about $13.25$, there appears to be
small windows of asymmetric oscillations.
Finally, at about $Q=13.9$, the system recovers symmetric
periodic limit cycle oscillations.
The phase plot of the system at several values of $Q$ is shown in Figure
\ref{f:stiffphase}. These include an asymmetric periodic limit cycle
attractor at $Q=8$, a chaotic limit cycle attractor or quasi-attractor
at $Q=12$, and a symmetric periodic limit cycle attractor at $Q=16$.
We observe that the magnitude of the
oscillation grows as $Q$ increases, and choose the $L^8$ norm of
the pitch angle $\alpha$ as the objective function. The $L^8$ norm
has similar trend as the $L^{\infty}$ norm, and indicates
the magnitude of the oscillation in the pitching degree of freedom.
Denoting $u=(u^{(1)},u^{(2)},u^{(3)},u^{(4)})
= (h, \alpha, dh/dt, d\alpha/dt)$ as the state vector,
we convert the pair of second order ODEs (\ref{stiff}) into a system of four
first order ODEs. The output can then be written as
\begin{equation}
\langle J\rangle^{\frac18} = \left( \lim_{T\rightarrow\infty}\frac1T
\int_0^T u^{(2)\,8} \,dt \right)^{\frac18}
\end{equation}
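Taking Equations (\ref{stiff}) as printed, the plunge equation is
explicit in $d^2h/dt^2$, so the acceleration coupling can be resolved by
substitution; a minimal state-space sketch (names ours) is:
\begin{verbatim}
import numpy as np

def f(u, Q):                   # u = (h, alpha, dh/dt, dalpha/dt)
    h, a, hdot, adot = u
    # plunge equation solved for d2h/dt2
    hddot = -(0.25 * adot + 0.1 * hdot + 0.2 * h + 0.1 * Q * a)
    # pitch equation solved for d2alpha/dt2, substituting hddot
    addot = (-0.25 * hddot - 0.1 * adot - 0.5 * a
             - 20.0 * a**3 + 0.1 * Q * a) / 0.5
    return np.array([hdot, adot, hddot, addot])

J = lambda u, Q: u[1]**8       # instantaneous output for the L8 norm
\end{verbatim}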
We use the method described in Section \ref{s:algorithm} to compute the
derivative of the objective function to the input parameter
$Q$. For each $Q$, we initiate the simulation at $t=-300$
with uniform $[0,1]$ random numbers as its initial condition.
The ODE is integrated to $t=0$ to ensure that $u(0)$ is approximately on
an attractor. A trajectory $u(t), 0\le t\le 300$ is then computed
using a scipy\cite{scipy} wrapper of lsoda\cite{lsoda},
with time step size
$\Delta t=0.02$. The resulting $15000$ states along the trajectory
are used to construct the linear system (\ref{kkt}), whose solution is
used to estimate the derivative
of the output with respect to $Q$.
The computed derivative is compared against finite difference values in
Figure \ref{f:stiff}.
Whether the system exhibits periodic or chaotic limit cycle oscillations,
the derivative computed using least
squares shadowing sensitivity analysis is more accurate than finite
difference results.
\section{Conclusion}
\label{s:conclude}
We presented the Least Squares Shadowing method for computing
derivatives in ergodic dynamical systems.
Traditional tangent and adjoint methods linearize the
ill-conditioned initial value problem, thereby computing derivatives
too large to be useful for control, optimization and inference
problems.
The new method linearizes the well-conditioned least squares shadowing
problem, thereby computing useful derivatives of long time averaged
quantities. The method is
demonstrated on the periodic van der Pol oscillator, the chaotic Lorenz
attractor, and a simple aero-elastic oscillation model that exhibits
mixed periodic and chaotic behavior. These applications demonstrate
the effectiveness of our new
sensitivity computation algorithm in many complex nonlinear dynamics
regimes. These include fixed points, limit cycles, quasi-hyperbolic and
non-hyperbolic strange attractors.
The Least Squares Shadowing method requires solving either a sparse matrix
system (in its discrete formulation) or a boundary value problem in time
(in its continuous formulation). This boundary value problem is
about twice as large as a linearized initial value problem, in terms of
the dimension and sparsity of the matrix for the discrete formulation,
and in terms of the number of equations for the continuous formulation.
When the dynamical system is low dimensional,
the sparse matrix system can be solved using a direct matrix
solver; computing the derivative of the output costs a few times
more than computing the output itself by solving an initial value problem.
When the dynamical system is high dimensional, e.g., a discretized
partial differential equation, iterative solution methods should be
used instead of direct matrix solvers.
Because the system is well-conditioned and only twice as large as an
initial value problem, an iterative solution can potentially cost only a small
multiple of an initial value solution, particularly if using an iterative
solver specifically designed for this problem.
Therefore, we think that the Least Squares Shadowing method
is not only efficient for low-dimensional
chaotic dynamical systems, but also applicable to sensitivity analysis of
large chaotic dynamical systems.
\section*{Acknowledgments}
The first author acknowledges AFOSR Award F11B-T06-0007
under Dr. Fariba Fahroo, NASA Award NNH11ZEA001N under Dr. Harold Atkins,
and a subcontract of the DOE PSAAP Program at Stanford.
\section{Introduction}
Astrophysical compact objects are often observed as luminous sources of nonthermal radiation.
Their activity demonstrates efficient dissipation of macroscopic energy stored in magnetic fields and in plasma bulk motions, often in the form of powerful flares.
These processes energize plasma particles and make them emit broad-band radiation spectra, sometimes extending to very high energies.
A possible way for the dissipation to occur is through a macroscopic magnetohydrodynamical (MHD) instability.
It can excite turbulent motions on a large scale $l_0$ which enable a transfer of energy to small scales, where it can be dissipated.
A special feature of compact objects and their outflows is that the energized plasma can be magnetically dominated.
This condition is often expressed in terms of the magnetization parameter $\sigma=B^2/4\pi\rho c^2$, where $B$ is the magnetic field, and $\rho$ is the plasma mass density.
The magnetization $\sigma>1$ is likely in the coronae and jets of accreting black holes; $\sigma\gg 1$ also occurs in pulsar magnetospheres and their winds.
Excitation of turbulent motions in a magnetically-dominated plasma with amplitude $\delta B/B\sim 1$ implies that a large energy per particle becomes available for dissipation.
Furthermore, turbulence can stochastically accelerate a fraction of particles to extremely high energies (e.g. \citealt{Petrosian2012}).
A complete model of this process must self-consistently follow the plasma waves and individual particle dynamics, which can be done with advanced numerical simulations.
A special feature of the magnetically-dominated plasma is that its Alfv\'en speed is relativistic, $V_{\mathrm{A}} = \sqrt{\sigma/(1+\sigma)} \approx c$.
Relativistic kinetic turbulence simulations have only recently become feasible thanks to the increase in computational resources \citep{Zhdankin_2017b, Zhdankin_2017a, Zhdankin_2019a, Comisso_2018, Wong_2019, Nattila_2019, Comisso_2019}.
In agreement with theoretical expectations, the numerical experiments demonstrated that a fraction of plasma particles experience stochastic acceleration to very high energies, until their Larmor radius becomes comparable to $\ell_0$.
In astrophysical objects, turbulent heating of the plasma can be accompanied by significant radiative losses.
The losses limit particle acceleration and the growth of the plasma temperature.
Furthermore, radiative losses may affect the development of the turbulence itself.
Most of the previous work on radiative turbulence was analytical \citep[e.g.,][]{Thompson_2006, Uzdensky_2018, Sobacchi_2019, Zrake_2019}.
The only existing kinetic simulations of radiative relativistic turbulence were recently performed by \citet{Zhdankin_2019}.
Their simulation setup assumed steady driving of turbulence in a magnetized electron-positron plasma with a strong level of radiative losses, which completely suppressed stochastic particle acceleration.
In the present paper, we perform kinetic simulations of turbulent {\it flares}.
We envision a sudden excitation of turbulence by an MHD instability, which deposits a macroscopic energy comparable to the total magnetic energy of the system.
The initial disturbance on a large scale $l_0$ is followed by the development of turbulent plasma motions and the eventual dissipation of the injected magnetic energy.
We investigate how the deposited turbulence energy is radiated, how the radiative losses affect particle acceleration, and what spectra can be radiated by the turbulent flares.
For simplicity, all our simulations will assume that the plasma is made of electrons and positrons and that the plasma is optically thin, so that the emitted radiation freely escapes.
The opposite, optically thick, regime was recently investigated in the context of gamma-ray bursts by \cite{Zrake_2019}.
We perform both two-dimensional ($2D$) and three-dimensional ($3D$) radiative kinetic simulations to model the flares.
Our setup of initial conditions is similar to that in \citet{Comisso_2018,Nattila_2019, Comisso_2019}.
Remarkably, for this setup the turbulence development and particle acceleration picture in $2D$ is similar to the results of full $3D$ simulations \citep{Comisso_2019}.
We call this turbulence {\it reconnection-mediated}, as we observe that the turbulence develops by forming reconnecting current sheets, in contrast to the canonical picture of an Alfv\'en-wave cascade.
2D simulations have lower computational costs and can be performed with particularly long durations and high resolutions.
We use many 2D simulations to systematically study the effects of radiative losses.
We also perform a few
large-scale $3D$ simulations to test their difference from the $2D$ models.
All our simulations are performed with the open-source kinetic code \textsc{runko} \citep{Nattila_2019}.
The paper is organized as follows.
The simulation setup is described in Section~\ref{sect:numerics}.
Section~\ref{sect:turb} presents our results for turbulent flares without cooling.
Then, in Section~\ref{sect:radturb}, we use analytical estimates to discuss the expected effects of radiative losses and define two cooling regimes: weak and strong.
The full radiative simulations are presented in Sections~\ref{sect:results} and
\ref{sect:flares}.
Finally, conclusions are given in Section~\ref{sect:conclusions}.
\section{Simulation setup}\label{sect:numerics}
\subsection{Pre-flare state}
\label{equilibrium}
The unperturbed equilibrium state is a homogeneous neutral pair plasma with a temperature $T_0$. The corresponding dimensionless temperature is
\begin{equation}
\theta_0 = \frac{k_{\mathrm{B}} T_0}{m_e c^2},
\end{equation}
where $k_{\mathrm{B}}$ is the Boltzmann constant, $m_e$ is the electron rest mass, and $c$ is the speed of light.
All models shown in this paper have $\theta_0 = 0.3$, which corresponds to a mean particle Lorentz factor of $\gamma_{\mathrm{th}} \approx 1.6$. We also performed simulations with $\theta_0$ ranging from $10^{-4}$ up to $0.6$, with similar results.
The choice of $\theta_0$ is unimportant as long as the initial thermal energy is much smaller than the energy of the injected turbulence.
The pre-flare plasma is magnetized with a uniform magnetic field $\boldsymbol{B}_0$.
The dimensionless magnetization parameter is defined as
\begin{equation}
\sigma_0 = \frac{B_0^2}{4\pi \rho_0 c^2},
\end{equation}
where $\rho_0 = n_0 m_e$ is the plasma rest-mass density, and $n_0 = n_- + n_+$ is the number density of electrons and positrons.
The magnetization parameter that takes into account heat contribution to the plasma inertia is given by
\begin{equation}
\sigma = \frac{\sigma_0}{\gamma_\mathrm{th}}\approx \frac{\sigma_0}{1+3\theta_0}.
\end{equation}
In this paper, we focus on the magnetically dominated regime of $\sigma_0 > 1$, and our fiducial simulation setup has $\sigma_0 \approx 16$ and $\sigma = 10$.
In addition, we performed simulations with $\sigma_0 \approx 2$, $8$, $30$ (and $\sigma = 1$, $5$, $20$).
The magnetized plasma is described by two characteristic frequencies:
the plasma frequency $\omega_\mathrm{p}$ and the frequency of Larmor rotation $\omega_B$.
They are given by
\begin{equation}
\omega_\mathrm{p} =\left(\frac{4\pi e^2 n_0}{m_e}\right)^{1/2},
\qquad
\omega_B =\frac{e B_0}{m_e c},
\end{equation}
and $e$ is the electron charge.
Note that $\omega_B/\omega_\mathrm{p}=\sigma_0^{1/2}=(\gamma_\mathrm{th} \sigma)^{1/2}$.
The two frequencies define two characteristic scales of the problem:
the plasma skin depth $c/\omega_\mathrm{p}$ and the Larmor radius of non-relativistic particles $c/\omega_B$.
The plasma is strongly magnetized in the sense that $\,c/\omega_B$ is much smaller than the size of the system.
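For orientation, these scales are easy to evaluate; the sketch below
(our own illustration, in Gaussian cgs units with example values) also
verifies the relation $(c/\omega_B)/(c/\omega_\mathrm{p}) = \sigma_0^{-1/2}$.
\begin{verbatim}
import numpy as np

e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10   # Gaussian cgs units

def plasma_scales(n0, B0):
    # skin depth c/w_p and non-relativistic Larmor radius c/w_B, in cm
    w_p = np.sqrt(4.0 * np.pi * e**2 * n0 / m_e)
    w_B = e * B0 / (m_e * c)
    return c / w_p, c / w_B

n0, B0 = 1.0e10, 1.0e3        # example values only (cm^-3 and G)
skin, larmor = plasma_scales(n0, B0)
sigma0 = B0**2 / (4.0 * np.pi * n0 * m_e * c**2)
print(larmor / skin, sigma0**-0.5)   # the two ratios agree (~0.3)
\end{verbatim}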
\subsection{Exciting the turbulent flare}
\label{excitation}
Let us choose the $z$-axis along the unperturbed magnetic field $\vec{B}_0$.
In $2D$ systems all perturbed quantities will remain independent of $z$ and dynamics will occur in the $x$-$y$ plane.
In 3D simulations we additionally perturb the system in the $z$ direction.
Turbulence is created by starting from a non-equilibrium, excited state in the same way as in \citet{Comisso_2018,Nattila_2019, Comisso_2019}.
The plasma is initially at rest and carries no electric current;
it has a uniform density $n_0$ and temperature $\theta_0$.
The initial excited state differs from the equilibrium state described in Section~\ref{equilibrium} only by the presence of an additional magnetic field perpendicular to $\vec{B}_0$: $\vec{B}_{\perp}=(B_x,B_y)$.
This field is described by its Fourier components as follows
\begin{align}
B_x &= \phantom{+}\sum_{l,m,n} \beta_{lm} \sin(k_l x+\phi_{lmn})\cos(k_m y+\psi_{lmn})\sin(k_n z + \chi_{lmn}) \label{eq:B_perturb1}, \\
B_y &= - \sum_{l,m,n} \beta_{lm} \cos(k_l x+\phi_{lmn})\sin(k_m y+\psi_{lmn})\sin(k_n z + \chi_{lmn}), \label{eq:B_perturb2}
\end{align}
where $l,m \in \{1,\ldots,N_\perp\}$ are the perpendicular mode numbers, $n \in \{1,\ldots,N_\parallel\}$ is the parallel (along $\vec{\hat{z}}$) mode number, $k_l = 2\pi l/L$, $k_m = 2\pi m/L$, and $k_n = 2\pi n/L$ are the wave numbers along $x$, $y$ and $z$, respectively, and $\phi_{lmn}$, $\psi_{lmn}$, $\chi_{lmn}$ are random phases.
The parameters
\begin{equation}
\beta_{lm} \equiv
\frac{1}{2\sqrt{2}}
\frac{1}{N_\perp \sqrt{N_\parallel}}
\frac{B_{\perp}^{\mathrm{rms}} }{\sqrt{l^2 + m^2}}
\end{equation}
set the amplitude of the perturbations $\delta B=B_{\perp}$, which is described by the rms value $B_\perp^{\mathrm{rms}} = \sqrt{\langle B_\perp^2 \rangle}$, where $\langle \ldots \rangle$ denotes volume average over the simulation domain.
The perpendicular field satisfies $\langle B_\perp \rangle = 0$.
The perturbation is non-helical.
In our fiducial $2D$ setup we use $N_\perp=8$ modes, which results in stirring turbulence on scale $l_0 \approx 125 \,c/\omega_\mathrm{p}$.
We also made test runs with $N_\perp=4$ and $16$.
We chose $N_\perp=8$ because in this case $l_0$ is sufficiently small to excite many turbulent eddies in the box, and sufficiently large to be far from the microscopic plasma scale.
In $3D$ simulations we are limited by the computational cost and therefore forced to select a smaller number of modes;
we use $N_\perp=3$, which corresponds to a similar $l_0 \approx 140 \,c/\omega_\mathrm{p}$.
Additionally, we perturb the system in the $z$ direction with two sinusoidal modes, $N_\parallel = 2$.
Our fiducial model has the initial perturbation amplitude $B_{\perp}^{\mathrm{rms}}/B_0 = 1$.
This setup is designed so that one can think of scale $l_0$ as the size of turbulent eddies for which $\delta B/B\sim 1$.
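As an illustration, the excitation of Equations~(\ref{eq:B_perturb1})--(\ref{eq:B_perturb2}) can be reproduced in a few lines. The Python sketch below builds $\vec{B}_\perp$ on a small periodic grid; for simplicity, it rescales the field numerically to the target rms amplitude rather than relying on the analytic prefactor $\beta_{lm}$, and it is not the initialization routine used in the production runs:
\begin{verbatim}
import numpy as np

def init_perturbation(L, Ng, Nperp, Npar, B_rms, seed=1):
    """Non-helical spectrum of perpendicular magnetic modes with random
    phases, following Equations (eq:B_perturb1)-(eq:B_perturb2)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, Ng, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    Bx = np.zeros_like(X)
    By = np.zeros_like(X)
    for l in range(1, Nperp + 1):
        for m in range(1, Nperp + 1):
            amp = 1.0 / np.hypot(l, m)     # beta_lm ~ 1/sqrt(l^2 + m^2)
            for n in range(1, Npar + 1):
                kl, km, kn = 2*np.pi*l/L, 2*np.pi*m/L, 2*np.pi*n/L
                ph, ps, ch = rng.uniform(0.0, 2*np.pi, size=3)
                Bx += amp*np.sin(kl*X+ph)*np.cos(km*Y+ps)*np.sin(kn*Z+ch)
                By -= amp*np.cos(kl*X+ph)*np.sin(km*Y+ps)*np.sin(kn*Z+ch)
    # rescale numerically to the requested rms amplitude
    norm = B_rms / np.sqrt(np.mean(Bx**2 + By**2))
    return Bx * norm, By * norm

Bx, By = init_perturbation(L=1.0, Ng=64, Nperp=3, Npar=2, B_rms=1.0)
\end{verbatim}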
Note that this way of triggering turbulence is quite violent.
The initial state is far out of pressure balance, leading to a quick rearrangement of the system that excites mildly relativistic plasma motions in the $x$-$y$ plane.
The initial rearrangement causes transient phenomena on the timescale $\sim l_0/c$, before relaxation into a quasi-steady turbulent state.
We focus on the latter, quasi-steady stage, which lasts a much longer time of tens of $l_0/c$.
Since there is no driving of turbulence apart from the strong initial perturbation described above, the turbulent motions eventually dissipate.
We follow this process to a time $t$ of at least $20 \,l_0/c$.
The durations of some simulations were extended to $t \sim 100 \,l_0/c$.
An alternative way of exciting similar turbulence is by perturbing the $J_z$ current with an oscillating Langevin antenna \citep{TenBarge2014} instead of perturbing $\vec{B}$.
We have verified that this leads to similar results.
\subsection{Numerical implementation}
Fully kinetic calculations are required to study how turbulence energizes plasma particles.
We use relativistic particle-in-cell (PIC) simulations, where the field evolution is calculated on a grid, and the plasma is represented by a large number of charges moving through the grid and creating electric currents.
All our simulations are performed with the recently developed code \textsc{Runko} \citep{Nattila_2019}, designed as a modern, massively parallel, C++14/Python3 platform for plasma simulations.
The PIC module in \textsc{Runko} uses a second-order finite-difference time-domain electromagnetic field solver, a charge-conserving current-deposition scheme, and digital current filtering.
Particles are propagated in time with a relativistic Boris pusher \citep[see][for details]{Nattila_2019}.
For the present work, we have modified the code to include radiative losses of particles as
described in Sect.~\ref{sect:cooling_model}.
The code evolves all three ($x$, $y$, $z$) components of the fields and the particles' velocities.
Periodic boundary conditions are imposed on the computational box.
At each timestep we perform $8$ digital current filtering passes (with a $3$-point binomial filter) to damp out unphysical high-frequency numerical noise.
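For reference, one pass of such a filter is a 1-2-1 convolution applied along each grid axis. A minimal Python sketch of a generic binomial smoother (not \textsc{Runko}'s actual implementation) is:
\begin{verbatim}
import numpy as np

def binomial_filter_pass(J):
    """One 3-point binomial (1-2-1)/4 smoothing pass applied along
    each axis of a periodic current-density array."""
    for axis in range(J.ndim):
        J = 0.25*np.roll(J, 1, axis) + 0.5*J + 0.25*np.roll(J, -1, axis)
    return J

J = np.random.standard_normal((128, 128))
for _ in range(8):   # our 2D runs use 8 passes per timestep
    J = binomial_filter_pass(J)
\end{verbatim}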
In our 2D simulations, the domain is a square in the $x$-$y$ plane of size $L = 1024\,c/\omega_\mathrm{p}$.
The square is covered by a Cartesian grid of size $5120^2$, so that the (non-relativistic) plasma skin depth $\,c/\omega_\mathrm{p}$ is resolved with $5$ grid cells.
The plasma is simulated with $32$ particles per cell per species.
We have benchmarked the validity of this setup against shorter simulations with up to $256$ particles per cell per species, $10$ grid points per skin depth, and no current filtering.
The results were found to be well converged. We have also tested different computational box sizes from $L \omega_\mathrm{p}/c \approx 100$ up to $\sim 7000$ (corresponding to a maximum grid size of $20480^2$).
We found that a minimum scale separation of $l_0 \omega_\mathrm{p}/c \sim 100$ is needed to properly capture the phenomena described below in this paper.
In our 3D simulations, the domain is a cube of side $L = 426\,c/\omega_\mathrm{p}$ covered by $1280^3$ grid cells.
We resolve the (non-relativistic) plasma skin depth with $3$ cells and use $2$ particles per cell per species to model the plasma.
Four current filtering passes are performed on each timestep.
As in the 2D case, the validity of these simulation parameters was benchmarked against shorter simulations with a maximum size of $L = 640\,c/\omega_\mathrm{p}$ (corresponding to a resolution of $1920^3$).
\section{Relativistic kinetic turbulence}\label{sect:turb}
\subsection{The role of magnetic reconnection}\label{sect:reconnection}
\begin{figure*}
\centering
\includegraphics[trim={0.0cm 0.1cm 0.0cm 0.0cm}, clip=true, width=0.47\textwidth]{fig_v4_s10_rho_15000.pdf}
\includegraphics[trim={0.5cm 0.0cm 0.8cm 0.5cm}, clip=true, width=0.35\textwidth]{fig_v4_d3x128_3d_dens_5000.png}
\includegraphics[trim={0.0cm 0.1cm 0.0cm 0.4cm}, clip=true, width=0.47\textwidth]{fig_v4_s10_jz_15000.pdf}
\includegraphics[trim={0.5cm 0.0cm 0.8cm 0.5cm}, clip=true, width=0.35\textwidth]{fig_v4_d3x128_3d_jz_5000.png}
\includegraphics[trim={0.0cm 0.0cm 0.0cm 0.4cm}, clip=true, width=0.47\textwidth]{fig_v4_s10_dissipf_15000.pdf}
\includegraphics[trim={0.5cm 0.0cm 0.8cm 0.5cm}, clip=true, width=0.35\textwidth]{fig_v4_d3x128_3d_dissipf_5000.png}
\caption{\label{fig:visuals}
General appearance of the relativistic turbulent plasma for the fiducial $\sigma_0 = 16$ magnetization.
Left panels show the full 2D simulation domain at $t \approx 10\,l_0/c$ and right panels show the periphery of the 3D simulation box at $t \approx 5\,l_0/c$.
The dominant background magnetic field ${\mathbf B}_0$ is oriented along the $\vec{z}$ axis (out of the plane in the 2D figures).
Top row shows the plasma density $n/n_0$ (in units of the initial plasma density $n_0$),
middle row shows current density $J_z$ (in units of $e n_0 c$),
and bottom row shows the local dissipation rate coefficient ${\cal D}_J$, which is proportional to $|J|\,\vec{E}\cdot\vec{J}$ and defined in Equation~(\ref{eq:diss}).
Magnetic flux ropes appear as round overdense structures in the 2D setup.
Current sheets are constantly being created at the interfaces of the colliding and merging flux ropes.
They are sites of strong, localized, intermittent dissipation as observed in the bottom panels.
}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{fig_v4_s10_spec_mag.pdf}
\caption{\label{fig:spec}
Magnetic field energy spectrum $d\ene{B}{} (k)/dk$ for the fiducial $2D$ simulation with $\sigma_0 = 16$.
Different colors correspond to different times $t$ as indicated in the color bar.
Bottom panel shows the spectral slope computed in a narrow moving time window.
For comparison, two slopes are indicated: $-2$ (dashed line), and $-4.5$ (dotted line).
Vertical dashed line shows the location of the injection scale, $l_0$.
}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{fig_v4_s10_spec_kin.pdf}
\caption{\label{fig:spec_kin}
Kinetic energy spectrum $d\ene{v}{}(k)/dk$ for the fiducial $2D$ simulation with $\sigma_0 = 16$.
Different colors correspond to different times $t$, similar to Figure~\ref{fig:spec}.
Bottom panel shows the spectral slope computed in a narrow moving time window.
For comparison, a spectral slope of $p = -5/3$ (dashed line) is shown.
The inset shows the separated energy spectra of the perpendicular velocity component $\propto v_\perp^2$ (black curve) and the out-of-plane component $\propto v_z^2$ (blue curve) at $t \approx 10~l_0/c$.
}
\end{figure}
Turbulence in a magnetized plasma is usually expected to develop an energy cascade toward small spatial scales $l\ll l_0$, in both $2D$ and $3D$ models \citep[e.g.,][]{Goldreich_1995, Biskamp_2003, loureiro_2018}.
The cascade is described by the coupled fluctuations of
the magnetic field $\delta \vec{B}$ and the plasma velocity $\delta\vec{v}$.
A self-similar cascade has a power-law distribution over the wavenumber $k = 2\pi/l$,
\begin{equation}
(\delta\vec{v})_k^2 \propto (\delta \vec{B})_k^2 \propto k^q,
\end{equation}
and the corresponding distribution of the turbulence magnetic energy has the form
\begin{equation}
\frac{d \ene{B}{} }{d k} \propto k^p.
\end{equation}
The canonical Kolmogorov slope $p=-5/3$ corresponds to $q = -2/3$.
Steeper slopes were also found, in particular in the force-free limit \citep{Li_2019}.
The cascade extends down to scale $l_\nu$ at which dissipative processes stop the nonlinear transfer of turbulence energy to smaller scales.
The magnetically-dominated plasma is weakly compressible, and the turbulent motions are mainly
shear motions perpendicular to the background field $\vec{B}_0$.
The turbulence is nearly transverse ($\vec{v}\perp\vec{B}_0$) and has a small amplitude $\delta B\ll B_0$ on small scales $l$, i.e. it weakly bends the guide field lines.
The transverse plasma motions follow the displacements of the magnetic field lines as long as the ideal MHD description holds.
Their decoupling occurs on small kinetic scales and leads to dissipation.
Some previous studies of turbulence emphasized the dissipative role of magnetic reconnection, on both large and small scales
\citep[e.g.,][]{Biskamp_2003, Loureiro_2020}.
The scale of this process depends on how the turbulence is generated.
Magnetic reconnection is quickly activated in our simulations, and strongly affects the development of plasma motions on small scales.
A typical state of the computational box after the development of turbulence is shown in Figure~\ref{fig:visuals}.
We observe that the flux ropes twisted by the initial perturbations have quickly developed thin current sheets between them, which begin to reconnect, generating smaller flux ropes.
At the same time, there is an inverse process of ``coagulation'' of flux ropes.
The system remains highly dynamic as the flux ropes move around, collide, and merge.
Flux tubes with the same current polarity attract each other, while those with opposite polarities repel.
In this ``reconnection-mediated'' turbulence, current sheets serve as the sites for turbulence dissipation. We observe that the magnetic field fluctuations
efficiently decay via reconnection in the numerous current sheets on various scales.
Note that, in contrast to magnetohydrodynamic (MHD) models, kinetic plasma simulations do not need any prescriptions for resistivity.
Instead, they follow the development of the tearing instability of the current sheets and the resulting dissipation from first principles.
At large scales, $l\,\omega_\mathrm{p}/c \rightarrow\infty$, kinetic simulations reproduce the MHD model.%
\footnote{%
A comparable MHD simulation that captures the reconnection-mediated dissipation regime needs to have a very large magnetic Reynolds number of $Rm = v L/\eta \gtrsim 10^4$ (where $\eta$ is the resistivity) in order to model the sheet tearing correctly.
This translates to very large simulation box sizes that are only starting to be probed by contemporary simulations \citep{Beresnyak_2019}.
}
Figure~\ref{fig:spec} shows the evolution of the turbulence magnetic energy spectrum in our fiducial $2D$ simulation with $\sigma_0 \approx 16$.
The initial perturbations occupy the region $k\,c/\omega_\mathrm{p}<0.07$ in $\vec{k}$-space, and the main injection scale $l_0$ corresponds to $k_0\,c/\omega_\mathrm{p}\approx 0.05$.
The relaxation to a fully developed spectrum occurs in about one dynamical time $l_0/c$.
The spectrum quickly approaches a power-law form with a slope of $p \approx -1.9 \pm 0.4$ between $k_0$ and $0.7 k_c$, where
\begin{equation}
k_c=\frac{2\pi\omega_\mathrm{p}}{c},
\end{equation}
and the ideal MHD approximation breaks down. We also studied the turbulence spectrum in the $3D$ simulation, and found similar results when viewed in the $\vec{k}_\perp$-space.
Next, we have analyzed the plasma bulk motions by averaging particle velocities in each computational cell.
The evolution of the corresponding kinetic energy spectrum is shown in Figure~\ref{fig:spec_kin} for our fiducial $2D$ simulation with $\sigma_0 \approx 16$.
At time $t > 3~l_0/c$, we find that the kinetic energy spectrum has a slope of $p_v = -1.5 \pm 0.3$ in the same inertial range $k_0<k< 0.7 k_c$. It is somewhat shallower than the magnetic energy spectrum.
In both 2D and 3D simulations, we observed that the kinetic power at small scales $k\gtrsim 0.7 k_c$ originates from particle motions along the $z$ direction, i.e. along the background field $\vec{B}_0$.
This is in contrast to motions on large scales, which are dominated by transverse velocities.
The change from mainly transverse to mainly $z$ motions approximately coincides with the spectral break in the magnetic field spectrum, where its slope steepens to $p \approx -4.5 \pm 0.4$.
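For reference, the kinetic spectra above are computed from the cell-averaged bulk velocity field. A minimal Python sketch of such a measurement (a shell-summed 2D Fourier spectrum; the arrays \texttt{vx} and \texttt{vy} stand for the cell-averaged velocity components) is:
\begin{verbatim}
import numpy as np

def kinetic_spectrum(vx, vy):
    """1D kinetic energy spectrum dE_v/dk from a periodic 2D bulk
    velocity field, using an isotropic shell sum in k-space."""
    N = vx.shape[0]
    power = np.abs(np.fft.fft2(vx))**2 + np.abs(np.fft.fft2(vy))**2
    k = np.fft.fftfreq(N) * N                 # integer mode numbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    shell = np.rint(np.hypot(KX, KY)).astype(int).ravel()
    spec = np.bincount(shell, weights=power.ravel())
    return np.arange(spec.size), spec

k, Ek = kinetic_spectrum(*np.random.standard_normal((2, 256, 256)))
\end{verbatim}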
The plasma motions are visualized in Figure~\ref{fig:vel_visuals}.
We observe the following.
\\
(1) The bulk velocities of the electron and positron fluids along $z$ have opposite directions and almost equal magnitudes, $v_z^+\approx -v_z^-$.
Thus, the $z$-component of kinetic energy results from the electric current.
The net bulk velocity of the electron-positron fluid has a negligible $z$-component.
\\
(2) The currents, and hence $v_z^\pm$, are concentrated in thin current sheets.
Note that $v_z^\pm$ are comparable to $c$, i.e. the current sheets are not far from being charge starved.
The initial large-scale perturbations are smooth and drive the characteristic current $j_0=(c/4\pi)B_0/l_0$.
The ratio
\begin{equation}
\frac{en_0c}{j_0}=\frac{\omega_\mathrm{p}^2}{\omega_B}\,\frac{l_0}{c}
=\frac{l_0}{\sigma_0^{1/2}\,c/\omega_\mathrm{p}}\gg1
\end{equation}
is a measure of how thin the current sheets can become before approaching charge starvation.
In particular, a current sheet supporting a jump $\delta B\sim B_0$ can collapse to the scale $\sim \sigma^{1/2}\,c/\omega_\mathrm{p}$. This scale also equals the Larmor radius of the particles heated by reconnection, $r_{\rm L}\sim \sigma\, c/\omega_B=\sigma^{1/2}\,c/\omega_\mathrm{p}$.
\\
(3) The hydrodynamic velocity field of the electron-positron fluid is dominated by motions in the $x$-$y$ plane, transverse to $\vec{B}_0$.
In particular, we observe fast motions along the current sheets, which are clearly powered by reconnection:
plasmoids are formed by the tearing instabilities and ejected along the sheets with speeds $v_\perp^{\rm out}\sim c$.
Electrons and positrons behave as a single fluid in these motions, described as an $\vec{E}\times \vec{B}$ drift.
Besides the plasmoid motion along the current sheets, the hydrodynamic $\vec{E}\times \vec{B}$ motions in the $x$-$y$ plane have a component converging toward the current sheets, $\vec{v}_\perp^{\rm in}$.
These converging flows are very fast (close to $c$) during the ``collapse'' phases that form the flat, thin current sheets between magnetic flux ropes.
When the sheet has formed, reconnection proceeds through the tearing instability, sustaining a significant $v_\perp^{\rm in}$.
The current sheets are sites of strongly localized dissipation.
The degree of this localization (i.e. the spatial intermittency of dissipation) may be described by the following dimensionless parameter,
\begin{equation}\label{eq:diss}
\mathcal{D}_J = \frac{|J|\; \vec{E}\cdot\vec{J}}{ \langle J^2 \rangle \sqrt{\langle E^2 \rangle} },
\end{equation}
where $\langle \ldots \rangle$ denotes the spatial average over the computational box.
The map of $\mathcal{D}_J$ is shown in Figure~\ref{fig:visuals}.
It demonstrates that the dissipation process is strongly localized in the thin current sheets.
Taking the average of $\mathcal{D}_J$ over the simulation box, we obtain a global measure of localization,
\begin{equation}
\langle \mathcal{D}_J \rangle
= \frac{ \langle~ |J|\; \vec{J} \cdot \vec{E} ~\rangle} {\langle |J|\rangle\; \langle \vec{J}\cdot\vec{E} \rangle }
= \frac{ V\, \int |J|\; \vec{J} \cdot \vec{E}\; dV} {\int |J| \,dV \, \int \vec{J}\cdot\vec{E}\; dV},
\end{equation}
where $V$ is the volume of the simulation domain. A uniform dissipation mechanism would give $\langle \mathcal{D}_J \rangle = 1$.
Instead, our fiducial $2D$ run with $\sigma_0 \approx 16$ gives $\langle \mathcal{D}_J \rangle \approx 3.1 \pm 0.5$ throughout an extended period of time $t>l_0/c$. A similar high $\langle \mathcal{D}_J \rangle$ is found in our $3D$ simulation and other 2D simulations with different $\sigma_0$.
The high $\langle \mathcal{D}_J \rangle$ is a clear signature of strong current-sheet dissipation.
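Both measures are computed directly from the gridded fields. A minimal Python sketch of Equation~(\ref{eq:diss}) and its box average, assuming arrays \texttt{E} and \texttt{J} of shape (3, ...) holding the field components, is:
\begin{verbatim}
import numpy as np

def dissipation_localization(E, J):
    """Local dissipation measure D_J of Equation (eq:diss) and the
    global localization measure <D_J>."""
    EdotJ = np.sum(E * J, axis=0)
    Jmag  = np.sqrt(np.sum(J * J, axis=0))
    Erms  = np.sqrt(np.mean(np.sum(E * E, axis=0)))
    D_J   = Jmag * EdotJ / (np.mean(Jmag**2) * Erms)
    avgD  = np.mean(Jmag * EdotJ) / (np.mean(Jmag) * np.mean(EdotJ))
    return D_J, avgD

# Correlated test fields (illustrative only):
E = np.random.standard_normal((3, 128, 128))
J = E + 0.1 * np.random.standard_normal((3, 128, 128))
D_J, avgD = dissipation_localization(E, J)
\end{verbatim}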
\begin{figure*}
\centering
\includegraphics[trim={0.0cm 0.1cm 0.0cm 0.0cm}, clip=true, width=0.47\textwidth]{fig_v4_s10_gamz.pdf}
\includegraphics[trim={0.0cm 0.1cm 0.0cm 0.0cm}, clip=true, width=0.47\textwidth]{fig_v4_s10_gamp.pdf}
\includegraphics[trim={0.0cm 0.1cm 0.0cm 0.0cm}, clip=true, width=0.47\textwidth]{fig_v4_s10_vz_zoom.pdf}
\includegraphics[trim={0.0cm 0.1cm 0.0cm 0.0cm}, clip=true, width=0.47\textwidth]{fig_v4_s10_vperp_zoom.pdf}
\caption{\label{fig:vel_visuals}
Visualization of the plasma motions in the fiducial $2D$ simulation with magnetization of $\sigma_0 = 16$ at $t \approx 10\,l_0/c$.
Top-left panel shows a map of $\Gamma_z -1$, where $\Gamma_z$ is the Lorentz factor of the plasma velocity in the $z$-direction, $v_z$ (out-of-plane velocity component).
Top-right panel shows a similar quantity $\Gamma_\perp -1$ for the bulk motion $\vec{v}_\perp$ in the $xy$ plane.
White dashed rectangles indicate the location of the zoom-in regions shown in the bottom panels, where one can see strong bulk flows at small scales.
Bottom panels also visualize the direction of these flows:
bottom-right panel shows $\vec{v}_{\perp}$ with the plasma streamlines (white curves with arrows), and bottom-left panel shows $v_z^\pm$ for electrons ($v_z^-$; lower-right corner) and positrons ($v_z^+$; upper-left corner).
The $\vec{v}_\perp$ motions reflect the active magnetic reconnection --- fast motions along the reconnection layers and the inflows feeding plasma into the layers.
The $z$-motions of $e^\pm$ are opposite in sign and relativistic at the locations of the current sheets, $v_z^\pm \sim c$, indicating that the sheets' thickness is regulated by charge starvation.
}
\end{figure*}
\begin{figure}
\centering
\includegraphics[trim={0.0cm 0.0cm 0.0cm 0.0cm}, clip=true, width=0.48\textwidth]{fig_v4_s10_ene_evolution_v2.pdf}
\caption{\label{fig:ene_nocool}
Evolution of different energy components in the fiducial $2D$ simulation with $\sigma_0\approx16$ and no radiative cooling:
all particles (solid black), thermal particles (dashed black), nonthermal particles (dotted black), total electromagnetic field (solid red), transverse magnetic field $\vec{B}_\perp$ (dashed red), and electric field (dotted red).
All the components are in units of the initial total electromagnetic energy $\eneFz$.
}
\end{figure}
\subsection{Energy partitioning}
Turbulence transfers its energy to the plasma over the course of the simulation.
The history of this transfer in our fiducial $2D$ run with $\sigma_0\approx16$ is shown in Figure~\ref{fig:ene_nocool}.
We observed a similar history in the 3D simulation, except that the $3D$ turbulence developed and dissipated faster by a factor of $\sim 3$.
The initial quick drop in magnetic energy at $t\lesssim l_0/c$ is mainly the result of exciting electric fields, which begin to drive MHD motions with $\vec{v}_D=c\, \vec{E}\times\vec{B}/B^2$ in response to the created magnetic stresses.
\footnote{
As one can see in Figure~\ref{fig:ene_nocool}, the injected perpendicular magnetic field energy $\eneBperp$ immediately drops by almost $40\%$.
$25\%$ converts to the electric field energy $\eneE$, $10\%$ is given to fluctuations $\delta B_z$, and $\lesssim 5$\% goes to the kinetic energy of the excited bulk motions of the plasma.
We observed similar initial partitioning of the injected energy in simulations with different $\sigma_0$ from $5$ to $30$.
}
The ensuing gradual transfer of the turbulence energy to particles operates in two ways:
by heating the Maxwellian pool and by accelerating nonthermal particles to high energies.
The nonthermal population grows through sudden injections from the thermal pool, as a result of energetic kicks to particles in the current sheets.
This allows us to identify the nonthermal particles using the tracking technique described in \citet{Comisso_2018} and \citet{Nattila_2019}.
We track the individual particle trajectories and monitor their Lorentz factors $\gamma(t)$ to detect sudden acceleration events.
When $\dot{\gamma}=\Delta \gamma / \Delta t$ exceeds an empirical threshold of $\dot{\gamma}_{\mathrm{thr}} = 0.025 \sqrt{\sigma_0} \omega_\mathrm{p}$, the particle is labeled as nonthermal (initially all particles are thermal).
For simplicity, particles that have experienced the kick remain affiliated with the nonthermal pool until the end of the simulation, regardless of the particle history after the kick.
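A minimal Python sketch of this tagging criterion, operating on a hypothetical array \texttt{gamma\_hist} of stored particle histories, is:
\begin{verbatim}
import numpy as np

def flag_nonthermal(gamma_hist, dt, sigma0, omega_p):
    """Flag particles that ever exceeded the empirical injection
    threshold gdot_thr = 0.025*sqrt(sigma0)*omega_p; once flagged,
    a particle stays in the nonthermal pool.
    gamma_hist has shape (n_steps, n_particles)."""
    gdot_thr = 0.025 * np.sqrt(sigma0) * omega_p
    gdot = np.diff(gamma_hist, axis=0) / dt
    return np.any(gdot > gdot_thr, axis=0)

# Example with synthetic histories:
hist = np.cumsum(np.random.exponential(0.01, size=(1000, 64)), axis=0) + 1.0
nonthermal = flag_nonthermal(hist, dt=0.1, sigma0=16.0, omega_p=1.0)
\end{verbatim}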
The energies stored in the thermal and nonthermal populations of the entire box, $\ene{}{\mathrm{th}}(t)$ and $\ene{}{\mathrm{nth}}(t)$, grow with time.
Note that the ``thermal'' part $\ene{}{\mathrm{th}}$ effectively includes the contribution from the bulk kinetic energy;
when needed, the latter can be separated from $\ene{}{\mathrm{th}}$ by calculating the local bulk speed.
The total plasma energy,
\begin{equation}
\ene{\mathrm{prtcl}}{} = \ene{}{\mathrm{th}} + \ene{}{\mathrm{nth}},
\end{equation}
grows as more electromagnetic energy is dissipated.
Turbulence energy in a magnetically-dominated plasma ($\sigma\gg 1$) is dominated by the electromagnetic field.
Therefore, we describe it as the total electromagnetic energy in the computational box minus the energy of the background field $\vec{B}_0$,
\begin{equation}
\eneF \equiv \int \frac{E^2+B^2}{8\pi}\;dV - \frac{B_0^2}{8\pi}\,V.
\end{equation}
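A minimal Python sketch of this diagnostic, assuming field arrays of shape (3, ...) in Gaussian units and a uniform cell volume \texttt{dV}, is:
\begin{verbatim}
import numpy as np

def turbulence_energy(E, B, B0, dV):
    """Total electromagnetic energy in the box minus the energy of
    the uniform background field B0, following the definition above."""
    u = (np.sum(E * E, axis=0) + np.sum(B * B, axis=0)) / (8.0 * np.pi)
    V = u.size * dV
    return np.sum(u) * dV - B0**2 / (8.0 * np.pi) * V
\end{verbatim}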
We observe in the simulation that $\eneF$ decays as $\propto t^{-1}$ at $t>l_0/c$, as expected for non-helical initial conditions \citep{Biskamp1999}.
Correspondingly, $\ene{}{\mathrm{th}}$ and $\ene{}{\mathrm{nth}}$ grow with time.
By the time $t=20\,l_0/c$, about $75\%$ of the initial turbulence energy $\eneFz$ has converted to plasma energy; afterwards, $\ene{}{\mathrm{th}}$ and $\ene{}{\mathrm{nth}}$ remain practically unchanged (we have verified this by running the simulation up to $t \approx 100\,l_0/c$).
The thermal and nonthermal populations display two distinct distribution functions $dN_{\rm th}/d\gamma$ and $dN_{\rm nth}/d\gamma$, which grow during the main dissipation phase $t\lesssim 10\,l_0/c$ and saturate at $t\sim 20\,l_0/c$.
Figure~\ref{fig:pspec} shows the evolution of the two distributions.
The nonthermal component develops a broad distribution and, remarkably, the remaining thermal component is nearly Maxwellian, validating the technique for separating the two populations.
\subsection{Two-stage nonthermal particle acceleration mechanism}\label{sect:prtcl_acc}
\begin{figure}
\centering
\includegraphics[trim={0cm 0.5cm 0cm 0.0cm}, clip=true, width=0.47\textwidth]{fig_v4_s10_pspec_nont.pdf}
\caption{\label{fig:pspec}
Evolution of the particle spectra in the fiducial $2D$ simulation with $\sigma_0\approx16$ and no radiative cooling.
Particles are separated into thermal (top panel) and nonthermal (bottom panel) populations, based on their acceleration histories (see text).
Thick dashed red curve shows the Maxwell-J\"uttner fit of the thermal distribution at $t \approx 20~l_0/c$.
We used similar fitting at other times to evaluate the evolution of temperature $\theta = k_{\mathrm{B}}T/m_e c^2$, shown in the top-right inset.
Bottom-right inset shows the evolution of the particle number in the nonthermal component $N_{\rm nth}$ normalized to the total particle number in the box $N_{\rm tot}$.
}
\end{figure}
The nonthermal particle acceleration occurs in two stages:
the particles first receive sudden kicks and then engage in the gradual process of stochastic acceleration \citep{Comisso_2018, Comisso_2019}.
The kicks are powered by magnetic reconnection in the current sheets.
This process ``injects'' particles into the nonthermal population with a typical Lorentz factor
\begin{equation}
\gamma_\mathrm{inj} \sim \sigma_0.
\end{equation}
The injection timescale is comparable to the time it takes to travel across a current sheet,
\begin{equation}
t_\mathrm{inj} \sim \sigma_0 \omega_B^{-1}.
\end{equation}
The subsequent stochastic diffusive acceleration occurs as the gyrating particles scatter off magnetic perturbations, similar to the original \citet{Fermi_1949} acceleration picture.
In this process, the particles are energized by the electric field $\vec{E}=-\vec{v}\times\vec{B}/c$ induced by the turbulent motions described in the ideal MHD approximation.
The timescale for this process depends on the exact nature of the wave-particle interaction.
For a non-resonant diffusive acceleration, the timescale is comparable to the light crossing time of the scale $l_0$ at which $\delta B/B\sim 1$ \citep[see also][]{Comisso_2019},
\begin{equation}
t_\mathrm{acc} \sim \frac{l_0}{c}.
\end{equation}
The acceleration process continues to be effective until the particle's Larmor radius becomes comparable to the length of the outer scale turbulent fluctuations
\begin{equation}\label{eq:g0}
\frac{\gamma_0 m_e c^2}{eB_0} = l_0.
\end{equation}
In our fiducial 2D simulation, Equation~\eqref{eq:g0} gives $\gamma_0 = \sigma_0^{1/2}\, l_0\,\omega_\mathrm{p}/c \approx 4\times 125 \approx 500$.
We have verified the limit~\eqref{eq:g0} numerically:
as the simulation domain size is doubled, the maximum Lorentz factor, $\mathrm{max}\{\gamma \} \sim \gamma_0$, also doubles, as expected.
A high-energy tail in the particle distribution at $\gamma \gg \sigma_0$ is expected from stochastic acceleration.
The acceleration process can be modeled by the diffusion coefficient in the energy space $D_{\gamma}$, which may be written as
\begin{equation}\label{eq:diff}
D_{\gamma} \approx \zeta \frac{c}{l_0} \gamma_0^{2} \left( \frac{\gamma}{\gamma_0} \right)^{\psi}.
\end{equation}
The numerical factor $\zeta \sim 1$ controls the efficiency of stochastic acceleration.
The exponent $\psi$ is controlled by the details of the acceleration process.
The recent work by \citet{Wong_2019} reports $D_{\gamma} \propto \gamma^2$ at high $\gamma$ (i.e., $\psi = 2$).
Similarly, \citet{Comisso_2019} assume $\psi \approx 2$.
The true value of $\psi$ may be somewhat below $2$ \citep[see also discussion in][]{Lemoine_2019, Demidem_2020}.
In agreement with previous work, the high-energy particles in Figure~\ref{fig:pspec} form a power-law distribution with spectral slope $p = d\log N/d\log \gamma \approx -2.8$.
The exact value of the slope, however, varies slightly as a function of time and magnetization (see \citet{Comisso_2019} for details).
As time progresses, more and more particles get injected into this process (bottom inset in Figure~\ref{fig:pspec}).
At the end of the simulation ($t = 20~l_0/c$) about $20\%$ of the particles reside in the nonthermal pool.
Particle energization by turbulence is an anisotropic process \citep{Zhdankin_2017a, Comisso_2019}, and we observe an anisotropic particle distribution.
Nonthermal particles with moderate $\gamma\sim\sigma_0$ are accelerated by magnetic reconnection along the guide field (which dominates over the reconnecting field component $\delta B$ on small scales).
Particles accelerated stochastically by the ``motional'' field $\vec{E}=-\vec{v}\times\vec{B}/c$ develop the opposite tendency:
their angular distribution
is somewhat concentrated toward the plane perpendicular to $\vec{B} \approx \vec{B}_0$.
\section{Radiative turbulence}\label{sect:radturb}
\subsection{Cooling model}\label{sect:cooling_model}
The two main cooling processes in high-energy astrophysical plasmas are synchrotron emission and inverse Compton (IC) scattering of soft background photons (which can include the locally produced synchrotron photons and external radiation).
As long as the target photons are sufficiently soft, the scattering occurs in the Thomson regime.
Then both synchrotron and IC losses scale with the electron Lorentz factor as $\propto \gamma^2$.
Synchrotron losses have, however, a special feature:
they are reduced when the electrons have small pitch angles with respect to the magnetic field.
As a first step, we adopt in this paper the simplest model of radiative losses:
each electron (or positron) with a four-velocity $u=\gamma\beta$ loses energy at a rate $\dot{\cal E}=-(4/3)\, c\, \sigma_{\mathrm{T}} U u^2$, regardless of its velocity direction.
This prescription accurately describes energy losses due to Thomson scattering in a cool isotropic radiation field with energy density $U$. The same prescription describes synchrotron losses in a magnetic field with energy density $U = B^2/8\pi$ if the particle distribution is isotropic and self-absorption is negligible.
Particle acceleration by turbulence is, however, anisotropic \citep{Comisso_2019}, and so the cooling model will need to be refined when synchrotron losses dominate; this refinement is left for future work.
The cooling rate for a particle with Lorentz factor $\gamma$ may be written as
\begin{equation}\label{eq:A}
\dot{\gamma}_{\mathrm{c}} = -\mathcal{A} \frac{c}{l_0} \gamma^2\beta^2,
\end{equation}
where ${\mathcal A}$ is a dimensionless coefficient that defines the cooling strength; comparison with the loss rate $\dot{\cal E}$ above gives ${\mathcal A}=(4/3)\,\sigma_{\mathrm{T}} U l_0/m_e c^2$.
This cooling rate is equivalent to a radiation drag force $\vec{F}_{\rm rad}$ directed opposite to the particle velocity $\vec{v}$, so that
\begin{equation}
\dot{\gamma}_c m_e c^2=\vec{F}_{\mathrm{rad}}\cdot\vec{v}, \qquad
\vec{F}_{\rm rad}=-{\mathcal A}\,\frac{m_ec^2}{l_0}\,\gamma^2 \vec{\beta}.
\end{equation}
In our PIC simulations, the force $\vec{F}_{\rm rad}$ is added to the electromagnetic force acting on the particle, so that the net particle acceleration becomes
\begin{equation}
\frac{\mathrm{d}\vec{u}}{\mathrm{d}t}=\frac{q_e}{m_e}\left( \vec{E} + \frac{\vec{v}}{c} \times \vec{B} \right)
+\frac{\vec{F}_{\mathrm{rad}}}{m_e}.
\end{equation}
The force is coupled to a relativistic Boris pusher similar to that in \citet{Tamburini_2010} and is available in the \textsc{Runko} framework as the \texttt{BorisDrag} pusher.
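For illustration, the following Python sketch implements one timestep of this scheme for a single particle: a standard Boris push followed by the drag of Equation~(\ref{eq:A}) as a separate sub-step. It is a simplified stand-in, not \textsc{Runko}'s actual \texttt{BorisDrag} implementation; $u=\gamma\vec{\beta}$ is the dimensionless four-velocity.
\begin{verbatim}
import numpy as np

def boris_drag_push(u, E, B, q_over_m, dt, A, l0, c=1.0):
    """One Boris step for u = gamma*beta, followed by the radiation
    drag of Equation (eq:A): F_rad = -A (m c^2/l0) gamma^2 beta."""
    # first half of the electric impulse
    u_minus = u + 0.5 * q_over_m * dt * E / c
    # magnetic rotation (standard Boris scheme)
    gamma = np.sqrt(1.0 + u_minus @ u_minus)
    t = 0.5 * q_over_m * dt * B / (gamma * c)
    s = 2.0 * t / (1.0 + t @ t)
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)
    # second half of the electric impulse
    u_new = u_plus + 0.5 * q_over_m * dt * E / c
    # drag sub-step: du/dt = -A (c/l0) gamma^2 beta = -A (c/l0) gamma u
    gamma = np.sqrt(1.0 + u_new @ u_new)
    return u_new - A * (c / l0) * gamma * u_new * dt

u = boris_drag_push(np.array([1.0, 0.0, 0.0]), E=np.zeros(3),
                    B=np.array([0.0, 0.0, 1.0]), q_over_m=1.0,
                    dt=0.01, A=0.1, l0=100.0)
\end{verbatim}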
\subsection{Weak-cooling and strong-cooling regimes}
\label{regimes}
To define the cooling regimes we will use the approximate description of turbulence dissipation based on the discussion in Section~\ref{sect:turb}.
Comparable parts of the released energy are deposited into thermal and nonthermal plasma.
Particle acceleration to $\gamma_\mathrm{inj}\sim\sigma_0$ in current sheets occurs impulsively along the magnetic field,
and stochastic acceleration to $\gamma\gg\gamma_\mathrm{inj}$ occurs with a diffusion coefficient $D_\gamma\sim (c/l_0)\gamma^2$, so that a particle with any $\gamma$ in the range $\gamma_\mathrm{inj}<\gamma<\gamma_0$ can double its energy on the timescale $t_\mathrm{acc}\sim l_0/c$.
The acceleration timescale should be compared with the cooling timescale $t_\mathrm{cool}(\gamma)$.
For particles with $\gamma\gg 1$, it is given by
\begin{equation}
t_\mathrm{cool}(\gamma)\approx \frac{\gamma}{|\dot{\gamma}_c|} \sim \frac{l_0}{c{\mathcal A}\gamma}.
\end{equation}
Hereafter we consider only systems with ${\mathcal A}\gg\gamma_0^{-1}=c/(\omega_B l_0)$;
otherwise radiative losses have a negligible effect at all relevant $\gamma<\gamma_0$.
The condition ${\mathcal A}\gg\gamma_0^{-1}$ corresponds to $t_\mathrm{cool}(\gamma_0)\ll l_0/c$.
It ensures that particles with the maximum Lorentz factor $\gamma_0$ radiate energy faster than they could gain it from the turbulence.
{\bf Weak cooling.}
We define the weak-cooling regime as
\begin{equation}\label{eq:weak}
\frac{1}{\gamma_0}\ll{\mathcal A}\ll\frac{1}{\sigma_0}.
\end{equation}
The condition ${\mathcal A}\ll\sigma_0^{-1}$ allows the formation of a stochastically accelerated tail in the electron distribution at $\gamma\gg\gamma_\mathrm{inj}\sim\sigma_0$.
Cooling cuts off the stochastic acceleration at Lorentz factor $\gamma=\gamma_\mathrm{cool}$ that can be estimated from $t_\mathrm{cool}(\gamma)\sim t_\mathrm{acc}\sim l_0/c$.
This gives
\begin{equation}
\gamma_\mathrm{cool}\sim \frac{1}{{\mathcal A}}, \qquad \gamma_\mathrm{inj} \ll\gamma_\mathrm{cool} \ll \gamma_0.
\end{equation}
The weak-cooling condition ${\mathcal A}\ll\sigma_0^{-1}$ has another implication: thermal particles keep most of the energy received during the flare on the timescale $t_{\rm flare}\sim 10~l_0/c$.
Since the flare converts roughly half of the magnetic energy to heat, the thermal particles reach Lorentz factors $\gamma_\mathrm{th}\lesssim\sigma_0/2$.
They do not efficiently cool because
\begin{equation}
t_\mathrm{cool}(\gamma_\mathrm{th})\gg \frac{l_0}{c}.
\end{equation}
{\bf Strong cooling.}
We define the strong-cooling regime as
\begin{equation} \label{eq:strong}
{\mathcal A}\gg \frac{1}{\sigma_0}.
\end{equation}
In this case the stochastic acceleration is suppressed; however, impulsive acceleration to $\gamma_\mathrm{inj}\sim\sigma_0$ can still operate.
Indeed, to shut off the impulsive acceleration, the cooling timescale $t_\mathrm{cool}(\gamma_\mathrm{inj})$ would need to be shorter than the impulsive acceleration timescale $t_\mathrm{inj}\sim\sigma_0\, c/\omega_B$, which would require ${\mathcal A}>\gamma_0/\sigma_0^2$.
This is a very strong condition, because astrophysical objects of interest have enormous $\gamma_0$.
Our simulations have $\gamma_0\gg\sigma_0$, and we consider strong cooling that does not suppress the impulsive acceleration,
\begin{equation} \label{eq:implusive_acc}
t_\mathrm{cool}(\gamma_\mathrm{inj})>t_\mathrm{inj}, \qquad {\mathcal A}<\frac{\gamma_0}{\sigma_0^2}.
\end{equation}
In our typical $2D$ and $3D$ simulations, $\gamma_0\approx 500$ and $\sigma_0\approx16$.
Therefore, the impulsive acceleration begins to be limited by radiative losses only at ${\mathcal A}\sim\gamma_0/\sigma_0^2\approx 2$.
In the strong-cooling regime, radiative losses limit the plasma temperature.
The average thermal particle momentum during the flare, $u_{\rm th} m_ec$, is determined by the balance between cooling and heating:
${\mathcal A}\, u_{\rm th}^2 c/l_0=\dot{\gamma}_{\rm th}$, where $\dot{\gamma}_{\rm th}\sim 0.1\sigma_0 c/l_0$ is the characteristic heating rate in the flare.
This gives%
\footnote{%
Alternatively, an empirical
formula
from fitting our 2D simulations gives
$\dot{\gamma}_{\mathrm{th}} \sim (1/6) \sqrt{\sigma_0} c/l_0$ in a range of $8<\sigma_0<30$.
Then $u_{\rm th}\sim 0.4\,\sigma_0^{1/4}{\mathcal A}^{-1/2}\ll \sigma_0$.
}
\begin{equation} \label{eq:uth}
u_{\rm th}\sim 0.3\,\left(\frac{\sigma_0}{{\mathcal A}}\right)^{1/2} \ll \sigma_0.
\end{equation}
Note that ${\mathcal A}/\sigma_0=(2/3)(U/U_B)\tau_{\rm T}$, where $\tau_{\rm T}=\sigma_{\mathrm{T}} n l_0$ is the Thomson scattering optical depth of the plasma on the driving scale $l_0$.
In radiatively efficient flares, the radiation density is comparable to $U_B$.
Therefore, the system is expected to become optically thick if ${\mathcal A}\gg\sigma_0$.
Radiative turbulence in optically thick plasma was studied by \citet{Zrake_2019} and is not considered here.
We study cooling models with ${\mathcal A}<\sigma_0$.
This implies that $u_{\rm th}$ is at least mildly relativistic.
Note also that the relation $\sigma_0/{\mathcal A}\sim \tau_{\rm T}$ allows one to rewrite Equation~(\ref{eq:uth}) as $u_{\rm th}^2\tau_{\rm T}\sim 0.1$.
Within a numerical factor, this condition is similar to the thermal balance discussed by \citet{Uzdensky_2018}.
Next, let us consider the more general model of stochastic acceleration with the diffusion coefficient $D_\gamma$ given in Equation~\eqref{eq:diff}.
The approximation $D_\gamma\sim \gamma^2 c/l_0$ corresponds to $\psi=2$ and $\zeta\sim 1$.
The actual value of $\psi$ may be slightly below 2, and this has an important consequence, as seen from the following.
The average rate of stochastic acceleration at a given $\gamma$ is
\begin{equation}
\dot{ \langle \gamma \rangle } = \frac{\mathrm{d} D_{\gamma}}{\mathrm{d} \gamma} = \zeta \psi \omega_B \left( \frac{\gamma}{\gamma_0} \right)^{\psi-1},
\end{equation}
and the corresponding acceleration timescale is
\begin{equation}
\label{eq:tacc}
t_\mathrm{acc} = \frac{\gamma}{ \dot{\langle \gamma \rangle} } = \frac{l_0/c}{\zeta \psi} \left( \frac{\gamma}{\gamma_0} \right)^{2-\psi}.
\end{equation}
Radiative cooling stops acceleration where $t_\mathrm{cool}(\gamma)$ becomes equal to $t_{\mathrm{acc}}(\gamma)$. This occurs when the particle reaches
\begin{equation}
\label{eq:gc}
\gamma_\mathrm{cool} \sim \gamma_0 \left( \frac{\zeta\, \psi}{{\mathcal A}\,\gamma_0} \right)^{\frac{1}{3-\psi}}.
\end{equation}
Comparing $\gamma_\mathrm{cool}$ with $\gamma_\mathrm{inj}\sim\sigma_0$, one finds
\begin{equation} \label{eq:dissmeasure1}
\frac{\gamma_\mathrm{cool}}{\gamma_\mathrm{inj}}\sim \left( \frac{\zeta\, \psi}{{\mathcal A}\,\sigma_0} \right)^{\frac{1}{3-\psi}} \left(\frac{\gamma_0}{\sigma_0}\right)^{\frac{2-\psi}{3-\psi}}.
\end{equation}
When $\psi<2$, $\gamma_\mathrm{cool}$ can significantly exceed $\gamma_\mathrm{inj}\sim\sigma_0$, even when ${\mathcal A}>\sigma_0^{-1}$, because $\gamma_0$ is typically very large.
This means that stochastic acceleration beyond $\gamma_\mathrm{inj}$ can be efficient even in the strong-cooling regime (when the plasma temperature is limited by radiative losses).
In particular, $\gamma_0$ in bright accreting black holes and their jets exceeds $10^{10}$ \citep{Beloborodov_2017}, while typical $\sigma_0$ may be e.g. $\sim 10$.
Then $\psi=1.7$ gives $(\gamma_0/\sigma_0)^{(2-\psi)/(3-\psi)}\sim 10^2$.
Thus, the modest reduction of $\psi$ from 2 to 1.7 enables stochastic acceleration well above $\gamma_\mathrm{inj}$.
Finally, we note that strong radiative losses could in principle suppress the turbulent bulk motions.
The drag force damping a given turbulent eddy is given by
\begin{equation}
\vec{F}_{\rm drag}=-{\mathcal A}\,\frac{m_e c^2}{l_0}\overline{u^2}\,\overline{\vec{\beta}},
\end{equation}
where the bar signifies averaging over the particle motions in the eddy.
The effective inertial mass per particle is $\sim\sigma_0m_e\gg m_e$, and the timescale for damping the turbulent bulk momentum $\overline{\vec{u}}\sim\sigma_0 m_ec\overline{\vec{\beta}}$ is
\begin{equation}
t_{\rm drag}=\frac{|\overline{\vec{u}}|}{F_{\rm drag}}=\frac{\sigma_0\, l_0}{{\mathcal A}\, c\, \overline{u^2}}
\sim 10\,\frac{u_{\rm th}^2}{\overline{u^2}}\,\frac{l_0}{c}.
\end{equation}
Assuming that the impulsive acceleration of a fraction of particles to $\gamma_\mathrm{inj}\sim\sigma_0$ (followed by fast cooling of the particles) does not greatly increase $\overline{u^2}$, so that $\overline{u^2}\sim u_{\rm th}^2$, one concludes that $t_{\rm drag}$ exceeds the cascade timescale $\sim l_0/c$.
This allows the cascade to develop, overcoming the drag force.
\subsection{Quasi-steady one-zone acceleration model}
Magnetic reconnection injects particles into the stochastic acceleration process with the Lorentz factor $\gamma_{\mathrm{inj}} \sim \sigma_0$ \citep{Comisso_2018}.
Let us consider the situation where $\gamma_\mathrm{cool} \gg \gamma_\mathrm{inj}$ and focus on particles with Lorentz factors $\gamma>\gamma_\mathrm{inj}$.
Let us assume here that $D_\gamma$ is uniform in the turbulent plasma.
The particle distribution function $f(\gamma, t)$ then satisfies
\begin{equation}
\partial_t f + \partial_\gamma(-D_\gamma \partial_\gamma f + \dot{\gamma}_{\mathrm{c}} f)
= \dot{n}\, \delta(\gamma - \gamma_{\mathrm{inj}}),
\end{equation}
where $\dot{n}$ is the injection rate.
On timescales longer than the average time of acceleration to $\gamma_{\mathrm{cool}}$ one can expect $f(\gamma)$ to evolve in a quasi-steady regime, when $\partial_t f$ is small compared with the other terms on the left-hand side.
Then the shape of $f(\gamma)$ at $\gamma > \gamma_{\mathrm{inj}}$ approximately satisfies
\begin{equation}
\label{eq:flux}
{\cal F}=-D_\gamma \frac{\ud f}{\ud \gamma} + \dot{\gamma}_{\mathrm{c}} f = 0.
\end{equation}
It states that the particle flux in the energy space ${\cal F}$ vanishes in the steady state (a non-zero ${\cal F}$ is forbidden by the ``ceiling'' of radiative losses at $\gamma\gg\gamma_\mathrm{cool}$).
The solution of Equation~\eqref{eq:flux} is
\begin{equation}\label{eq:cutoff}
f(\gamma) = K(t) \exp \left[ -\left( \frac{\gamma}{\gamma_{\mathrm{cool}}} \right)^{3-\psi} \right],
\end{equation}
where
\begin{equation}
\gamma_{\mathrm{cool}}^{3-\psi} = \frac{(3-\psi)\zeta}{\mathcal{A}} \gamma_0^{2-\psi}.
\end{equation}
This definition of $\gamma_\mathrm{cool}$ is equivalent to Equation~(\ref{eq:gc}) within a numerical factor $\sim 1$. The normalization constant $K(t) \sim \dot{n}t$ grows on timescales longer than $t_{\mathrm{acc}}(\gamma_{\mathrm{cool}})$.
Its more accurate value is given by
\begin{equation}
K(t) \approx
\frac{ \int_0^t \dot{n}(t') \ud t' }{ \int_{\gamma_{\mathrm{inj}}}^{\infty} \exp \left[ -\left(\frac{\gamma}{\gamma_{\mathrm{cool}}} \right)^{3-\psi} \right] \ud\gamma }.
\end{equation}
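For reference, the quasi-steady solution and its normalization are easy to evaluate numerically. A minimal Python sketch of Equation~(\ref{eq:cutoff}), with illustrative parameter values (close to our fiducial $\sigma_0\approx16$, $\gamma_0\approx500$ setup), is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def steady_state_f(gamma, t, ndot, psi, zeta, A, gamma0, gamma_inj):
    """Quasi-steady one-zone distribution, Equation (eq:cutoff), with
    the normalization K(t) for a constant injection rate ndot."""
    gcool = ((3.0 - psi) * zeta / A * gamma0**(2.0 - psi))**(1.0 / (3.0 - psi))
    shape = lambda g: np.exp(-(g / gcool)**(3.0 - psi))
    norm, _ = quad(shape, gamma_inj, np.inf)
    return ndot * t / norm * shape(np.asarray(gamma))

gam = np.logspace(1, 3, 200)
f = steady_state_f(gam, t=10.0, ndot=1.0, psi=1.7, zeta=1.0,
                   A=0.003, gamma0=500.0, gamma_inj=16.0)
\end{verbatim}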
This ``one-zone'' description of the particle distribution at $\gamma>\gamma_\mathrm{inj}$ is valid only in a region with uniform $D_\gamma$.
Our simulations (presented below in Sect.~\ref{sect:results}) show that the actual distribution function in the computational domain differs substantially from this analytical solution.
This difference is caused by the strong spatial intermittency of turbulence, which results in large variations of $D_\gamma$ across the domain.
Note also that for any given $\gamma$, $D_\gamma(\gamma)$ can be defined only in sufficiently large regions exceeding the Larmor radius $r_{\rm L}(\gamma)=l_0\gamma/\gamma_0$.
\begin{figure*}
\centering
\includegraphics[width=0.47\textwidth]{fig_v4_s10r60_rho_15000_zoom.pdf}
\includegraphics[width=0.47\textwidth]{fig_v4_s10r5_rho_15000_zoom.pdf}
\caption{\label{fig:visuals_rad}
Zoom-in on the plasma density $n$ (shown in units of the initial $n_0$) in a sub-region of the computational box.
Two panels show snapshots of two simulations at time $t \approx 10\,l_0/c$. The simulations have the same parameters (our fiducial 2D model with $\sigma_0 \approx 16$) except for the cooling rate:
$\mathcal{A} = 0.001$ (left) and $\mathcal{A} = 2.6$ (right). One can see that stronger cooling results in a higher density contrast.
}
\end{figure*}
\section{Simulations of radiative turbulence}\label{sect:results}
We have repeated our fiducial simulation described in the previous sections, but now with radiative cooling, ${\mathcal A}\neq 0$.
We used five long 2D simulations (with
${\mathcal A} \approx$
$\Ten{1.0}{-3}$,
$\Ten{1.6}{-3}$,
$\Ten{2.8}{-3}$,
$\Ten{6.4}{-3}$, and
$\Ten{2.6}{-2}$)
to explore the weak cooling regime $\gamma_0^{-1} \lesssim {\mathcal A} < \sigma_0^{-1}\approx 0.06$.
In this regime, the radiative losses restrict, but do not suppress completely, stochastic particle acceleration.
The strong cooling regime, ${\mathcal A} >\sigma_0^{-1}$, is covered with three $2D$ runs (with ${\mathcal A} \approx 0.10$, $0.41$ and $2.6$).
In addition, we performed shorter simulations with different magnetizations $\sigma_0$, and with ${\mathcal A}$ as low as $10^{-4}$ and as high as $20$.
Full $3D$ models are expensive, and we have done only two $3D$ simulations that include radiative cooling, with ${\mathcal A} \approx 0.003$ and $0.1$, probing the weak and strong cooling regimes, respectively;
both setups have $\sigma_0\approx 16$.
We have found from these simulations that radiative losses weakly affect the turbulence magnetic spectrum $d{\cal E}_B/dk$;
it is practically the same as without losses (Figure~\ref{fig:spec}).
Nevertheless, strong losses impact the perturbations of plasma density, as clearly seen in Figure~\ref{fig:visuals_rad}.
While the radiative drag force $\vec{F}_{\rm rad}$ is unable to damp the turbulent motions of the magnetic field lines, it significantly affects the plasma motion {\it along} the field lines, which leads to the accumulation of high-density stripes.
In the strong cooling regime, we also observed that the thermal plasma comes to a quasi-steady temperature, at which cooling balances heating.
Approaching this balance took $\sim 10\,l_0/c$ in 2D and $\sim 3\,l_0/c$ in $3D$ simulations.
The radiative turbulent flares have less relativistic temperatures (and hence higher effective magnetizations $\sigma$) compared with turbulence without cooling.
\subsection{Radiation effect on particle acceleration}
\begin{figure}
\centering
\includegraphics[trim={0cm 0.6cm 0cm 0.2cm}, clip=true, width=0.43\textwidth]{fig_v3_s10_prtcl_energization_history_path.pdf}
\includegraphics[trim={0cm 0.6cm 0cm 0.2cm}, clip=true, width=0.43\textwidth]{fig_v3_s10r60_prtcl_energization_history_path.pdf}
\includegraphics[trim={0cm 0.2cm 0cm 0.2cm}, clip=true, width=0.43\textwidth]{fig_v3_s10r10_prtcl_energization_history_path.pdf}
\caption{\label{fig:prtcl_energization_history}
Energization histories of five representative particles tracked in the $2D$ simulations with $\sigma_0 \approx 16$, with $\mathcal{A}=0$ (top), $\mathcal{A} \approx 0.003$ (middle), and $\mathcal{A} \approx 0.1$ (bottom).
Left panels show the particle $x$ coordinate, and right panels show the particle Lorentz factor $\gamma$ (line width in left panels is proportional to $\gamma$).
Vertical dotted line shows $\gamma\sim\sigma_0$ -- the expected gain from an injection event in a reconnecting current sheet.
After an initial kick at a reconnection site the particles enter the stochastic acceleration process where they gyrate and scatter between the magnetic islands.
In the weak-cooling regime the particles reach the radiative cooling limit $\gamma_\mathrm{cool}\gg \sigma_0$, which slowly decreases as the turbulence decays.
In the strong-cooling regime the particles remain cool except for short acceleration events, which happen when the particles enter a reconnecting current sheet.
}
\end{figure}
\begin{figure*}
\centering
\includegraphics[trim={0cm 0.5cm 0cm 0.0cm}, clip=true, width=0.46\textwidth]{fig_v4_s10r60_pspec_nont.pdf}
\includegraphics[trim={0cm 0.5cm 0cm 0.0cm}, clip=true, width=0.46\textwidth]{fig_v4_s10r10_pspec_nont.pdf}
\caption{\label{fig:pspec_weak}
Evolution of the particle spectra in the fiducial 2D simulation with $\sigma_0 \approx 16$ with weak cooling (${\mathcal A} \approx 0.003$, left) and strong cooling (${\mathcal A} \approx 0.1$, right).
Symbols and colors are the same as in Figure~\ref{fig:pspec}.
}
\end{figure*}
Figure~\ref{fig:prtcl_energization_history} shows the acceleration histories of five representative particles picked in the computational box, for three simulations with no cooling (${\mathcal A} = 0$), weak cooling (${\mathcal A} \approx 0.003$), and strong cooling (${\mathcal A} \approx 0.1$), all for the fiducial 2D turbulent flare with $\sigma_0 \approx 16$.
As expected, the weak cooling regime preserves the two-stage acceleration process:
impulsive acceleration to $\gamma \sim \sigma_0$ at reconnection sites is followed by slow stochastic acceleration to higher energies.
Then the particles reach a balance between acceleration and cooling with Lorentz factors of $\gamma \approx \gamma_{\mathrm{cool}}$.
The equilibrium $\gamma_{\mathrm{cool}}$ gradually decreases as the turbulence dissipates.
By contrast, in the strong-cooling regime, only the impulsive acceleration remains active.
The accelerated particles quickly cool down back to $\sim\gamma_\mathrm{th}$ once they exit the reconnection site and its accelerating $\vec{E}_\parallel$.
Some particles experience multiple kicks by $\vec{E}_\parallel$ in regions with active current sheet formation; however, the slow stochastic acceleration is completely suppressed.
The evolution of the particle spectrum during the turbulent flare (measured in the entire computational box) is shown in Figure~\ref{fig:pspec_weak}.
We have used the same particle tracking technique as in Figure~\ref{fig:pspec} to disentangle the particle distribution into the thermal and nonthermal components.
In the weak-cooling regime, we observe the extension of the nonthermal spectrum to $\gamma_\mathrm{cool}$;
radiative losses exponentially suppress the particle population at $\gamma>\gamma_\mathrm{cool}$.
The value of $\gamma_\mathrm{cool}$ can be measured by fitting the particle distribution with the exponential cutoff, and we find $\gamma_{\mathrm{cool}} \approx 100$ during the main phase of the turbulent flare, when stochastic acceleration is strongest.
The nonthermal particle fraction ($\approx 20\%$) and the slope of the nonthermal particle spectrum ($p \approx -2.5$) are then similar to those in the simulation without cooling (compare the left panel in Figure~\ref{fig:pspec_weak} with Figure~\ref{fig:pspec}).
At later times $\gamma_\mathrm{cool}$ decreases, as the turbulence decays.
At the end of the simulation ($t = 20\,l_0/c$), $\gamma_\mathrm{cool}\sim 10$ becomes comparable to both $\gamma_\mathrm{th}$ and $\gamma_\mathrm{inj}\sim \sigma_0$.
Almost all the remaining energy is then stored in the hot thermal plasma.
In the strong-cooling regime, the particle spectrum shows a suppression of the nonthermal population.
Particles still experience impulsive acceleration events at reconnection sites; however, their fast cooling keeps the nonthermal particle fraction $N_{\mathrm{nth}}/N_{\mathrm{tot}}$ low, below 6\%.
\footnote{
Our decomposition of particles into thermal and nonthermal populations becomes less accurate in the strong cooling regime because of finite time sampling.
Strong energy losses make it harder for the tracked particles to exceed the injection threshold $\dot{\gamma}_{\mathrm{thr}}$ for long periods of time.
As a result, some injected particles that should in reality be marked as belonging to the nonthermal population are missed.
}
The plasma temperature remains roughly constant until $t \approx 8\,l_0/c$, maintained by the balance between heating and cooling.
Then the heating rate decays, and the plasma temperature decreases.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig_v3_gcool_fit_s10.pdf}
\caption{\label{fig:gcool_s10}
Dependence of $\gamma_\mathrm{cool}$ on the cooling parameter ${\mathcal A}$.
We ran simulations of the fiducial $2D$ model ($\sigma_0\approx 16$) with different values of ${\mathcal A}$, and analyzed the particle energy distribution in various snapshots of these simulations to measure $\gamma_\mathrm{cool}({\mathcal A},t)$.
Different colors/symbols in the figure correspond to different times $t$, as indicated by the color bar above the figure.
Colored lines show the fits of $\gamma_\mathrm{cool}\propto{\mathcal A}^w$ at seven different times.
The fit slope $w$ determines $\psi=3+w^{-1}$.
}
\end{figure}
\subsection{Measuring stochastic acceleration rate using $\gamma_\mathrm{cool}$}\label{sect:psi}
Since acceleration is balanced by the (known) radiative losses at $\gamma=\gamma_\mathrm{cool}$, the measurement of $\gamma_\mathrm{cool}$ gives the acceleration rate at $\gamma=\gamma_\mathrm{cool}$.
Note that the same turbulent flare can have different $\gamma_\mathrm{cool}$, depending on ${\mathcal A}$. By varying ${\mathcal A}$ we can vary $\gamma_\mathrm{cool}$ and thus find how the particle acceleration rate scales with $\gamma$.
In the standard picture of stochastic acceleration, particles gain energy as a result of diffusion in the energy space.
Using the measured $\gamma_\mathrm{cool}({\mathcal A})$, one can evaluate the exponent $\psi$ of the diffusion coefficient $D_\gamma\propto\gamma^\psi$.
The relation $\gamma_\mathrm{cool}\propto {\mathcal A}^{-1/(3-\psi)}$ (Equation~\ref{eq:dissmeasure1}) offers a convenient way to find $\psi$.
We performed the measurements of $\gamma_\mathrm{cool}$ for a range of ${\mathcal A}$ that give $\gamma_\mathrm{cool}>\gamma_\mathrm{inj}\sim\sigma_0$ (the weak cooling regime), in which stochastic acceleration is not suppressed.
The measurement is made by fitting the nonthermal particle spectrum by a power law with an exponential cutoff.
We use our fiducial 2D model of a turbulent flare with $\sigma_0\approx 16$. We run the model with different ${\mathcal A}$ and take snapshots of the runs at equal times (so that we compare stochastic acceleration at the same turbulence level).
The results are shown in Figure~\ref{fig:gcool_s10}.
A good time interval for measuring $\gamma_\mathrm{cool}({\mathcal A})$ is $5 < c t/l_0 < 10$.
Earlier times $t$ are not suitable in the simulations with low ${\mathcal A}$ (because it takes time for the particles to establish the extended high-energy tail with a high $\gamma_\mathrm{cool}$), and later times are not suitable in the simulations with high ${\mathcal A}$ (because $\gamma_\mathrm{cool}$ drops to $\gamma_\mathrm{inj}$ as the turbulence decays).
This measurement gives $\psi = 1.68 \pm 0.13$.
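The fit itself is straightforward. A minimal Python sketch, with hypothetical data points standing in for the measured cutoffs (they are illustrative, not the simulation data), is:
\begin{verbatim}
import numpy as np

# Hypothetical (A, gamma_cool) pairs standing in for the measurements:
A     = np.array([1.0e-3, 1.6e-3, 2.8e-3, 6.4e-3, 2.6e-2])
gcool = np.array([230.0, 160.0, 110.0, 60.0, 20.0])

# gamma_cool ~ A^w, fitted in log-log space; then psi = 3 + 1/w
w, lnC = np.polyfit(np.log(A), np.log(gcool), 1)
print(f"w = {w:.2f}  ->  psi = {3.0 + 1.0/w:.2f}")
\end{verbatim}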
This procedure of measuring $\psi$ assumes that the stochastic acceleration can be described by a universal scaling of the diffusion coefficient $D_\gamma\propto\gamma^\psi$ (with a prefactor determined by the turbulence level).
As we show below, this standard picture is in fact deficient, because the stochastic acceleration is strongly intermittent in space and time.
\begin{figure*}
\centering
\includegraphics[trim={0cm 0.1cm 0cm 0.2cm}, clip=true, width=0.45\textwidth]{fig_v3_s10_prtcl_pixel_spectra_gam_mean_0000.pdf}
\includegraphics[trim={0cm 0.1cm 0cm 0.2cm}, clip=true, width=0.45\textwidth]{fig_v3_s10_prtcl_pixel_spectra_gam_top_0000.pdf}
\includegraphics[trim={0cm 0.1cm 0cm 0.2cm}, clip=true, width=0.45\textwidth]{fig_v3_s10r10_prtcl_pixel_spectra_gam_mean_0000.pdf}
\includegraphics[trim={0cm 0.1cm 0cm 0.2cm}, clip=true, width=0.45\textwidth]{fig_v3_s10r10_prtcl_pixel_spectra_gam_top_0000.pdf}
\caption{\label{fig:interm}
Visualization of the intermittency of the particle energization in the $2D$ simulations with a magnetization of $\sigma_0 \approx 16$ in the weak-cooling (top row; $\mathcal{A} \approx 0.003$) and strong-cooling regimes (bottom row; $\mathcal{A} \approx 0.1$) at $t \approx 10~l_0/c$.
Left panels show the mean Lorentz factor $\langle \gamma \rangle_\mathrm{c}$,
and right panels the logarithm of the maximum Lorentz factor, $\log_{10} (\mathrm{max}\{ \gamma \})$.
These quantities are computed using the set of particles located inside $2 \times 2\,(c/\omega_\mathrm{p})^2$ resolution pixels.
Note the different color scales.
In the weak-cooling regime energetic particles are preferentially located in hot streams in between the magnetic islands.
In the strong-cooling regime the production of the nonthermal population is suppressed and thin current sheets are the main heating and particle acceleration sites.
}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[trim={0cm 0.1cm 0cm 0.2cm}, clip=true, width=0.45\textwidth]{fig_v3_d3x128r65_3d_gam_mean_5000.png}
\includegraphics[trim={0cm 0.1cm 0cm 0.2cm}, clip=true, width=0.45\textwidth]{fig_v3_d3x128r65_3d_gam_top_5000.png}
\includegraphics[trim={0cm 0.1cm 0cm 0.2cm}, clip=true, width=0.45\textwidth]{fig_v3_d3x128r10_3d_gam_mean_5000.png}
\includegraphics[trim={0cm 0.1cm 0cm 0.2cm}, clip=true, width=0.45\textwidth]{fig_v3_d3x128r10_3d_gam_top_5000.png}
\caption{\label{fig:interm3d}
Visualization of the intermittency of the particle energization in the $3D$ simulations with a magnetization of $\sigma_0 \approx 16$ in the weak-cooling (top row; $\mathcal{A} \approx 0.003$) and strong-cooling (bottom row; $\mathcal{A} \approx 0.1$) regimes at $t \approx 5~l_0/c$.
Both quantities are computed using the set of particles located inside $6 \times 6 \times 6\,(c/\omega_\mathrm{p})^3$ resolution voxels.
Symbols are the same as in Figure~\ref{fig:interm}.
In addition to perpendicular variations in the $xy$-plane, similar to the $2D$ simulation results, the particle distributions form column-like structures along the guide field ($\hat{\vec{z}}$ axis) direction.
}
\end{figure*}
\begin{figure}
\centering
\includegraphics[trim={-0.5cm 0.0cm 0cm 0.0cm}, clip=true, width=0.49\textwidth]{fig_v3_d3x128r65_3d_cutoff_5000_labels.png}
\includegraphics[trim={0.0cm 0.0cm 0cm 0.0cm}, clip=true, width=0.41\textwidth]{fig_v3_d3x128r65_prtcl_pixel_spectra.pdf}
\caption{\label{fig:acc_interm}
Spatial intermittency of stochastic particle acceleration observed in the $3D$ simulation with $\sigma_0 \approx 16$ and ${\mathcal A} \approx 0.003$.
The snapshot is taken at time $t \approx 5~l_0/c$.
Top panel shows the map of $\gamma_\mathrm{cool}$, the Lorentz factor at which radiative losses balance the local acceleration.
Its value is measured in each sub-domain of size $12\times12\times12 \,(\,c/\omega_\mathrm{p})^3$.
Bottom panel shows the particle distributions at two locations (indicated by white circles ``a'' and ``b'' in the upper panel).
The two vertical lines show the position of $\gamma_\mathrm{cool}$ for the two distributions.
}
\end{figure}
\subsection{Intermittency of particle acceleration}\label{sect:intermittency}
The spatial intermittency of dissipation in the simulated turbulent flares was already mentioned in Section~3.1:
the magnetic energy release is strongly enhanced in the current sheets.
We also observe that the stochastic acceleration of particles is highly inhomogeneous in the computational box.
Simulations with strong radiative losses are helpful for the analysis of intermittency, as strong emission highlights the localized regions of strong heating, especially if cooling is faster than particle diffusion out of the heating region.
To study the spatial intermittency, we divided the computational box into small domains, $2\times2\,(\,c/\omega_\mathrm{p})^2$ in 2D and $6\times6\times6 \,(\,c/\omega_\mathrm{p})^3$ in 3D, and analyzed the local particle populations in each domain.
A measure of heating in each domain is the mean particle Lorentz factor $\langle \gamma \rangle$.
The simplest measure of nonthermal particle acceleration is the maximum Lorentz factor $\gamma_{\max}$.
Figure~\ref{fig:interm} shows the snapshots of $\langle \gamma \rangle$ and $\gamma_{\max}$ from two 2D simulations, with ${\mathcal A} \approx 0.003$ and ${\mathcal A} \approx 0.1$.
A similar analysis of the 3D model is shown in Figure~\ref{fig:interm3d}.
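A minimal sketch of this per-domain analysis is given below. It assumes flat arrays of particle positions and Lorentz factors in skin-depth units and uses \texttt{scipy.stats.binned\_statistic\_2d}; the array names and the random test data are illustrative, not part of our analysis pipeline.
\begin{verbatim}
# Sketch: bin particles into 2D pixels of a given size (in units
# of c/omega_p) and compute <gamma> and max(gamma) in each pixel.
import numpy as np
from scipy.stats import binned_statistic_2d

def domain_statistics(x, y, gamma, box_size, pixel=2.0):
    edges = np.arange(0.0, box_size + pixel, pixel)
    mean_g = binned_statistic_2d(x, y, gamma, statistic='mean',
                                 bins=[edges, edges]).statistic
    max_g = binned_statistic_2d(x, y, gamma, statistic='max',
                                bins=[edges, edges]).statistic
    return mean_g, max_g

# Illustrative data: uniform positions, exponential energies.
rng = np.random.default_rng(0)
x, y = rng.uniform(0.0, 256.0, size=(2, 100_000))
gamma = 1.0 + rng.exponential(4.0, size=100_000)
mean_g, max_g = domain_statistics(x, y, gamma, box_size=256.0)
\end{verbatim}
The same operation extends to 3D voxels with \texttt{binned\_statistic\_dd}.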
We observe that the reconnecting current sheets and their immediate vicinity are hot.
The hot regions are particularly narrow in the simulation with ${\mathcal A} \approx 0.1$ --- the particles quickly cool when exiting the heating regions, and so a high $\langle \gamma \rangle$ reflects a high heating rate.
In the $2D$ simulations, the magnetic islands are seen to have a distinct cold inner core, as the islands are capable of insulating themselves from the hot surrounding particles.
This effect is slightly alleviated in full $3D$ simulations where the outer edges of the flux ropes appear less pronounced \citep[see also][]{Comisso_2019}, and the flux ropes are bent and reconnect on a scale $\sim L$ along the guide field.
Turbulence with weak cooling (${\mathcal A} \approx 0.003$) enables stochastic particle acceleration, and $\gamma_{\max}$ reaches high values.
The map of $\gamma_{\max}$ in this regime is quite diffuse, as expected --- the stochastic acceleration to high $\gamma$ takes a significant time and is accompanied by significant spatial diffusion.
As explained in Section~5.2, simulations with radiative losses provide another measure of stochastic acceleration:
$\gamma_\mathrm{cool}$, which can be found from the observed particle spectrum.
The measurement of $\gamma_\mathrm{cool}$ can be done locally, in each small sub-domain of the computational box.
The results are shown in Figure~\ref{fig:acc_interm} for the 3D simulation with ${\mathcal A} \approx 0.003$.
In a simple model of stochastic particle acceleration with a constant diffusion coefficient $D_\gamma$ one would expect a uniform $\gamma_\mathrm{cool}\approx {\mathcal A}^{-1}$ throughout the computational box.
By contrast, we observe a highly inhomogeneous map of ${\mathcal A}\gamma_\mathrm{cool}$, with large ``voids'' where ${\mathcal A}\gamma_\mathrm{cool}\ll 1$.
This map demonstrates that $D_\gamma$ strongly varies on scales $\gtrsim 20 \,\,c/\omega_\mathrm{p}$.
We conclude that although the low-energy part of the total volume-integrated particle distribution appears to be well described by a single-temperature Maxwell-J\"{u}ttner distribution (see Figures~\ref{fig:pspec} and \ref{fig:pspec_weak}), the particle
heating and acceleration are in fact strongly intermittent.
Both are suppressed away from current sheets, the sites where the dissipation peaks.
\subsection{Energy partition}
\begin{figure}
\centering
\includegraphics[trim={0.3cm 0.4cm 0.0cm 0.3cm}, clip=true, width=0.46\textwidth]{fig_v4_s10r60_ene_evolution_v2.pdf}
\includegraphics[trim={0.3cm 0.0cm 0.0cm 0.3cm}, clip=true, width=0.46\textwidth]{fig_v4_s10r10_ene_evolution_v2.pdf}
\caption{\label{fig:ene_cooling}
Evolution of different energy components in the fiducial $2D$ simulation with $\sigma_0\approx16$, with weak cooling (top panel;
$\mathcal{A} \approx 0.003$) and strong cooling (bottom panel;
$\mathcal{A} \approx 0.1$).
All the components are shown in units of the initial total electromagnetic energy $\eneFz$ (cf. Figure~\ref{fig:ene_nocool}).
The figure shows the electromagnetic energy (solid red) and the total energy of particles (solid black), which is separated into contributions from thermal particles (dashed black) and nonthermal particles (dotted black).
The blue curves show the energy carried away by radiation.
The total radiation energy (solid blue) is the sum of the energies emitted by thermal particles (dashed blue) and nonthermal particles (dotted blue).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={0.0cm 0.0cm 0.0cm 0.0cm}, clip=true, width=0.46\textwidth]{fig_v4_enes_ns10_v3.pdf}
\caption{\label{fig:energy_budget}
Energy carried by the plasma (black) and radiation (blue) at the end of the fiducial 2D simulation with $\sigma_0\approx16$, at $t=20\,l_0/c$. The simulation was run 8 times with 8 different cooling levels ${\mathcal A}>0$, and the final energy partitioning is presented as a function of ${\mathcal A}$.
The figure shows separately the energies carried by thermal (solid black) and nonthermal (dashed black) particles.
The radiation energy is also separated into two parts, emitted by thermal (solid blue) and nonthermal (dashed blue) particles.
All components are normalized to the initially injected turbulence energy $\eneFz$.
Measurements from the simulation with no cooling ($\mathcal{A} = 0$) are shown by the horizontal arrows.
}
\end{figure}
Section~3.2 described the evolution of different energy components in our fiducial 2D simulation without radiative losses.
Figure~\ref{fig:ene_cooling} shows how this evolution is changed by weak (${\mathcal A} \approx 0.003$) and strong (${\mathcal A} \approx 0.1$) losses.
We also performed $3D$ simulations with ${\mathcal A} \approx 0.003$ and $0.1$, with similar results.
The electromagnetic field components evolve almost identically to the non-cooling case and show no dependency on ${\mathcal A}$.
At the end of the $2D$ simulations (at $t = 20~l_0/c$) the energy of the transverse magnetic field $\eneBperp$ has decayed to $\sim 20\%$ of the initially injected magnetic energy, and the electric field energy carries $\sim 5\%$.
Energy in the parallel (guide) magnetic field stays nearly constant.
These numbers remain practically unchanged at $t>20l_0/c$, which was confirmed by runs up to $t\sim 100l_0/c$.
One can see from Figure~\ref{fig:ene_cooling} that in the weak-cooling regime most of the energy lost by the electromagnetic field is retained by the plasma by the end of the simulation (time $t=20l_0/c$), and the radiation has been produced mostly by the nonthermal particles.
By contrast, in the strong-cooling regime, most of the turbulence energy is lost to radiation, and the emission is dominated by thermal particles.
This behavior is further demonstrated in Figure~\ref{fig:energy_budget}, which presents the final partitioning of the released energy between the plasma and radiation for the fiducial 2D model with 9 different cooling levels ${\mathcal A}$, varying from ${\mathcal A}=0$ to ${\mathcal A}\approx 3$.
We find that ${\mathcal A}\gtrsim 0.02$ is sufficient to radiate away most of the turbulence energy before the end of the simulation.
The emission becomes dominated by the thermal particles at ${\mathcal A}\gtrsim 0.01$.
\section{ Radiation from turbulent flares}\label{sect:flares}
\begin{figure}
\centering
\includegraphics[trim={0cm 0.0cm 0cm 0.0cm}, clip=true, width=0.46\textwidth]{fig_v4_s10r60_rad_spec_pitch.pdf}
\caption{\label{fig:rad_spec_weak}
Spectrum of the time-integrated radiation from the fiducial 2D model of a turbulent flare with $\sigma_0 \approx 16$ and cooling parameter ${\mathcal A} \approx 0.003$.
Curves with different colors show the spectra emitted at different angles $\theta$ with respect to the background magnetic field $\vec{B}_0$ (color code is indicated at the top).
Inset in the top-right corner shows the angular distribution of the total energy output ${\cal E}_{\rm rad}$, normalized to the initial turbulence energy ${\cal E}_F^0$.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={0cm 0.0cm 0cm 0.0cm}, clip=true, width=0.46\textwidth]{fig_v4_rad_specs_ns10_v2.pdf}
\caption{\label{fig:rad_specs_s10}
Time-integrated radiation spectra calculated for models with
${\mathcal A} \approx $
$\Ten{1.0}{-3}$ (brown),
$\Ten{6.4}{-3}$ (purple),
$\Ten{2.6}{-2}$ (red),
$0.10$ (green),
$0.41$ (orange),
and $2.6$ (blue).
Inset in the top-right corner shows the radiative efficiency $\ene{\mathrm{rad}}{} / \eneFz$, as a function of $\mathcal{A}$.
Dashed line indicates a characteristic spectral index of $\alpha = -1.5$.
}
\end{figure}
Our simulations follow radiative losses individually for each particle and allow us to study the produced radiation.
Below we show the results for a simple emission model where the energy is passed to radiation via Thomson scattering of isotropic background soft photons with initial energies $\epsilon_0$.
On average, a particle with Lorentz factor $\gamma$ upscatters the background photons to energy \citep{Rybicki_1985}
\begin{equation} \label{eq:scat}
\epsilon=\frac{4}{3} \beta \gamma^2 \epsilon_0.
\end{equation}
In the Thomson scattering regime, $\epsilon$ is a small fraction of the particle kinetic energy $(\gamma-1)m_ec^2$, and so the particle is cooled through a large number of scattering events.
For simplicity, we assume that each scattered photon gains the average energy $\epsilon$ given by Equation~(\ref{eq:scat}).
We also assume that the upscattered photon is emitted into the direction of the particle velocity $\vec{\beta}$.
This gives a good approximation to the overall spectral and angular distributions of the produced radiation, especially when the $e^\pm$ plasma is relativistic.
We calculate the emitted radiation by using a sample of $10^7$ particles, whose histories are followed throughout the simulation of the turbulent flare up to time $t=20\,l_0/c$.
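The bookkeeping behind this calculation can be sketched as follows, assuming per-snapshot arrays of particle Lorentz factors, unit velocity vectors, and the energy $\Delta E$ radiated by each particle during the time step. Each photon batch is assigned the mean energy of Equation~(\ref{eq:scat}) and the direction of the particle velocity, and $\Delta E$ is accumulated on an $(\epsilon,\theta)$ grid; all names are illustrative.
\begin{verbatim}
# Sketch: accumulate the time-integrated radiation spectrum
# on an (eps, theta) grid from tracked particle histories.
import numpy as np

def accumulate_spectrum(gamma, v_unit, dE, eps0,
                        eps_bins, theta_bins,
                        B0_unit=np.array([0.0, 0.0, 1.0])):
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    eps = (4.0 / 3.0) * beta * gamma**2 * eps0  # Equation (scat)
    cos_th = v_unit @ B0_unit                   # angle w.r.t. B_0
    theta = np.arccos(np.clip(cos_th, -1.0, 1.0))
    hist, _, _ = np.histogram2d(eps, theta,
                                bins=[eps_bins, theta_bins],
                                weights=dE)
    return hist   # radiated energy per (eps, theta) bin
\end{verbatim}
Summing such histograms over all snapshots and particles gives the time-integrated spectra discussed below.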
Figure~\ref{fig:rad_spec_weak} shows the time-integrated radiation spectrum found in our fiducial 2D flare model in the weak-cooling regime, ${\mathcal A} \approx 0.003$.
The particle distribution during the flare extends from $\gamma\sim 1$ to $\gamma_\mathrm{cool}\sim {\mathcal A}^{-1}\approx 100$ and, correspondingly, the radiation spectrum extends from $\epsilon\sim\epsilon_0$ to $\epsilon\sim {\mathcal A}^{-2}\epsilon_0$.
The emitted spectrum is shown for different viewing angles $\theta$ with respect to the background magnetic field $\vec{B}_0$, and one can see the strong anisotropy at $\epsilon\gg\epsilon_0$, where emission is dominated by nonthermal particles.
The observed anisotropy is shaped by the two different mechanisms of particle acceleration.
The impulsively accelerated particles in the reconnection layers emit mostly along $\vec{B}_0$, within an opening angle of $\theta \lesssim 30^\circ$ (this is a typical pitch angle of the particle when it escapes the reconnection layer).
The impulsive acceleration along $\vec{B}_0$ dominates the emission around the characteristic energy $\epsilon \sim \sigma_0^2 \epsilon_0$.
By contrast, the more energetic, stochastically accelerated, particles move preferentially at large pitch angles $\theta$.
Their emission contributes to the spectrum at $\epsilon\gg \sigma_0^2 \epsilon_0$ and peaks at $\theta=\pi/2$.
Note that most of the flare energy is emitted by the impulsively accelerated particles in the current sheets; the angular distribution of the total flare emission therefore peaks at small $\theta$.
We have also calculated the produced radiation for the same flare model but in the strong cooling regime, with ${\mathcal A}\gg\sigma_0^{-1}$.
Then stochastic acceleration is suppressed and, therefore, the radiation spectrum is suppressed at $\epsilon\gg \sigma_0^2\epsilon_0$.
Impulsive acceleration in the current sheets remains efficient and continues to generate radiation beamed along $\vec{B}_0$ at pitch angles $\theta \lesssim 30^\circ$.
Figure~\ref{fig:rad_specs_s10} shows how the time-integrated radiation spectrum changes with ${\mathcal A}$.
At low ${\mathcal A}$ the spectrum is very broad, with a weak contribution from the thermal plasma.
The position of the high-energy cutoff $\epsilon_c$ scales approximately as ${\mathcal A}^{-2}$, as long as ${\mathcal A}<\sigma_0^{-1}$.
In the strong cooling regime, there is an increasingly prominent emission from the thermal plasma, and the high-energy tail is limited to $\epsilon\lesssim \sigma_0^2\epsilon_0$.
The inset in Figure~\ref{fig:rad_specs_s10} shows the radiative efficiency of the flare ${\cal E}_{\rm rad}/{\cal E}_{F}^0$ (measured at $t=20\,l_0/c$) as a function of ${\mathcal A}$.
One can see that ${\cal E}_{\rm rad}/{\cal E}_{F}^0$ approaches unity at ${\mathcal A}\gtrsim 0.1$.
In the weak cooling regime, the efficiency varies approximately as $\ln{\mathcal A}$.
The characteristic luminosity of the flare may be estimated as ${\cal E}_{\rm rad}/t$, where $t\sim 10\,l_0/c$ is the characteristic duration of the main dissipation phase.
In the radiatively efficient regime, the emitted luminosity will approximately track the overall dissipation rate, which decays approximately as $t^{-1}$.
The strong intermittency of dissipation also implies that the radiation output is variable on
a broad range of timescales, from $\sim l_0/c$ to the kinetic tearing scale $l_c/c \approx (\sqrt{\sigma} \omega_\mathrm{p})^{-1}$ \citep[see also][]{Zhdankin_2019}.
Since the observed luminosity is integrated over the entire flare region, its temporal variations should be the strongest on the macroscopic timescale $\sim l_0/c$.
\section{Conclusions}\label{sect:conclusions}
We have studied decaying relativistic turbulent flares in a magnetically-dominated collisionless pair plasma by means of kinetic PIC simulations.
The simulations were performed with the open-source code \textsc{Runko} \citep{Nattila_2019}.
The turbulent flare was initiated by a macroscopic perturbation of the magnetic field on a large injection scale $l_0$.
This perturbation immediately led to the formation of large-scale current sheets between colliding magnetic flux ropes, and developed motions on a broad range of scales down to the microscopic plasma scale.
We observed heating of the thermal plasma and acceleration of nonthermal particles, in agreement with earlier simulations \citep{Zhdankin_2018,Comisso_2019}.
The acceleration proceeds in two stages.
First, particles are almost impulsively accelerated to the Lorentz factor $\gamma_{\mathrm{inj}} \sim \sigma_0$ by a non-ideal field $E_{\parallel}$ inside current sheets.
The impulsive acceleration is followed by stochastic acceleration to $\gamma \gg \gamma_{\mathrm{inj}}$ by the ideal MHD field $\boldsymbol{E}_\perp=-\boldsymbol{v}\times\boldsymbol{B}/c$ induced by the turbulent bulk motions.
In the absence of radiative cooling, this mechanism is capable of accelerating particles up to the maximum Lorentz factor $\gamma_0$ at which the particle Larmor radius becomes comparable to the driving scale $l_0$.
In most of our simulations, the plasma magnetization is $\sigma_0\approx 16$ and $\gamma_0\approx 500$.
This gives the regime of $\gamma_0\gg\sigma_0$, as expected around compact objects.
Particles in turbulent flares near compact objects are subject to radiative cooling.
In this paper, we studied how this changes the energy release process.
The strength of radiative cooling in our simulations is described by the dimensionless parameter ${\mathcal A}$ (Equation~\ref{eq:A}), which is similar to the usual compactness parameter \citep[e.g.,][]{Guilbert_1983}.
It approximately equals the ratio of the light-crossing timescale $l_0/c$ to the cooling time $t_\mathrm{cool}(\gamma)$ of particles with $\gamma\approx 1$.
Our results are as follows.
\begin{itemize}
\item
Driving of magnetically-dominated plasma with strong perpendicular $\delta B \sim B_0$ perturbations leads to formation of large-scale magnetic flux ropes.
The flux ropes develop mildly relativistic shearing motions, as a consequence of the unbalanced initial state.
These motions shape the evolution of the decaying turbulent flare.
Magnetic reconnection between the colliding and merging ropes leads to intermittent plasma heating and nonthermal particle acceleration.
Plasmoids of various scales are ejected from the reconnection layers;
however, large-scale shear motions prevent formation of long plasmoid chains.
\item
Radiative losses enhance the contrast of density variations in the turbulent plasma, although the spectrum of magnetic field fluctuations remains unchanged for all ${\mathcal A}<3$ studied in this paper.
Radiative losses affect the process of energy deposition into the plasma when ${\mathcal A}>\gamma_0^{-1}$.
\item
We observed two radiative regimes, which we called ``weak cooling'' (${\mathcal A}<\sigma_0^{-1})$ and ``strong cooling'' (${\mathcal A}>\sigma_0^{-1}$).
Stochastic acceleration is suppressed in the strong cooling regime;
then only impulsive acceleration in current sheets generates nonthermal particles (with Lorentz factors $\gamma\sim \gamma_\mathrm{inj}\sim \sigma_0$).
Note that a similar transition occurs in radiative magnetic reconnection \citep{Beloborodov_2017, Werner_2019, Sironi_2020}.
\item
In the entire range of ${\mathcal A}$ studied in this paper, the plasma sustains a well-defined thermal component heated by turbulent reconnection.
It is formed by particles that never experienced an ``injection event'' --- a sudden energy gain in a current sheet.
This criterion cleanly separates thermal and nonthermal components, even when the overall particle distribution appears to have a broad shape with no obvious separate components (see also \citealt{Comisso_2019}).
The thermal component defined in this way is found to follow the Maxwell-J\"uttner distribution.
\item
The dissipated energy is distributed between the thermal and nonthermal particle populations with comparable rates as long as ${\mathcal A}\ll \sigma_0^{-1}$.
The partitioning of dissipated energy changes with increasing ${\mathcal A}$ as shown in Figure~\ref{fig:energy_budget}. When ${\mathcal A}$ reaches $\sim 1$, practically all dissipated energy is given to (and radiated by) the low-energy thermal particles.
\item
Radiative losses offer a new tool to analyze the process of particle acceleration in PIC simulations.
In the weak cooling regime, the losses compete with stochastic acceleration and impose a ``ceiling'' $\gamma_\mathrm{cool}$.
Its value depends on the diffusion coefficient in the energy space $D_\gamma$ that drives acceleration.
Our simulations demonstrate that there is no universal $D_\gamma$.
Analysis of small sub-domains of the box shows that the global spectrum $f(\gamma)\propto\gamma^{-3}$ is the superposition of hard local spectra with different cutoffs $\gamma_\mathrm{cool}$ \citep[see also][]{Lemoine_2020}.
We conclude that the total $f(\gamma)$ is shaped by the spatial and temporal intermittency of turbulent reconnection, which produces large variations of $D_\gamma$ across the box.
\item
The simulations give the spectrum of the inverse Compton radiation emitted by the turbulent flare.
We calculated the inverse Compton emission assuming an isotropic background of soft photons with energies $\epsilon_0$.
In the weak-cooling regime, the resulting spectrum is very broad, and its high-energy cutoff scales approximately as $\propto {\mathcal A}^{-2}$.
In the strong-cooling regime, the emission becomes increasingly dominated by the radiation from the thermal plasma and the cutoff is located at $\epsilon \sim \sigma_0^2 \epsilon_0$.
The emission is anisotropic, and its angular distribution changes across the spectrum.
The part of the spectrum dominated by current sheets, $\epsilon\sim \sigma_0^2\epsilon_0$, is beamed along the background magnetic field $\boldsymbol{B}_0$ within a characteristic opening angle $\theta\sim 30^\circ$.
The emission from stochastically accelerated particles dominates at energies $\sigma_0^2\ll\epsilon/\epsilon_0<{\mathcal A}^{-2}$ and peaks at $\theta=90^\circ$.
\item
Full $3D$ simulations of the turbulent flares show qualitatively the same behavior as the $2D$ simulations.
In particular, the magnetic spectrum, reconnection in the current sheets, and stochastic particle acceleration are all captured by the $2D$ model (see also \citealt{Comisso_2019}).
\end{itemize}
In our simulations, the macroscopic driving scale is much greater than the plasma skin depth, $l_0\omega_\mathrm{p}/c > 100$, which corresponds to the maximum Lorentz factor achieved by stochastic acceleration $\gamma_0\sim 500$.
In real astrophysical objects, $l_0\omega_\mathrm{p}/c$ can be larger by many orders of magnitude, which leads to enormous $\gamma_0$.
This fact makes it important to know the precise scaling of the diffusion coefficient $D_\gamma\propto \gamma^\psi$.
We showed in Section~\ref{regimes} that $\gamma_\mathrm{cool}/\gamma_\mathrm{inj}\sim \sigma_0^{-1}{\mathcal A}^{-1/(3-\psi)}\gamma_0^{(2-\psi)/(3-\psi)}$ (Equation~\ref{eq:dissmeasure1}) and that a slight reduction of $\psi$ below $2$ enables stochastic particle acceleration even in the presence of strong cooling.
This fact can have a strong impact on the observational appearance of turbulent flares in compact objects.
Another important aspect of turbulent flares is their spatial and temporal intermittency.
It implies intermittency in the particle acceleration process and should leave imprints on the observed temporal structure of the produced radiation on timescales shorter than $l_0/c$.
\section*{Acknowledgments}
We would like to thank Luca Comisso and Lorenzo Sironi for helpful discussions.
The simulations were performed using resources provided by the Swedish National Infrastructure for Computing (SNIC) at PDC and HPC2N.
A.M.B. is supported by NASA grant NNX~17AK37G, NSF grants AST~1816484 and AST~2009453, Simons Foundation grant \#446228, and the Humboldt Foundation.
\bibliographystyle{aasjournal}
\section*{Acknowledgements}
This publication has received funding from the Excellence Initiative of Aix-Marseille Universit\'e -- A*Midex, a French ``Investissements d'Avenir'' programme (AMX-21-IET-017), and the UnLIR ANR project (ANR-19-CE23-0009). Part of this work was performed using HPC resources from GENCI-IDRIS (Grant 2020-AD011013110).
\bibliographystyle{ieee}
\section{Discussion and conclusions}
\label{sec:conclusion}
Opti-CAM combines ideas from two families of saliency map generation methods, masking-based and CAM-based. Our method optimizes the saliency map at inference given a single input image. It does not require any additional data or the training of another network, which would itself need to be interpreted.
While Opti-CAM crafts a saliency map in the image space, it does not need any regularization. This is because the saliency map is expressed as a convex combination of feature maps and we only optimize one vector over the feature dimensions. The underlying assumption is that of all CAM-based methods: feature maps contain activations at all regions that are of interest for the classes that are present. Opti-CAM is more expensive than non-iterative gradient-based methods but as fast or faster than gradient-free methods that require as many forward passes as channels.
We find that Opti-CAM brings impressive performance improvement over the state of the art according to the most important classification metrics on several datasets. The saliency maps are more spread out compared with those of the competition, attending to larger parts of the object, multiple instances and background context, which may be helpful in classification.
\iavr{Our new classification metric $\operatorname{AG}$ is designed to be paired with $\operatorname{AD}$ as a replacement of $\operatorname{AI}$ and resolves a long-standing problem in evaluating attribution methods, without further increasing the number of metrics. We provide strong evidence supporting that the use of ground-truth object bounding boxes for localization is not necessarily optimal in evaluating the quality of a saliency map, because the primary objective is to explain how a classifier works.
}
\section{Introduction}
\label{sec:intro}
The success of \emph{deep neural networks} (DNN) and their increasing penetration into most sectors of human activity has led to growing interest in understanding how these models make their predictions. Unlike shallow methods, DNN have a high complexity and it is not possible to directly explain their inference process in a human understandable manner. This challenge has opened up an entire research field~\citep{guidotti2018survey, montavon2018methods, samek2021explaining, bodria2021benchmarking, li2021interpretable}.
In this work, we are interested in the interpretability of deep neural networks through the generation of \emph{saliency maps}, highlighting regions of an image that are responsible for the prediction. This originates in \emph{gradient-based} methods \citep{simonyan2013deep, yosinski2015understanding}, including variants of backpropagation \citep{zeiler2014visualizing, springenberg2014striving, bach2015pixel}. CAM~\citep{zhou2016learning} introduced class-specific linear combinations of feature maps, and led to several alternative weighting schemes \citep{ramaswamy2020ablation, wang2020score, muhammad2020eigen}, including the use of gradients \citep{selvaraju2017grad, chattopadhay2018grad}. On the other hand, \emph{occlusion-} or \emph{masking-based} methods \citep{dabkowski2017real, fong2017interpretable, fong2019understanding, schulz2020restricting} remove regions in the image space while improving classification performance.
Score-CAM~\cite{wang2020score} uses each feature map as a mask and defines a corresponding weight based on the resulting increase of class score; hence, it is both CAM-based and masking-based but does not use gradients. It resembles the numerical gradient approximation, in that it needs \emph{one forward pass per weight}. Instead, the analytical approach would be to use a linear combination of feature maps as a mask, express the class score as a function of the weights and measure the gradient analytically, in a \emph{single backward pass}. Then, \emph{why not use gradient descent to maximize the class score?} The optimal mask should highlight regions for which the network is most confident.
\emph{Masking-based} methods, such as extremal perturbations~\citep{fong2019understanding} or IBA~\citep{schulz2020restricting}, do use gradient descent to maximize the class score. The mask is now a variable in the input or feature space and the class score is expressed as a function of the mask directly. Because the variable being optimized is a high-dimensional image or tensor, additional constraints or regularizers are needed to control \emph{e.g.} the smoothness and the salient area. This translates to more hyperparameters or more expensive optimization.
Motivated by the above, we introduce Opti-CAM, illustrated in \autoref{fig:idea}. We form a linear combination of feature maps, where the weights are a variable. Treating it as a saliency map, we form a masked version of the input image that is fed again to the network. Then, the logit of a given class for the masked version of the input is maximized to obtain the optimal weights. Thus, Opti-CAM can be seen as an analytical counterpart of Score-CAM that is optimized iteratively, or as a masking-based method where the mask to be optimized lies in the linear span of the feature maps, like CAM-based methods.
\begin{figure*}[t]
\centering
\begin{tikzpicture}[
scale=.3,
font={\small},
node distance=.5,
label distance=3pt,
net/.style={draw,trapezium,trapezium angle=75,inner sep=3pt},
enc/.style={net,shape border rotate=270},
txt/.style={inner sep=3pt},
frame/.style={draw,minimum size=1cm},
feat/.style={frame},
sq/.style={minimum size=.18cm},
elem/.style={draw,sq},
vec/.style={draw,minimum width=.8cm,minimum height=.15cm},
var/.style={blue!60},
B/.style={fill=blue!20},
R/.style={fill=red!20},
G/.style={fill=green!20},
Y/.style={fill=yellow!40},
P/.style={fill=black!20},
]
\matrix[
tight,row sep=0,column sep=16,
cells={scale=.3,},
] {
\&\&\&\&\&\&\&
\node[txt] (loss) {objective \\ $F^c_\ell(\mathbf{x}; \mathbf{u})$}; \\
\node[label=90:{input image $\mathbf{x}$}] (in) {\figah[2cm]{idea/input}}; \&
\node[enc] (net) {network \\ $f$}; \&
\foreach \s/\c in {-2/B,-1/R,0/G,1/Y,2/P}
{\node[feat,\c] (feat\s) at ($.4*(\s,-\s)$) {};}
\node at (feat2) {\figah[1cm]{idea/27_fea0}};
\node[frame] at (feat2) {};
\coordinate[label=90:{feature \\ maps $A^k_\ell$}]
(feat-north) at (feat-2.north -| feat0.north);
\coordinate (feat-west) at (feat-2.west |- feat0.west);
\coordinate (feat-east) at (feat2.east |- feat0.east);
\&
\node[var,op] (cam) {$\times$};
\foreach \s/\c in {-2/B,-1/R,0/G,1/Y,2/P}
{\node[elem,\c] (elem\s) at ($.6*(\s,-6)$) {};}
\node[sq,label=-90:{weights $\mathbf{u}$}] (weight) at (elem0) {};
\&
\node[var,label=90:{saliency map \\ $S_\ell(\mathbf{x}; \mathbf{u})$}] (sal) {\figah[2cm]{idea/saliency}}; \&
\node[var,op] (mask) {$\odot$}; \&
\node[label=90:{masked image}] (masked) {\figah[2cm]{idea/masked}}; \&
\node[enc] (net2) {network \\ $f$}; \\[8]
\&\&\&
\coordinate (mid); \\
};
\draw[->]
(in) edge (net)
(net) edge (feat-west)
(feat-east) edge (cam)
(net2) edge node[pos=.5,right] {class \\ logits} (loss)
;
\draw[var,->]
(weight) edge (cam)
(cam) edge (sal)
(sal) edge (mask)
(mask) edge (masked)
(masked) edge (net2)
(net2) edge (loss)
;
\draw[->]
(in) |- (mid)
(mid) -| (mask)
;
\end{tikzpicture}
\caption{Overview of Opti-CAM. We are given an input image $\mathbf{x}$, a fixed network $f$, a target layer $\ell$ and a class of interest $c$. We extract the feature maps from layer $\ell$ and obtain a saliency map $S_\ell(\mathbf{x}; \mathbf{u})$ by forming a convex combination of the feature maps ($\times$) with weights determined by a variable vector $\mathbf{u}$~\eq{v-sal}. After upsampling and normalizing, we element-wise multiply ($\odot$) the saliency map with the input image to form a ``masked'' version of the input, which is fed to $f$. The objective function $F^c_\ell(\mathbf{x}; \mathbf{u})$ measures the logit of class $c$ for the masked image~\eq{obj}. We find the value of $\mathbf{u}^*$ that maximizes this logit by optimizing along the path highlighted in blue~\eq{opt}, as well as the corresponding optimal saliency map $S_\ell(\mathbf{x}; \mathbf{u}^*)$~\eq{o-sal}.}
\label{fig:idea}
\vspace{-0.4cm}
\end{figure*}
The evaluation metrics most relevant to using a saliency map as a mask are \emph{average drop} ($\operatorname{AD}$) and \emph{average increase} ($\operatorname{AI}$)~\cite{chattopadhay2018grad}. The problem is that the two metrics are not defined in a symmetric way. As a result, there exists a trivial attribution method called Fake-CAM~\cite{poppi2021revisiting} that outperforms the state of the art in both metrics. To address this, we introduce the symmetric counterpart of $\operatorname{AD}$, which we call \emph{average gain\xspace} ($\operatorname{AG}$), to be paired with $\operatorname{AD}$ as a replacement of $\operatorname{AI}$. As expected, Fake-CAM fails $\operatorname{AG}$.
In summary, we make the following contributions:
\begin{enumerate}[itemsep=2pt, parsep=0pt, topsep=3pt]
\item We introduce Opti-CAM, a simple model for saliency map generation that combines ideas from CAM-based and masking-based approaches. \redred{Opti-CAM does not need any extra data, network or training.}
\item Compared with gradient-free methods~\citep{wang2020score,petsiuk2018rise,ramaswamy2020ablation}, it finds the optimal feature map weights and is on par or faster, assuming that the number of iterations is less than the number of channels.
\item We introduce a new evaluation metric, \emph{average gain\xspace} ($\operatorname{AG}$), to be paired with \emph{average drop} ($\operatorname{AD}$) as a replacement of \emph{average increase} ($\operatorname{AI}$)~\cite{chattopadhay2018grad}.
\item On several datasets, we improve the state of the art by a large margin, \redred{reaching near-perfect performance} according to the most relevant classification metrics.
\item We shed more light into how a classifier may exploit background context.
\end{enumerate}
\section{Opti-CAM}
\label{sec:opticam}
\subsection{Preliminaries}
\label{sec:prelim}
\paragraph{Notation}
\label{sec:notation}
Consider a classifier network $f: \mathcal{X} \to \mathbb{R}^C$ that maps an input image $\mathbf{x} \in \mathcal{X}$ to a logit vector $\mathbf{y} = f(\mathbf{x}) \in \mathbb{R}^C$, where $\mathcal{X}$ is the image space and $C$ is the number of classes. We denote by $y_c = f(\mathbf{x})_c$ the predicted logit and by $p_c = \operatorname{softmax}(\mathbf{y})_c \mathrel{:=} e^{y_c} / \sum_j e^{y_j}$ the predicted probability for class $c$. For layer $\ell$ with $K_\ell$ channels, we denote by $A^k_\ell = f^k_\ell(\mathbf{x}) \in \mathbb{R}^{h_\ell \times w_\ell}$ the feature map for channel $k \in \{1,\dots,K_\ell\}$, with spatial resolution $h_\ell \times w_\ell$. Because of $\operatorname{relu}$ non-linearities, we assume that feature maps are non-negative. Similarly, we denote by $S_\ell \in \mathbb{R}^{h_\ell \times w_\ell}$ a 2D saliency map.
\paragraph{Background: CAM-based saliency maps}
\label{sec:back}
Given a layer $\ell$ and a class of interest $c$, we consider saliency maps given by the general formula
\begin{equation}
S^c_\ell(\mathbf{x}) \mathrel{:=} h \left( \sum_k w^c_k A^k_\ell \right),
\label{eq:sal}
\end{equation}
where $w^c_k$ are weights defining a linear combination over channels and $h$ is an activation function. CAM~\citep{zhou2016learning} is defined for the last layer $L$ only with $h$ being the identity mapping and $w^c_k$ being the classifier weight connecting the $k$-th channel with class $c$. Grad-CAM~\citep{selvaraju2017grad} is defined for any layer $\ell$ with $h = \operatorname{relu}$ and weights
\begin{equation}
w^c_k \mathrel{:=} \operatorname{GAP} \left( \pder{y_c}{A^k_\ell} \right),
\label{eq:gcam}
\end{equation}
where $\operatorname{GAP}$ is global average pooling.
The motivation for $\operatorname{relu}$ is that we are only interested in features that have a positive effect on the class of interest, \emph{i.e.} pixels whose intensity should be increased in order to increase $y_c$.
Score-CAM~\cite{wang2020score} is also defined for any layer $\ell$ with $h = \operatorname{relu}$ and weights $w^c_k \mathrel{:=} \operatorname{softmax}(\mathbf{u}^c)_k$. Softmax normalization considers positive channel contributions only and attends to few feature maps.
Here, vector $\mathbf{u}^c \in \mathbb{R}^{K_\ell}$ measures the increase in confidence for class $c$ obtained by comparing a known baseline image $\mathbf{x}_b$ with the input image $\mathbf{x}$ masked according to feature map $A^k_\ell$, for all channels $k$:
\begin{equation}
u^c_k \mathrel{:=} f(\mathbf{x} \odot n(\operatorname{up}(A^k_\ell)))_c - f(\mathbf{x}_b)_c,
\label{eq:s-cam}
\end{equation}
where $\odot$ is the Hadamard product. For this to work, the feature map $A^k_\ell$ is adapted to $\mathbf{x}$ first: $\operatorname{up}$ denotes upsampling to the spatial resolution of $\mathbf{x}$ and
\begin{equation}
n(A) \mathrel{:=} \frac{A - \min A}{\max A - \min A}
\label{eq:norm}
\end{equation}
\redred{is a normalization of matrix $A$ into $[0,1]$.} While Score-CAM does not need gradients, it requires as many forward passes through the network as the number of channels in the chosen layer, which is computationally expensive.
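As a concrete illustration, the weights of~\eq{s-cam} can be computed with one forward pass per channel. The sketch below assumes a callable \texttt{model} mapping an image batch to logits, the feature maps $A^k_\ell$ stacked in a tensor of shape $(K_\ell, h_\ell, w_\ell)$, and a zero baseline image $\mathbf{x}_b=\mathbf{0}$; the names are illustrative and not tied to any released implementation.
\begin{verbatim}
# Sketch of the Score-CAM weights (one forward pass per channel).
import torch
import torch.nn.functional as F

def score_cam_weights(model, x, feats, cls):
    H, W = x.shape[-2:]
    up = F.interpolate(feats[:, None], size=(H, W),
                       mode='bilinear', align_corners=False)[:, 0]
    lo = up.flatten(1).min(dim=1).values[:, None, None]
    hi = up.flatten(1).max(dim=1).values[:, None, None]
    masks = (up - lo) / (hi - lo + 1e-8)      # Eq. (norm)
    with torch.no_grad():
        base = model(torch.zeros_like(x))[0, cls]
        u = torch.stack([model(x * m[None, None])[0, cls] - base
                         for m in masks])     # Eq. (s-cam)
    return torch.softmax(u, dim=0)
\end{verbatim}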
\paragraph{Motivation}
\label{sec:motiv}
\iavr{Score-CAM considers each feature map as a mask in isolation. How about linear combinations?} Given a vector $\mathbf{w} \in \mathbb{R}^{K_\ell}$ with $w_k$ its $k$-th element, let
\begin{equation}
F(\mathbf{w}) \mathrel{:=} f \left( \mathbf{x} \odot n \left( \operatorname{up} \left(
\displaystyle\sum_k w_k A^k_\ell
\right) \right) \right)_c.
\label{eq:s-obj}
\end{equation}
\ronan{If we assume that $\mathbf{x}_b = \mathbf{0}$ in~\eq{s-cam} and define $n(\mathbf{0}) \mathrel{:=} \mathbf{0}$ in~\eq{norm}, then we can rewrite the right-hand side of~\eq{s-cam} as
\begin{equation}
\frac{F(\mathbf{w}_0 + \delta \mathbf{e}_k) - F(\mathbf{w}_0)}{\delta},
\label{eq:s-cam2}
\end{equation}
where $\mathbf{w}_0 = \mathbf{0}$, $\delta = 1$ and $\mathbf{e}_k$ is the $k$-th standard basis vector of $\mathbb{R}^{K_\ell}$. This resembles the numerical approximation of the derivative $\pder{F}{w_k}(\mathbf{w}_0)$, except that $\delta$ is not small as usual. One could compute derivatives efficiently by standard backpropagation instead. It is then possible to iteratively optimize $F$ with respect to $\mathbf{w}$, starting at any $\mathbf{w}_0$.}
\iavr{As an alternative, consider masking-based methods relying on optimization in the input space, like \emph{meaningful perturbations} (MP)~\cite{fong2017interpretable} or \emph{extremal perturbations}~\citep{fong2019understanding}. In general, optimization takes the form
\begin{equation}
S^c(\mathbf{x}) \mathrel{:=} \arg\max_{\mathbf{m} \in \mathcal{M}} f(\mathbf{x} \odot n(\operatorname{up}(\mathbf{m})))_c + \lambda R(\mathbf{m}).
\label{eq:mask}
\end{equation}
Here, a mask $\mathbf{m}$ is directly optimized and does not rely on feature maps, hence the saliency map $S^c(\mathbf{x})$ is not connected to any layer $\ell$. The mask is at the same resolution as the input image or at a lower one; in the latter case, upsampling is still necessary.
In this approach, one indeed computes derivatives by backpropagation and indeed iteratively optimizes $\mathbf{m}$. However, because $\mathbf{m}$ is high-dimensional, there are constraints expressed by $\mathbf{m} \in \mathcal{M}$, \emph{e.g.} $\mathbf{m}$ has a certain norm, and regularizers like $R(\mathbf{m})$, \emph{e.g.} $\mathbf{m}$ is smooth in a certain way. This makes optimization harder or more expensive and introduces more hyperparameters like $\lambda$. One could simply constrain $\mathbf{m}$ to lie in the linear span of $\{A_\ell^k\}_{k=1}^{K_\ell}$ instead, like all CAM-based methods.}
\subsection{Method}
\label{sec:method}
\paragraph{Saliency maps}
As motivated by \autoref{sec:motiv}, we obtain a saliency map as a convex combination of feature maps by optimizing a given objective function with respect to the weights.
In particular, following~\citep{wang2020score}, we use channel weights $w_k \mathrel{:=} \operatorname{softmax}(\mathbf{u})_k$, where $\mathbf{u} \in \mathbb{R}^{K_\ell}$ is a variable.
We then consider saliency map $S_\ell$ in layer $\ell$ as a function of both the input image $\mathbf{x}$ and variable $\mathbf{u}$:
\begin{equation}
S_\ell(\mathbf{x}; \mathbf{u}) \mathrel{:=} \sum_k \operatorname{softmax}(\mathbf{u})_k A^k_\ell.
\label{eq:v-sal}
\end{equation}
Comparing with~\eq{sal}, $h$ is the identity mapping, because feature maps are non-negative and weights are positive.
\paragraph{Optimization}
Now, given a layer $\ell$ and a class of interest $c$, we find the vector $\mathbf{u}^*$ that maximizes the classifier confidence for class $c$, when the input image $\mathbf{x}$ is masked according to saliency map $S_\ell(\mathbf{x}; \mathbf{u}^*)$:
\begin{equation}
\mathbf{u}^* \mathrel{:=} \arg\max_{\mathbf{u}} F^c_\ell(\mathbf{x}; \mathbf{u}),
\label{eq:opt}
\end{equation}
where we define the objective function
\begin{equation}
F^c_\ell(\mathbf{x}; \mathbf{u}) \mathrel{:=} g_c(f(\mathbf{x} \odot n(\operatorname{up}(S_\ell(\mathbf{x}; \mathbf{u}))))).
\label{eq:obj}
\end{equation}
Here, the saliency map $S_\ell(\mathbf{x}; \mathbf{u})$ is adapted to $\mathbf{x}$ exactly as in~\eq{s-cam} in terms of resolution and normalization. For \emph{normalization function} $n$, the default is~\eq{norm}. The \emph{selector function} $g_c$ operates on the logit vector $\mathbf{y}$; the default is to select the logit of class $c$, \emph{i.e.} $g_c(\mathbf{y}) \mathrel{:=} y_c$. Other choices, including the definition of $F^c_\ell$ itself, are investigated in \autoref{sec:ablation} \redred{and in the supplementary material.}
\paragraph{Opti-CAM}
Putting everything together, we define
\begin{equation}
S^c_\ell(\mathbf{x}) \mathrel{:=} S_\ell(\mathbf{x}; \mathbf{u}^*) = S_\ell(\mathbf{x}; \arg\max_{\mathbf{u}} F^c_\ell(\mathbf{x}; \mathbf{u})),
\label{eq:o-sal}
\end{equation}
where $S_\ell$ and $F^c_\ell$ are defined by~\eq{v-sal} and~\eq{obj} respectively. The objective function $F^c_\ell$~\eq{obj} depends on variable $\mathbf{u}$ through $S_\ell$~\eq{v-sal}, where the feature maps $A^k_\ell = f^k_\ell(\mathbf{x})$ are fixed. Then,~\eq{obj} involves masking and a forward pass through the network $f$, which is also fixed.
\autoref{fig:idea} is an abstract illustration of our method, \iavr{called Opti-CAM}, without details like upsampling and normalization~\eq{obj}. Optimization takes place along the highlighted path from variable $\mathbf{u}$ to objective function $F^c_\ell$. The saliency map is real-valued and the entire objective function is differentiable in $\mathbf{u}$. We use Adam optimizer~\citep{kingma2014adam} to solve the optimization problem~\eq{opt}.
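For reference, the optimization loop can be sketched in a few lines of PyTorch, under the same assumptions as the Score-CAM sketch above (a callable \texttt{model} returning logits and the feature maps stacked in a fixed, detached tensor). Implementation details such as the early-stopping criterion are omitted.
\begin{verbatim}
# Sketch of the Opti-CAM optimization, Eqs. (v-sal), (obj), (opt).
import torch
import torch.nn.functional as F

def opti_cam(model, x, feats, cls, iters=100, lr=0.1):
    for p in model.parameters():      # the network stays fixed
        p.requires_grad_(False)
    H, W = x.shape[-2:]
    u = torch.zeros(feats.shape[0], requires_grad=True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(iters):
        w = torch.softmax(u, dim=0)                   # Eq. (v-sal)
        sal = (w[:, None, None] * feats).sum(dim=0)
        sal = F.interpolate(sal[None, None], size=(H, W),
                            mode='bilinear', align_corners=False)
        sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
        loss = -model(x * sal)[0, cls]                # Eq. (obj)
        opt.zero_grad()
        loss.backward()
        opt.step()
    w = torch.softmax(u.detach(), dim=0)
    return (w[:, None, None] * feats).sum(dim=0)      # Eq. (o-sal)
\end{verbatim}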
\paragraph{Discussion}
By maximizing~\eq{obj}, the saliency map focuses on the regions contributing to class $c$, while masked regions contribute less. This way, the influence of background in the average pooling process is reduced.
The saliency map is expressed as a linear combination of feature maps~\eq{v-sal}, with normalized weights. Hence, the saliency map is discouraged from taking up the entire image, both by the $\operatorname{softmax}$ competition~\eq{v-sal} and by the fact that feature maps only respond to particular locations.
\iavr{In case $g_c(\mathbf{y}) \mathrel{:=} y_c$,~\eq{o-sal} takes the form of direct masking~\eq{mask} with $R(\mathbf{m}) = \mathbf{0}$ and
\begin{equation}
\mathcal{M} \mathrel{:=} \{ S_\ell(\mathbf{x}; \mathbf{u}) : \mathbf{u} \in \mathbb{R}^{K_\ell} \}.
\label{eq:mask-m}
\end{equation}
This constraint makes ours a CAM-based method. It dispenses with the need for regularizers, because we only optimize one vector over the feature dimensions. In addition, it does not complicate the optimization process in any way. It is only a different parametrization.}
\iavr{
\section{Average Gain\xspace ($\operatorname{AG}$)}
\redred{Average drop ($\operatorname{AD}$) and average increase ($\operatorname{AI}$)~\cite{chattopadhay2018grad} are well-established classification metrics. They measure the effect on the predicted class probabilities by masking the input image with the saliency map.} Let $p^c_i$ and $o^c_i$ be the predicted probability for class $c$ given as input the $i$-th test image $\mathbf{x}_i$ and its masked version respectively. Masking refers to element-wise multiplication with the saliency map, which is at the same resolution as the original image with values in $[0,1]$. Let $N$ be the number of test images. Class $c$ is taken as the ground truth.
\emph{Average drop} ($\operatorname{AD}$) quantifies how much predictive power, measured as class probability, is lost when we only mask the image; lower is better:
\begin{equation}
\operatorname{AD}(\%) \mathrel{:=} \frac{1}{N} \sum_{i=1}^N \frac{[p^c_i - o^c_i]_+}{p^c_i} \cdot 100.
\label{eq:ad}
\end{equation}
\emph{Average increase} ($\operatorname{AI}$), also known as \emph{increase in confidence}, measures the percentage of images where the masked image yields a higher class probability than the original; higher is better:
\begin{equation}
\operatorname{AI}(\%) \mathrel{:=} \frac{1}{N} \sum_{i=1}^N \mathbbm{1}_{p^c_i < o^c_i} \cdot 100.
\label{eq:ai}
\end{equation}
$\operatorname{AD}$ and $\operatorname{AI}$ are not defined in a symmetric way. $\operatorname{AD}$ measures changes in class probability whereas $\operatorname{AI}$ measures a percentage of images. It is possible that the percentage is high while the actual increase is small. Hence, it is possible that an attribution method improves both. Indeed, \citep{poppi2021revisiting} observes that a trivial method called Fake-CAM outperforms state-of-the-art methods, including Score-CAM, by a large margin. Fake-CAM simply defines a saliency map where the top-left pixel is set to zero and is uniform elsewhere. This questions the purpose of $\operatorname{AD}$ and $\operatorname{AI}$.
Although the authors of~\citep{poppi2021revisiting} make this impressive observation, they use it to motivate the definition of a number of metrics that are orthogonal to the task at hand, \emph{i.e.} measuring the effect of masking on the classifier. By contrast, we address the problem by introducing a new metric to be paired with $\operatorname{AD}$ as a replacement of $\operatorname{AI}$. We define the new metric as follows.
\emph{Average gain\xspace} ($\operatorname{AG}$) quantifies how much predictive power, measured as class probability, is gained when we mask the image; higher is better:
\begin{equation}
\operatorname{AG}(\%) \mathrel{:=} \frac{1}{N} \sum_{i=1}^N \frac{[o^c_i - p^c_i]_+}{1-p^c_i} \cdot 100.
\label{eq:ag}
\end{equation}
This definition is symmetric to the definition of average drop, in the sense that \redred{in absolute value, the numerator in the sum of $\operatorname{AD}, \operatorname{AG}$ is the positive and negative part of $p^c_i - o^c_i$ respectively and the denominator is the maximum value that the numerator can get as a function of $o^c_i$, given that $0 < o^c_i < p^c_i$ and $p^c_i < o^c_i < 1$ respectively.} The two metrics thus compete with each other, in the sense that changing $o^c_i$ to improve one leaves the other unchanged or harms it. As we shall see, an extreme example is Fake-CAM, which yields near-perfect $\operatorname{AD}$ but fails completely on $\operatorname{AG}$.
}
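A minimal NumPy sketch of the three metrics follows, where \texttt{p} and \texttt{o} are arrays of predicted probabilities for the ground-truth class on the original and masked images, respectively.
\begin{verbatim}
# Sketch of AD (ad), AG (ag) and AI (ai); percentages in [0, 100].
import numpy as np

def average_drop(p, o):      # lower is better
    return 100.0 * np.mean(np.maximum(p - o, 0.0) / p)

def average_gain(p, o):      # higher is better
    return 100.0 * np.mean(np.maximum(o - p, 0.0) / (1.0 - p))

def average_increase(p, o):  # higher is better
    return 100.0 * np.mean(p < o)
\end{verbatim}
Note that for a near-identity mask, $o^c_i \approx p^c_i$, giving $\operatorname{AD} \approx 0$ (near-perfect) but also $\operatorname{AG} \approx 0$.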
\section{Related Work}
A large number of works study \emph{explainability}, \emph{interpretability} or \emph{attribution} of machine learning models, especially DNN~\citep{guidotti2018survey, montavon2018methods, samek2021explaining, bodria2021benchmarking, li2021interpretable}. These works can be categorized into \emph{transparency} and \emph{post-hoc interpretability}~\citep{lipton2018mythos, guidotti2018survey}. The former addresses how to design an internally understandable model. Here we are interested in the latter, which treats the studied network as a black box and interprets its inner processing~\citep{ribeiro2016should, lundberg2017unified, fong2017interpretable, elliott2021explaining, selvaraju2017grad, petsiuk2018rise}. Among post-hoc methods, LIME~\citep{ribeiro2016should} and SHAP~\citep{lundberg2017unified} are well-known model-agnostic methods that rate feature importance. More specifically, we are interested in the generation of \emph{saliency maps}. These methods are mostly based on gradients, CAM~\citep{zhou2016learning}, occlusion, or a combination.
\paragraph{Gradient-based methods}
Gradient-based methods~\citep{adebayo2018local,springenberg2014striving,baehrens2010explain} use the gradient of a target class score with respect to the input to measure the effect of different image regions on the prediction. In~\citep{simonyan2013deep}, the gradient is directly treated as a saliency map. Inspired by DeconvNet~\citep{zeiler2014visualizing}, \emph{guided backpropagation}~\citep{springenberg2014striving} improves the explanation by setting negative gradients to zero using ReLU units. Other methods~\citep{shrikumar2017learning, zhang2018top, bastings2020elephant} are inspired by Layer-wise Relevance Propagation (LRP)~\citep{bach2015pixel}. SmoothGrad~\citep{smilkov2017smoothgrad} and \emph{integrated gradients}~\cite{sundararajan2017axiomatic} accumulate gradients into saliency maps, while NormGrad~\citep{rebuffi2020there} attempts to unify gradient-based methods. A different approach is to use adversarial attacks~\citep{elliott2021explaining, jalwana2020attack}. Several of these methods do not satisfy the fundamental property of implementation invariance~\cite{sundararajan2017axiomatic}.
\paragraph{CAM-based methods}
\emph{Class activation maps} (CAM)~\citep{zhou2016learning} is a visualization method that highlights the image regions most relevant to a target class by a linear combination of feature maps. A number of variants use different definitions of weights. Many rely on gradients, including GradCAM~\citep{selvaraju2017grad}, GradCAM++~\citep{chattopadhay2018grad}, XGradCAM~\citep{fu2020axiom} and LayerCAM~\citep{jiang2021layercam}. Gradient-free methods, including Ablation-CAM~\citep{ramaswamy2020ablation}, Score-CAM~\cite{wang2020score} and SS-CAM~\citep{wang2020ss}, rather measure the effect on the target class score of each feature map acting as a mask on the input. We inherit the idea of masking but for linear combinations of feature maps and we iteratively optimize the coefficients by analytical gradient computation. Our method is thus faster when the number of iterations is less than the number of channels.
\paragraph{Occlusion (masking)-based methods}
These methods use a number of candidate masks, measure their effect on the prediction, then combine them in a single saliency map. RISE~\citep{petsiuk2018rise} randomly masks input images and uses the class score as a weight to define a linear combination. \emph{Meaningful perturbations} \citep{fong2017interpretable} and \emph{extremal perturbations}~\citep{fong2019understanding} directly optimize the mask in the image space by using gradients. They require a large number of parameters as well as regularizers, \emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot for smoothness. \emph{Information bottleneck attribution} (IBA)~\citep{schulz2020restricting} optimizes the mask in the feature space as a tensor instead. Score-CAM~\cite{wang2020score} is also an occlusion-based method, using individual feature maps as candidate masks. The same holds for our Opti-CAM, but for candidate masks constrained in the linear span of the feature maps. Compared with~\citep{fong2019understanding,schulz2020restricting}, we have fewer parameters and do not require a regularizer.
\paragraph{Learning-based methods}
While occlusion-based methods compute or optimize a mask for a particular image at inference, learning-based methods use an additional network or branch and they train it on extra data and image-level labels to predict a saliency map given an input image. This includes for example generators \citep{chang2018explaining} or auto-encoders \citep{dabkowski2017real, phang2020investigating, zolna2020classifier}. This approach may be compared with weakly-supervised object detection~\citep{bilen2016weakly}, segmentation~\citep{KoLa16} or instance segmentation~\citep{AhCK19}. IBA~\citep{schulz2020restricting} includes a learning-based approach in the feature space. Apart from requiring extra data, it is not satisfying in the sense that the learned decoder would need to be explained too. Our method does not need any extra data, network, or training.
\paragraph{Evaluation of attribution methods}
Evaluating saliency maps is challenging because no ground truth attributions exist. \emph{Average drop} ($\operatorname{AD}$) and \emph{average increase} ($\operatorname{AI}$), also known as increase in confidence~\cite{chattopadhay2018grad} are well-established metrics. They consider the effect on the predicted class probabilities by masking the input image with the saliency map. There is a fundamental flaw in using $\operatorname{AD}$, $\operatorname{AI}$ as a pair of metrics, which we fix by replacing $\operatorname{AI}$ by a new metric, \emph{average gain} ($\operatorname{AG}$).
\emph{Insertion} (I) and \emph{deletion} (D) sequentially insert or delete pixels by decreasing order of saliency and observe the effect on the prediction. The resulting images are out-of-distribution (OOD)~\cite{gomez2022metrics} and the metrics favor small and compact regions. Localization metrics measure how the saliency maps are aligned with object bounding boxes, which ignores the importance of background context~\cite{shetty2019not, rao2022towards}. We demonstrate that localization and attribution are not well-aligned as tasks.
\section*{Introduction}
Implementation details are provided in \autoref{sec:details}. We provide results on more classification metrics in \autoref{sec:cla-metrics}. In \autoref{sec:loc-metrics}, we define localization metrics and provide corresponding results. We provide results on medical data in \autoref{sec:medical}. We then provide more ablation results in \autoref{sec:more-ablation}, sanity check in \autoref{sec:sanity-check}, and results without input image normalization in \autoref{sec:without-norm}. Finally, we provide additional visualizations in \autoref{sec:more-vis}.
\section{Implementation details}
\label{sec:details}
All input images are resized to $224 \times 224 \times 3$. To optimize the saliency map with Opti-CAM~\eq{opt}, we use the Adam~\citep{kingma2014adam} optimizer with learning rate $0.1$ by default, setting the maximum number of iterations to $100$ and stopping early when the change in loss is less than $10^{-10}$. For VGG16, we generate the saliency map~\eq{v-sal} from the feature maps of the last convolutional layer before max pooling by default, \emph{i.e.} convolutional layer 3 of block 5. For ResNet50, we choose the last convolutional layer by default, \emph{i.e.} convolutional layer 3 of bottleneck 2 of block 4. For ViT and DeiT, we choose the last self-attention block by default, \emph{i.e.} layer normalization of self-attention block 12. Ablations concerning the layer $\ell$ and the convergence of Opti-CAM are included in \autoref{sec:more-ablation}.
\section{Classification metrics}
\label{sec:cla-metrics}
Classification metrics measure the effect on classification performance of masking (element-wise multiplying) the input image by the saliency map. We have used $\operatorname{AD}$, $\operatorname{AG}$ and $\operatorname{AI}$ in the main paper. Here we discuss Insertion/Deletion~\citep{petsiuk2018rise}, providing results and discussing failure cases for Opti-CAM.
\subsection{Insertion/Deletion}
\paragraph{Definition}
Insertion/Deletion~\citep{petsiuk2018rise} are based on the probability $p^{c_p}_i$ for the predicted class $c_p$ as pixels are ``inserted'' or ``deleted'' from image $\mathbf{x}_i$, averaged over the number of pixels and over all images in the test set.
\emph{Deletion} measures the decrease in the probability of class $c_p$ when removing pixels one by one in decreasing order of saliency, where removal is taken as setting the value to zero; lower is better.
\emph{Insertion}, by contrast, measures the increase in the probability of class $c_p$ when adding pixels, again by decreasing order of saliency. In this case, we begin with a version of the image that is distorted by Gaussian blur and then addition is taken as setting the value of the pixel according to the original image. Higher is better.
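A minimal sketch of the deletion curve is given below, assuming a callable \texttt{model} that returns class probabilities for an image batch; insertion is analogous, starting from a blurred copy and restoring pixels by decreasing saliency. The reported score is the area under the curve. Batch handling and preprocessing are simplified for clarity.
\begin{verbatim}
# Sketch of the deletion curve for one image x (1, 3, H, W)
# and a saliency map sal (H, W); `steps` pixels removed per step.
import torch

def deletion_curve(model, x, sal, cls, steps=224):
    order = torch.argsort(sal.flatten(), descending=True)
    xm = x.clone().reshape(1, 3, -1)
    with torch.no_grad():
        probs = [model(x)[0, cls].item()]   # before any deletion
        for i in range(0, order.numel(), steps):
            xm[:, :, order[i:i + steps]] = 0.0
            probs.append(model(xm.reshape_as(x))[0, cls].item())
    return probs   # deletion score: area under this curve
\end{verbatim}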
\paragraph{Results}
The experimental results are shown in \autoref{tab:imagenet_cnn_hihd} for CNNs and \autoref{tab:imagenet-trans-hihd} for transformers. ExPerturbation~\citep{fong2019understanding} is expected to perform best in insertion because its optimization objective is very similar to this evaluation metric, using blurring for masked regions; however, it only performs best on ResNet50. TIBAV~\cite{chefer2021transformer}, which is designed for transformers, outperforms the other methods on DeiT and ViT. According to Insertion/Deletion, Opti-CAM scores lower, but there is no clear winner on either CNNs or transformers.
To further understand the behavior of Opti-CAM, we investigate in \autoref{fig:hihd} examples where Score-CAM succeeds (insertion score greater than $90$ and deletion score less than $10$) and Opti-CAM fails (insertion score less than $70$ and deletion score greater than $15$). Compared with Score-CAM, the saliency maps obtained by Opti-CAM are more spread out and highlight several parts of the object and background context. In most cases, Opti-CAM fails on I/D because it not only finds the object but also attaches importance to the background.
We argue that this is not a failure. As our localization experiment in \autoref{tab:localization} indicates, background is useful in discriminating a class. Often, the network recognizes the background better than the object itself. \redred{For example, a gas pump is likely to be seen with a truck and a hare is often seen on grass. Several parts of the object are highlighted by Opti-CAM for the worm fence, terrier dog, hare, manhole cover. Finally, several instances of spaniel dog are found by Opti-CAM.}
Insertion/Deletion proceed in 224 steps, with a set of 224 pixels being inserted/deleted at each step. If these pixels are all inserted over a single small area, the effect on the classifier is more immediate than when sparsely inserting pixels over multiple areas; the same observation holds for deletion. By contrast, Opti-CAM attempts to find regions that contribute to the classification as a whole. There is no guarantee that those regions are effective when used in isolation.
\begin{table}
\centering
\footnotesize
\setlength{\tabcolsep}{8pt}
\begin{tabular}{lrr rr} \toprule
\mr{2}{\Th{Method}} & \mc{2}{\Th{ResNet50}} & \mc{2}{\Th{VGG16}} \\ \cmidrule{2-5}
& {{$\operatorname{I}\!\uparrow$}} & {{$\operatorname{D}\!\downarrow$}}& {{$\operatorname{I}\!\uparrow$}} & {{$\operatorname{D}\!\downarrow$}} \\ \midrule
Fake-CAM~\citep{poppi2021revisiting}&50.7&28.1&46.1&26.9\\\midrule
Grad-CAM~\citep{selvaraju2017grad} &66.3&14.7&\tb{64.1}&11.6\\
Grad-CAM++~\cite{chattopadhay2018grad} &66.0&14.7&62.9&12.2\\
Score-CAM~\citep{wang2020score} &65.7&16.3&62.5&12.1\\
Ablation-CAM~\citep{ramaswamy2020ablation} &65.9&14.6&63.8&11.4\\
XGrad-CAM~\citep{fu2020axiom} &66.3&14.7&\tb{64.1}&11.7\\
Layer-CAM~\citep{jiang2021layercam}&67.0&\tb{14.2}&58.3&\tb{6.4}\\
ExPerturbation~\citep{fong2019understanding}&\tb{70.7}&15.0&61.1&15.0\\
\rowcolor{cyan!10}
Opti-CAM (ours) &62.0&19.7&59.2&11.0\\
\bottomrule
\end{tabular}
\caption{
I/D: insertion/deletion~\citep{petsiuk2018rise} scores on ImageNet validation set; $\downarrow$ / $\uparrow$: lower / higher is better.
\label{tab:imagenet_cnn_hihd}}
\end{table}
\begin{table}
\centering
\footnotesize
\setlength{\tabcolsep}{8pt}
\begin{tabular}{lrr rr} \toprule
\mr{2}{\Th{Method}} & \mc{2}{\Th{DeiT-B}} & \mc{2}{\Th{ViT-B}} \\ \cmidrule{2-5}
& {{$\operatorname{I}\!\uparrow$}} & {{$\operatorname{D}\!\downarrow$}}& {{$\operatorname{I}\!\uparrow$}} & {{$\operatorname{D}\!\downarrow$}} \\ \midrule
Fake-CAM~\citep{poppi2021revisiting}&57.5&34.2&57.4&33.3\\\midrule
Grad-CAM~\citep{selvaraju2017grad} &61.8&17.5&62.9&19.8\\
Grad-CAM++~\cite{chattopadhay2018grad} &60.5&21.9&56.7&29.3\\
Score-CAM~\citep{wang2020score} &60.6&24.4&\tb{66.5}&15.1\\
XGrad-CAM~\citep{fu2020axiom} &55.2&31.1&55.6&26.5\\
Layer-CAM~\citep{jiang2021layercam} &61.6&21.2&62.9&14.6\\
ExPerturbation~\citep{fong2019understanding}&62.1&27.0&64.4&18.4\\
RawAtt~\citep{dosovitskiy2020image} &56.3&29.3&62.2&17.9\\
Rollout~\citep{abnar2020quantifying} &56.7&32.8&64.8&15.2\\
TIBAV~\cite{chefer2021transformer} &\tb{63.7}&\tb{16.3}&66.1&\tb{14.1}\\
\rowcolor{cyan!10}
Opti-CAM (ours) &59.2&22.8&60.5&22.0\\
\bottomrule
\end{tabular}
\caption{
I/D: insertion/deletion~\citep{petsiuk2018rise} scores on ImageNet validation set; $\downarrow$ / $\uparrow$: lower / higher is better.
\label{tab:imagenet-trans-hihd}}
\end{table}
\input{tex/check_hihd_images.tex}
\section{Localization metrics}
\label{sec:loc-metrics}
Several works measure the localization ability of saliency maps, using metrics from the \emph{weakly-supervised object localization} (WSOL) task. While we show in the main paper that localization of the object and classifier interpretability are not well aligned as tasks, we still provide localization results here. We use the \emph{official metric} (OM), \emph{localization error} (LE), \emph{pixel-wise $F_1$ score} (F1), \emph{box accuracy} (BA)~\citep{choe2020evaluating}, \emph{standard pointing game} (SP)~\cite{zhang2018top}, \emph{energy pointing game} (EP)~\citep{wang2020score} and \emph{saliency metric} (SM)~\citep{dabkowski2017real} on the ILSVRC2014\footnote{\url{https://www.image-net.org/challenges/LSVRC/2014/index\#}} dataset. The goal of these metrics is to compare the saliency maps with bounding boxes around the object of interest. For simplicity, we define these metrics for a single image; the reported results are averaged over all images of the test set.
\subsection{Definitions}
We are given the saliency map $S^c$ obtained from test image $\mathbf{x}$ for ground truth class $c$. We denote by $S^c_{\mathbf{p}}$ its value at pixel $\mathbf{p}$. We binarize the saliency map by thresholding at its average value and take the bounding box of the largest connected component of the resulting mask as the predicted bounding box $B_p$, represented as a set of pixels. We compare this box either against the set of ground truth bounding boxes $\mathcal{B}$, which typically contains 1 or 2 boxes of the same class $c$, or against their union $U = \cup \mathcal{B}$, again represented as a set of pixels. We also compare the predicted class label $c_p$ with the ground truth label $c$. All metrics take values in $[0,1]$ and are expressed as percentages, except SM~\eq{sm}, which is unbounded.
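A sketch of this binarization step, assuming \texttt{scipy}, is given below; it returns the predicted box $B_p$ in $(x_1, y_1, x_2, y_2)$ form, which the metric sketches below build on.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def predicted_box(sal):
    mask = sal > sal.mean()                # threshold at the average value
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    k = 1 + np.argmax(ndimage.sum(mask, labels, range(1, n + 1)))
    ys, xs = np.where(labels == k)         # largest connected component
    return xs.min(), ys.min(), xs.max(), ys.max()
\end{verbatim}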
\paragraph{Official Metric (OM)}
measures the maximum overlap of the predicted bounding box with any ground truth bounding box, requiring that the predicted class label is correct:
\begin{equation}
\operatorname{OM} \mathrel{:=} 1 - \paren{\max_{B \in \mathcal{B}} \operatorname{IoU}(B, B_p)} \mathbbm{1}_{c_p = c},
\label{eq:om}
\end{equation}
where $\operatorname{IoU}$ is intersection over union.
\paragraph{Localization Error (LE)}
is similar but ignores the predicted class label:
\begin{equation}
\operatorname{LE} \mathrel{:=} 1 - \max_{B \in \mathcal{B}} \operatorname{IoU}(B, B_p).
\label{eq:le}
\end{equation}
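In code, both metrics reduce to one $\operatorname{IoU}$ computation per ground-truth box; a sketch using the box convention above:
\begin{verbatim}
def iou(a, b):
    # boxes as (x1, y1, x2, y2); standard intersection over union
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1 + 1) * max(0, iy2 - iy1 + 1)
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / float(area(a) + area(b) - inter)

def om_le(box_pred, gt_boxes, class_ok):
    best = max(iou(b, box_pred) for b in gt_boxes)
    return 1.0 - best * float(class_ok), 1.0 - best   # (OM, LE)
\end{verbatim}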
\paragraph{Pixel-wise $F_1$ score (F1)}
is defined as $F_1 = 2 \frac{P R}{P + R}$, where \emph{precision} $P$ is the fraction of mass of the saliency map that is within the ground truth union
\begin{equation}
P \mathrel{:=} \frac{\sum_{\mathbf{p} \in U} S^c_{\mathbf{p}}}{\sum_{\mathbf{p}} S^c_{\mathbf{p}}}
\label{eq:prec}
\end{equation}
and \emph{recall} $R$ is the fraction of the ground truth union that is covered by the saliency map
\begin{equation}
R \mathrel{:=} \frac{\sum_{\mathbf{p} \in U} S^c_{\mathbf{p}}}{\card{U}}.
\label{eq:rec}
\end{equation}
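Both quantities follow directly from the definitions; a sketch, where \texttt{gt\_union} is a boolean mask of $U$ and \texttt{sal} is assumed non-negative:
\begin{verbatim}
import numpy as np

def pixel_f1(sal, gt_union):
    p = sal[gt_union].sum() / (sal.sum() + 1e-12)        # precision
    r = sal[gt_union].sum() / (gt_union.sum() + 1e-12)   # recall
    return 2 * p * r / (p + r + 1e-12)
\end{verbatim}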
\paragraph{Box Accuracy (BA)~\citep{choe2020evaluating}}
Given threshold values $\eta$ and $\delta$, we find the bounding box $B^\eta_p$ of the largest connected component of the binary mask $\set{\mathbf{p}: S_{\mathbf{p}} > \eta}$ and require that it overlaps by $\delta$ with at least one ground truth box:
\begin{equation}
\operatorname{BoxAcc}(\eta, \delta) \mathrel{:=} \max_{B \in \mathcal{B}} \mathbbm{1}_{\operatorname{IoU}(B^\eta_p, B) \ge \delta}.
\label{eq:ba}
\end{equation}
After averaging over the test images, we take the maximum of this measure over a set of values $\eta$ and then the average over a set of values $\delta$.
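A per-image sketch follows; it reuses \texttt{iou} from the OM/LE sketch above and returns the indicator for each $(\eta, \delta)$ pair, leaving the averaging over images, the maximum over $\eta$ and the mean over $\delta$ to the caller.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def box_acc_scores(sal, gt_boxes, etas, deltas):
    out = np.zeros((len(etas), len(deltas)))
    for i, eta in enumerate(etas):
        mask = sal > eta
        labels, n = ndimage.label(mask)
        if n == 0:
            continue
        k = 1 + np.argmax(ndimage.sum(mask, labels, range(1, n + 1)))
        ys, xs = np.where(labels == k)
        box = (xs.min(), ys.min(), xs.max(), ys.max())
        best = max(iou(b, box) for b in gt_boxes)
        out[i] = [float(best >= d) for d in deltas]
    return out
\end{verbatim}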
\paragraph{Standard Pointing game (SP)~\cite{zhang2018top}}
We find the pixel $\mathbf{p}^* \mathrel{:=} \arg\max_{\mathbf{p}} S^c_{\mathbf{p}}$ having the maximum saliency value and require that it lands in any of the ground truth bounding boxes:
\begin{equation}
\operatorname{SP} \mathrel{:=} \mathbbm{1}_{\mathbf{p}^* \in U}.
\label{eq:spg}
\end{equation}
\paragraph{Energy Pointing game (EP)~\citep{wang2020score}}
is equivalent to precision~\eq{prec}.
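Both pointing games amount to a few lines of code; a sketch, with boxes in $(x_1, y_1, x_2, y_2)$ form and \texttt{gt\_union} a boolean mask of $U$:
\begin{verbatim}
import numpy as np

def pointing_games(sal, gt_boxes, gt_union):
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    sp = float(any(b[0] <= x <= b[2] and b[1] <= y <= b[3]
                   for b in gt_boxes))                  # SP
    ep = sal[gt_union].sum() / (sal.sum() + 1e-12)      # EP = precision
    return sp, ep
\end{verbatim}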
\paragraph{Saliency Metric (SM)~\citep{dabkowski2017real}}
penalizes the size of the predicted bounding box $B_p$ relative to the image and the cross-entropy loss:
\begin{equation}
\operatorname{SM} \mathrel{:=} \log \max\paren{ 0.05, \frac{\card{B_p}}{hw} } - \log p^c,
\label{eq:sm}
\end{equation}
where $h \times w$ is the input image resolution and $p^c$ is the predicted probability for ground truth class label $c$.
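A direct transcription of~\eq{sm}:
\begin{verbatim}
import numpy as np

def saliency_metric(box_pred, p_c, h, w):
    # box_pred: (x1, y1, x2, y2); p_c: predicted probability of class c
    area = (box_pred[2] - box_pred[0] + 1) * (box_pred[3] - box_pred[1] + 1)
    return np.log(max(0.05, area / float(h * w))) - np.log(p_c)
\end{verbatim}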
\begin{table}[ht]
\centering
\footnotesize
\setlength{\tabcolsep}{3pt}
\begin{tabular}{lccc|cccc} \toprule
\Th{method} & {OM$\downarrow$} & {LE$\downarrow$} & {F1$\uparrow$}&{BA$\uparrow$}& {SP$\uparrow$} & {EP$\uparrow$} & {SM$\downarrow$} \\ \midrule
\mc{8}{\Th{ResNet50}} \\ \midrule
Fake-CAM~\citep{poppi2021revisiting} &63.6&54.0&57.7&47.9&99.8&28.5&0.98\\ \midrule
Grad-CAM~\citep{selvaraju2017grad} &72.9&65.8&49.8&\tb{56.2}&69.8&33.3&1.30 \\
Grad-CAM++~\cite{chattopadhay2018grad} &73.1&66.1&\tb{50.4}&\tb{56.2}&69.9&33.1&1.29 \\
Score-CAM~\citep{wang2020score} &\tb{72.2}&64.9&49.6&54.5&68.7&32.4&\tb{1.25} \\
Ablation-CAM~\citep{ramaswamy2020ablation} &72.8&65.7&50.2&56.1&69.9&33.1&1.26 \\
XGrad-CAM~\citep{fu2020axiom} &72.9&65.8&49.8&\tb{56.2}&69.8&33.3&1.30 \\
Layer-CAM~\citep{jiang2021layercam} &73.1&66.0&50.1&55.5&\tb{70.0}&33.0&1.29\\
ExPerturbation~\citep{fong2019understanding} &73.6&66.6&37.5&44.2&64.8&\tb{38.2}&1.59\\
\rowcolor{cyan!10}
Opti-CAM (ours) &\tb{72.2}&\tb{64.8}&47.3&49.2&59.4&30.5&1.34 \\ \midrule
\mc{8}{\Th{VGG16}} \\ \midrule
Fake-CAM~\citep{poppi2021revisiting} &64.7&54.0&57.7&47.9&99.8&28.5&1.07 \\ \midrule
Grad-CAM~\citep{selvaraju2017grad} &71.1&62.3&42.0&54.2&64.8&32.0&1.39 \\
Grad-CAM++~\cite{chattopadhay2018grad} &70.8&61.9&44.3&55.2&66.2&32.3&1.38 \\
Score-CAM~\citep{wang2020score} &71.2&62.5&\tb{45.3}&\tb{58.5}&\tb{68.2}&33.4&1.40 \\
Ablation-CAM~\citep{ramaswamy2020ablation} &71.3&62.6&43.2&56.2&65.7&32.7&1.39 \\
XGrad-CAM~\citep{fu2020axiom} &70.8&62.0&41.9&53.5&64.4&31.6&1.41 \\
Layer-CAM~\citep{jiang2021layercam} &70.5&61.5&28.0&54.7&65.0&32.4&1.45\\
ExPerturbation~\citep{fong2019understanding} &74.1&66.4&37.8&43.3&62.7&\tb{36.1}&1.74\\
\rowcolor{cyan!10}
Opti-CAM (ours) &\tb{69.1}&\tb{59.9}&44.1&51.2&61.4&30.7&\tb{1.34} \\ \bottomrule
\end{tabular}
\caption{\emph{Localization metrics} on ImageNet validation set. OM: \emph{official metric}; LE: \emph{localization error}; F1: \emph{pixel-wise $F_1$ score}; BA: box accuracy; SP: standard pointing game; EP: energy pointing game; SM: \emph{saliency metric}. $\downarrow$ / $\uparrow$: lower / higher is better. Bold: best, excluding Fake-CAM.}
\label{tab:imagenet-loc}
\end{table}
\input{tex/table_deit}
\subsection{Results}
We evaluate the localization ability of saliency maps obtained by our Opti-CAM and we compare with other attribution methods quantitatively. \autoref{tab:imagenet-loc} and \autoref{tab:ablate-loc-sup-deit} report localization metrics on ImageNet. We observe different behavior in different metrics. In particular, Opti-CAM on ResNet and VGG performs best on OM and LE but poorly on the remaining metrics. On transformers, Opti-CAM performs best on OM, LE, F1, and SM.
Metrics where Opti-CAM does not perform well are mostly the ones that penalize saliency maps that are more spread out. For example, SP and EP penalize saliency outside the ground truth bounding box of an object. This is not necessarily a weakness of Opti-CAM, because rather than weakly supervised object localization, the objective here is to explain how the classifier works.
\section{Medical data}
\label{sec:medical}
Medical image recognition is a high-stakes task that crucially needs interpretable models. We thus evaluate our method on two standard medical image classification datasets.
\subsection{Datasets}
\paragraph{Chest X-ray}
\citep{kermany2018labeled} aims at distinguishing chest images of patients with pneumonia from those of healthy subjects, with $5,216$ training images, $16$ for validation and $624$ for testing. Images are resized to $224 \times 224 \times 3$ to adapt to the pretrained models.
\paragraph{Kvasir}
\citep{pogorelov2017kvasir} contains $8$ classes and aims at recognizing anatomical landmarks, pathological findings and endoscopic procedures inside the gastrointestinal tract. The $8,000$ images are split into $6,000$ images for training, $1,000$ for validation and $1,000$ for testing. Images are resized as for the other datasets.
\subsection{Network fine-tuning}
To train our models on the medical data, we first replace the last fully-connected layer according to the number of classes in each dataset and train only this layer, keeping the backbone frozen. On Chest X-ray, we use learning rate $10^{-3}$ for both networks. On Kvasir, we use learning rate $10^{-4}$ for ResNet50 and $5\times10^{-3}$ for VGG16. We then fine-tune the entire network with learning rate $10^{-5}$ for 50 epochs, using SGD with momentum 0.9 for both networks on both datasets. On Chest X-ray, we obtain accuracies of $83.2\%$ for VGG16 and $82.0\%$ for ResNet50; on Kvasir, $89.5\%$ for VGG16 and $89.8\%$ for ResNet50.
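A schematic PyTorch sketch of this two-stage procedure is given below for ResNet50; the data loader and the number of stage-1 epochs (here 10) are assumptions, not quoted from our setup.
\begin{verbatim}
import torch
import torchvision

def run_epoch(net, loader, opt):
    ce = torch.nn.CrossEntropyLoss()
    net.train()
    for images, labels in loader:
        opt.zero_grad()
        ce(net(images), labels).backward()
        opt.step()

def finetune(loader, num_classes, head_lr, head_epochs=10):
    net = torchvision.models.resnet50(pretrained=True)
    net.fc = torch.nn.Linear(net.fc.in_features, num_classes)
    for p in net.parameters():          # stage 1: backbone frozen,
        p.requires_grad = False         # train only the new classifier
    for p in net.fc.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(net.fc.parameters(), lr=head_lr, momentum=0.9)
    for _ in range(head_epochs):
        run_epoch(net, loader, opt)
    for p in net.parameters():          # stage 2: fine-tune everything
        p.requires_grad = True
    opt = torch.optim.SGD(net.parameters(), lr=1e-5, momentum=0.9)
    for _ in range(50):
        run_epoch(net, loader, opt)
    return net
\end{verbatim}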
\begin{table}
\centering
\footnotesize
\setlength{\tabcolsep}{3pt}
\begin{tabular}{lrrr|rrr} \toprule
\mr{2}{\Th{Method}} & \mc{3}{\Th{ResNet50}} & \mc{3}{\Th{VGG16}} \\
\cmidrule{2-7}
& {$\operatorname{AD}\!\downarrow$} & {$\operatorname{AG}\!\uparrow$} & {$\operatorname{AI}\!\uparrow$} & {$\operatorname{AD}\!\downarrow$} & {$\operatorname{AG}\!\uparrow$} & {$\operatorname{AI}\!\uparrow$} \\ \midrule
\mc{7}{\Th{Chest X-ray}} \\ \midrule
Fake-CAM~\citep{poppi2021revisiting} &0.1&0.9&49.7&0.1&0.4&29.8\\\midrule
Grad-CAM~\citep{selvaraju2017grad} &20.4&29.7&48.7&36.8&39.8&42.3\\
Grad-CAM++~\cite{chattopadhay2018grad} &24.7&24.1&41.2&36.9&43.4&45.8\\
Score-CAM~\citep{wang2020score} &21.6&27.7&44.2&35.3&47.4&48.9\\
Ablation-CAM~\citep{ramaswamy2020ablation} &26.2&27.9&42.9&36.9&46.9&47.8\\
XGrad-CAM~\citep{fu2020axiom} &20.4&29.7&48.7&34.7&47.3&50.2\\
Layer-CAM~\citep{jiang2021layercam} &24.5&23.4&39.1&36.6&45.9&47.6\\
ExPerturbation~\citep{fong2019understanding}&21.4&5.5&17.9&29.7&21.8&28.7\\
\rowcolor{cyan!10}
Opti-CAM (ours) &\tb{0.1}&\tb{91.2}&\tb{98.4}&\tb{0.0}&\tb{85.9}&\tb{86.2}\\
\midrule
\mc{7}{\Th{Kvasir}} \\ \midrule
Fake-CAM~\citep{poppi2021revisiting} &0.1&0.4&48.3&0.0&0.3&45.0\\\midrule
Grad-CAM~\citep{selvaraju2017grad} &10.0&23.2&39.8&33.8&6.3&14.6\\
Grad-CAM++~\cite{chattopadhay2018grad} &11.2&18.7&32.9&20.7&9.3&20.4\\
Score-CAM~\citep{wang2020score} &9.1&26.7&40.8&8.4&24.0&39.4\\
Ablation-CAM~\citep{ramaswamy2020ablation} &10.7&21.6&35.4&10.6&20.9&36.9\\
XGrad-CAM~\citep{fu2020axiom} &10.0&23.2&39.8&12.1&21.6&35.2\\
Layer-CAM~\citep{jiang2021layercam} &11.7&18.2&32.5&12.9&17.1&30.8\\
ExPerturbation~\citep{fong2019understanding}&48.4&13.8&21.0&34.8&19.0&27.7\\
\rowcolor{cyan!10}
Opti-CAM (ours) &\tb{0.2}&\tb{91.1}&\tb{99.0}&\tb{0.0}&\tb{93.5}&\tb{98.1}\\
\bottomrule
\end{tabular}
\caption{\emph{Classification metrics} on Chest X-ray and Kvasir datasets. $\operatorname{AD}$/$\operatorname{AI}$: average drop/increase~\citep{chattopadhay2018grad}; $\operatorname{AG}$: average gain (ours); $\downarrow$ / $\uparrow$: lower / higher is better; Bold: best, excluding Fake-CAM.}
\label{tab:xray-n-kvasir}
\end{table}
\subsection{Results}
\autoref{tab:xray-n-kvasir} reports metrics on Chest X-ray and Kvasir using \Th{ResNet50} and \Th{VGG16} networks. The conclusions remain the same as for ImageNet. Moreover, $\operatorname{AD}$ and $\operatorname{AI}$ are near-perfect in most cases and $\operatorname{AG}$ is also extremely high. Additional visualizations are presented in \autoref{sec:more-vis}.
\section{More ablations}
\label{sec:more-ablation}
\subsection{Selectivity}
We investigate the effect of the selectivity of saliency maps on classification performance. In particular, before evaluation, we raise saliency maps element-wise to an exponent $\alpha$ that takes values in $\{0.01,0.05,0.1,0.5,1,1.5,2,3,5,10\}$. When $\alpha$ is small, the saliency maps become more uniform, so that more information about the original image is revealed to the network. Conversely, when $\alpha$ is large, the saliency maps become more selective, so that the network sees fewer parts of the input. The order of pixels is maintained.
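The transformation itself is a one-liner on a range-normalized map; a sketch:
\begin{verbatim}
import numpy as np

def sharpen(sal, alpha):
    # alpha < 1 flattens the map, alpha > 1 makes it more selective;
    # the ordering of pixels is unchanged
    return np.power(np.clip(sal, 0.0, 1.0), alpha)

alphas = [0.01, 0.05, 0.1, 0.5, 1, 1.5, 2, 3, 5, 10]
\end{verbatim}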
\begin{figure*}[htp!]
\tiny
\centering
\setlength{\tabcolsep}{3pt}
\footnotesize
\begin{tabular}{ccc}
\centering
\extfig{AD}{
\begin{tikzpicture}
\begin{axis}[
height=4cm,
width=5cm,
ylabel={AD$\downarrow$},
xlabel={$\alpha$},
legend pos=outer north east,
]
\addplot[mark=*,blue] table{fig/eval/adaiag/maskin_AblationCAM_ad.txt}; \leg{Ablation-CAM}
\addplot[mark=*,red] table{fig/eval/adaiag/maskin_GradCAM_ad.txt}; \leg{Grad-CAM}
\addplot[mark=*,green] table{fig/eval/adaiag/maskin_GradCAMPlusPlus_ad.txt}; \leg{Grad-CAM++}
\addplot[mark=*,black] table{fig/eval/adaiag/maskin_XGradCAM_ad.txt}; \leg{XGrad-CAM}
\addplot[mark=*,brown] table{fig/eval/adaiag/maskin_ScoreCAM_ad.txt}; \leg{Score-CAM}
\addplot[mark=*,orange] table{fig/eval/adaiag/maskin_versionP0_ad.txt}; \leg{Opti-CAM}
\legend{};
\end{axis}
\end{tikzpicture}
}
&
\extfig{AG}{
\begin{tikzpicture}
\begin{axis}[
height=4cm,
width=5cm,
ylabel={AG$\uparrow$},
xlabel={$\alpha$},
legend pos=outer north east,
]
\addplot[mark=*,blue] table{fig/eval/adaiag/maskin_AblationCAM_ag.txt}; \leg{Ablation-CAM}
\addplot[mark=*,red] table{fig/eval/adaiag/maskin_GradCAM_ag.txt}; \leg{Grad-CAM}
\addplot[mark=*,green] table{fig/eval/adaiag/maskin_GradCAMPlusPlus_ag.txt}; \leg{Grad-CAM++}
\addplot[mark=*,black] table{fig/eval/adaiag/maskin_XGradCAM_ag.txt}; \leg{XGrad-CAM}
\addplot[mark=*,brown] table{fig/eval/adaiag/maskin_ScoreCAM_ag.txt}; \leg{Score-CAM}
\addplot[mark=*,orange] table{fig/eval/adaiag/maskin_versionP0_ag.txt}; \leg{Opti-CAM}
\legend{};
\end{axis}
\end{tikzpicture}
}
&
\extfig{AI}{
\begin{tikzpicture}
\begin{axis}[
height=4cm,
width=5cm,
ylabel={AI$\uparrow$},
xlabel={$\alpha$},
legend pos=outer north east,
]
\addplot[mark=*,blue] table{fig/eval/adaiag/maskin_AblationCAM_ai.txt}; \leg{Ablation-CAM}
\addplot[mark=*,red] table{fig/eval/adaiag/maskin_GradCAM_ai.txt}; \leg{Grad-CAM}
\addplot[mark=*,green] table{fig/eval/adaiag/maskin_GradCAMPlusPlus_ai.txt}; \leg{Grad-CAM++}
\addplot[mark=*,black] table{fig/eval/adaiag/maskin_XGradCAM_ai.txt}; \leg{XGrad-CAM}
\addplot[mark=*,brown] table{fig/eval/adaiag/maskin_ScoreCAM_ai.txt}; \leg{Score-CAM}
\addplot[mark=*,orange] table{fig/eval/adaiag/maskin_versionP0_ai.txt}; \leg{Opti-CAM}
\end{axis}
\end{tikzpicture}
}
\end{tabular}
\caption{Effect of \emph{selectivity} (raising element-wise to exponent $\alpha$) of saliency maps on classification performance. $\operatorname{AD}$/$\operatorname{AI}$: average drop/increase~\citep{chattopadhay2018grad}; $\operatorname{AG}$: average gain (ours); $\downarrow$ / $\uparrow$: lower / higher is better.}
\label{fig:aiadag-alpha}
\end{figure*}
Results in terms of $\operatorname{AD}, \operatorname{AG}, \operatorname{AI}$ are shown in \autoref{fig:aiadag-alpha}, averaged over $1,000$ ImageNet images. We observe that $\operatorname{AD}$ stays near zero for Opti-CAM for $\alpha < 2$, while it increases linearly with $\alpha$ for the other methods. The $\operatorname{AG}$ and $\operatorname{AI}$ of Opti-CAM have a strong peak at $\alpha = 1$, \emph{i.e}\onedot for the original saliency maps. The other methods are less sensitive and their $\operatorname{AI}$ performance is not optimal at $\alpha = 1$.
\subsection{Opti-CAM components}
\paragraph{Objective function}
We consider more alternative definitions of the objective function $F^c_\ell$, taking into account not only the regions inside the saliency maps (In) but also their complement, outside (Out). In particular, relative to Mask\xspace, we define IOMask\xspace as
\begin{equation}
F^c_\ell(\mathbf{x}; \mathbf{u}) \mathrel{:=} g_c(f(\mathbf{x} \odot \mathbf{s})) - g_c(f(\mathbf{x} \odot (1-\mathbf{s}))),
\label{eq:mi-dref}
\end{equation}
where $\mathbf{s} \mathrel{:=} n(\operatorname{up}(S_\ell(\mathbf{x}; \mathbf{u})))$ for brevity. Similarly, relative to Diff\xspace, we define IODiff\xspace as
\begin{equation}
\begin{split}
F^c_\ell(\mathbf{x}; \mathbf{u}) \mathrel{:=}
- \abs{g_c(f(\mathbf{x})) - g_c(f(\mathbf{x} \odot \mathbf{s}))} \\
+ \abs{g_c(f(\mathbf{x})) - g_c(f(\mathbf{x} \odot (1-\mathbf{s})))}.
\end{split}
\label{eq:mi-ref}
\end{equation}
According to \autoref{tab:ablate-loss}, IOMask\xspace performs well on AD and AI but worse on AG, while IODiff\xspace is worse on all metrics. Therefore, including the complement of the saliency map is not beneficial.
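For reference, the four objective choices can be written compactly as below, where \texttt{s} is the normalized, upsampled saliency map and the selector $g_c$ is taken, as an assumption for the sketch, to be the identity over the class-$c$ logit:
\begin{verbatim}
import torch

def objectives(model, x, s, c):
    logit = lambda img: model(img)[:, c]
    full, inside, outside = logit(x), logit(x * s), logit(x * (1 - s))
    return {
        'Mask':   inside,                                   # eq. (obj)
        'Diff':   -(full - inside).abs(),                   # eq. (ref)
        'IOMask': inside - outside,                         # eq. (mi-dref)
        'IODiff': -(full - inside).abs()
                  + (full - outside).abs(),                 # eq. (mi-ref)
    }
\end{verbatim}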
\begin{table}[t]
\centering
\footnotesize
\setlength{\tabcolsep}{1pt}
\begin{tabular}{lcrrrrr} \toprule
{\Th{Method}} & {$F^c_\ell$}& {$\operatorname{AD}\!\downarrow$}& {$\operatorname{AG}\!\uparrow$} & {$\operatorname{AI}\!\uparrow$} \\ \midrule
Fake-CAM~\citep{poppi2021revisiting} & & 0.5 & 0.7 & 42.1 \\ \midrule
Grad-CAM~\citep{selvaraju2017grad} & & 15.0 & 15.3 & 40.4 \\
Grad-CAM++~\cite{chattopadhay2018grad} & & 16.5 & 10.6 & 35.2 \\
Score-CAM~\citep{wang2020score} & & 12.5 & 16.1 & 42.6 \\
Ablation-CAM~\citep{ramaswamy2020ablation} & & 15.1 & 13.5 & 39.9 \\
XGrad-CAM~\citep{fu2020axiom} & & 14.3 & 15.1 & 42.1 \\
Layer-CAM~\citep{jiang2021layercam} & & 49.2 & 2.7 & 12.7 \\
ExPerturbation~\citep{fong2019understanding} & & 43.8 & 7.1 & 18.9 \\
\midrule
\mr{4}{Opti-CAM}
& Mask\xspace~\eq{obj} &1.4 & \tb{66.3} &92.5\\
& Diff\xspace~\eq{ref} &7.1&18.5&54.9 \\
& IOMask\xspace~\eq{mi-dref} &\tb{0.2}&5.5&\tb{99.7}\\
& IODiff\xspace~\eq{mi-ref} &25.9&7.6&42.6\\
\bottomrule
\end{tabular}
\caption{\emph{Ablation study on objective function} using VGG16 on 1000 images of ImageNet validation set.
Choices for objective function $F^c_\ell$: Mask\xspace:~\eq{obj}; Diff\xspace:~\eq{ref}; IOMask\xspace:~\eq{mi-dref}; IODiff\xspace:~\eq{mi-ref}.
Choice for normalization function $n$: Range~\eq{n-rng}. Iterations: 50.
$\operatorname{AD}$/$\operatorname{AI}$: average drop/increase~\citep{chattopadhay2018grad}; $\operatorname{AG}$: average gain (ours); $\downarrow$ / $\uparrow$: lower / higher is better.}
\label{tab:ablate-loss}
\end{table}
\begin{table}[ht!]
\footnotesize
\centering
\setlength{\tabcolsep}{6pt}
\begin{tabular}{crrr} \toprule
{\Th{Layer}} & {$\operatorname{AD}\!\downarrow$}& {$\operatorname{AG}\!\uparrow$} & {$\operatorname{AI}\!\uparrow$}\\ \midrule
42 &1.4&66.0&92.5\\
36 &1.7&66.1&90.3\\
32 &2.8&61.3&81.6\\
29 &1.6&78.0&93.9\\
26 &1.7&80.1&93.7\\
22 &3.3&68.8&84.8\\
19 &2.9&67.3&84.9\\
16 &2.3&72.4&89.1\\
12 &4.1&61.9&82.4\\
9 &4.3&44.2&71.9\\
6 &13.5&23.5&50.2\\ \bottomrule
\end{tabular}
\caption{\emph{Layer ablation} on $1,000$ images from ImageNet validation set, using various layers of VGG16. The last convolutional layer before max pooling is chosen as our default layer (layer 42). $\operatorname{AD}$/$\operatorname{AI}$: average drop/increase~\citep{chattopadhay2018grad}; $\operatorname{AG}$: average gain (ours); $\downarrow$ / $\uparrow$: lower / higher is better.}
\label{tab:layer}
\end{table}
\paragraph{Layers}
\autoref{tab:layer} shows how the performance of Opti-CAM, in terms of AD/AI/AG, depends on the layer $\ell$ of the VGG16 network used to compute the saliency map $S^c_\ell$~\eq{v-sal}. We can see that layers 26, 29, and 42 are all competitive. We choose the last convolutional layer (42) to be compatible with the other CAM methods \citep{zhou2016learning,selvaraju2017grad,chattopadhay2018grad,wang2020score}.
\begin{figure}[hpt]
\centering
\begin{tabular}{c}
\extfig{ab-ad}{
\begin{tikzpicture}
\begin{axis}[
yticklabel style={
/pgf/number format/fixed,
/pgf/number format/precision=5
},
scaled y ticks=false,
height=4.8cm,
width=7.5cm,
xlabel={Iterations},
ylabel={AD},
]
\addplot[mark=*,blue] table[x index=0, y index=1]{\plotAD};\leg{$\eta=0.01$}
\addplot[mark=*,orange] table[x index=0, y index=2]{\plotAD};\leg{$\eta=0.03$}
\addplot[mark=*,black] table[x index=0, y index=3]{\plotAD};\leg{$\eta=0.05$}
\addplot[mark=*,olive] table[x index=0, y index=4]{\plotAD};\leg{$\eta=0.08$}
\addplot[mark=*,green] table[x index=0, y index=5]{\plotAD};\leg{$\eta=0.1$}
\end{axis}
\end{tikzpicture}
}\\
\extfig{ab-ag}{
\begin{tikzpicture}
\begin{axis}[
yticklabel style={
/pgf/number format/fixed,
/pgf/number format/precision=5
},
scaled y ticks=false,
height=4.8cm,
width=7.5cm,
xlabel={Iterations},
ylabel={AG},
legend pos=south east,
]
\addplot[mark=*,blue] table[x index=0, y index=1]{\plotAG};\leg{$\eta=0.01$}
\addplot[mark=*,orange] table[x index=0, y index=2]{\plotAG};\leg{$\eta=0.03$}
\addplot[mark=*,black] table[x index=0, y index=3]{\plotAG};\leg{$\eta=0.05$}
\addplot[mark=*,olive] table[x index=0, y index=4]{\plotAG};\leg{$\eta=0.08$}
\addplot[mark=*,green] table[x index=0, y index=5]{\plotAG};\leg{$\eta=0.1$}
\end{axis}
\end{tikzpicture}
}\\
\extfig{ab-ai}{
\begin{tikzpicture}
\begin{axis}[
height=4.8cm,
width=7.5cm,
xlabel={Iterations},
ylabel={AI},
legend pos=south east,
]
\addplot[mark=*,blue] table[x index=0, y index=1]{\plotAI};\leg{$\eta=0.01$}
\addplot[mark=*,orange] table[x index=0, y index=2]{\plotAI};\leg{$\eta=0.03$}
\addplot[mark=*,black] table[x index=0, y index=3]{\plotAI};\leg{$\eta=0.05$}
\addplot[mark=*,olive] table[x index=0, y index=4]{\plotAI};\leg{$\eta=0.08$}
\addplot[mark=*,green] table[x index=0, y index=5]{\plotAI};\leg{$\eta=0.1$}
\end{axis}
\end{tikzpicture}
}\\
\end{tabular}
\caption{
Classification metrics \vs number of iterations for different learning rates, using VGG16 on 1000 images of ImageNet. $\operatorname{AD}$/$\operatorname{AI}$: average drop/increase~\citep{chattopadhay2018grad}; $\operatorname{AG}$: average gain (ours); $\downarrow$ / $\uparrow$: lower / higher is better.}
\label{fig:lr-epochs}
\end{figure}
\paragraph{Convergence}
Finally, \autoref{fig:lr-epochs} shows the classification performance of Opti-CAM \vs number of iterations for different learning rates. Optimal performance is obtained at 100 iterations with learning rate $\eta = 0.1$; we use these settings by default. We note that using 50 iterations doubles the speed at the cost of a 6\% drop of $\operatorname{AG}$ and a very small change in $\operatorname{AI}$ and $\operatorname{AD}$.
\begin{figure}[htpb]
\centering
\ref*{sanity_leg}
\extfig{scanity-check}{
\begin{tikzpicture}
\begin{axis}[
height=4cm,
width=7cm,
xlabel={Layer},
ylabel={Similarity},
legend columns=2,
legend to name=sanity_leg,
legend style={font=\footnotesize}
]
\addplot[mark=*,red] coordinates{(0,1.0)(1,0.222)(2,0.038)(3,0.014)(4,0.016)(5,0.017)}; \leg{Rank Correl (No Abs)}
\addplot[mark=*,blue] coordinates{(0,1.0)(1,0.079)(2,0.023)(3,0.009)(4,0.000)(5,0.007)}; \leg{Rank Correl (Abs)}
\addplot[mark=*,black] coordinates{(0,1.0)(1,0.101)(2,0.089)(3,0.074)(4,0.073)(5,0.083)}; \leg{HOGs similarity}
\addplot[mark=*,green] coordinates{(0,1.0)(1,0.653)(2,0.602)(3,0.508)(4,0.510)(5,0.571)}; \leg{SSIM}
\end{axis}
\end{tikzpicture}
}
\caption{\emph{Sanity check} of Opti-CAM on $1,000$ images of ImageNet validation set using ResNet50. Similarity between saliency maps by original and randomized network, where layers are progressively replaced by random ones.}
\label{fig:sanity}
\end{figure}
\begin{figure}[htpb]
\centering
\small
\setlength{\tabcolsep}{3pt}
\begin{tabular}{cccccc}
Original & layer 1 & layer 2 & layer 3 & layer 4 & layer 5 \\
\fig[.25]{sanityC/ILSVRC2012_val_00000001JPEG_0_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000001JPEG_1_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000001JPEG_2_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000001JPEG_3_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000001JPEG_4_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000001JPEG_6_Smap.png} \\
\fig[.25]{sanityC/ILSVRC2012_val_00000002JPEG_0_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000002JPEG_1_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000002JPEG_2_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000002JPEG_3_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000002JPEG_4_Smap.png} &
\fig[.25]{sanityC/ILSVRC2012_val_00000002JPEG_6_Smap.png} \\
\end{tabular}
\caption{\emph{Sanity check visualization} of Opti-CAM on two images of ImageNet validation set using ResNet50. First column: Opti-CAM saliency maps for the original network; remaining columns: Opti-CAM saliency maps where layers are progressively replaced by random ones.}
\label{fig:sanity-vis}
\end{figure}
\section{Sanity check}
\label{sec:sanity-check}
We use the model parameter randomization test proposed in \citep{adebayosanity}. This test compares the saliency maps generated by a trained model with those generated by a partially randomly initialized network of the same architecture. In particular, we choose 5 layers of ResNet50 and progressively replace them by random ones, so that we have 6 different models with different amounts of random parameters. The saliency maps are generated for the small subset of the ImageNet validation set, as in the ablation study.
Following~\citep{adebayosanity}, we compute a number of similarity metrics between the saliency maps generated by the original and the randomized network, including rank correlation with/without absolute values, HOG similarity, and SSIM. The results are shown in \autoref{fig:sanity} (saliency map similarity measurements) and \autoref{fig:sanity-vis} (saliency map visualizations). Our method passes the sanity check, as it is very sensitive to changes in the model parameters.
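A sketch of these similarity computations follows; the exact HOG similarity used by~\citep{adebayosanity} may differ, and here it is taken, as an assumption, to be the cosine similarity of HOG descriptors.
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr
from skimage.feature import hog
from skimage.metrics import structural_similarity as ssim

def sanity_similarities(s_orig, s_rand):
    rc = spearmanr(s_orig.ravel(), s_rand.ravel()).correlation
    rc_abs = spearmanr(np.abs(s_orig).ravel(),
                       np.abs(s_rand).ravel()).correlation
    h1, h2 = hog(s_orig), hog(s_rand)   # cosine of HOG descriptors
    hog_sim = h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12)
    return rc, rc_abs, hog_sim, ssim(s_orig, s_rand, data_range=1.0)
\end{verbatim}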
\begin{table}[htbp]
\centering
\footnotesize
\setlength{\tabcolsep}{2pt}
\begin{tabular}{lrrrr|rrrr} \toprule
\mr{2}{\Th{Method}} & \mc{4}{\Th{ResNet50}} & \mc{4}{\Th{VGG16}} \\ \cmidrule{2-9}
& {$\operatorname{AD}\!\downarrow$} & {$\operatorname{AG}\!\uparrow$} & {$\operatorname{AI}\!\uparrow$} & \mc{1}{T} & {$\operatorname{AD}\!\downarrow$} & {$\operatorname{AG}\!\uparrow$} & {$\operatorname{AI}\!\uparrow$} & \mc{1}{T} \\ \midrule
Fake-CAM~\citep{poppi2021revisiting} &0.9&0.7&47.4&0.00&0.5&0.3&47.7&0.00 \\ \midrule
Grad-CAM~\citep{selvaraju2017grad} & 36.4 &5.5& 27.0 &0.03 & 41.6 &3.3 & 25.2 &0.02 \\
Grad-CAM++~\cite{chattopadhay2018grad} & 37.6 & 4.9 & 24.0 &0.04 & 46.3 &2.0 & 19.0 &0.02 \\
Score-CAM~\citep{wang2020score} & 28.8 &8.8 & 33.6 &20.47& 39.3 & 3.5 & 24.6 &3.08 \\
Ablation-CAM~\citep{ramaswamy2020ablation} & 36.6 &5.1& 25.6 &18.49 & 41.8 & 2.9& 24.0 &2.95 \\
XGrad-CAM~\citep{fu2020axiom} & 36.4 &5.5 & 27.0 &0.03 & 40.6 &3.4 & 25.8 &0.02 \\
Layer-CAM~\citep{jiang2021layercam} &42.6&4.2&19.2&0.02&82.1&0.3&6.9&0.01 \\
ExPerturbation~\citep{fong2019understanding} &51.2&6.9&26.1&15.67&50.1&4.4&24.5&9.10 \\
\rowcolor{cyan!10}
Opti-CAM (ours) &\tb{2.0}&\tb{49.4}&\tb{91.2}&3.94&\tb{1.5}&\tb{52.7}&\tb{92.1}&3.95\\
\bottomrule
\end{tabular}
\caption{
\emph{Classification metrics} on ImageNet validation set, without input normalization. AD/AI: average drop/increase~\citep{chattopadhay2018grad}; $\operatorname{AG}$: average gain (ours); $\downarrow$ / $\uparrow$: lower / higher is better. T: Average time (sec) per batch of 8 images. Bold: best, excluding Fake-CAM.}
\label{tab:norm-imagenet}
\end{table}
\section{Results without input normalization}
\label{sec:without-norm}
It is standard that images are normalized to zero mean and unit standard deviation before feeding them to a network, because this is how networks are trained. For example, for ImageNet images, we subtract the channel-wise mean $[0.485,0.456,0.406]$ and divide channel-wise by the standard deviation $[0.229,0.224,0.225]$. By doing so, however, we cannot reproduce the results published for several baseline methods; rather, all results are improved dramatically. We can obtain results similar to published ones by \emph{not} normalizing; we thus speculate that the authors of related work do not normalize images. This was also suggested by our attempts to communicate with the authors.
We believe normalization is important and we include it in all our experiments. For reference and to allow for comparison with published results, we provide results without normalization in \autoref{tab:norm-imagenet} that correspond to \autoref{tab:imagenet-cnn}. Finally, code is provided to allow for reproduction and verification of our results.
\input{tex/chest_resnet50}
\input{tex/kvasir_resnet50}
\input{tex/show_image_vgg}
\input{tex/show_image_res}
\input{tex/show_image_vit}
\section{More visualizations}
\label{sec:more-vis}
\autoref{fig:vis-chest-resnet} and \autoref{fig:vis-kvasir-resnet} present additional visualizations on Chest X-ray and Kvasir datasets using VGG16 and ResNet50.
Then \autoref{fig:imagenet-vis-more-vgg}, \autoref{fig:imagenet-vis-more-res}, \autoref{fig:imagenet-vis-more-vit} show more results on ImageNet using VGG16, ResNet50, and ViT, respectively.
Overall, we still observe that Opti-CAM captures more of the object area compared with other saliency methods, and sometimes background context as well (\autoref{fig:imagenet-vis-more-res}).
\subsection{Image classification}
Opti-CAM is evaluated quantitatively using classification metrics and qualitatively by visualizing saliency maps.
\input{tex/table_cls_cnn_imagenet_new.tex}
\paragraph{CNN}
\autoref{tab:imagenet-cnn} shows ImageNet classification metrics using \Th{VGG16} and \Th{ResNet50}. Our Opti-CAM brings impressive performance in terms of the average drop ($\operatorname{AD}$) and average increase ($\operatorname{AI}$) metrics. That is, it brings not only an impressive improvement over the baselines but near-perfect scores: near-zero $\operatorname{AD}$ and above 90\% $\operatorname{AI}$. \redred{Our new metric $\operatorname{AG}$ is lower, around 70\% for Opti-CAM, but this is still several times higher than for all the other methods.}
\iavr{
Interestingly, Fake-CAM~\citep{poppi2021revisiting} is the winner in terms of $\operatorname{AD}$ and second or third best in $\operatorname{AI}$ after Opti-CAM and Score-CAM, but fails completely on $\operatorname{AG}$. This is expected and makes Fake-CAM uninteresting, as it should be: by only masking one pixel, the classification score can hardly drop (0.8\% on ResNet50), and while it increases very often (on 46\% of images), the gain is as little as the drop (0.7\%). This makes the pair ($\operatorname{AD}$, $\operatorname{AG}$) sufficient as primary metrics, while $\operatorname{AI}$ can be thought of as secondary, if important at all.
In the supplementary material we report \emph{insertion} (I) and \emph{deletion} (D) metrics along with failure cases of Opti-CAM. The latter indicate that our saliency maps are not incorrect as a whole: capturing more parts of the object, more instances or more background context results in larger or several disconnected salient regions. This does not let the classifier focus on a single discriminative region when pixels are processed sequentially in order of saliency; rather, I/D favor smaller and more compact saliency maps.
}
\autoref{tab:imagenet-cnn} also includes average execution time per image over the 1000-image ImageNet subset for all methods. Opti-CAM is slower than gradient-based methods that require only one pass through the network, \iavr{but on par with or faster than gradient-free methods. Indeed, we use a maximum of 100 iterations with one forward/backward pass per iteration, while Score-CAM and Ablation-CAM perform as many forward passes as there are channels; hence they are much slower on ResNet50 than on VGG16. ExtremalPerturbation does not depend on the number of channels but is very slow because it performs a complex optimization in the image space.}
\input{tex/table_cls_transformer_new.tex}
\paragraph{Transformers}
\autoref{tab:imagenet-trans} shows ImageNet classification metrics using ViT \iavr{and DeiT}. Unlike CAM-based methods that rely on a class-specific linear combination of feature maps, raw attention~\citep{dosovitskiy2020image} and rollout~\citep{abnar2020quantifying} use the attention map of the [CLS] token from the last attention block and from all blocks respectively. \iavr{This attention map depends only on the particular image and not on the target class, hence it is not really comparable. TIBAV~\cite{chefer2021transformer} uses both instance-specific and class-specific information.
Opti-CAM outperforms all other methods dramatically, reaching near-zero $\operatorname{AD}$ and $\operatorname{AI}$ above 80 or 90\%. \redred{According to our new $\operatorname{AG}$ metric, Opti-CAM still works while all other methods fail, but $\operatorname{AG}$ is much more conservative than $\operatorname{AI}$. On ViT-B for example, the classification score increases for 90.1\% of the images by masking with Opti-CAM, but the gain is only 18.0\% on average.}}
\input{tex/IN_n_chest_n_kvasir_resnet50}
\paragraph{Visualization}
\autoref{fig:vis-in-chest-n-kvasir-resnet} illustrates saliency map examples from the ImageNet, Chest X-ray and Kvasir datasets. The Opti-CAM saliency map is in general more spread out. This better highlights full objects, multiple instances or \iavr{background context, which may be taken into account by the model. On Chest X-ray, Opti-CAM and Score-CAM are the only methods that capture the chest, while all others focus on image corners.} More examples on datasets and networks \redred{as well as quantitative evaluation on medical data} are given in the supplementary material.
\section{Experiments}
\label{sec:exp}
We evaluate Opti-CAM and compare it quantitatively and qualitatively against other state-of-the-art methods on a number of datasets and networks. \iavr{We report classification metrics with execution times and we provide visualizations, an ablation study and a study on the suitability of localization ground truth. A sanity check, additional classification results, localization metrics, more ablations, more visualizations \redred{and code} are given in supplementary material}.
\subsection{Datasets}
\label{sec:data}
\paragraph{ImageNet}
We use the validation set of ImageNet ILSVRC 2012~\citep{krizhevsky2012imagenet,ILSVRC15}, which contains $50,000$ images evenly distributed over the $1,000$ categories. For the ablation study and for timing, we sample $1,000$ images from this set. Concerning the localization experiments, bounding boxes from the localization task of ILSVRC\footnote{\url{https://www.image-net.org/challenges/LSVRC/2012/index.php}} are used on the same validation set.
\paragraph{Medical data}
\iavr{We use two medical image datasets, namely \emph{Chest X-ray} \citep{kermany2018labeled} and \emph{Kvasir} \citep{pogorelov2017kvasir}. Complete qualitative and quantitative results are given in the supplementary. Here we only provide visualizations.}
\paragraph{Networks}
\label{sec:setup}
For all datasets, we use the pretrained ResNet50~\citep{he2016deep} and VGG16~\cite{simonyan2014very} networks with batch normalization~\citep{ioffe2015batch} from the Pytorch model zoo\footnote{\url{https://pytorch.org/vision/0.8/models.html}}. \iavr{For ImageNet, we further use the pretrained ViT-B (16-224)~\citep{dosovitskiy2020image} and DeiT-B (16-224)~\citep{pmlr-v139-touvron21a} from Pytorch image models (timm)\footnote{\url{https://github.com/rwightman/pytorch-image-models}}.
Regarding medical datasets, we fine-tune the networks as discussed in the supplementary material, where we also provide the setting details.
}
\subsection{Evaluation}
\label{sec:eval}
\paragraph{Metrics}
\ronan{We use \emph{average drop} ($\operatorname{AD}$) and \emph{average increase} ($\operatorname{AI}$)~\cite{chattopadhay2018grad} metrics, as well as the proposed \emph{average gain\xspace} ($\operatorname{AG}$), to measure the effect on classification performance of masking the input image by a saliency map. In the supplementary, we also report \emph{insertion} (I) and \emph{deletion} (D)~\citep{petsiuk2018rise} and highlight their limitations. Using classification metrics, we show the limitations of using the localization ground truth for the evaluation of attribution methods. In the supplementary, we provide a number of localization metrics from the \emph{weakly-supervised object localization} (WSOL) task of ILSVRC2014\footnote{\url{https://www.image-net.org/challenges/LSVRC/2014/index\#}}.}
\paragraph{Methods}
\iavr{We compare against the following state-of-the-art methods: Grad-CAM~\citep{selvaraju2017grad}, Grad-CAM++~\cite{chattopadhay2018grad}, Score-CAM~\citep{wang2020score}, Ablation-CAM~\citep{ramaswamy2020ablation}, XGrad-CAM~\citep{fu2020axiom}, Layer-CAM~\citep{jiang2021layercam} and ExtremalPerturbation~\citep{fong2019understanding}. Implementations are obtained from the PyTorch CAM library\footnote{\url{https://github.com/jacobgil/pytorch-grad-cam}} or TorchRay\footnote{\url{https://github.com/facebookresearch/TorchRay}}. For transformer models, we also compare against raw attention~\citep{dosovitskiy2020image}, rollout~\citep{abnar2020quantifying} and TIBAV~\cite{chefer2021transformer}\footnote{\url{https://github.com/hila-chefer/Transformer-Explainability}}.}
\paragraph{Image normalization}
It is standard that images are normalized before feeding them to a network. By doing so however, we cannot reproduce the results published for the baseline methods; rather, all results are improved dramatically. We can obtain results similar to published ones by \emph{not} normalizing. We believe normalization is important and we include it in all our experiments. In the supplementary, we provide more details and results without normalization\redred{, as well as code that allows for reproduction and verification of our results}.
\subsection{Object localization}
\iavr{Localization metrics are used to measure the precision of saliency maps relative to ground truth bounding boxes of the foreground object of interest. These metrics originate from weakly supervised object localization (WSOL). However, the objectives of WSOL and of explaining the decision of a DNN are not necessarily aligned, since context may play an important role in the decision~\cite{shetty2019not, rao2022towards}.
To investigate the relative importance of the object and its context, we measure classification metrics} when using the bounding box $B$ itself as saliency map, as well as its complement $I \setminus B$, where $I$ is the image. We also evaluate the intersection $B \cap S$ of the saliency map $S$ with the bounding box, as well as with its complement ($S \setminus B$).
As shown in \autoref{tab:localization}, the ground truth region of the object is not the only one responsible for the network decision. For example, the bounding box fails both when used as a saliency map itself and when combined with any saliency map, by harming all classification metrics. \iavr{Even the complement is more effective than the bounding box itself, either alone or when combined.} These findings support the hypothesis that localization metrics based on the ground truth bounding box are not necessarily appropriate for evaluating explanations of network decisions. Classification metrics are clearly more appropriate in this sense.
\iavr{Nevertheless, we report localization metrics in the supplementary material. \redred{In summary, although its saliency maps are more spread out, Opti-CAM outperforms other methods on a number of metrics.}}
\begin{table}[t]
\footnotesize
\centering
\setlength{\tabcolsep}{1.0pt}
\begin{tabular}{lccc|ccc|ccc} \toprule
\mr{2}{\Th{Method}} & \mc{3}{\Th{$\operatorname{AD}\!\downarrow$}} & \mc{3}{\Th{$\operatorname{AG}\!\uparrow$}}& \mc{3}{\Th{$\operatorname{AI}\!\uparrow$}} \\ \cmidrule{2-10}
& {$S$} & {$B \!\cap\! S$} & {$S \!\setminus\! B$} & {$S$} & {$B \!\cap\! S$} & {$S \!\setminus\! B$}& {$S$} & {$B \!\cap\! S$} & {$S \!\setminus\! B$} \\ \midrule
$S \mathrel{:=} B$ & 67.2 & -- & -- & 2.3 & -- & -- & 9.2 & -- & -- \\
$S \mathrel{:=} I \setminus B$ & 44.0 & -- & -- & 2.8 & -- & -- & 16.3 & -- & -- \\ \midrule
Fake-CAM~\citep{poppi2021revisiting} & 0.5 & 67.2 & 44.1 & 0.7 & 2.3 & 2.8 & 42.0 & 9.2 & 18.9 \\ \midrule
Grad-CAM~\citep{selvaraju2017grad} & 15.0 & 72.6 & 52.1 & 15.3 & 1.8 & 6.0 & 40.4 & 8.4 & 19.4 \\
Grad-CAM++~\cite{chattopadhay2018grad} & 16.5 & 72.9 & 53.1 & 10.6 & 1.6 & 4.1 & 35.2 & 7.3 & 17.1 \\
Score-CAM~\citep{wang2020score} & 12.5 & 71.5 & 50.5 & 16.1 & 2.2 & 6.3 & 42.5 & 8.6 & 20.8 \\
Ablation-CAM~\citep{ramaswamy2020ablation} & 15.1 & 72.8 & 52.1 & 13.5 & 1.7 & 5.6 & 39.9 & 7.8 & 19.0 \\
XGrad-CAM~\citep{fu2020axiom} & 14.3 & 72.6 & 51.4 & 15.1 & 1.8 & 6.0 & 42.1 & 8.0 & 20.1 \\
Layer-CAM~\citep{jiang2021layercam} & 49.2 & 84.2 & 74.4 & 2.7 & 0.4 & 1.2 & 12.7 & 4.4 & 7.3 \\
ExPerturbation~\citep{fong2019understanding} & 43.8 & 81.6 & 71.0 & 7.1 & 1.4 & 3.2 & 18.9 & 5.6 & 11.1 \\
\rowcolor{cyan!10}
Opti-CAM (ours) & \tb{1.4} & \tb{62.5} & \tb{34.8} & \tb{66.3} & \tb{8.7} & \tb{25.8} & \tb{92.5} & \tb{18.6} & \tb{47.1} \\ \bottomrule
\end{tabular}
\caption{\emph{Bounding box} study. Classification metrics on ImageNet validation set using VGG16. $B$: ground-truth box used by localization metrics; $I$: entire image; $S$: saliency map. $\operatorname{AD}$/$\operatorname{AI}$: average drop/increase~\citep{chattopadhay2018grad}; $\operatorname{AG}$: average gain (ours); $\downarrow$ / $\uparrow$: lower / higher is better; bold: best, excluding Fake-CAM.}
\label{tab:localization}
\end{table}
\subsection{Ablation study}
\label{sec:ablation}
We perform an ablation study of different choices of the objective function~\eq{obj} and normalization~\eq{norm} of the saliency map. \redred{More choices of~\eq{obj}, layer $\ell$, number of iterations and learning rates, selector function $g_c$
and initialization of $\mathbf{w}$ are studied in the supplementary material.}
\paragraph{Normalization function}
For normalization function $n$~\eq{obj}, we investigate three choices:
\begin{align}
\textrm{range} : \quad & n(A) \mathrel{:=} \textstyle \frac{A - \min A}{\max A - \min A} \label{eq:n-rng} \\
\textrm{maximum} : \quad & n(A) \mathrel{:=} \textstyle \frac{A}{\max A} \label{eq:n-max}
\\
\textrm{sigmoid} : \quad & n(a_{ij}) \mathrel{:=} \frac{1}{1+e^{-a_{ij}}} \label{eq:n-sig},
\end{align}
where $a_{ij}$ is element $(i,j)$ of matrix $A$. The default is~\eq{n-rng}, normalizing by the range of values in the saliency map, as in Score-CAM~\eq{norm}; while~\eq{n-max} normalizes by the maximum value and~\eq{n-sig} by the sigmoid function element-wise.
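The three choices translate directly to code; a sketch:
\begin{verbatim}
import torch

def normalize(A, kind='range'):
    if kind == 'range':                                # eq. (n-rng)
        return (A - A.min()) / (A.max() - A.min() + 1e-12)
    if kind == 'max':                                  # eq. (n-max)
        return A / (A.max() + 1e-12)
    return torch.sigmoid(A)                            # eq. (n-sig)
\end{verbatim}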
\paragraph{Objective function}
We refer to the default definition of $F^c_\ell$~\eq{obj} as Mask\xspace because it maximizes the logit for the masked image.
We also consider an alternative definition of objective function $F^c_\ell$, which encourages the masked version to preserve the prediction of original image:
\begin{equation}
F^c_\ell(\mathbf{x}; \mathbf{u}) \mathrel{:=} -\abs{g_c(f(\mathbf{x})) - g_c(f(\mathbf{x} \odot n(\operatorname{up}(S_\ell(\mathbf{x}; \mathbf{u})))))}.
\label{eq:ref}
\end{equation}
This function is named Diff\xspace as it minimizes the difference of logits between the masked and the original image.
\paragraph{Results}
\autoref{tab:ablate} shows classification metrics for the different choices of Opti-CAM, as well as comparison to other methods for reference, for the small subset of ImageNet validation set.
We observe that the choice of normalization function has little effect overall, although Sigmoid offers lower performance. Note that the minimum value of saliency maps is often zero or close to zero: saliency maps are non-negative as convex combinations of non-negative feature maps~\eq{v-sal}. By contrast, the choice of loss function has more impact on performance, and we observe that Mask\xspace~\eq{obj} is superior in all cases.
\newcommand{\ob}[1]{\textcolor{brown}{\tb{#1}}}
\newcommand{\ab}[1]{\textcolor{blue}{\tb{#1}}}
\begin{table}
\centering
\footnotesize
\setlength{\tabcolsep}{3pt}
\begin{tabular}{lccrrrrr} \toprule
{\Th{Method}} & {$F^c_\ell$} & {$n$} & {$\operatorname{AD}\!\downarrow$}& {$\operatorname{AG}\!\uparrow$} & {$\operatorname{AI}\!\uparrow$} \\ \midrule
Fake-CAM~\citep{poppi2021revisiting} & & & 0.5 & 0.7 & 42.1 \\
\midrule
Grad-CAM~\citep{selvaraju2017grad} & & & 15.0 & 15.3 & 40.4 \\
Grad-CAM++~\cite{chattopadhay2018grad} & & & 16.5 & 10.6 & 35.2 \\
Score-CAM~\citep{wang2020score} & & & 12.5 & 16.1 & 42.6 \\
Ablation-CAM~\citep{ramaswamy2020ablation} & & & 15.1 & 13.5 & 39.9 \\
XGrad-CAM~\citep{fu2020axiom} & & & 14.3 & 15.1 & 42.1 \\
Layer-CAM~\citep{jiang2021layercam} & & & 49.2 & 2.7 & 12.7 \\
ExPerturbation~\citep{fong2019understanding} & & & 43.8 & 7.1 & 18.9 \\
\midrule
\mr{2}{Opti-CAM (ours)} & Mask\xspace~\eq{obj} & Range~\eq{n-rng} & \tb{1.4} & \tb{66.3} & \tb{92.5} \\
& Diff\xspace~\eq{ref} & Range~\eq{n-rng} & 7.1 & 18.5 & 54.9 \\ \midrule
\mr{2}{Opti-CAM (ours)} & Mask\xspace~\eq{obj} & Max~\eq{n-max} & 1.6 & 66.2 & 90.3 \\
& Diff\xspace~\eq{ref} & Max~\eq{n-max} & 6.8 & 17.8 & 54.5 \\ \midrule
\mr{2}{Opti-CAM (ours)} & Mask\xspace~\eq{obj} & Sigmoid~\eq{n-sig} & 5.0 & 18.3 & 57.5 \\
& Diff\xspace~\eq{ref} & Sigmoid~\eq{n-sig} & 6.5 & 10.0 & 45.3 \\ \bottomrule
\end{tabular}
\caption{\emph{Ablation study} using VGG16 on 1000 images of ImageNet validation set. $\operatorname{AD}$/$\operatorname{AI}$: average drop/increase~\citep{chattopadhay2018grad}; $\operatorname{AG}$: average gain (ours); $\downarrow$ / $\uparrow$: lower / higher is better; bold: best, excluding Fake-CAM.}
\label{tab:ablate}
\end{table}
\pgfplotstableread{fig/eval/plain_ai.dat}{\plotAI}
\pgfplotstableread{fig/eval/plain_ad.dat}{\plotAD}
\pgfplotstableread{fig/eval/plain_ag.dat}{\plotAG}
\section{Introduction}
Long ago \cite{frsw} it was suggested that Korteweg - de Vries solitons might be formed
in the nuclear medium. In a previous work \cite{nois} we have updated the early works
on the subject introducing a realistic equation of state (EOS) for nuclear matter. We have
found that these solitary waves can indeed exist in the nuclear medium, provided that
derivative couplings between the nucleon and the vector field are included. These couplings
lead to an
energy density which depends on the Laplacian of the baryon density. For this class of
equations of state, which is quite general as pointed out in \cite{furn}, perturbations
on the nuclear density can propagate as a pulse without dissipation.
During the analysis of several realistic nuclear equations of state, we realized that very often the squared speed of sound $c_s^2$ is in the range $0.15-0.25$. Compared to the speed of light these values are not large, but not very small either. This suggests that, even for slowly moving nuclear matter, relativistic effects might be sizeable. This concern justifies the extension of the formalism presented in \cite{nois}.
\section{Hydrodynamics}
The Euler equation and the continuity equation form the basis of hydrodynamics.
In the non-relativistic regime and for a perfect fluid they are \cite{nois}:
\begin{equation}
{\frac{\partial \vec{v}}{\partial t}} +(\vec{v} \cdot \vec{\nabla}) \vec{v}=
-\bigg({\frac{1}{M}}\bigg) \vec{\nabla} h
\label{eulerentalpia}
\end{equation}
\begin{equation}
{\frac{\partial \rho_B}{\partial t}} + {\vec{\nabla}} \cdot (\rho_B {\vec{v}})=0
\label{contibari}
\end{equation}
where $\rho_B$, $M$, $h$ and $v$ are the baryon density, the nucleon mass, the enthalpy per
nucleon and the fluid velocity respectively. In the relativistic case they are
\cite{elze,wein}:
\begin{equation}
{\frac{\partial {\vec{v}}}{\partial t}}+(\vec{v} \cdot \vec{\nabla})\vec{v}=
-{\frac{1}{(\varepsilon + p)\gamma^{2}}}
\bigg({\vec{\nabla} p +\vec{v} {\frac{\partial p}{\partial t}}}\bigg)
\label{eul}
\end{equation}
\begin{equation}
{\frac{\partial}{\partial t}}(\rho_{B}\gamma)+\vec{\nabla} \cdot (\rho_{B}\gamma \vec{v})=0
\label{con}
\end{equation}
where $\gamma$, $\varepsilon$ and $p$ are the usual Lorentz factor ($\gamma=(1-v^{2})^{-1/2}$),
energy density and pressure respectively. We have deliberately written the above equations in
a non-covariant way to make the comparison between the non-relativistic and relativistic cases
easier. Using the definition of enthalpy per nucleon \cite{land} for a perfect fluid we find
that $dp=\rho_{B}dh$, and hence $\vec{\nabla} p=\rho_{B}\vec{\nabla} h$ and
$\partial p/\partial t=\rho_{B}\,\partial h/\partial t$. Therefore, using the Gibbs relation
at zero temperature, $\varepsilon + p=\mu_{B} \, \rho_{B}$, so that
$(\varepsilon+p)\gamma^{2}=\mu_{B}\rho_{B}/(1-v^{2})$, we can rewrite (\ref{eul}) as:
\begin{equation}
{\frac{\partial {\vec{v}}}{\partial t}}+(\vec{v} \cdot \vec{\nabla})\vec{v}= \, - \,
{\frac{(1-v^{2})}{\mu_{B}}}
\bigg({\vec{\nabla} h +\vec{v} {\frac{\partial h}{\partial t}}}\bigg)
\label{eulerfinal}
\end{equation}
where $\mu_B$ is the baryochemical potential.
Since the enthalpy per nucleon may also be written as \cite{nois,abu}
\begin{equation}
h={\frac{\partial \varepsilon }{\partial \rho_{B}}}
\label{gradh}
\end{equation}
it becomes clear that the ``force'' on the right hand side of the Euler equations
(\ref{eulerentalpia}) and (\ref{eulerfinal}) will be ultimately determined by the
equation of state, i.e., by the function $\varepsilon(\rho_{B})$.
\section{KdV equation and the nuclear equation of state}
Equations ({\ref{eulerentalpia}}) and (\ref{eulerfinal}) contain the gradient of the
derivative of the energy density with respect to the baryon density. If $\varepsilon$
contains a Laplacian of $\rho_B$, i.e., $\varepsilon
\propto \dots + \nabla^{2} \rho_{B} + \dots$, then
({\ref{eulerentalpia}}) and (\ref{eulerfinal}) will contain a third-order derivative
with respect to the space coordinate, which will give rise to the Korteweg-de Vries equation
for the baryon density.
The most popular relativistic mean field models do not have higher derivative terms and,
even if they have at the start, these terms are usually neglected during the calculations.
In \cite{nois} we have added a new derivative term to the usual non-linear QHD
\cite{lala}, given by
\begin{equation}
{\mathcal{L_{M}}} \equiv {\frac{g_{v}}{{m_{v}}^{2}}}\bar{\psi}
(\partial_{\nu} \partial^{\nu} V_{\mu})\gamma^{\mu} \psi
\label{lagram}
\end{equation}
where, as usual, the degrees of freedom are
the baryon field $\psi$, the neutral scalar meson field $\phi$
and the neutral vector meson field $V_{\mu}$, with the respective couplings and masses.
The new term is designed to be small in comparison with the main baryon - vector meson
interaction term $g_{v} \bar{\psi} \gamma_{\mu} V^{\mu} \psi$.
Following the standard steps of the mean field formalism we arrive at the following
expression for the energy density:
\begin{eqnarray}
\varepsilon&=&{\frac{{g_{v}}^{2}}{2{m_{v}}^{2}}}\rho_{B}^{2}
+{\frac{{m_{s}}^{2}}{2}}{\bigg[{\frac{(M^{*}-M)}{g_{s}}}\bigg]}^{2}
+{\frac{\eta}{(2\pi)^{3}}}\int_{0}^{k_{F}} d^3{k} ({\vec{k}}^{2}+{M^{*}}^{2})^{1/2}
+{\frac{b}{3g_s^3}}(M^{*}-M)^{3} \nonumber \\
&+&{\frac{c}{4g_{s}^{4}}}(M^{*}-M)^{4} + {\frac{{g_{v}}^{2}}{{m_{v}}^{4}}}\rho_{B}
\nabla^{2}\rho_{B}
\label{epsilonexp}
\end{eqnarray}
where $\eta$ is the baryon spin-isospin degeneracy factor,
$M^*$ stands for the nucleon effective mass (given by $M^{*} \equiv M-g_{s}\phi_{0}$)
and the constants $b$, $c$, $g_s$ and $g_v$ are taken from \cite{lala}.
Although Eq. (\ref{epsilonexp})
was obtained with the help of a specific Lagrangian taken from \cite{lala} and a prototype
Laplacian interaction (\ref{lagram}), the above form of the energy density
follows quite naturally from an approach based on the density functional theory \cite{fst97},
regardless of the form of the underlying Lagrangian. Thus KdV solitons are a general
consequence of many-body dynamics.
\section{KdV solitons}
We now repeat the steps developed in \cite{frsw,nois} and
introduce dimensionless variables for the baryon density and velocity:
\begin{equation}
\hat\rho={\frac{\rho_{B}}{\rho_{0}}} \hspace{0.2cm}, \hspace{0.5cm} \hat v={\frac{v}{c_{s}}}
\label{varschapeu}
\end{equation}
We next define the ``stretched coordinates'' $\xi$ and $\tau$ as in
\cite{frsw,abu,davidson}:
\begin{equation}
\xi=\sigma^{1/2}{\frac{(x-{c_{s}}t)}{R}}
\hspace{0.2cm}, \hspace{0.5cm}
\tau=\sigma^{3/2}{\frac{{c_{s}}t}{R}}
\label{stret}
\end{equation}
where $R$ is a size scale and $\sigma$ is a small ($0 < \sigma < 1$) expansion parameter
chosen to be \cite{davidson}:
\begin{equation}
{\sigma} = {\frac{\mid u-{c_{s}} \mid}{{c_{s}}}}
\label{sigma}
\end{equation}
where $u$ is the propagation speed of the perturbation in question.
We then expand (\ref{varschapeu}) around the equilibrium values:
\begin{equation}
\hat\rho=1+\sigma \rho_{1}+ \sigma^{2} \rho_{2}+ \dots
\label{roexp}
\end{equation}
\begin{equation}
\hat v=\sigma v_{1}+ \sigma^{2} v_{2}+ \dots
\label{vexp}
\end{equation}
After the expansion above, (\ref{eulerentalpia}), (\ref{contibari}), (\ref{con}) and
(\ref{eulerfinal}) contain power series in
$\sigma$ (in practice we go up to $\sigma^2$). Since the terms at each order in $\sigma$
must vanish independently, we get a set of equations which, when combined,
lead to KdV equations for $\rho_{1}$. In the non-relativistic case we have obtained
\cite{nois}:
\begin{equation}
{\frac{\partial {\rho}_{1}}{\partial \tau}}+
3{{\rho}_{1}}{\frac{\partial{\rho}_{1}}{\partial \xi}}
+\bigg({\frac{{g_{v}}^{2}{\rho_{0}}}{2M{c_{s}}^{2}{m_{v}}^{4}R^{2}}}\bigg)
{\frac{\partial^{3}{\rho}_{1}}{\partial \xi^{3}}}=0
\label{KdVpaper}
\end{equation}
with the analytical solitonic solution:
\begin{equation}
{\hat{\rho}_{1}}(x,t)={\frac{(u-{c_{s}})}{{c_{s}}}}
\mathrm{sech}^{2}\bigg[
{\frac{{m_{v}}^{2}}{{g_{v}}}}\sqrt{{\frac{(u-{c_{s}}){c_{s}}M}
{2{\rho_{0}}}}}(x-ut) \bigg]
\label{solpaper}
\end{equation}
where ${\hat{\rho}_{1}} \equiv \sigma {\rho_{1}}$ and $u$ is the velocity of propagation
of the perturbation. The solution above is a bump with width $\lambda$ given by:
\begin{equation}
\lambda={\frac{g_{v}}{{m_{v}}^{2}}} \sqrt{{\frac{2{\rho_{0}}}{(u-{c_{s}}){c_{s}}M}}}
\label{width}
\end{equation}
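Since the algebra connecting (\ref{KdVpaper}), (\ref{solpaper}) and (\ref{width}) is easy
to get wrong, the following short symbolic sketch (our addition, not part of the original
derivation; $A$, $B$ and $c$ stand for generic nonlinear and dispersive coefficients and
for the propagation speed in the stretched coordinates) checks that a $\mathrm{sech}^{2}$
profile of amplitude $3c/A$ and inverse width $\sqrt{c/4B}$ solves a KdV equation of this
form:
\begin{verbatim}
# Check that rho = (3c/A) sech^2[sqrt(c/(4B)) (xi - c tau)] solves
# rho_tau + A rho rho_xi + B rho_xixixi = 0 (generic KdV form).
import sympy as sp

xi, tau = sp.symbols('xi tau', real=True)
A, B, c = sp.symbols('A B c', positive=True)

k = sp.sqrt(c/(4*B))                 # inverse width
a = 3*c/A                            # amplitude
theta = k*(xi - c*tau)
rho = a*(1 - sp.tanh(theta)**2)      # 1 - tanh^2 = sech^2

kdv = sp.diff(rho, tau) + A*rho*sp.diff(rho, xi) + B*sp.diff(rho, xi, 3)
print(sp.simplify(kdv))              # prints 0
\end{verbatim}
The amplitude and width in (\ref{solpaper}) and (\ref{width}) then follow by inserting the
coefficients of (\ref{KdVpaper}) and returning to the original variables $(x,t)$.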
Now, following the same sequence of steps, the combination of (\ref{eulerfinal}) and
(\ref{con}) leads to a similar KdV equation for the relativistic case:
\begin{equation}
{\frac{\partial {\rho}_{1}}{\partial \tau}}+
(3-{c_{s}}^{2})
{{{\rho}_{1}}{\frac{\partial{\rho}_{1}}{\partial \xi}}}
+\bigg({\frac{g_v^2 \rho_0}{2 M c_s^2 m_v^4 R^2}}\bigg)
{\frac{\partial^{3}{\rho}_{1}}{\partial \xi^{3}}}=0
\label{KdVmqhdrelat}
\end{equation}
with the solution given by:
\begin{equation}
{\hat{\rho}_{1}}(x,t)={\frac{3(u-{c_{s}})}{{c_{s}}}}(3-{c_{s}}^{2})^{-1}
\mathrm{sech}^{2}\bigg[
{\frac{{m_{v}}^{2}}{{g_{v}}}}\sqrt{{\frac{(u-{c_{s}}){c_{s}}M}
{2{\rho_{0}}}}}(x-ut) \bigg]
\label{solmqhdrelat}
\end{equation}
with the condition $\mu_B=M$.
As a consistency check we take the non-relativistic limit, which, in this
case, means taking a small speed of sound, $c^2_s \rightarrow 0$. In this limit
$(3-{c_{s}}^{2})\cong 3$, so (\ref{KdVmqhdrelat}) reduces to (\ref{KdVpaper})
and (\ref{solmqhdrelat}) coincides with (\ref{solpaper}).
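The limit can also be checked symbolically. The following minimal sketch (our addition)
confirms that the nonlinear coefficient tends to $3$ and that the ratio of the
relativistic to the non-relativistic soliton amplitude tends to $1$ as
$c_s \rightarrow 0$:
\begin{verbatim}
import sympy as sp

cs, u = sp.symbols('c_s u', positive=True)
amp_ratio = (3*(u - cs)/cs/(3 - cs**2)) / ((u - cs)/cs)

print(sp.limit(3 - cs**2, cs, 0))    # -> 3
print(sp.limit(amp_ratio, cs, 0))    # -> 1
\end{verbatim}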
\section{Conclusions}
The existence of KdV solitons in nuclear matter has potential applications in
nuclear physics at intermediate energies \cite{frsw} and also possibly at high
energies. The experimental measurements of jet quenching and related phenomena
performed at RHIC \cite{star} offer a unique opportunity to study supersonic
motion in hot and dense hadronic matter. With this scenario in mind we have taken the
first step in adapting the KdV soliton formalism to this new environment.
We have extended the results of our previous work \cite{nois},
showing that it is possible to obtain the KdV solitons in relativistic
hydrodynamics with an appropriate EOS. Taking the non-relativistic limit
($c^2_s \rightarrow 0$) we were able to recover the previous results.
\section{Introduction}\label{1}
Three dimensional CR structures are among the examples of geometric
structures for which Elie Cartan constructed an associated normal
Cartan connection, see \cite{Cartan}. The homogeneous model for this
geometry is $S^3$, viewed as a quotient of the semisimple group
$G:=PSU(2,1)$ by a parabolic subgroup $P$, so three dimensional CR
structures form an example of a \textit{parabolic geometry}.
This example is remarkable in many respects. On the one hand, it is
sufficiently complicated to incorporate many of the features of
general parabolic geometries. On the other hand, the low dimension of
the group $G$ and the fact that all important natural bundles are
(either real or complex) line bundles, and hence all sections can
locally be viewed as functions, simplify matters considerably. In
fact, Cartan was even able to describe an algorithm for computing the
essential curvature invariant of such structures. Moreover, many
questions that have to be attacked using representation theory for
general parabolic geometries can be easily solved directly in this
case. An example for this is provided by the analysis of possible
dimensions of automorphism groups in \cite{Srni04}.
Returning to the homogeneous model $S^3=G/P$, consider the compact
subgroup $K=SU(2)\subset G$. Acting with elements of $K$ induces a
diffeomorphism $K\cong S^3$, so we can actually view the standard CR
structure on $S^3$ as a left invariant structure on $K$. In this
picture, the structure can be easily obtained from data on the Lie
algebra $\frak k$ of $K$. These data admit an evident one--parameter
deformation, which gives rise to a one parameter family of left
invariant CR structures on $K$. The aim of this article is to show
that the canonical Cartan connections associated to these CR
structures can be computed using only linear algebra. On the way, one
also gets explicit formulae for their curvatures. Finally, we also
describe all tractor bundles and normal tractor connections
explicitly. These developments should also serve as a basis for a more
detailed analysis of these structures and as a prototype for dealing
with general left invariant parabolic geometries.
\noindent
\textbf{Acknowledgment}: I would like to thank Olivier Biquard for
bringing this example to my attention during a discussion at the
Winter School in Srn\'\i{}.
\section{Left invariant CR structures on $SU(2)$}\label{2}
\subsection{3--dimensional CR structures}\label{2.1}
Recall that a CR structure on a $3$--manifold $M$ is given by a
complex line subbundle $H\subset TM$, which defines a contact
structure on $M$. The subbundle $H\subset TM$ is called the \textit{CR
subbundle}. Equivalently, we have a rank two subbundle $H\subset TM$
endowed with a complex structure $J:H\to H$ such that for one (or
equivalently any) locally non vanishing section $\xi\in\Gamma(H)$ the
vector fields $\xi$, $J(\xi)$ and $[\xi,J(\xi)]$ form a local frame
for $TM$. In contrast to higher dimensions, there is no condition of
partial integrability or integrability in dimension $3$.
Given two CR structures, there is an evident notion of a (local) CR
diffeomorphism. This is a (local) diffeomorphism $f$, such that for
each point $x$ the tangent map $T_xf$ maps the CR subbundle to the CR
subbundle and the restriction of $T_xf$ to the CR subbundle is complex
linear.
The basic examples for such structures are provided by generic real
hypersurfaces in two dimensional complex manifolds. If $(\tilde
M,\tilde J)$ is a two dimensional complex manifold and $M\subset\tilde
M$ is a real hypersurface, then for each $x\in M$ the tangent space
$T_xM$ has real dimension $3$ and sits in $T_x\tilde M$, which is a
two dimensional complex vector space. Now $H_x:=T_xM\cap\tilde
J(T_xM)$ is a complex subspace of $T_x\tilde M$, which evidently must
have complex dimension one. By construction, the spaces $H_x$ fit
together to define a complex line subbundle $H\subset TM$, with the
complex structure $J$ given by the restriction of $\tilde J$.
Generically, the subbundle $H\subset TM$ is maximally non--integrable,
and hence defines a CR structure on $M$. From the construction it is
clear that a biholomorphism $f:\tilde M\to\tilde M$ which preserves
the hypersurface $M$ restricts to a CR automorphism of $M$.
The simplest example of this situation is provided by the unit sphere
$S^3\subset\Bbb C^2$. For $x\in S^3$ we get $T_xS^3=\{y\in\Bbb
C^2:\text{Re}(\langle x,y\rangle)=0\}$. The maximal complex subspace
of this is $H_x=\{y\in\Bbb C^2:\langle x,y\rangle=0\}$. One easily
verifies directly that this defines a contact structure on $S^3$. Hence
we have obtained a CR structure on $S^3$, called the \textit{standard
structure}. CR structures which are locally isomorphic to the standard
structure on $S^3$ are called \textit{spherical}.
Any element $A\in U(2)$ defines a biholomorphism of $\Bbb C^2$ which
preserves the unit sphere $S^3$, and hence restricts to a CR
automorphism of the standard CR structure on $S^3$. Of course, this
action is transitive, so we see that $S^3$ with its standard structure
is a homogeneous CR manifold. The group of CR automorphisms of this
structure however is larger than $U(2)$. Identifying $S^3$ with the
space of those complex lines in $\Bbb C^3$ which are isotropic for a
Hermitian inner product of signature $(2,1)$ leads to a faithful
action of $G:=PSU(2,1)$ on $S^3$ by CR automorphisms. Correspondingly,
one obtains a diffeomorphism $S^3\cong G/P$, where $P\subset G$ is the
stabilizer of an isotropic line in $\Bbb C^3$.
\subsection{Left invariant deformations of the standard structure}\label{2.2}
Restricting the action of $U(2)$ on $S^3$ further to $K:=SU(2)$ we
obtain a diffeomorphism $K\to S^3$, which we can use to carry over the
standard CR structure to $K$. In this picture, multiplication from the
left by any element of $K$ is a CR automorphism, so we have
constructed a left invariant CR structure on $K$.
It is well known that left invariant structures on a Lie group can be
described in terms of the Lie algebra. Denoting by $e\in K$ the unit
element and by $\frak k=T_eK$ the Lie algebra of $K$, we get the fiber
$H_e\subset\frak k$ of the subbundle. This must be a complex subspace
of complex dimension $1$ in the real vector space $\frak k$. By left
invariance, the fiber $H_g$ in each point $g\in K$ is spanned by the
values $L_X(g)$ of the left invariant vector fields generated by
elements $X\in H_e$, and the complex structure on $H_g$ comes from the
linear isomorphism $X\mapsto L_X(g)$. Explicitly, $\frak k$ consists
of all skew Hermitian $2\times 2$--matrices, i.e.
$$
\frak k=\left\{\begin{pmatrix} it & -\overline z\\ z &
-it\end{pmatrix}:t\in\Bbb R,z\in\Bbb C\right\},
$$
and we will denote elements of $\frak k$ as pairs $(it,z)$. Using the
action on the first vector in the standard basis of $\Bbb C^2$ to
identify $K$ with $S^3$, we see that $H_e=\{(0,z):z\in\Bbb
C\}\subset\frak k$. The fact that this defines a left invariant
contact structure on $K$ is then immediate from the fact that
$[L_X,L_Y]=L_{[X,Y]}$ for all $X,Y\in\frak k$ and from
$[(0,1),(0,i)]=(-2i,0)$. Indeed, the linear functional $\alpha:\frak
k\to\Bbb R$ defined by $\alpha(it,z)=t$ defines a left invariant contact
form for the contact structure $H$.
Now the crucial idea is that we can leave this left invariant contact
structure unchanged but deform the complex structure in the space
$H_e$ to obtain a family $(H,J_\lambda)$ of left invariant CR structures
on $K$ parametrized by a positive real number $\lambda$. Namely, for
$\lambda>0$ we define $J_\lambda(e)(0,u+iv):=(0,i(\lambda
u+i\tfrac{1}{\lambda}v))=(0,-\tfrac{1}{\lambda}v+i\lambda u)$. This extends to a
left invariant complex structure on the contact subbundle $H\subset
TK$, which in addition induces the standard orientation. The obvious
question is whether this is a true deformation of the standard CR
structure, or whether one just obtains (locally) isomorphic
structures.
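Both linear-algebra facts used here can be verified by machine. The following small
numerical sketch (our addition) checks the commutator $[(0,1),(0,i)]=(-2i,0)$ behind the
contact condition, and that $J_\lambda(e)$ squares to minus the identity on $H_e$:
\begin{verbatim}
import numpy as np

def k_elem(t, z):
    # matrix of the pair (it, z) in frak k
    return np.array([[1j*t, -np.conj(z)], [z, -1j*t]])

X, Y = k_elem(0, 1), k_elem(0, 1j)
print(X @ Y - Y @ X)         # equals k_elem(-2, 0), i.e. (-2i, 0)

lam = 0.7                    # any lambda > 0
def J(u, v):                 # J_lambda(e)(0, u+iv) = (0, -v/lam + i lam u)
    return (-v/lam, lam*u)

u, v = 0.3, -1.2
print(J(*J(u, v)))           # -> (-u, -v), so J_lambda(e)^2 = -id
\end{verbatim}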
Notice that, viewed as CR structures on $S^3$, the structures
$(H,J_\lambda)$ for $\lambda\neq 1$ are not invariant under the group $U(2)$.
Indeed the element $\left(\begin{smallmatrix} 1 & 0 \\ 0 &
i\end{smallmatrix}\right)\in U(2)$ fixes the first vector in the
standard basis. The tangent map of its action is given by
$(it,z)\mapsto (it,iz)$, which is complex linear for $J_\lambda(e)$ if and
only if $\lambda=1$. Invariance under $U(2)$ would actually imply that the
structure is spherical, since by a classical result of Cartan, the
automorphism group of a non--spherical CR structure has dimension at
most three. A simple proof of this result can be found in
\cite{Srni04}.
\section{The canonical Cartan connections}\label{3}
\subsection{Three dimensional CR structures and Cartan
geometries}\label{3.1}
Three dimensional CR structures can be equivalently described as
Cartan geometries, which in particular implies that the curvature
gives a complete obstruction to being spherical. We first have to
describe the group $G=PSU(2,1)$ and its Lie algebra $\frak
g=\frak{su}(2,1)$ in a bit more detail. Consider the Hermitian form on
$\Bbb C^3$ defined by
$$
((z_0,z_1,z_2),(w_0,w_1,w_2))\mapsto
z_0\overline{w_2}+z_2\overline{w_0}+z_1\overline{w_1}.
$$
Then the first and last vector in the standard basis are isotropic,
while the second one has positive length, so this form has signature
$(2,1)$. A direct computation shows that for this form we get
$$
\frak g=\left\{
\begin{pmatrix}
\alpha+i\beta & w & i\psi \\ x & -2i\beta & -\overline{w}\\ i\phi &
-\overline{x} & -\alpha+i\beta
\end{pmatrix}: \alpha,\beta,\phi,\psi\in\Bbb R, x,w\in\Bbb C
\right\}
$$
We obtain a grading $\frak g=\frak g_{-2}\oplus\frak g_{-1}\oplus\frak
g_0\oplus\frak g_1\oplus \frak g_2$ of $\frak g$ by
$$
\begin{pmatrix}
{\mathfrak g}_0 & {\mathfrak g}_1 &{\mathfrak g}_2\\ {\mathfrak g}_{-1} & {\mathfrak g}_0 &{\mathfrak g}_1\\ {\mathfrak g}_{-2} &{\mathfrak g}_{-1}
&{\mathfrak g}_0
\end{pmatrix}.
$$
The associated filtration is defined by $\frak g^i=\frak
g_i\oplus\dots\oplus \frak g_2$, so we have
$$
\frak g=\frak g^{-2}\supset{\mathfrak g}^{-1}\supset\dots\supset\frak g^2,
$$
and $[{\mathfrak g}^i,{\mathfrak g}^j]\subset{\mathfrak g}^{i+j}$. The parabolic subgroup $P\subset
G$ is the stabilizer of an isotropic line, for which we take the line
generated by the first basis vector. The Lie algebra of $P$ then
evidently is given by $\frak p={\mathfrak g}^0={\mathfrak g}_0\oplus{\mathfrak g}_1\oplus{\mathfrak g}_2$. In
particular the filtration $\{{\mathfrak g}^i\}$ is invariant under the adjoint
actions of $\frak p$ and $P$.
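The grading can be checked mechanically on a basis. In the following numerical sketch
(our addition), the entry positions of the matrix displayed above encode the graded
pieces, and one verifies $[{\mathfrak g}_i,{\mathfrak g}_j]\subset{\mathfrak g}_{i+j}$ for
all $i,j$:
\begin{verbatim}
import numpy as np

def elem(alpha=0, beta=0, phi=0, psi=0, x=0, w=0):
    # general element of frak g in the matrix form displayed above
    return np.array([
        [alpha + 1j*beta,  w,            1j*psi],
        [x,               -2j*beta,     -np.conj(w)],
        [1j*phi,          -np.conj(x),  -alpha + 1j*beta]])

basis = {-2: [elem(phi=1)], -1: [elem(x=1), elem(x=1j)],
          0: [elem(alpha=1), elem(beta=1)],
          1: [elem(w=1), elem(w=1j)], 2: [elem(psi=1)]}
bands = {-2: {(2, 0)}, -1: {(1, 0), (2, 1)},
          0: {(0, 0), (1, 1), (2, 2)}, 1: {(0, 1), (1, 2)}, 2: {(0, 2)}}

for i, gi in basis.items():
    for j, gj in basis.items():
        band = bands.get(i + j, set())   # empty band: bracket must vanish
        for X in gi:
            for Y in gj:
                C = X @ Y - Y @ X
                mask = np.zeros((3, 3), dtype=bool)
                for pos in band:
                    mask[pos] = True
                assert np.allclose(C[~mask], 0)
print("[g_i, g_j] is contained in g_{i+j} for all i, j")
\end{verbatim}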
\begin{definition*} (1) A \textit{Cartan geometry} of type $(G,P)$ on a
smooth manifold $M$ is a principal $P$--bundle $p:\Cal G\to M$
together with a one form $\omega\in\Omega^1(\Cal G,\frak g)$ such that
\begin{itemize}
\item $(r^g)^*\omega=\operatorname{Ad}(g)^{-1}\circ\omega$ for all $g\in P$, where $r^g$
denotes the principal right action of $g$.
\item $\omega(\zeta_A)=A$ for all $A\in\frak p$, where $\zeta_A$ denotes the
fundamental vector field with generator $A$.
\item $\omega(u):T_u\Cal G\to\frak g$ is a linear isomorphism for all
$u\in\Cal G$.
\end{itemize}
\noindent
(2) A \textit{morphism} between two Cartan geometries $(\Cal G\to
M,\omega)$ and $({\tilde{\Cal G}}\to\tilde M,\tilde\omega)$ is a principal bundle
homomorphism $\Phi:\Cal G\to{\tilde{\Cal G}}$ such that $\Phi^*\tilde\omega=\omega$. Note
that since both $\omega$ and $\tilde\omega$ are bijective on each tangent
space, this implies that $\Phi$ is a local diffeomorphism.
\noindent
(3) The \textit{homogeneous model} of the geometry is the principal
bundle $G\to G/P$ together with the left Maurer--Cartan form
$\omega^{MC}$.
\end{definition*}
Given a Cartan geometry $(p:\Cal G\to M,\omega)$ of type $(G,P)$ on $M$,
we can form the associated bundle $\Cal G\times_P(\frak g/\frak p)$. The
map $\Cal G\times (\frak g/\frak p)\to TM$ given by $(u,X)\mapsto
T_up\cdot(\omega(u)^{-1}(X))$ descends to an isomorphism $\Cal
G\times_P(\frak g/\frak p)\cong TM$. Now $\frak g/\frak p$ contains the
$P$--invariant subspace $\frak g^{-1}/\frak p$, so this gives rise to
a subbundle $H\subset TM$. Moreover, $\frak g^{-1}/\frak p\cong\Bbb C$
and since $P$ consists of complex matrices, this complex structure is
invariant under the adjoint action of $P$. Therefore, it makes the
associated bundle $H=\Cal G\times_P(\frak g^{-1}/\frak p)$ into a complex
line bundle. If $H$ is a contact structure, then we obtain a three
dimensional CR structure on $M$.
\subsection{Regularity and normality}\label{3.2}
To characterize when $H$ is a contact structure, we need the curvature
$\kappa\in\Omega^2(\Cal G,\frak g)$ of $\omega$. This is defined by
$\kappa(\xi,\eta)=d\omega(\xi,\eta)+[\omega(\xi),\omega(\eta)]$. In the case of
the homogeneous model, $\kappa$ vanishes identically by the
Maurer--Cartan equation. Conversely, it can be shown that any Cartan
geometry with vanishing curvature is locally isomorphic to the
homogeneous model. Now we call a Cartan geometry of type $(G,P)$
\textit{regular} if and only if $\kappa(\xi,\eta)$ has values in $\frak
g^{-1}\subset\frak g$ provided that $\omega(\xi)$ and $\omega(\eta)$ have
values in $\frak g^{-1}$.
Suppose that this condition is satisfied and that $\xi$ and $\eta$ are
local lifts of vector fields on $M$. Then the fact that $\omega(\xi)$ and
$\omega(\eta)$ have values in $\frak g^{-1}$ exactly means that these
vector fields are sections of $H\subset TM$. By definition of the
curvature and the assumptions we see that
$-\omega([\xi,\eta])+[\omega(\xi),\omega(\eta)]$ has values in $\frak
g^{-1}$. Since $[\xi,\eta]$ lifts the bracket of the original fields,
this bracket cannot have values in $H$ unless $[\omega(\xi),\omega(\eta)]$
has values in $\frak g^{-1}$. One immediately verifies that the
bracket in $\frak g$ induces a non--degenerate map ${\mathfrak g}^{-1}/\frak p\times
{\mathfrak g}^{-1}/\frak p\to {\mathfrak g}/{\mathfrak g}^{-1}$. Hence we see that regularity of the
Cartan geometry ensures that we obtain an underlying CR structure.
It is a general theorem that any three dimensional CR structure
arises as the underlying structure of a Cartan geometry of type
$(G,P)$. However, there are many non--isomorphic Cartan geometries
having the same underlying CR structure. To get rid of this freedom,
one has to put an additional normalization condition on (the curvature
of) the Cartan connection $\omega$. Under this additional condition, the
Cartan geometry is then uniquely determined up to isomorphism. See
\cite{Srni05} for a discussion of all these issues and
\cite{Cap-Schichl} for proofs, both in the realm of general parabolic
geometries.
We will not need the detailed form of the normalization condition, but
only some of its consequences. These follow from the fact that one may
relate the values of the curvature of a regular normal Cartan geometry
to certain explicitly computable Lie algebra cohomology groups. In the
case of three dimensional CR structures, these conditions imply that
$\kappa(\xi,\eta)$ has values in $\frak g^1\subset\frak g$ for all $\xi$
and $\eta$. Moreover, if both $\omega(\xi)$ and $\omega(\eta)$ have values
in $\frak g^{-1}$, then $\kappa(\xi,\eta)$ has to vanish identically.
Moreover, projecting the values of $\kappa$ to $\frak g^1/\frak
g^2\cong\frak g_1$, one obtains the \textit{harmonic curvature}, which
still is a complete obstruction to the CR structure being spherical.
\subsection{The case of left invariant structures}\label{3.3}
Let us now consider one of the left invariant CR structures
$(H,J_\lambda)$ on $K=SU(2)$. As an ansatz, we use the trivial principal
$P$--bundle $\Cal G:=K\times P$. For $X\in\frak k$, define $\hat
L_X:=(L_X,0)\in\frak X(\Cal G)$. The second part of the ansatz is
that $\omega(\hat L_X)$ is constant along $K\times\{e\}$. The motivation for
this ansatz is as follows. For any $k\in K$, the left translation by
$k$ defines a CR automorphism of $K$, which leaves each $L_X$
invariant. These automorphisms lift to the canonical principal bundle
in a way compatible with the canonical Cartan connection. Fixing an
identification of the fiber of the Cartan bundle over $e\in K$ with
$P$, we can use these lifts to trivialize the Cartan bundle in
such a way that $\omega(\hat L_X)$ is constant along $K\times\{e\}$.
Consider a linear map $\phi:\frak k\to\frak g$ such that the
composition of $\phi$ with the projection $\frak g\to\frak g/\frak p$
is a linear isomorphism. Any tangent vector at $(k,g)\in K\times P$ can be
uniquely written as $(L_X(k),L_A(g))$ for some $X\in\frak k$ and
$A\in\frak p$. Hence we can define
$\omega\in\Omega^1(K\times P,\frak g)$ by
$$
\omega(L_X(k),L_A(g)):=\operatorname{Ad}(g^{-1})(\phi(X))+A.
$$
By the assumption on $\phi$ this defines a linear isomorphism on each
tangent space, and using that the principal right action is just
multiplication from the right in the second factor and that
$\zeta_A=(0,L_A)$ for each $A\in\frak p$, one immediately verifies that
this defines a Cartan connection.
We can also immediately compute the curvature $\kappa$ of this
connection. Since $\kappa$ is horizontal and $P$--equivariant, it
suffices to compute $\kappa(\hat L_X,\hat L_Y)(k,e)$ for all $X,Y\in\frak
k$ and $k\in K$. Now by definition,
$$
\kappa(\hat L_X,\hat L_Y)=d\omega(\hat L_X,\hat L_Y)+[\omega(\hat
L_X),\omega(\hat L_Y)]=-\omega([\hat L_X,\hat L_Y])+ [\omega(\hat L_X),\omega(\hat
L_Y)].
$$
Using $[L_X,L_Y]=L_{[X,Y]}$ we see that, along $K\times\{e\}$, the
function $\kappa(\hat L_X,\hat L_Y)$ is constant and equal to
$$
[\phi(X),\phi(Y)]-\phi([X,Y]).
$$
Hence the curvature exactly expresses the obstruction against $\phi$
being a homomorphism of Lie algebras.
It remains to express the fact that the Cartan connection $\omega$
induces the ``right'' underlying CR structure in terms of the linear
map $\phi$. Returning to the notation of \ref{2.2}, we denote elements
$X\in\frak k$ as pairs $(it,z)$ for $t\in\Bbb R$ and $z\in\Bbb C$.
Then $L_{(it,z)}$ lies in the contact subbundle $H$ if and only if
$t=0$, so to get the right contact subbundle, $\phi(it,z)$ must lie in
the subspace $\frak g^{-1}\subset\frak g$ if and only if $t=0$. Given
this, we get an induced linear map $\frak k\supset H_e\to\frak
g^{-1}$. Composing with the natural projection, we get a linear
isomorphism $\frak k\to\frak g^{-1}/\frak p\cong\Bbb C$. The condition
that we get the induced complex structure $J_\lambda$ exactly means that
via this isomorphism the (fixed) standard complex structure on $\frak
g^{-1}/\frak p$ induces the complex structure $J_\lambda(e)$ on $H_e$.
Having all this at hand, we can prove the main technical result of
this article:
\begin{thm*}
For fixed $\lambda>0$, the linear map $\phi_\lambda:\frak k\to\frak g$
defined by
$$
\phi_\lambda(it,u+iv):=\begin{pmatrix} \frac{1+\lambda^2}{4\lambda} it &
-\frac{5-3\lambda^2}{4\sqrt{\lambda}}u-\frac{3-5\lambda^2}{4\lambda\sqrt{\lambda}}iv &
\frac{-15+34\lambda^2-15\lambda^4}{16\lambda^2} it\\
\sqrt{\lambda} u+\frac{1}{\sqrt{\lambda}}iv & -\frac{1+\lambda^2}{2\lambda} it &
\frac{5-3\lambda^2}{4\sqrt{\lambda}}u-\frac{3-5\lambda^2}{4\lambda\sqrt{\lambda}}iv \\
it & -\sqrt{\lambda} u+\frac{1}{\sqrt{\lambda}}iv & \frac{1+\lambda^2}{4\lambda} it\end{pmatrix}
$$
induces a linear isomorphism $\frak k\to\frak g/\frak p$. It has
the property that $\phi_\lambda(H_e)\subset{\mathfrak g}^{-1}$, and via the
induced isomorphism $H_e\to{\mathfrak g}^{-1}/\frak p$ the induced complex
structure on $H_e$ is $J_\lambda(e)$. Finally, the map $\kappa_\lambda:\frak
k\times\frak k\to\frak g$ defined by
$$
\kappa_\lambda(X,Y):=[\phi_\lambda(X),\phi_\lambda(Y)]-\phi_\lambda([X,Y])
$$
has values in $\frak g^1$ and vanishes on $H_e\times H_e$.
Explicitly, $\kappa_\lambda$ satisfies
$$
\kappa_\lambda ((it,0),(0,u+iv))=\begin{pmatrix} 0 &
-\frac{3t(\lambda^4-1)}{2\lambda^2\sqrt{\lambda}}(v-i\lambda u) & 0 \\
0 & 0 & \frac{3t(\lambda^4-1)}{2\lambda^2\sqrt{\lambda}}(v+i\lambda u)\\ 0 & 0 &
0\end{pmatrix}
$$
and this completely determines $\kappa_\lambda$.
\end{thm*}
\begin{proof}
From the definition of $\phi_\lambda$ it is evident that it induces a
linear isomorphism $\frak k\to\frak g/\frak p$ and that it maps
elements of $H_e$, which are characterized by $t=0$, to $\frak
g^{-1}$. Then the isomorphism $H_e\to \frak g^{-1}/\frak p$ is given
by $u+iv\mapsto \sqrt{\lambda} u+i\frac{1}{\sqrt{\lambda}}v$, so the complex
structure on ${\mathfrak g}^{-1}/\frak p$ evidently induces $J_\lambda(e)$ on
$H_e$. It is then straightforward but tedious to check that
$\kappa_\lambda$ has values in $\frak g^1$ and vanishes on $H_e\times H_e$, as
well as to verify the explicit formula. That this expression determines
$\kappa_\lambda$ follows since by skew symmetry and vanishing of $\kappa_\lambda$
on $H_e\times H_e$ we obtain
$$
\kappa_\lambda((it,z),(it',z'))=\kappa_\lambda((it,0),(0,z'))-\kappa_\lambda((it',0),(0,z)).
$$
\end{proof}
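The ``straightforward but tedious'' part of the proof lends itself to a machine check.
The following symbolic sketch (our addition) builds $\phi_\lambda$ as transcribed from
the theorem, computes $\kappa_\lambda((it,0),(0,u+iv))$ as a matrix bracket and prints
the result for comparison with the displayed formula:
\begin{verbatim}
import sympy as sp

t, u, v = sp.symbols('t u v', real=True)
lam = sp.symbols('lambda', positive=True)
I = sp.I
r = sp.sqrt(lam)

def k_mat(tt, z):
    # the pair (i tt, z) in frak k as a 2x2 matrix
    return sp.Matrix([[I*tt, -sp.conjugate(z)], [z, -I*tt]])

def phi(tt, uu, vv):
    # phi_lambda(i tt, uu + i vv), transcribed from the theorem
    return sp.Matrix([
        [(1+lam**2)/(4*lam)*I*tt,
         -(5-3*lam**2)/(4*r)*uu - (3-5*lam**2)/(4*lam*r)*I*vv,
         (-15+34*lam**2-15*lam**4)/(16*lam**2)*I*tt],
        [r*uu + I*vv/r,
         -(1+lam**2)/(2*lam)*I*tt,
         (5-3*lam**2)/(4*r)*uu - (3-5*lam**2)/(4*lam*r)*I*vv],
        [I*tt, -r*uu + I*vv/r, (1+lam**2)/(4*lam)*I*tt]])

# bracket [(it,0),(0,u+iv)] in frak k, read off from the 2x2 commutator
C = k_mat(t, 0)*k_mat(0, u + I*v) - k_mat(0, u + I*v)*k_mat(t, 0)
bt, bz = sp.im(C[0, 0]), C[1, 0]

kappa = (phi(t, 0, 0)*phi(0, u, v) - phi(0, u, v)*phi(t, 0, 0)
         - phi(bt, sp.re(bz), sp.im(bz)))
sp.pprint(sp.simplify(kappa))    # compare with the displayed formula
\end{verbatim}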
\subsection{Digression: How to get the formula for
$\phi_\lambda$}\label{3.4}
The result of Theorem \ref{3.3} is all that is needed in the sequel.
Since the proof does not explain how the formula for $\phi_\lambda$ was
obtained (although really doing the computation gives some hints), we
will briefly discuss this. As a spin off, this will show that
$\phi_\lambda$ is essentially uniquely determined by the four properties
listed in Theorem \ref{3.3}. The main point is that there is some
evident non--uniqueness around, and dealing with this is the key step
to determine $\phi_\lambda$. Recall that for any element $g\in P$, the
adjoint action $\operatorname{Ad}(g):{\mathfrak g}\to{\mathfrak g}$ preserves the filtration. In
particular, it preserves $\frak g^1$ and $\frak g^{-1}$ as well as
$\frak p$ and therefore induces linear isomorphisms on $\frak
g/\frak p$ and $\frak g^{-1}/\frak p$. One immediately checks that the
second of these isomorphisms is complex linear. From these
observations it is evident that if $\phi:\frak k\to\frak g$ is a
linear map which satisfies the four properties of Theorem \ref{3.3}
and $g\in P$ is arbitrary, then also $\operatorname{Ad}(g)\circ\phi$ has these
properties.
To deal with this freedom, we need a bit more information on the group
$P$. Note first that there is a subgroup $G_0\subset P$ consisting of
all $g\in P$ for which $\operatorname{Ad}(g):{\mathfrak g}\to{\mathfrak g}$ even preserves the
grading. It is a general result (see \cite[Proposition
2.10]{Cap-Schichl}) that $G_0$ has Lie algebra $\frak g_0$ and any
$g\in P$ can be uniquely written in the form $g=g_0\exp(Z_1)\exp(Z_2)$
for $g_0\in G_0$ and $Z_i\in\frak g_i$. For our choice of $G$ and $P$,
one immediately verifies that the (complex) linear automorphism on
$\frak g^{-1}/\frak p$ induced by $\operatorname{Ad}(g)$ depends only on $g_0$ and
one obtains an isomorphism $G_0\to\Bbb C\setminus\{0\}$ in this way.
But now any linear isomorphism $H_e\to\frak g^{-1}/\frak p$,
which induces $J_\lambda(e)$ on $H_e$ can be written (identifying $
\frak g^{-1}/\frak p$ with the matrix component in the first column of
the second row) as the composition of a complex linear automorphism of
$\frak g^{-1}/\frak p$ with $u+iv\mapsto \sqrt{\lambda}
u+\frac{1}{\sqrt{\lambda}}iv$.
Hence if we want $\phi$ to induce a linear isomorphism $\frak k\to\frak
g/\frak p$, map $H_e\to\frak g^{-1}$, and induce $J_\lambda(e)$, then
we may assume that the lower two rows of the first column of
$\phi(it,u+iv)$ have the form $\begin{pmatrix} \sqrt{\lambda}
u+\frac{1}{\sqrt{\lambda}}iv+tz_0\\ ist\end{pmatrix}$ for some
$z_0\in\Bbb C$ and some $s\in\Bbb R\setminus\{0\}$. (Of course, this
also determines the second component in the last row.) Making this
ansatz also reduces the freedom to composition with
$\operatorname{Ad}(\exp(Z_1)\exp(Z_2))$. Having made this ansatz, one can already
compute the $\frak g_{-2}$--component of the restriction of $\kappa$ to
$H_e\times H_e$ and vanishing of this forces $s=1$.
Next we observe that taking the bracket with a nonzero element of $\frak
g_{-2}$ induces a linear isomorphism $\frak g_1\to{\mathfrak g}_{-1}$. Using
this, we see that, composing with $\operatorname{Ad}(\exp(Z_1))$ for an appropriate
choice of $Z_1$, we can arrange that $z_0=0$ in the above ansatz, and this
reduces the freedom to composition with $\operatorname{Ad}(\exp(Z_2))$. To get rid
of this freedom, we observe that bracketing with a nonzero element of
$\frak g_{-2}$ induces a linear isomorphism from $\frak g_2$ to the
(one--dimensional) space of real diagonal matrices contained in $\frak
g$. Hence we can eliminate all the freedom of composition with $\operatorname{Ad}(g)$
by the ansatz that the first column of $\phi(it,u+iv)$ has the form
$\begin{pmatrix} uz_0+vz_1+ist\\ \sqrt{\lambda} u+\frac{1}{\sqrt{\lambda}}iv\\
it\end{pmatrix}$ for elements $z_0,z_1\in\Bbb C$ and $s\in\Bbb R$.
Having made this ansatz, one can compute the complete $\frak g_{-2}$
component of $\kappa$ and the $\frak g_{-1}$ component of the restriction
to $H_e\times H_e$, and vanishing of these forces $z_0=z_1=0$.
Now one can, step by step, take ansatzes for the remaining components
of $\phi(it,u+iv)$ and determine components of $\kappa$. In the end, one
finds out that the conditions on $\kappa$ in Theorem \ref{3.3} are
sufficient to uniquely pin down the formula for $\phi_\lambda$.
\subsection{The canonical Cartan connections}\label{3.5}
It is now easy to show that the map $\phi_\lambda$ from Theorem \ref{3.3}
leads to the canonical Cartan connection for $(K,H,J_\lambda)$.
\begin{kor*}
(1) For some $\lambda>0$, consider the left invariant CR structure
$(H,J_\lambda)$ on $K=SU(2)$ from \ref{2.2}. Then the regular normal
parabolic geometry associated to this structure is $(K\times P\to
K,\omega_\lambda)$, where
$$
\omega_\lambda(L_X(k),L_A(g))=\operatorname{Ad}(g^{-1})(\phi_\lambda(X))+A
$$
with $\phi_\lambda:\frak k\to\frak g$ the map from Theorem \ref{3.3}.
\noindent
(2) The CR structure $(H,J_\lambda)$ is spherical if and only if
$\lambda=1$, i.e.~if and only if it equals the standard structure.
\end{kor*}
\begin{proof}
(1) From \ref{3.3} we know that the formula for $\omega_\lambda$ defines a
Cartan connection on the trivial bundle $K\times P$. The conditions on
$\phi_\lambda$ in Theorem \ref{3.3} which do not involve $\kappa_\lambda$
exactly say that this Cartan connection induces the CR structure
$(H,J_\lambda)$ on $K$. Hence to prove (1), it remains to show that
$\omega_\lambda$ is normal. The formula for $\kappa_\lambda$ in Theorem \ref{3.3}
gives us the restriction of the curvature of $\omega_\lambda$ to
$K\times\{e\}$. By equivariancy of the normalization condition it
suffices to show normality of this restriction in order to prove
that $\omega_\lambda$ is normal. Since $\kappa_\lambda$ has values in ${\mathfrak g}^1$ and
the restriction to $H_e\times H_e$ vanishes, it is homogeneous of degree
$\geq 4$, and the component of degree $4$ maps $(\frak k/H_e)\times H_e$
to $\frak g_1$. Identifying $\frak g_1$ with the component in the
second column of the first row of a matrix, this component is
complex linear in the second variable (with respect to $J_\lambda$). It
is well known (and easy to see) that the one dimensional space of
such maps exactly constitutes the harmonic part in degree $4$, so in
particular such maps lie in the kernel of the Kostant
codifferential. Since maps of homogeneity $\geq 5$ automatically
have that property, normality follows.
\noindent
(2) This is now evident since $\kappa_\lambda$ vanishes if and only if
$\lambda^4=1$.
\end{proof}
Notice that we can use the same construction replacing $G$ by the
three--fold covering $SU(2,1)$ and $P$ by the stabilizer of a line in
that group. Such an extension is necessary for example if one wants to
have a standard tractor bundle, compare with \cite{Gover-Graham}. In
the picture of CR geometry, such an extension is associated to the
choice of a third root of a certain complex line bundle. In our case,
this bundle is trivial, so this poses no problem.
\subsection{Tractors and tractor calculus}\label{3.6}
As an indication how the description of the canonical Cartan
connection in Corollary \ref{3.5} can be used further, we discuss
tractor bundles and compute normal tractor connections. We will work
here in the setting that $G=SU(2,1)$ and $P\subset G$ is the
stabilizer of a line. Recall that for a representation $V$ of the
group $G$, one obtains a tractor bundle by restricting the
representation to $P\subset G$ and forming the associated bundle to
the canonical Cartan bundle. While sections of these bundles are
unusual geometric objects, they have the advantage that they carry
canonical linear connections induced by the canonical Cartan
connection.
\begin{prop*}
For some $\lambda>0$ consider the left invariant CR structure
$(H,J_\lambda)$ on $K=SU(2)$ from \ref{2.2}, and let $V$ be a
representation of $G=SU(2,1)$. Then the associated tractor bundle
$\Cal T\to K$ is canonically trivial, so $\Gamma(\Cal T)\cong
C^\infty(K,V)$. In this identification, the tractor connection
$\nabla^{\Cal T}$ is determined by
$$
\nabla^{\Cal T}_{L_X}f=L_X\cdot f+\rho(\phi_\lambda(X))\circ f,
$$
for $f:K\to V$, where $L_X\in\frak X(K)$ denotes the left invariant
vector field generated by $X\in\frak k$ and $\rho:\frak g\to L(V,V)$
is the derivative of the representation of $G$ on $V$.
\end{prop*}
\begin{proof}
Since the canonical Cartan bundle is trivial, so is the associated
bundle $\Cal T$. Explicitly, the identification $\Gamma(\Cal T)\to
C^\infty(K,V)$ is given by restricting the $P$--equivariant function
$\Cal G=K\times P\to V$ corresponding to a section to the subset
$K\times\{e\}$. In terms of equivariant functions, the tractor connection
can be easily described explicitly, see \cite[section 3]{tractors}:
For the equivariant map $h:\Cal G\to V$ corresponding to $s\in\Gamma(\Cal
T)$, a vector field $\xi$ on $K$ and a lift $\tilde\xi\in\frak X(\Cal
G)$ of $\xi$, the covariant derivative $\nabla^{\Cal T}_\xi s$ is
represented by the function $\tilde\xi\cdot h+\rho(\omega(\tilde\xi))\circ h$.
Putting $\xi=L_X$, we can use $(L_X,0)$ for $\tilde\xi$. This has the
particular advantage that its flow leaves the subset $K\times\{e\}\subset
K\times P$ invariant. Therefore, putting $f:=h|_{K\times\{e\}}$ we see that
$((L_X,0)\cdot h)|_{K\times\{e\}}=L_X\cdot f$. For the second term,
restriction to $K\times\{e\}$ poses no problems anyhow, so the formula for
$\nabla^{\Cal T}$ follows.
\end{proof}
As a concrete example, let us describe how the three dimensional
family of infinitesimal automorphisms corresponding to the left
translations by elements of $K$ are represented within adjoint
tractors. This means that we consider the representation $V=\frak g$,
and the resulting tractor bundle is the adjoint tractor bundle $\Cal
A$. The canonical Cartan connection induces an isomorphism between
$\Gamma(\Cal A)$ and the space of right invariant vector fields on $\Cal
G$, see \cite{deformations}. Infinitesimal automorphisms of a Cartan
geometry are described by such vector fields, and they are
characterized by a simple differential equation, see \cite[Proposition
3.2]{deformations}. We will verify this equation for the three
dimensional family of infinitesimal automorphisms corresponding to
left translations on $K$.
The construction of the canonical Cartan connection on $\Cal G=K\times P$
for the left invariant CR structure $(H,J_\lambda)$ on $K$ shows that for
each $k'\in K$ the map $(k,g)\mapsto (k'k,g)$ is the lift of the left
translation by $k'$ to an automorphism of the parabolic geometry
$(\Cal G,\omega_{\lambda})$. The infinitesimal generators of this three
parameter group of automorphisms are of course the vector fields
$(R_X,0)$ for $X\in\frak k$, where $R_X$ denotes the right invariant
vector field. Let $s_X\in\Gamma(\Cal A)$ be the corresponding section,
i.e.~the smooth equivariant function corresponding to $s_X$ is
$\omega_\lambda((R_X,0))$. Since $R_X(k)=L_{\operatorname{Ad}(k^{-1})X}(k)$ we see that the
smooth function $f_X:K\to\frak g$ corresponding to $s_X$ is given by
$f_X(k)=\phi_\lambda(\operatorname{Ad}(k^{-1})X)$. From the proposition above, we
conclude that $\nabla^{\Cal A}_{L_Y}s_X$ corresponds to the function
$$
L_Y\cdot\phi_\lambda(\operatorname{Ad}(k^{-1})X)+[\phi_\lambda(Y),\phi_\lambda(\operatorname{Ad}(k^{-1})X)].
$$
Now the first term can be computed as
$$
\tfrac{d}{dt}|_{t=0}\phi_\lambda(\operatorname{Ad}(\exp(-tY))\operatorname{Ad}(k^{-1})X)=-\phi_\lambda([Y,\operatorname{Ad}(k^{-1})X]).
$$
Hence we see that $\nabla^{\Cal A}_{L_Y}s_X$ corresponds to the
function $\kappa_\lambda(Y,\operatorname{Ad}(k^{-1})(X))$ which represents the curvature of
$\omega_\lambda$ evaluated on the vector fields $L_Y$ and $R_X$. This is
exactly the infinitesimal automorphism equation from \cite[Proposition
3.2]{deformations}.
\section{Introduction and Overview}
While analysing the compatibility problem of coherent sets of gambles, Miranda and Zaffalon \cite{mirzaffalon20} have recently remarked that their main results could also be obtained using the theory of information algebras \cite{kohlas03}.
This observation has been taken up and deepened in some of our recent work \cite{kohlas21,kohlas21b}: we have shown that the founding properties of desirability can in fact be abstracted into properties of information algebras. Stated differently, desirability makes up an information algebra of coherent sets of gambles.
Information algebras are algebraic structures composed by `pieces of information' that can be manipulated by operations of \emph{combination}, to aggregate them, and \emph{extraction}, to extract information regarding a specific question.
From the point of view of information algebras, sets of gambles defined on a possibility space $\varOmega$ are pieces of information about $\varOmega$.
It is well known that coherent sets of gambles are ordered by inclusion and, in this order, there are maximal elements~\cite{CooQua12}. In the language of information algebras such elements are called \emph{atoms}. In particular, any coherent set of gambles is contained in a maximal set (an atom) and it is the intersection (meet) of all the atoms it is contained in. An information algebra with these properties is called atomistic.
Atomistic information algebras have the universal property of being embedded in a set algebra, which is an information algebra whose elements are sets. This is an important representation theorem for information algebras, since set algebras are a special kind of algebras based on the usual set operations. Conversely, any such set algebra of subsets of $\varOmega$ is embedded in the algebra of coherent sets of gambles defined on $\varOmega$.
These links between set algebras and the algebra of coherent sets of gambles are the main topic of the present work.
After recalling the main concepts introduced in our previous work in Sections~\ref{sec:DesGambles}--\ref{sec:InfAlgs}, in Section~\ref{sec:Atoms} we establish the basis to show that the sets of atoms of the information algebra of coherent sets of gambles indeed form a set algebra. In Section \ref{sec:homomorphism} we define the concept of \emph{embedding} for information algebras and finally, in Section \ref{sec:SetAlg}, we show the links between set algebras (of subsets of $\varOmega$ and of sets of atoms) and the algebra of coherent sets of gambles.
Since set algebras are algebraic counterparts of classical propositional logic, the results of this paper detail how the latter is formally part of the theory of imprecise probabilities \cite{walley91}. We refer also to \cite{decooman05} for another aspect of this issue.
\section{Desirability} \label{sec:DesGambles}
Consider a set $\varOmega$ of possible worlds. A gamble over this set is a bounded function
$f : \varOmega \rightarrow \mathbb{R}$.
It is interpreted as an uncertain reward in a linear utility scale. A subject might desire a gamble or not, depending on the information they have about the experiment whose possible outcomes are the elements of $\varOmega$.
We denote the set of all gambles on $\varOmega$ by $\mathcal{L}(\varOmega)$, or more simply by $\mathcal{L}$, when there is no possible ambiguity. We also introduce $\mathcal{L}^+(\varOmega) \coloneqq \{ f \in \mathcal{L}(\varOmega): \; f\geq 0, f \not= 0\}$, or simply $\mathcal{L}^+$ when no ambiguity is possible, the set of non-negative non-vanishing gambles. These gambles should always be desired, since they may increase the wealth with no risk of decreasing it.
As a consequence of the linearity of our utility scale, we also assume that a subject disposed to accept the transactions represented by $f$ and $g$ is disposed to accept also $\lambda f + \mu g$ with $\lambda, \mu \ge 0$ not both equal to $0$.
More generally speaking, we consider the notion of a coherent set of gambles \cite{walley91}:
\begin{definition}[\textbf{Coherent set of gambles}]
We say that a subset $\mathcal{D}$ of $\mathcal{L}$ is a \emph{coherent} set of gambles if and only if $\mathcal{D}$ satisfies the following properties:
\begin{enumerate}[label=\upshape D\arabic*.,ref=\upshape D\arabic*]
\item\label{D1} $\mathcal{L}^+ \subseteq \mathcal{D}$ [Accepting Partial Gains];
\item\label{D2} $0\notin \mathcal{D}$ [Avoiding Status Quo];
\item\label{D3} $f,g \in \mathcal{D} \Rightarrow f+g \in \mathcal{D}$ [Additivity];
\item\label{D4} $f \in \mathcal{D}, \lambda>0 \Rightarrow \lambda f \in \mathcal{D}$ [Positive Homogeneity].
\end{enumerate}
\end{definition}
So, $\mathcal{D}$ is a convex cone. Let us denote with $C(\varOmega)$, or simply with $C$, the family of coherent sets of gambles on $\varOmega$.
This leads to the concept of natural extension.
\begin{definition}[\bf{Natural extension for gambles}] \label{def:natex}Given a set $\mathcal{K}\subseteq\mathcal{L}$, we call $
\mathcal{E}(\mathcal{K}) \coloneqq\posi(\mathcal{K}\cup\mathcal{L}^+)$,
where $\posi(\mathcal{K}')\coloneqq\left\{ \sum_{j=1}^{r} \lambda_{j}f_{j}: f_{j} \in \mathcal{K}', \lambda_{j} > 0, r \ge 1\right\}$,
for every set $\mathcal{K}' \subseteq \mathcal{L}$, its \emph{natural extension}.
\end{definition}
The natural extension $\mathcal{E}(\mathcal{D})$ of a set of gambles $\mathcal{D}$ is coherent if and only if $0 \not\in \mathcal{E}(\mathcal{D})$.
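For a finite possibility space and a finitely generated assessment, this criterion becomes a linear-programming feasibility check: $0 \in \mathcal{E}(\mathcal{K})$ precisely when some nonzero nonnegative combination of the gambles in $\mathcal{K}$ is everywhere $\leq 0$. The following sketch (our addition; the theory above is of course not restricted to this finite setting) illustrates the test:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def extension_is_coherent(K):
    # K: list of gambles, each a numpy array over a finite Omega
    F = np.array(K)                          # rows: gambles
    n, m = F.shape
    # look for lambda >= 0 with sum(lambda) = 1 and F^T lambda <= 0
    res = linprog(c=np.zeros(n), A_ub=F.T, b_ub=np.zeros(m),
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0, None)] * n)
    return not res.success                   # feasible -> 0 in E(K)

f = np.array([-1.0, 2.0])
print(extension_is_coherent([f]))       # True
print(extension_is_coherent([f, -f]))   # False: 0 = f + (-f)
\end{verbatim}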
In \cite{kohlas21b} we showed that $\Phi(\varOmega) \coloneqq C(\varOmega) \cup \{\mathcal{L}(\varOmega)\}$, or simply $\Phi$ when there is no possible ambiguity, is a complete lattice under inclusion \cite{daveypriestley97}, meet is intersection and join is defined for any family of sets $\mathcal{D}_i \in \Phi$ as
\begin{equation*}
\bigvee_{i \in I} \mathcal{D}_i \coloneqq \bigcap \left\{\mathcal{D} \in \Phi: \bigcup_{i \in I} \mathcal{D}_i \subseteq \mathcal{D}\right\}.
\end{equation*}
Note that, if the family of coherent sets $\mathcal{D}_i$ has no upper bound in $C$, then its join is simply $\mathcal{L}$. Moreover, we defined the following closure operator \cite{daveypriestley97} on subsets of gambles.
\begin{equation}\label{eq:closureoperatorC}
\mathcal{C}(\mathcal{D}') \coloneqq \bigcap \{\mathcal{D} \in \Phi: \mathcal{D}' \subseteq \mathcal{D}\}.
\end{equation}
Notice that $\mathcal{C}(\mathcal{D}) = \mathcal{E}(\mathcal{D})$ if $0 \not\in \mathcal{E}(\mathcal{D})$, that is, if $\mathcal{E}(\mathcal{D})$ is coherent. Otherwise $\mathcal{C}(\mathcal{D}) = \mathcal{L}(\varOmega)$, while we may have $\mathcal{E}(\mathcal{D}) \not= \mathcal{L}(\varOmega)$.
The most informative cases of coherent sets of gambles, i.e., coherent sets that are not proper subsets of other coherent sets, are called \textit{maximal}. The following proposition provides a characterisation of such maximal elements~\cite[Proposition~2]{CooQua12}.
\begin{proposition}[\textbf{Maximal coherent set of gambles}]
A coherent set of gambles $\mathcal{D}$ is \emph{maximal} if and only if
\begin{equation*}
(\forall f \in \mathcal{L} \setminus \{0\})\ f \notin \mathcal{D} \Rightarrow -f \in \mathcal{D}.
\end{equation*}
\end{proposition}
We shall denote maximal sets with $M$ to differentiate them from the general case of coherent sets.
These sets play an important role with respect to information algebras (see Section \ref{sec:Atoms}).
Another important class is that of \emph{strictly desirable} sets of gambles \cite{walley91}.\footnote{Strictly desirable sets of gambles are important because they are in a one-to-one relation with \emph{coherent lower previsions}; these are a generalization of the usual expectation operator on gambles. Given a coherent lower prevision ${\underline{\pr}}(\cdot)$, $D^+ \coloneqq \{ f \in \mathcal{L}: {\underline{\pr}}(f) >0 \} \cup \mathcal{L}^+$ is a strictly desirable set of gambles~\cite[Section 3.8.1]{walley91}.}
\begin{definition} [\textbf{Strictly desirable set of gambles}]
A coherent set of gambles $D$ is said to be \emph{strictly desirable} if and only if it satisfies
$$(\forall f \in D \setminus \mathcal{L}^+)(\exists \delta >0)\ f- \delta \in D.$$
\end{definition}
For strictly desirable sets, we shall employ the notation $D^+$.
\section{Stucture of Questions and Possibilities} \label{sec:Questions}
In this section we review the main results about the structure of $\varOmega$ \cite{kohlas17,kohlasmonney95,kohlas21b}.
With reference to our previous work \cite{kohlas21b}, we recall that coherent sets of gambles are understood as pieces of information describing beliefs about the elements in $\varOmega$. Beliefs may be originally expressed relative to different questions or variables that we identify by families of equivalence relations $\equiv_x$ on $\varOmega$ for $x$ in some index set $Q$.
A question $x \in Q$ has the same answer in possible worlds $\omega \in \varOmega$ and $\omega' \in \varOmega$, if $\omega \equiv_x \omega'$.
There is a partial order between questions capturing granularity: question $y$ is finer than question $x$ if $\omega \equiv_y \omega'$ implies $\omega \equiv_x \omega'$. This can also be expressed by considering partitions $\partit_x, \; \partit_y$ of $\varOmega$ whose blocks are, respectively, the equivalence classes $[\omega]_x, \; [\omega]_y$ of the equivalence relations $\equiv_x, \; \equiv_y$, representing possible answers to $x$ and $y$.
Then $\omega \equiv_y \omega'$ implies $\omega \equiv_x \omega'$, meaning that any block $[\omega]_y$ of partition $\partit_y$ is contained in some block $[\omega]_x$ of partition $\partit_x$.
If this is the case, we say equivalently that $x \le y$ or $\partit_x \leq \partit_y$.\footnote{In the literature usually the inverse order between partitions is considered. However, this order better corresponds to our natural order of questions by granularity.}
The partitions $Part(\varOmega)$ of any set $\varOmega$ form a lattice under this order \cite{graetzer03}. In particular, the join $\sup\{\partit_x,\partit_y\} \coloneqq \partit_x \vee \partit_y$ of two partitions $\partit_x, \partit_y$ is, in this order, the partition whose blocks are the non-empty intersections of blocks of $\partit_x$ with blocks of $\partit_y$. It can equivalently be written as $\partit_{x \vee y}$. The definition of the meet $\partit_x \wedge \partit_y$, or equivalently $\partit_{x \wedge y}$, is somewhat more involved \cite{graetzer03}.
We usually assume that the set of questions $Q$ under consideration, together with the associated partitions $\mathcal{Q} \coloneqq \{\partit_x:x \in Q\}$, forms a join-sub-semilattice of $(Part(\varOmega),\leq)$ \cite{daveypriestley97}. In particular, we often assume that the top partition of $Part(\varOmega)$, i.e. $\partit_\top$ (whose blocks are the singleton sets $\{\omega\}$ for $\omega \in \varOmega$), belongs to $\mathcal{Q}$.
A gamble $f$ on $\varOmega$ is called \emph{$x$-measurable}, iff for all $\omega \equiv_x \omega'$ we have $f(\omega) = f(\omega')$, that is, if $f$ is constant on every block of $\partit_x$. It could then also be considered as a function (a gamble) on the set of blocks of $\partit_x$. We denote with $\mathcal{L}_x(\varOmega)$, or more simply with $\mathcal{L}_x$ when no ambiguity is possible, the set of all $x$-measurable gambles.
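As a small concrete illustration (our addition), for a finite $\varOmega$, questions, blocks and measurability are easy to manipulate directly; the join of two partitions is their common refinement:
\begin{verbatim}
from itertools import product

Omega = {1, 2, 3, 4, 5, 6}            # e.g. one die roll
P_parity = [{1, 3, 5}, {2, 4, 6}]     # question x: parity?
P_low = [{1, 2, 3}, {4, 5, 6}]        # question y: low or high?

def join(P, Q):
    # P v Q: the non-empty intersections of blocks of P and Q
    return [b & c for b, c in product(P, Q) if b & c]

def is_measurable(f, P):
    # f: dict Omega -> R; measurable iff constant on every block
    return all(len({f[w] for w in block}) == 1 for block in P)

print(join(P_parity, P_low))          # {1,3}, {5}, {2}, {4,6}
f = {w: w % 2 for w in Omega}
print(is_measurable(f, P_parity))     # True
print(is_measurable(f, P_low))        # False
\end{verbatim}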
We also recall the logical independence and conditional logical independence relations between partitions \cite{kohlas17,kohlasmonney95}.
\begin{definition}[Independent Partitions]
For a finite set of partitions $\partit_1,\ldots,\partit_n \in Part(\varOmega)$, $n \geq 2$, let us define
\begin{equation*}
R(\partit_1,\ldots,\partit_n) \coloneqq \{(B_1,\ldots,B_n):B_i \in \partit_i,\cap_{i=1}^n B_i \not= \emptyset\}.
\end{equation*}
We call the partitions \emph{independent}, if
$R(\partit_1,\ldots,\partit_n) = \partit_1 \times \cdots \times \partit_n$.
\end{definition}
\begin{definition}[Conditionally Independent Partitions]
Consider a finite set of partitions $\partit_1,\ldots,\partit_n \in Part(\varOmega)$ and a block $B$ of a partition $\partit$ (contained or not in the list $\partit_1,\ldots,\partit_n$); then define, for $n \geq 1$,
\begin{equation*}
R_B(\partit_1,\ldots,\partit_n) \coloneqq \{(B_1,\ldots,B_n):B_i \in \partit_i,\cap_{i=1}^n B_i \cap B \not= \emptyset\}.
\end{equation*}
We call $\partit_1,\ldots,\partit_n$ \emph{conditionally independent} given $\partit$, iff for all blocks $B$ of $\partit$,
$R_B(\partit_1,\ldots,\partit_n) = R_B(\partit_1) \times \cdots \times R_B(\partit_n)$.
\end{definition}
This relation holds if and only if the condition $B_i \cap B \not= \emptyset$ for all $i=1,\ldots,n$ implies that $B_1 \cap \ldots \cap B_n \cap B \not= \emptyset$. In this case we write $\bot\{\partit_1,\ldots,\partit_n\} \vert \partit$ or, for $n =2$, $\partit_1 \bot \partit_2 \vert \partit$. $\partit_x \bot \partit_y \vert \partit_z$ can also be written as $x \bot y \vert z$. Equivalently, $\partit_1 \bot \partit_2 \vert \partit$ if and only if $\omega \equiv_{\partit} \omega'$ implies the existence of an element $\omega'' \in \varOmega$ such that $\omega \equiv_{\partit_1 \vee \partit} \omega''$ and $\omega' \equiv_{\partit_2 \vee \partit} \omega''$.
The three-place relation $\partit_1 \bot \partit_2 \vert \partit$ is, in particular, a \emph{quasi-separoid} \cite{kohlas17}, a retract of the concept of separoid \cite{daveypriestley97}.
\begin{theorem} \label{th:QSepOfPart}
Given $\partit,\partit',\partit_1,\partit_2 \in Part(\varOmega)$, we have:
\
\begin{description}
\item[C1] $\partit_1 \bot \partit_2 \vert \partit_2$;
\item[C2] $\partit_1 \bot \partit_2 \vert \partit$ implies $\partit_2 \bot \partit_1 \vert \partit$;
\item[C3] $\partit_1 \bot \partit_2 \vert \partit$ and $\partit' \leq \partit_2$ imply $\partit_1 \bot \partit' \vert \partit$;
\item[C4] $\partit_1 \bot \partit_2 \vert \partit$ implies $\partit_1 \bot \partit_2 \vee \partit \vert \partit$.
\end{description}
\end{theorem}
From these properties, it follows that $\partit_x \perp \partit_y \vert \partit_z \iff \partit_{x \vee z} \perp \partit_{y \vee z} \vert \partit_z$,
which we use very often later on.
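The block-wise criterion above is also directly computable. The following sketch (our addition) tests conditional logical independence for small partitions; the example takes two binary variables, whose coordinate partitions are logically independent, and also illustrates property C1:
\begin{verbatim}
from itertools import product

def cond_indep(P1, P2, P):
    # are P1 and P2 conditionally independent given P?
    for B in P:
        hit1 = [b for b in P1 if b & B]
        hit2 = [c for c in P2 if c & B]
        if any(not (b & c & B) for b, c in product(hit1, hit2)):
            return False
    return True

Omega = set(product([0, 1], repeat=2))   # two binary variables
P1 = [{w for w in Omega if w[0] == k} for k in (0, 1)]
P2 = [{w for w in Omega if w[1] == k} for k in (0, 1)]
bottom = [set(Omega)]                    # the trivial question

print(cond_indep(P1, P2, bottom))        # True
print(cond_indep(P1, P1, bottom))        # False
print(cond_indep(P1, P1, P1))            # True (property C1)
\end{verbatim}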
\begin{comment}
Join of two partitions $\partit_x$ and $\partit_y$ is simple to define as we have seen above. For meet, $\partit_x \wedge \partit_y$ the situation is different, indeed its definition is somewhat involved \cite{graetzer03}.
Meet is simple when partitions \emph{commute} \cite{kohlas21b}.
There is however an important particular case, where meet is also simple.
\begin{definition}[\textbf{$\star$ product}]
Given two partitions $\partit_x, \partit_y \in Part(\varOmega)$, we define the $\star$ product of the correspondent equivalence relations $\equiv_x, \; \equiv_y$ respectively, as:
\begin{equation*}
\equiv_x \star \equiv_y\ \coloneqq \{(\omega,\omega'):\exists \omega'' \textrm{ so that}\ \omega \equiv_x \omega'' \equiv_y \omega'\}.
\end{equation*}
\end{definition}
The following lemma gives a necessary and sufficient condition for it to define an equivalence relation.
\begin{lemma}\label{le:CommRel}
Given two partitions $\partit_x,\partit_y \in Part(\varOmega)$, the $\star$ product of the correspondent equivalence relations $\equiv_x, \; \equiv_y$ respectively, is an equivalence relation, if and only if:
\begin{equation*}
\equiv_x \star \equiv_y =\equiv_y \star \equiv_x.
\end{equation*}
\end{lemma}
If $\equiv_x$ and $\equiv_y$ commute, then the partition associated with their $\star$-product is the meet of the associated partitions $\partit_x$ and $\partit_y$ respectively, so that we may write $\equiv_x \star \equiv_y = \equiv_{x \wedge y}$. The partitions are then called commuting and since their meet is defined by $\equiv_x \star \equiv_y$, they are also called Type I partitions \cite{graetzer03}.
\begin{definition}[\textbf{Type I partitions/ Commuting partitions}]
Two partitions $\partit_x, \partit_y \in Part(\varOmega)$ are called \emph{Type I} or \emph{commuting partitions} if the product $\equiv_x \star \equiv_y$,
is an equivalence relation.
\end{definition}
As a consequence, for commuting partitions $\partit_x$ and $\partit_y$ we have also $\mathcal{L}_x \cap \mathcal{L}_y = \mathcal{L}_{x \wedge y}$ and vice versa, as stated already above.
For commuting partitions, the conditional independence relation can also be expressed simply in terms of joins and meets.
\begin{theorem} \label{th:CommCondIndep}
Given $\partit_1,\partit_2, \partit \in Part(\varOmega)$, we have
\begin{equation*}
\partit_1 \bot \partit_2 \vert \partit \Leftrightarrow (\partit_1 \vee \partit) \wedge (\partit_2 \vee \partit)=\partit
\end{equation*}
if and only if $\partit_1$ and $\partit_2$ commute.
\end{theorem}
An important instance of such commutative partitions is given in multivariate possibility sets. Let $X_i$ be a variable for $i$ in some index set $I$ (usually a finite or countable set) and $\varOmega_i$ the set of its possible values. If then
\begin{equation*}
\varOmega = \bigtimes_{i \in I} \varOmega_i
\end{equation*}
is the set of possibilities, we may think of its elements $\omega$ as maps $\omega : I \rightarrow \varOmega$ such that $\omega(i) \in \varOmega_i$. If $S$ is any subset of variables, $S \subseteq I$, then let
\begin{equation*}
\varOmega_S = \bigtimes_{i \in S} \varOmega_i.
\end{equation*}
Further let $\omega \equiv_S \omega'$ if $\omega$ and $\omega'$ coincide on $S$. This is an equivalence relation in $\varOmega$ and it determines a partition $\partit_S$ of $\varOmega$.
These partitions commute pairwise. Taking the subsets $S$ of $I$ as index set, according to Theorem \ref{th:CommCondIndep}, we have that $S \bot T \vert R$ (meaning $\partit_S \bot \partit_T \vert \partit_R$) if and only if $(S \cup R) \cap (T \cup R) = R$. Here, the underlying lattice of subsets of $I$ or the corresponding sub-lattice of partitions is distributive. Then some properties in addition to C1 to C4 hold, making it a \emph{strong separoid} \cite{dawid01}.
\end{comment}
\section{Information Algebra of Coherent Sets of Gambles} \label{sec:InfAlgs}
In \cite{kohlas21b} we showed that $\Phi$ with the following operations:
\begin{enumerate}
\item Combination: $\mathcal{D}_1 \cdot \mathcal{D}_2 \coloneqq \mathcal{C}(\mathcal{D}_1 \cup \mathcal{D}_2)$,
\item Extraction: $\epsilon_x(\mathcal{D}) \coloneqq \mathcal{C}(\mathcal{D} \cap \mathcal{L}_x)$ for $x \in Q$,
\end{enumerate}
is a \emph{domain-free information algebra} that we call the \emph{domain-free information algebra of coherent sets of gambles}.
Combination captures aggregation of pieces of belief, and extraction describes filtering the part of information relative to a question $x \in Q$.
Information algebras are particular \emph{valuation algebras} as defined by \cite{shafershenoy90} but with idempotent combination. Domain-free versions of valuation algebras have been proposed by Shafer \cite{shafer91}. Idempotency of combination has important consequences, such as the possibility to define an information order, atoms, approximation, and more \cite{kohlas03,kohlas17}. It also offers---the subject of the present paper---important connections to set algebras.
Here we recall the characterizing properties of the domain-free information algebra $\Phi$ together with a system of questions $Q$ and a family $E$ of extraction operators $\epsilon_x : \Phi \rightarrow \Phi$ for $x \in Q$:
\begin{enumerate}
\item \textit{Semigroup:} $(\Phi,\cdot)$ is a commutative semigroup with a null element $0 = \mathcal{L}(\varOmega)$ and a unit $1 = \mathcal{L}^+(\varOmega)$.
\item \textit{Quasi-Separoid:} $(Q,\leq)$ is a join semilattice and the relation $x \bot y \vert z$, with $x,y,z \in Q$, is a quasi-separoid.
\item \textit{Existential Quantifier:} For any $x \in Q$, $\mathcal{D}_1,\mathcal{D}_2,\mathcal{D} \in \Phi$:
\begin{enumerate}
\item $\epsilon_x(0) = 0$,
\item $\epsilon_x(\mathcal{D}) \cdot \mathcal{D} = \mathcal{D}$,
\item $\epsilon_x(\epsilon_x(\mathcal{D}_1) \cdot \mathcal{D}_2) = \epsilon_x(\mathcal{D}_1) \cdot \epsilon_x(\mathcal{D}_2)$.
\end{enumerate}
\item \textit{Extraction:} For any $x,y,z \in Q$, $\mathcal{D} \in \Phi$, such that $x \vee z \bot y \vee z \vert z$ and $\epsilon_x(\mathcal{D}) = \mathcal{D}$, we have:
\begin{equation*}
\epsilon_{y \vee z}(\mathcal{D}) = \epsilon_{y \vee z}(\epsilon_z(\mathcal{D})).
\end{equation*}
\item \textit{Support:} For any $\mathcal{D} \in \Phi$ there is an $x \in Q$ so that $\epsilon_x(\mathcal{D}) = \mathcal{D}$, i.e. a \emph{support} of $\mathcal{D}$ \cite{kohlas21b}, and for all $y \geq x, \; y \in Q$, $\epsilon_y(\mathcal{D}) = \mathcal{D}$.
\end{enumerate}
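To illustrate how these axioms interlock, we sketch two immediate consequences. Taking $\mathcal{D} = 1$ in item 3(b) gives $\epsilon_x(1) \cdot 1 = 1$ and hence, since $1$ is the unit, $\epsilon_x(1) = 1$. Combining this with item 3(c), applied with $\mathcal{D}_2 = 1$, shows that every extraction operator is idempotent:
\begin{equation*}
\epsilon_x(\epsilon_x(\mathcal{D})) = \epsilon_x(\epsilon_x(\mathcal{D}) \cdot 1) = \epsilon_x(\mathcal{D}) \cdot \epsilon_x(1) = \epsilon_x(\mathcal{D}).
\end{equation*}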
When we need to specify all the constructing elements of the domain-free information algebra $\Phi$, we can refer to it with the tuple $(\Phi, \mathcal{Q}, \le, \bot, \cdot, E)$, where $E$ is the family of the extraction operators constructed starting from $x \in Q$ or, equivalently, from partitions in $\mathcal{Q}$.\footnote{When we need to be explicit about partitions, we can indicate the extraction operator $\epsilon_x$ also as $\epsilon_{\mathcal{P}_x}$, where $\mathcal{P}_x \in \mathcal{Q}$ is the partition associated to the question $x \in Q$.} When we do not need this degree of accuracy, we can refer to it simply as $\Phi$. Analogous considerations can be made for other domain-free information algebras.
Notice that, in particular, $(\Phi, \cdot)$ is an idempotent, commutative semigroup: indeed, $\mathcal{D} \cdot \mathcal{D} = \mathcal{C}(\mathcal{D} \cup \mathcal{D}) = \mathcal{C}(\mathcal{D}) = \mathcal{D}$, since every element of $\Phi$ is $\mathcal{C}$-closed. So, a partial order is defined by $\mathcal{D}_1 \leq \mathcal{D}_2$ if $\mathcal{D}_1 \cdot \mathcal{D}_2 = \mathcal{D}_2$. Then $\mathcal{D}_1 \leq \mathcal{D}_2$ if and only if $\mathcal{D}_1 \subseteq \mathcal{D}_2$. This order is called an \textit{information order} \cite{kohlas21b}.
This definition entails the following facts: $\epsilon_x(\mathcal{D}) \leq \mathcal{D}$ for every $\mathcal{D} \in \Phi, \; x \in Q$; given $\mathcal{D}_1, \mathcal{D}_2 \in \Phi$, if $\mathcal{D}_1 \leq \mathcal{D}_2$, then $\epsilon_x(\mathcal{D}_1) \leq \epsilon_x(\mathcal{D}_2)$ for every $x \in Q$ \cite{kohlas03}.
\begin{comment}
For further reference, we also recall the following two results.
\begin{lemma} \label{le:SuppotProp}
Given $x,y \in Q$ and $\mathcal{D}, \mathcal{D}_1,\mathcal{D}_2 \in \Phi$, we have:
\begin{enumerate}
\item $\epsilon_x(1) = 1$,
\item $\epsilon_x(\mathcal{D}) = 0$ if and only if $\mathcal{D} = 0$,
\item $x$ is a support of $\epsilon_x(\mathcal{D})$,
\item if $x \leq y$, then $\epsilon_x(\mathcal{D}) \leq \epsilon_y(\mathcal{D})$,
\item if $x \leq y$, then $\epsilon_y(\epsilon_x(\mathcal{D})) = \epsilon_x(\mathcal{D})$,
\item if $x \leq y$, then $\epsilon_x(\epsilon_y(\mathcal{D})) = \epsilon_x(\mathcal{D})$,
\item if $x$ is a support of both $\mathcal{D}_1$ and $\mathcal{D}_2$, then it is a support for $\mathcal{D}_1 \cdot \mathcal{D}_2$,
\item if $x$ is a support of $\mathcal{D}_1$ and $y$ a support of $\mathcal{D}_2$, then $x \vee y$ is a support for $\mathcal{D}_1 \cdot \mathcal{D}_2$ and $\mathcal{D}_1 \cdot \mathcal{D}_2 = \epsilon_{x \vee y}(\mathcal{D}_1) \cdot \epsilon_{x \vee y}(\mathcal{D}_2)$.
\end{enumerate}
\end{lemma}
\begin{theorem} \label{th:GenCondIndep}
Let $\mathcal{D}_1,\mathcal{D}_2$ and $\mathcal{D}$ be elements of $\Phi$ and $x,y,z \in Q$, such that $x \bot y \vert z$. Then
\begin{enumerate}
\item if $x$ is a support of $\mathcal{D}$,
\begin{equation*}
\epsilon_y(\mathcal{D}) = \epsilon_y(\epsilon_z(\mathcal{D})).
\end{equation*}
\item If $x$ is a support of $\mathcal{D}_1$ and $y$ of $\mathcal{D}_2$,
\begin{equation*}
\epsilon_z(\mathcal{D}_1 \cdot \mathcal{D}_2) = \epsilon_z(\mathcal{D}_1) \cdot \epsilon_z(\mathcal{D}_2).
\end{equation*}
\end{enumerate}
\end{theorem}
\end{comment}
\begin{comment}
\begin{proof}
If all $\mathcal{D}_j = 0$, then (\ref{eq:ExtrCommMeet}) holds trivially. Otherwise, eliminate all $\mathcal{D}_j = 0$ from the family, so that we may assume that all elements $\mathcal{D}_j$ are coherent sets of gambles. We have
\begin{align*}
\epsilon_x(\bigcap_{j \in J} \mathcal{D}_j) &= \mathcal{C}((\bigcap_{j \in J} \mathcal{D}_j) \cap \mathcal{L}_x), \\
\bigcap_{j \in J}(\epsilon_x(\mathcal{D}_j)) &= \bigcap_{j \in J} \mathcal{C}(\mathcal{D}_j \cap \mathcal{L}_x).
\end{align*}
Consider first a gamble $f$ in $\epsilon_x(\bigcap_{j \in J} \mathcal{D}_j)$, so that $f = \lambda g + \mu h$, where $\lambda,\mu$ are nonnegative and not both equal to zero, and $g \in (\bigcap_{j \in J} \mathcal{D}_j) \cap \mathcal{L}_x$, $h \in \mathcal{L}^+(\varOmega)$. Then $g \in \mathcal{D}_j \cap \mathcal{L}_x$ for all $j$, so that $f \in \bigcap_{j \in J} \epsilon_x(\mathcal{D}_j)$. Conversely, consider a gamble $f \in \bigcap_{j \in J} \epsilon_x(\mathcal{D}_j)$. Then, if $f \in \mathcal{L}^+(\varOmega)$, we have $f \in \epsilon_x(\bigcap_{j \in J} \mathcal{D}_j)$. Otherwise, $f = g_j + \mu_j h_j$, where $g_j \in \mathcal{D}_j \cap \mathcal{L}_x$, $h_j \in \mathcal{L}^+(\varOmega)$ and $\mu_j \geq 0$ for all $j$. We claim that for all $i$ and $j$ we must have $g_i \in \mathcal{D}_j \cap \mathcal{L}_x$. This follows, since $f - \mu_i h_i = g_i$, and if $g_i \not\in \mathcal{D}_j \cap \mathcal{L}_x$, we cannot have $f \in posi(\mathcal{L}^+(\varOmega) \cup (\mathcal{D}_j \cap \mathcal{L}_x)) = \mathcal{C}(\mathcal{D}_j \cap \mathcal{L}_x)$. So $f = g + \mu h$, where $g \in \mathcal{D}_j \cap \mathcal{L}_x$ and $h \in \mathcal{L}^+(\varOmega)$ for all $j$, thus $f \in \epsilon_x(\bigcap_{j \in J} \mathcal{D}_j)$, and this concludes the proof.
\end{proof}
\end{comment}
\section{Atoms and Maximal Coherent Sets of Gambles} \label{sec:Atoms}
Maximal coherent sets $M$ are \emph{atoms} in the information algebra of coherent sets of gambles \cite{kohlas21b}. This is a well-known concept in (domain-free) information algebras. We recall the following elementary properties of atoms \cite{kohlas03}, immediately derivable from the definition. If $M,M_1$ and $M_2$ are atoms of $\Phi$ and $\mathcal{D} \in \Phi$, then
\begin{enumerate}
\item $M \cdot \mathcal{D} = M$ or $M \cdot \mathcal{D} = 0$,
\item either $\mathcal{D} \leq M$ or $M \cdot \mathcal{D} = 0$,
\item either $M_1 = M_2$ or $M_1 \cdot M_2 = 0$.
\end{enumerate}
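For instance, the second property follows from the first together with the definition of the information order recalled above: if $M \cdot \mathcal{D} \neq 0$, then $M \cdot \mathcal{D} = M$, which means precisely $\mathcal{D} \leq M$. The third property is then the special case $\mathcal{D} = M_2$, applied in both directions.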
We indicate with $At(\Phi)$ the set of all atoms of $\Phi$, and with $At(\mathcal{D})$ the set of all atoms $M$ which dominate $\mathcal{D} \in \Phi$, that is, $At(\mathcal{D}) \coloneqq \{M \in At(\Phi):\mathcal{D} \subseteq M\}$.
Furthermore, $\Phi$ is \textit{atomic} \cite{kohlas03}, i.e. for any $\mathcal{D} \neq 0$ the set $At(\mathcal{D})$ is not empty, and \emph{atomistic}, i.e. for any $\mathcal{D} \not= 0$, $\mathcal{D} = \bigcap At(\mathcal{D})$.
It is a general result of atomistic information algebras that the subalgebras $\epsilon_x(\Phi)$ are also atomistic \cite{kohlasschmid16}. Moreover, in \cite{kohlas21b}, we showed that $At(\epsilon_x(\Phi)) = \epsilon_x(At(\Phi)) = \{\epsilon_x(M):M \in At(\Phi)\}$ for any $x \in Q$ and, therefore, we call $\epsilon_x(M)$ for $M \in At(\Phi)$ and $x \in Q$ \emph{local atoms} for $x$.
Local atoms $M_x =\epsilon_x(M)$ for $x$ induce a partition $At_x$ of $At(\Phi)$ with blocks $At(M_x)$. If $M $ and $M'$ belong to the same block, we say that $M \equiv_x M'$.
Let us indicate with $Part(At(\Phi))$ the set of these partitions. As for $Part(\varOmega)$, we can introduce a partial order on $Part(At(\Phi))$ defined as: $At_x \le At_y$ if $M \equiv_y M'$ implies $M \equiv_x M'$, for every $M,M' \in At(\Phi)$. $Part(At(\Phi))$ forms a lattice under this order where, in particular, $At_x \vee At_y$ is the partition obtained as the non-empty intersections of blocks of $At_x$ with blocks of $At_y$ \cite{graetzer03}.
We claim moreover that these partitions of $At(\Phi)$ mirror the partitions $\mathcal{P}_x \in \mathcal{Q}$.
Before stating this main result, we need the following lemma.
\begin{lemma}
Let us consider $M,M' \in At(\Phi)$ and $x \in Q$. Then
\begin{equation*}
M \equiv_x M' \iff \epsilon_x(M) = \epsilon_x(M') \iff M,M' \in At(\epsilon_x(M)).
\end{equation*}
Hence, $At_x \le At_y$ if and only if $At(\epsilon_x(M)) \supseteq At(\epsilon_y(M))$ for every $M \in At(\Phi)$.
\end{lemma}
\begin{proof}
If $M \equiv_x M'$, there exists a local atom $M_x$ such that $M,M' \in At(M_x)$. Therefore, $M, M' \ge M_x$ and $\epsilon_x(M), \epsilon_x(M') \ge M_x$ \cite[Lemma~15, item~3]{kohlas21b}. However, $\epsilon_x(M), \epsilon_x(M')$ and $M_x$ are all local atoms, hence $\epsilon_x(M)=\epsilon_x(M')= M_x$. The converse is obvious.
For the second part, let us suppose $At(\epsilon_y(M)) \subseteq At(\epsilon_x(M))$ for every $M \in At(\Phi)$, and consider $M',M'' \in At(\Phi)$ such that $M' \equiv_y M''$. Then $M',M'' \in At(\epsilon_y(M'))$ and hence $M',M'' \in At(\epsilon_x(M'))$, which implies $M' \equiv_x M''$. Vice versa, consider $At_x \le At_y$ and $M' \in At(\epsilon_y(M))$ for some $M ,M'\in At(\Phi)$. Then, $M \equiv_y M'$, hence $M \equiv_x M'$ and so $M' \in At(\epsilon_x(M))$.
\end{proof}
Now we can state the main result of this section.
\begin{theorem} \label{th:AtomSeparoid}
The map $\mathcal{P}_x \mapsto At_x$, from the lattice of partitions $(Part(\varOmega),\leq)$ of $\varOmega$ to the lattice of partitions $(Part(At(\Phi)),\leq)$ of $At(\Phi)$, preserves order and join. Furthermore, it also preserves conditional independence relations, that is, $\mathcal{P}_x \bot \mathcal{P}_y \vert \mathcal{P}_z$ implies $At_x \bot At_y \vert At_z$.
\end{theorem}
\begin{proof}
If $x \leq y$, then $\epsilon_x(M) \leq \epsilon_y(M)$ for any atom $M \in At(\Phi)$ \cite[Lemma~15, item~4]{kohlas21b}. Therefore, $At(\epsilon_x(M)) \supseteq At(\epsilon_y(M))$ for any $M \in At(\Phi)$, hence $At_x \leq At_y$.
The converse is also true: indeed, if $At_x \leq At_y$,
then $At(\epsilon_x(M)) \supseteq At(\epsilon_y(M))$ for any $M \in At(\Phi)$. This implies in particular that $\epsilon_x(M) = \cap At(\epsilon_x(M)) \subseteq \cap At(\epsilon_y(M)) = \epsilon_y(M)$ for any $M \in At(\Phi)$, thanks to the fact that $\Phi$ is atomistic. Now, for any coherent $\mathcal{D}$, consider the family $\{M_j\}_{j \in J} \coloneqq At(\mathcal{D})$. Then we have:
\begin{equation*}
\epsilon_x(\mathcal{D}) = \epsilon_x(\cap_{j \in J} M_j) = \cap_{j \in J} \epsilon_x(M_j) \subseteq \cap_{j \in J}\epsilon_y(M_j) = \epsilon_y( \cap_{j \in J} M_j) = \epsilon_y(\mathcal{D}),
\end{equation*}
thanks to \cite[Theorem~17]{kohlas21b}. Therefore, we have $\epsilon_x(\mathcal{D}) \subseteq \epsilon_y(\mathcal{D})$ also for any coherent $\mathcal{D}$. Applying this to $\mathcal{D} \coloneqq \mathcal{E}(\{f\})$ for every $f \in \mathcal{L}_x \setminus (\mathcal{L}^+_x \cup \{ f \in \mathcal{L}_x: f \le 0 \})$, we obtain that $\mathcal{L}_x \subseteq \mathcal{L}_y$, from which it follows that $x \leq y$ \cite[Section~3]{kohlas21b}. So the map $\mathcal{P}_x \mapsto At_x$ is an order isomorphism \cite[Def.~1.34]{daveypriestley97}, therefore it also preserves joins \cite[Prop.~2.19]{daveypriestley97}.
For the second part,
recall that $x \bot y \vert z$ if and only if $x \vee z \bot y \vee z \vert z$. Consider then local atoms $M_{x \vee z},M_{y \vee z}$ and $M_z$ so that
\begin{equation*}
At(M_{x \vee z}) \cap At(M_z) \not= \emptyset, \quad At(M_{y \vee z}) \cap At(M_z) \not= \emptyset.
\end{equation*}
Hence, there is an atom $M' \in At(M_{x \vee z}) \cap At(M_z)$ and an atom $M''\in At(M_{y \vee z}) \cap At(M_z)$. Therefore, $M_{x \vee z} = \epsilon_{x \vee z}(M')$, $M_{y \vee z} = \epsilon_{y \vee z}(M'')$ and $M_z= \epsilon_z(M')=\epsilon_z(M'')$.
Now, thanks to the Existential Quantifier axiom, we have:
\begin{equation*}
\epsilon_z(M_{x \vee z} \cdot M_{y \vee z} \cdot M_z) = \epsilon_z(M_{x \vee z} \cdot M_{y \vee z}) \cdot M_z.
\end{equation*}
Thanks to \cite[Theorem~16]{kohlas21b} and \cite[Lemma~15, item~6]{kohlas21b},\footnote{\cite[Theorem~16, item~2]{kohlas21b} can indeed be rewritten as follows: let $\mathcal{D}_1, \mathcal{D}_2 \in \Phi$ and $x,y,z \in Q$; if $\mathcal{D}_1$ has support $x \vee z$, $\mathcal{D}_2$ has support $y \vee z$ and $x \bot y \vert z$, then $\epsilon_z(\mathcal{D}_1 \cdot \mathcal{D}_2) = \epsilon_z(\mathcal{D}_1) \cdot \epsilon_z(\mathcal{D}_2)$.} we obtain
\begin{align*}
\epsilon_z(M_{x \vee z} \cdot M_{y \vee z}) \cdot M_z = \epsilon_z(\epsilon_{x \vee z}(M')) \cdot \epsilon_z(\epsilon_{y \vee z}(M'')) \cdot M_z = \epsilon_z(M') \cdot \epsilon_z(M'') \cdot M_z \neq 0.
\end{align*}
Therefore $M_{x \vee z} \cdot M_{y \vee z} \cdot M_z \neq 0$ \cite[Lemma~15, item~2]{kohlas21b} and hence, since the algebra is atomic, there is an atom $M''' \in At(M_{x \vee z} \cdot M_{y \vee z} \cdot M_z)$. Then $M_{x \vee z}, M_{y \vee z}, M_z \le M'''$, whence $M''' \in At(M_{x \vee z}) \cap At(M_{y \vee z}) \cap At(M_z)$ and so
$At_x \bot At_y \vert At_z$.
\end{proof}
\section{Information Algebras Homomorphisms}\label{sec:homomorphism}
We are interested in homomorphisms between algebras:
\begin{definition}[Domain-free information algebras homomorphism]\label{def:homomorphism}
Let $(\Psi,\mathcal{Q}, \le_{\Psi},\bot_{\Psi}, \cdot_\Psi, E)$ and $(\Psi',\mathcal{Q}', \le_{\Psi'}, \bot_{\Psi'}, \cdot_{\Psi'}, E')$ be two domain-free information algebras, where $E \coloneqq \{ \epsilon_{\mathcal{P}}, \; \mathcal{P} \in \mathcal{Q}\}$ and $E' \coloneqq \{ \epsilon'_{\mathcal{P}'}, \; \mathcal{P}' \in \mathcal{Q}'\}$ are respectively the families of the extraction operators of the two algebras.
A tuple $(f,h,g)$ of maps $f: \Psi \rightarrow \Psi'$, $h: \mathcal{Q} \rightarrow \mathcal{Q}'$ and $g: E \rightarrow E'$ defined by $g: \epsilon_{\mathcal{P}} \mapsto \epsilon'_{h(\mathcal{P})}$, is a homomorphism between $(\Psi,\mathcal{Q}, \le_{\Psi},\bot_{\Psi}, \cdot_\Psi, E)$ and $(\Psi',\mathcal{Q}', \le_{\Psi'}, \bot_{\Psi'}, \cdot_{\Psi'}, E')$ if and only if:
\begin{enumerate}
\item $f(\psi \cdot_{\Psi} \phi) = f(\psi) \cdot_{\Psi'} f(\phi)$, for every $\phi, \psi \in \Psi$;
\item $f(0_{\Psi}) = 0_{\Psi'}$ and $f(1_{\Psi})=1_{\Psi'}$, if we indicate with $0_{\Psi},1_{\Psi} $ and $0_{\Psi'},1_{\Psi'} $ respectively, the $0$ and the $1$ elements of $\Psi$ and $\Psi'$;
\item if $\mathcal{P}_1 \le_{\Psi} \mathcal{P}_2$ then $h(\mathcal{P}_1) \le_{\Psi'} h(\mathcal{P}_2)$, for every $\mathcal{P}_1,\mathcal{P}_2 \in \mathcal{Q}$;
\item $h(\mathcal{P}_1 \vee_{\Psi} \mathcal{P}_2) = h(\mathcal{P}_1) \vee_{\Psi'} h(\mathcal{P}_2) $ for every $\mathcal{P}_1,\mathcal{P}_2 \in \mathcal{Q}$, if we indicate with $\mathcal{P}_1 \vee_{\Psi} \mathcal{P}_2$, the join of $\mathcal{P}_1,\mathcal{P}_2$ with respect to $\le_{\Psi}$ and with $ h(\mathcal{P}_1) \vee_{\Psi'} h(\mathcal{P}_2)$, the join of $h(\mathcal{P}_1),h(\mathcal{P}_2)$ with respect to $\le_{\Psi'}$;
\item $\mathcal{P}_1 \bot_{\Psi} \mathcal{P}_2 \vert \mathcal{P}$ implies $h(\mathcal{P}_1) \bot_{\Psi'} h(\mathcal{P}_2) \vert h(\mathcal{P})$ for every $\mathcal{P}_1,\mathcal{P}_2,\mathcal{P} \in \mathcal{Q}$;
\item $f(\epsilon_{\mathcal{P}}(\psi))= g(\epsilon_\mathcal{P}) (f(\psi))$, for all $\psi \in \Psi$ and $\epsilon_\mathcal{P}\in E$ with $\mathcal{P} \in \mathcal{Q}$.
\end{enumerate}
\end{definition}
If the maps are one-to-one, then $(\Psi,\mathcal{Q}, \le_{\Psi},\bot_{\Psi}, \cdot_\Psi, E)$ is said to be \emph{embedded} into $(\Psi',\mathcal{Q}', \le_{\Psi'}, \bot_{\Psi'}, \cdot_{\Psi'}, E')$.
If the maps are moreover onto, and hence bijective, the homomorphism is said to be an \emph{isomorphism} between the two algebras.
This definition extends the information algebra homomorphism given in \cite{kohlas17}\footnote{In \cite{kohlas17} the set of questions $Q$ is used in place of the set of partitions $\mathcal{Q}$. Here we need to be more explicit about partitions.} to domain-free information algebras for which $\mathcal{Q}$ is potentially different from $\mathcal{Q}'$. If $\mathcal{Q}=\mathcal{Q}'$, or equivalently $Q=Q'$, it collapses to the simpler definition in \cite{kohlas17}.
\begin{comment}
In case of commutative partitions, the above definition simplifies.
\begin{definition}[Information algebras homomorphism - commutative partitions]
Let $(\Psi,Q, \le_{\Psi},\bot_{\Psi}, \cdot_\Psi, E)$ and $(\Psi',Q', \le_{\Psi'}, \bot_{\Psi'}, \cdot_{\Psi'}, E')$ be two domain-free information algebras where $Q$ and $Q'$ are constituted only by commuting partitions and where $E \coloneqq \{ \epsilon_x, \; x \in Q\}$ and $E' \coloneqq \{ \epsilon'_{x'}, \; x' \in Q'\}$ are respectively the families of the extraction operators of the two algebras, defined starting respectively from the two sets of questions $Q$ and $Q'$.
A tuple $(f,g)$ of maps $f: \Psi \rightarrow \Psi'$, $g: E \rightarrow E'$, is a homomorphism between $(\Psi,Q, \le_{\Psi},\bot_{\Psi}, \cdot_\Psi, E)$ and $(\Psi',Q', \le_{\Psi'}, \bot_{\Psi'}, \cdot_{\Psi'}, E')$ if and only if:
\begin{enumerate}
\item $f(\psi \cdot_{\Psi} \phi) = f(\psi) \cdot_{\Psi'} f(\phi)$, for every $\phi, \psi \in \Psi$;
\item $f(0_{\Psi}) = 0_{\Psi'}$ and $f(1_{\Psi})=1_{\Psi'}$, if we indicate with $0_{\Psi},1_{\Psi} $ and $0_{\Psi'},1_{\Psi'} $ respectively, the $0$ and the $1$ elements of $\Psi$ and $\Psi'$;
\item $g(\epsilon_x \circ \epsilon_y) = g(\epsilon_x) \vee_{\Psi'} g(\epsilon_y)$ for every $\epsilon_x, \epsilon_y \in E$;
\item $f(\epsilon_x(\psi))= g(\epsilon_x) (f(\psi))$, for all $\psi \in \Psi$ and $\epsilon_x \in E$.
\end{enumerate}
\end{definition}
\end{comment}
\section{Set Algebras} \label{sec:SetAlg}
Archetypes of information algebras are so-called set algebras, where the elements are subsets of some universe, combination is intersection, and extraction is related to so-called saturation operators. Starting with the set $\varOmega$ of possibilities, representing possible worlds, pieces of information may be given by subsets $S$ of $\varOmega$, meaning that the unknown world must be an element of $S$. As before, questions $x \in Q$ are modeled by partitions $\mathcal{P}_x$ or, equivalently, by equivalence relations $\equiv_x$, where $\omega \equiv_x \omega'$ means that question $x$ has the same answer in possible worlds $\omega$ and $\omega'$. We first specify the set algebra of subsets of $\varOmega$ and then show that this algebra may be embedded into the information algebra of coherent sets of gambles. Conversely, we show that the algebra $\Phi$ of coherent sets of gambles may itself be embedded into a set algebra of its atoms, so it is, in some precise sense, itself a set algebra. This is a general result for atomistic information algebras \cite{kohlas03,kohlasschmid16}.
To any partition $\mathcal{P}_x$ of $\varOmega$ there corresponds a saturation operator defined for any subset $S \subseteq \varOmega$ by
\begin{equation}\label{eq:saturOp}
\sigma_x(S) \coloneqq \{\omega \in \varOmega: (\exists \omega' \in S)\; \omega \equiv_x \omega'\}.
\end{equation}
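As a simple illustration, consider the toy possibility set $\varOmega = \{\omega_1,\omega_2,\omega_3,\omega_4\}$ and the partition $\mathcal{P}_x = \{\{\omega_1,\omega_2\},\{\omega_3,\omega_4\}\}$, so that $\omega \equiv_x \omega'$ exactly when $\omega$ and $\omega'$ lie in the same block. Then, for instance,
\begin{equation*}
\sigma_x(\{\omega_2\}) = \{\omega_1,\omega_2\}, \qquad \sigma_x(\{\omega_2,\omega_3\}) = \varOmega.
\end{equation*}
In general, $\sigma_x(S)$ is the union of all blocks of $\mathcal{P}_x$ intersecting $S$, i.e. the coarsest description of $S$ available at the granularity of question $x$.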
The following are well-known properties of saturation operators.
\begin{lemma} \label{le:SatOps}
For all $S,T \subseteq \varOmega$ and any partition $\mathcal{P}_x$ of $\varOmega$:
\begin{enumerate}
\item $\sigma_x(\emptyset) = \emptyset$,
\item $S \subseteq \sigma_x(S)$,
\item $\sigma_x(\sigma_x(S) \cap T) = \sigma_x(S) \cap \sigma_x(T)$,
\item $\sigma_x(\sigma_x(S)) = \sigma_x(S)$,
\item $S \subseteq T \Rightarrow \sigma_x(S) \subseteq \sigma_x(T)$,
\item $\sigma_x(\sigma_x(S) \cap \sigma_x(T)) = \sigma_x(S) \cap \sigma_x(T)$.
\end{enumerate}
\end{lemma}
\begin{comment}
\begin{proof}
Items 1, 2 and 4 are obvious. For 3 note that $\sigma_x(T) \supseteq T$, hence $\sigma_x(\sigma_x(S) \cap T) \subseteq \sigma_x(S) \cap \sigma_x(T)$. So consider an element $\omega \in \sigma_x(S) \cap \sigma_x(T)$, such that for some elements $\omega' \in S$ and $\omega'' \in T$ we have $\omega \equiv_x \omega'$ and $\omega \equiv_x \omega'' $. By transitivity it follows that $\omega'' \equiv_x \omega'$ so that $\omega'' \in \sigma_x(S)$. But then $\omega \equiv_x \omega'' \in \sigma_x(S) \cap T$ implies $\omega \in \sigma_x(\sigma_x(S) \cap T)$ and this proves item 3. Item 4 follows from 3: $\sigma_x(\sigma_x(S)) = \sigma_x(\sigma_x(S) \cap \varOmega) = \sigma_x(S) \cap \sigma_x(\varOmega) = \sigma_x(S)$. Then item 6 follows $\sigma_x(\sigma_x(S) \cap \sigma_x(T)) = \sigma_x(S) \cap \sigma_x(\sigma_x(T)) = \sigma_x(S) \cap \sigma_x(T)$. Finally, 7. is immediate.
\end{proof}
\end{comment}
\begin{proof}
Items 1, 2, 4, 5 are obvious.
For item 6, consider $\omega \in \sigma_x(\sigma_x(S) \cap \sigma_x(T))$. Then there is a $\omega' \in \sigma_x(S) \cap \sigma_x(T)$ so that $\omega \equiv_x \omega'$. In particular, $\omega' \in \sigma_x(S)$, hence $\omega \in \sigma_x(\sigma_x(S))=\sigma_x(S)$ by item 4. At the same time, $\omega' \in \sigma_x(T)$, hence $\omega \in \sigma_x(\sigma_x(T))=\sigma_x(T)$. Then $\omega \in \sigma_x(S) \cap \sigma_x(T)$. By item 2 we must then have equality.
Regarding item 3, $\sigma_x(\sigma_x(S) \cap T) \subseteq \sigma_x(S) \cap \sigma_x(T)$ by item 2, 5 and 6. So consider an element $\omega \in \sigma_x(S) \cap \sigma_x(T)$. Then, there exist $\omega' \in S$ and $\omega'' \in T$ such that $\omega \equiv_x \omega'$ and $\omega \equiv_x \omega'' $. By transitivity it follows that $\omega'' \equiv_x \omega'$ so that $\omega'' \in \sigma_x(S)$. But then $\omega \equiv_x \omega'' \in \sigma_x(S) \cap T$ implies $\omega \in \sigma_x(\sigma_x(S) \cap T)$ and this proves item 3.
\end{proof}
Note that items 1, 2 and 3 of this lemma imply that $\sigma_x$ is an existential quantifier relative to intersection as combination.
This is a first step to construct a domain-free information algebra of subsets of $\varOmega$.
We then restrict the possible questions to the same join semilattice $(Q, \le)$ considered in the previous sections. Moreover, we consider on it the quasi-separoid three-place relation $x \perp y \vert z$, with $x,y,z \in Q$, defined before.
Now, we want the support axiom to be satisfied. Hence, if $\partit_\top$ belongs to $Q$, then we have $\sigma_\top(S) = S$ for all $S \subseteq \varOmega$. Otherwise, we must limit ourselves to the subsets of $\varOmega$ for which there is a support $x \in Q$. We call these sets \emph{saturated} with respect to some $x \in Q$, and we indicate them with $P_Q(\varOmega)$ or more simply with $P_Q$ when no ambiguity is possible. Clearly, if the top partition belongs to $Q$, $P_Q(\varOmega) = P(\varOmega)$, the power set of $\varOmega$. So in what follows we can refer more generally to sets in $P_Q(\varOmega)$. Note that in particular $\varOmega, \emptyset \in P_Q(\varOmega)$ for every join semilattice $(Q, \le)$.
At this point the support axiom is satisfied. Indeed, if $x \leq y$ with $x,y \in Q$, then $\omega \equiv_y \omega'$ implies $\omega \equiv_x \omega'$, so that $\sigma_y(S) \subseteq \sigma_x(S)$. Then, if $x$ is a support of $S$, we have $S \subseteq \sigma_y(S) \subseteq \sigma_x(S) = S$, hence $\sigma_y(S) = S$.
Moreover $(P_Q(\varOmega),\cap)$ is a commutative semigroup with the empty set as the null element and $\varOmega$ as the unit.
Indeed, the only property we need to prove is that $P_Q(\varOmega)$ is closed under intersection. Let us then consider two subsets $S$ and $T$ of $\varOmega$ with supports $x \in Q$ and $y \in Q$ respectively. Then both also have support $x \vee y$, which belongs to $Q$ because $(Q,\le)$ is a join semilattice. Therefore, thanks to Lemma \ref{le:SatOps}, we have
\begin{equation*}
\sigma_{x \vee y}(S \cap T) = \sigma_{x \vee y}(\sigma_{x \vee y}(S) \cap \sigma_{x \vee y}(T)) = \sigma_{x \vee y}(S) \cap \sigma_{x \vee y}(T)= S \cap T.
\end{equation*}
So, $P_Q(\varOmega)$ is closed under intersection.
It remains only to verify the extraction property to conclude that $P_Q(\varOmega)$
forms a domain-free information algebra.
\begin{theorem} \label{th:ExtrPropSets}
Given $x,y,z \in Q$, suppose $x \vee z \bot y \vee z\vert z$. Then, for any $S \in P_Q(\varOmega)$,
\begin{equation*}
\sigma_{y \vee z}(\sigma_x(S)) = \sigma_{y \vee z}(\sigma_z(\sigma_x(S))).
\end{equation*}
\end{theorem}
\begin{proof}
From $\sigma_z(\sigma_x(S)) \supseteq \sigma_x(S)$ we obtain $\sigma_{y \vee z}(\sigma_z(\sigma_x(S))) \supseteq \sigma_{y \vee z}(\sigma_x(S))$. Consider therefore an element $\omega \in \sigma_{y \vee z}(\sigma_z(\sigma_x(S)))$. Then there are elements $\mu,\mu'$ and $\omega'$ so that $\omega \equiv_{y \vee z} \mu \equiv_z \mu' \equiv_x \omega'$ and $\omega' \in S$. This means that $\omega,\mu$ belong to some block $B_{y \vee z}$ of partition $\mathcal{P}_{y \vee z}$, $\mu,\mu'$ to some block $B_z$ of partition $\mathcal{P}_z$ and $\mu',\omega'$ to some block $B_x$ of partition $\mathcal{P}_x$. It follows that $B_x \cap B_z \not= \emptyset$ and $B_{y \vee z} \cap B_z \not= \emptyset$. Then $x \vee z \bot y \vee z \vert z$ implies, thanks to the properties of a quasi-separoid, that $x \bot y \vee z \vert z$. Therefore, we have $B_x \cap B_{y \vee z} \cap B_z \not= \emptyset$, and in particular, $B_x \cap B_{y \vee z} \not= \emptyset$. So there is a $\lambda \in B_x \cap B_{y \vee z}$ such that
$\omega \equiv_{y \vee z} \lambda \equiv_x \omega' \in S$, hence $\omega \in \sigma_{y \vee z}(\sigma_x(S))$. So we have $\sigma_{y \vee z}(\sigma_x(S)) = \sigma_{y \vee z}(\sigma_z(\sigma_x(S)))$.
\end{proof}
Hence, these algebras of sets, with intersection as combination and saturation as extraction, form domain-free information algebras. Such algebras will be called \textit{set algebras}.
A set algebra of subsets of $\varOmega$ can be embedded in the information algebra of coherent sets of gambles defined on $\varOmega$. For any set $S \in P_Q(\varOmega)$, define
\begin{equation*}
\mathcal{D}_S \coloneqq \{f \in \mathcal{L}(\varOmega): \inf_{\omega \in S} f(\omega) >0 \} \cup \mathcal{L}^+(\varOmega).
\end{equation*}
If $S \neq \emptyset$, this is clearly a coherent set of gambles.
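For example, for a nonempty proper subset $S \subsetneq \varOmega$, the gamble $f$ defined by $f(\omega) = 1$ for $\omega \in S$ and $f(\omega) = -1$ otherwise satisfies $\inf_{\omega \in S} f(\omega) = 1 > 0$, and hence $f \in \mathcal{D}_S$. This $f$ is a bet that pays off exactly when the unknown world lies in $S$; accepting it reflects precisely the information that $\omega \in S$, which is what $\mathcal{D}_S$ is meant to encode.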
The next theorem shows that the map $S \mapsto \mathcal{D}_S$ together with the map $\sigma_x \mapsto \epsilon_x$ is an information algebra homomorphism, according to the simpler definition given in \cite{kohlas17}. The simpler definition can in fact be applied because, in this case, the set of partitions/questions considered by the two information algebras is the same.
\begin{theorem} \label{th:InfAlgHom}
Let $S,T \in P_Q(\varOmega)$ and $x \in Q$. Then
\begin{enumerate}
\item $\mathcal{D}_S \cdot \mathcal{D}_T = \mathcal{D}_{S \cap T}$,
\item $\mathcal{D}_\emptyset = \mathcal{L}(\varOmega)$, $\mathcal{D}_\varOmega = \mathcal{L}^+(\varOmega)$,
\item $\epsilon_x(\mathcal{D}_S) = \mathcal{D}_{\sigma_x(S)}$.
\end{enumerate}
\end{theorem}
\begin{proof}
1. Note that $\mathcal{D}_S= \mathcal{L}^+$ or $\mathcal{D}_T= \mathcal{L}^+$ if and only if $S=\varOmega$ or $T= \varOmega$. In this case the result follows immediately.
The same is true if $\mathcal{D}_S= \mathcal{L}$ or $\mathcal{D}_T= \mathcal{L}$, which is equivalent to $S= \emptyset$ or $T= \emptyset$. Now suppose $\mathcal{D}_S,\mathcal{D}_T \neq \mathcal{L}^+$ and
$\mathcal{D}_S,\mathcal{D}_T \neq \mathcal{L}$.
If $S \cap T = \emptyset$, then $\mathcal{D}_{S \cap T} = \mathcal{L}(\varOmega)$.
Consider $f \in \mathcal{D}_S$ and $g \in \mathcal{D}_T$. Since $S$ and $T$ are disjoint, we have $\tilde{f} \in \mathcal{D}_S$ and $\tilde{g} \in \mathcal{D}_T$, where $\tilde{f}, \; \tilde{g}$ are defined in the following way:
\begin{align*}
\tilde{f}(\omega) \coloneqq \left\{ \begin{array}{ll} f(\omega) & \textrm{for}\ \omega \in S, \\ -g(\omega) & \textrm{for}\ \omega \in T, \\ 0 & \textrm{for}\ \omega \in (S\cup T)^c,\end{array} \right. &&
\tilde{g}(\omega) \coloneqq \left\{ \begin{array}{ll} -f(\omega) & \textrm{for}\ \omega \in S , \\ g(\omega) & \textrm{for}\ \omega \in T, \\ 0 & \textrm{for}\ \omega \in (S\cup T)^c. \end{array} \right.
\end{align*}
Then $\tilde{f} + \tilde{g} = 0 \in \mathcal{E}(\mathcal{D}_S \cup \mathcal{D}_T)$, hence $\mathcal{D}_S \cdot \mathcal{D}_T = \mathcal{L}(\varOmega) = \mathcal{D}_{S \cap T}$.
Assume then that $S \cap T \not= \emptyset$. Note that $\mathcal{D}_S \cup \mathcal{D}_T \subseteq \mathcal{D}_{S \cap T}$ so that $\mathcal{D}_S \cdot \mathcal{D}_T$ is coherent and $\mathcal{D}_S \cdot \mathcal{D}_T \subseteq \mathcal{D}_{S \cap T}$. Consider then a gamble $f \in \mathcal{D}_{S \cap T}$. Select a $\delta > 0$ and define two functions
\begin{align*}
f_1(\omega) \coloneqq \left\{ \begin{array}{ll} 1/2f(\omega) & \textrm{for}\ \omega \in (S \cap T), \\ \delta & \textrm{for}\ \omega \in S \setminus T, \\ f(\omega) - \delta & \textrm{for}\ \omega \in T \setminus S,
\\ 1/2f(\omega) & \textrm{for}\ \omega \in (S \cup T)^c, \end{array} \right.
&&
f_2(\omega) \coloneqq \left\{ \begin{array}{ll} 1/2f(\omega) & \textrm{for}\ \omega \in (S \cap T), \\ f(\omega) - \delta & \textrm{for}\ \omega \in S \setminus T,
\\ \delta & \textrm{for}\ \omega \in T \setminus S, \\
1/2f(\omega) & \textrm{for}\ \omega \in (S \cup T)^c . \end{array} \right.
\end{align*}
Then $f = f_1 + f_2$ and $f_1 \in \mathcal{D}_S$, $f_2 \in \mathcal{D}_T$. Therefore $f \in \mathcal{E}(\mathcal{D}_S \cup \mathcal{D}_T)= \mathcal{C}(\mathcal{D}_S \cup \mathcal{D}_T) \eqqcolon \mathcal{D}_S \cdot \mathcal{D}_T$, hence $\mathcal{D}_S \cdot \mathcal{D}_T = \mathcal{D}_{S \cap T}$.
2. Both have been noted above.
3. First of all, notice that if $S \in P_Q(\varOmega)$, then $\sigma_x(S) \in P_Q(\varOmega)$, so $\mathcal{D}_{\sigma_x(S)}$ is well defined. Furthermore, if $S$ is empty, then $\epsilon_x(\mathcal{D}_\emptyset) = \mathcal{L}(\varOmega)$, so that item 3 holds in this case. Hence, assume $S \not= \emptyset$. Then we have
\begin{equation*}
\epsilon_x(\mathcal{D}_S) \coloneqq \mathcal{C}(\mathcal{D}_S \cap \mathcal{L}_x) = \posi(\mathcal{L}^+(\varOmega) \cup (\mathcal{D}_S \cap \mathcal{L}_x)).
\end{equation*}
Consider a gamble $f \in \mathcal{D}_S \cap \mathcal{L}_x$. We have $\inf_S f > 0$ and $f$ is $x$-measurable. If $\omega \equiv_x \omega'$ for some $\omega' \in S$ and $\omega \in \varOmega$, then $f(\omega) = f(\omega')$. Therefore $\inf_{\sigma_x(S)} f = \inf_S f > 0$, hence $f \in \mathcal{D}_{\sigma_x(S)}$. Then $\mathcal{C}(\mathcal{D}_S \cap \mathcal{L}_x) \subseteq \mathcal{C}(\mathcal{D}_{\sigma_x(S)})=\mathcal{D}_{\sigma_x(S)}$.
Conversely, consider a gamble $f \in \mathcal{D}_{\sigma_x(S)}$. $\mathcal{D}_{\sigma_x(S)}$ is a strictly desirable set of gambles.\footnote{${\underline{\pr}}(f) \coloneqq \inf_S(f)$ for every $f \in \mathcal{L}$ with $S \neq \emptyset$, is a coherent lower prevision \cite{walley91}.} Hence, if $f \in \mathcal{D}_{\sigma_x(S)}$, either $f \in \mathcal{L}^+(\varOmega)$ or there is a $\delta >0$ such that $f-\delta \in \mathcal{D}_{\sigma_x(S)}$. If $f \in \mathcal{L}^+(\varOmega)$, then $f \in \epsilon_x(\mathcal{D}_S)$. Otherwise, let us define, for every $\omega \in \varOmega$, $g(\omega) \coloneqq \inf_{\omega' \equiv_x \omega} f(\omega') - \delta$.
If $\omega \in S$, then $g(\omega) > 0$ since $\inf_{\sigma_x(S)} (f- \delta) > 0$. So we have $\inf_S g \ge 0$ and $g$ is $x$-measurable. Moreover, $\inf_S ( g + \delta) = \inf_S g + \delta >0$, hence $(g + \delta) \in \mathcal{D}_S \cap \mathcal{L}_x$ and $f \geq g+ \delta$. Therefore
$f \in \mathcal{C}(\mathcal{D}_S \cap \mathcal{L}_x)$.
\end{proof}
Item 3 guarantees that, if $S \in P_Q(\varOmega)$, then there exists an $x \in Q$ such that $x$ is a support of $\mathcal{D}_S$. Notice moreover that the two maps are one-to-one; therefore the homomorphism is in particular an embedding of the set algebra $P_Q(\varOmega)$ into $\Phi(\varOmega)$.
\begin{comment}
\begin{theorem} \label{th:LattHomom}
The map $S \mapsto \mathcal{D}_S$ is a lattice homomorphism.
\end{theorem}
\begin{proof}
The map preserves (finite) joins by Theorem \ref{th:InfAlgHom}. Consider two subsets $S$ and $T$ of $\varOmega$ and a gamble $f \in \mathcal{D}_{S \cup T}$. Then either $f \in \mathcal{L}(\varOmega)^+$ or $\inf_{s \cup T} f > 0$. In the first case $f \in \mathcal{D}_S \cap \mathcal{D}_T$. in the second case it follows from $\inf_{S \cup T}f = \min\{\inf_S f,\inf_T f\}$ that $\inf_S f > 0$ and $\inf_T f > 0$, hence $f \in \mathcal{D}_S \cap \mathcal{D}_T$. Conversely, from $f \in \mathcal{D}_S \cap \mathcal{D}_T$ we have either $f \in \mathcal{L}(\varOmega)^+$ or $\inf_{S \cup T}f = \min\{\inf_S f,\inf_T f\} > 0$, hence in both cases $f \in \mathcal{D}_{S \cup T}$.
\end{proof}
It could be conjectured that the map $S \mapsto \mathcal{D}_S$ preserves also arbitrary joins and meets. But this is at this point still an open question. Anyway, the image of the map, the set $\Phi_S = \{\mathcal{D}_S:S \subseteq \varOmega\}$ is a subalgebra of the information algebra $\Phi$ of coherent sets of gambles.
Since the subsets of $\varOmega$ form a Boolean lattice, the same holds for the $\mathcal{D}_S$
\end{comment}
Next we construct a set algebra of subsets of $At(\Phi)$. For this purpose we consider
the set of partitions $At_x$ with $x \in Q$. We denote them as $Part_Q(At(\Phi))$. Moreover, we indicate with $\sigma_x$ the related saturation operators defined similarly to \eqref{eq:saturOp}, and with $At_Q(\Phi)$ the subsets of $At(\Phi)$ saturated with respect to some $At_x \in Part_Q(At(\Phi))$.
By
Theorem \ref{th:AtomSeparoid} restricted to $\mathcal{P}_x$ with $x \in Q$, it is possible to derive that, if $(Q, \le) $ is a join semilattice, then $(Part_Q(At(\Phi)), \le)$ is also a join semilattice with $At_x \bot At_y \vert At_z$ a quasi-separoid \cite[Theorem 2.6]{kohlas17}.
So, thanks to Lemma \ref{le:SatOps} and the reasoning above, $At_Q(\Phi)$ is also a set algebra with intersection as combination and saturation relative to partitions $At_x$ as extraction. Moreover, thanks again to Theorem \ref{th:AtomSeparoid}, we know that $h: \mathcal{P}_x \mapsto At_x$ satisfies items 3,~4 and~5 of Definition~\ref{def:homomorphism}. Therefore, we only need an analog of Theorem~\ref{th:InfAlgHom} for $f: \mathcal{D} \mapsto At(\mathcal{D})$ and $g: \epsilon_x \mapsto \sigma_x$ to conclude that $(f,h,g)$ is an information algebra homomorphism between $\Phi$ and $At_Q(\Phi)$.
\begin{theorem} \label{th:EmbedInSetAlg}
For any element $\mathcal{D}_1,\mathcal{D}_2$ and $\mathcal{D}$ of $\Phi$ and all $x \in Q$,
\begin{enumerate}
\item $At(\mathcal{D}_1 \cdot \mathcal{D}_2) = At(\mathcal{D}_1) \cap At(\mathcal{D}_2)$,
\item $At(\mathcal{L}(\varOmega)) = \emptyset$, $At(\mathcal{L}^+(\varOmega)) = At(\Phi)$,
\item $At(\epsilon_x(\mathcal{D})) = \sigma_x(At(\mathcal{D}))$.
\end{enumerate}
\end{theorem}
\begin{proof}
Item 2 is obvious.
If there is an atom $M \in At(\mathcal{D}_1 \cdot \mathcal{D}_2)$, then $M \geq \mathcal{D}_1 \cdot \mathcal{D}_2 \geq \mathcal{D}_1,\mathcal{D}_2$ and thus $M \in At(\mathcal{D}_1)$ and $M \in At(\mathcal{D}_2)$, hence $M \in At(\mathcal{D}_1) \cap At(\mathcal{D}_2)$. Conversely, if $M \in At(\mathcal{D}_1) \cap At(\mathcal{D}_2)$, then $\mathcal{D}_1,\mathcal{D}_2 \leq M$, hence $\mathcal{D}_1 \cdot \mathcal{D}_2 \leq M$ and $M \in At(\mathcal{D}_1 \cdot \mathcal{D}_2)$. This shows that $At(\mathcal{D}_1 \cdot \mathcal{D}_2) = At(\mathcal{D}_1) \cap At(\mathcal{D}_2)$.
Furthermore, if $\epsilon_x(\mathcal{D}) = 0$, then $\mathcal{D} = 0$ and $At(\mathcal{D}) = \emptyset$, hence $\sigma_x(At(\mathcal{D})) = \sigma_x(\emptyset) = \emptyset$, and vice versa \cite[Lemma~15, item~2]{kohlas21b}. Assume therefore $At(\mathcal{D}) \not= \emptyset$ and consider $M \in \sigma_x(At(\mathcal{D}))$. There is then an $M' \in At(\mathcal{D})$ so that $\epsilon_x(M) = \epsilon_x(M')$. But $\mathcal{D} \leq M'$, hence $\epsilon_x(\mathcal{D}) \leq \epsilon_x(M') = \epsilon_x(M) \leq M$. Thus $M \in At(\epsilon_x(\mathcal{D}))$.
Conversely consider $M \in At(\epsilon_x(\mathcal{D}))$. We claim that $\epsilon_x(M) \cdot \mathcal{D} \not= 0$. Because otherwise $0 = \epsilon_x(\epsilon_x(M) \cdot \mathcal{D}) = \epsilon_x(M) \cdot \epsilon_x(\mathcal{D}) = \epsilon_x(M \cdot \epsilon_x(\mathcal{D}))$, which is not possible since $\epsilon_x(\mathcal{D}) \leq M$. So there is an $M' \in At(\epsilon_x(M) \cdot \mathcal{D})$ so that $\mathcal{D} \leq \epsilon_x(M) \cdot \mathcal{D} \leq M'$. We conclude that $M' \in At(\mathcal{D})$. Furthermore, $\epsilon_x(\epsilon_x(M) \cdot \mathcal{D}) = \epsilon_x(M) \cdot \epsilon_x(\mathcal{D}) \leq \epsilon_x(M')$. It follows that $\epsilon_x(M') \cdot \epsilon_x(M) \cdot \epsilon_x(\mathcal{D}) \not= 0$ and therefore $\epsilon_x(M) \cdot \epsilon_x(M') \not= 0$. But $\epsilon_x(M) \cdot \epsilon_x(M') = \epsilon_x(M \cdot \epsilon_x(M'))$ so that $M \cdot \epsilon_x(M') \not= 0$ and therefore $\epsilon_x(M') \leq M$ since $M$ is an atom, and then $\epsilon_x(M') \leq \epsilon_x(M)$. Proceed in the same way from $\epsilon_x(M) \cdot \epsilon_x(M') = \epsilon_x(M' \cdot \epsilon_x(M))$ to obtain $\epsilon_x(M) \leq \epsilon_x(M')$. So finally $\epsilon_x(M) = \epsilon_x(M')$, which together with $M' \in At(\mathcal{D})$ tells us that $M \in \sigma_x(At(\mathcal{D}))$. This means that $At(\epsilon_x(\mathcal{D})) = \sigma_x(At(\mathcal{D}))$.
\end{proof}
Item 3 again guarantees that if $\mathcal{D} \in \Phi$, then $At(\mathcal{D}) \in At_Q(\Phi)$. Moreover, since $\Phi$ is atomistic, the maps $f,h,g$ are all one-to-one, hence the homomorphism is an embedding. We can therefore say that $\Phi$ is in fact a set algebra.
\section{Conclusions}
This paper presents an extension of our work on information algebras related to gambles on a possibility set that is not necessarily multivariate \cite{kohlas21b}. In particular, here we analyze the relation between the domain-free version of the information algebra of coherent sets of gambles and the archetypes of information algebras, i.e., set algebras. Specifically, we show that it is in fact a set algebra. These facts could also be expressed equivalently in the \emph{labeled view} of information algebras, better adapted to computational purposes \cite{kohlas03,kohlas21b}. This is left for future work, along with other aspects such as the question of conditioning.
\bibliographystyle{splncs04}
\section{Introduction}
Quantum sensing or quantum metrology \cite{Giovannetti_2011,Toth_2014,Pezze_2018} is one of the most promising applications of an upcoming quantum technology. Measuring quantities with ever higher precision lies at the heart of most natural sciences, and accordingly high-precision measurements are a tool of utmost importance. Quantum devices offer in principle a quadratic scaling advantage in the number of sensors, and have hence been studied in detail in recent years. Whenever some unknown signal or function is to be sensed using multiple sensors, one is typically faced with the situation that the sensors are at different positions. This is the case for trapped ions \cite{Keller2019,Schindler2013, Blatt2008} as well as for arrays of superconducting qubits, quantum dots or nitrogen-vacancy centers \cite{Childress2013, Kessler_2014}. Furthermore, with the rapid developments in quantum networks, even arrays of such quantum sensors distributed over large distances come within reach. Since a single quantum system or qubit is already a quantum sensor, any such arrangement of multiple qubits corresponds to a quantum sensor network \cite{Kimble2008, Blatt2008, Schindler2013, Childress2013,Kessler_2014, Wehner2018}. These networks can be used to measure non-local properties such as field gradients or spatial Fourier coefficients \cite{Urizar2013, Altenburg2017, Apellaniz2017,Sekatski2019}, or to increase the precision of atomic clocks, interferometers and telescope networks \cite{Komar2014, Komar2016, Landini2014, Ciampini2016, Khabiboulline2019, Khabiboulline2019b}. While the spatial distribution of sensors is irrelevant for signals without spatial dependence (as often considered in metrological scenarios), it is a crucial asset in the sensing of signals with certain spatial correlations.
In this paper we study the sensing of scalar, spatially dependent signals and show that one can indeed make use of such spatial correlations. By choosing appropriate quantum states of the sensors, one can make the sensor array sensitive only to a particular signal with a specific spatial dependence. This allows one to lock in to any signal of choice and measure only this signal.
In this way, one can construct decoherence-free subspaces for arbitrary given noise sources \cite{Sekatski2017PRX, Sekatski2017quantummetrology,Dur_2014,Arrad_2014,Kessler_2014,Sekatski_2016,Zhou2018,Altenburg2016,Landini2014} and overcome the known vulnerability of metrological schemes under noise \cite{Fujiwara2008,Escher2011,Escher2012,Kolodynski2012, Sekatski2017PRX,Sekatski2017quantummetrology}. Such a decoherence-free subspace (DFS) in quantum computation or standard quantum metrology is typically thought to be available only in very specific situations, mainly when the noise is correlated or otherwise restricted. For noise sources with a known spatial dependence, one can essentially always construct such a DFS. The only requirement is that the spatial dependence of the signal to be sensed differs from that of the noise sources. In any such case, one can find sensor states that are insensitive to a single or even multiple noise sources, while still being sensitive to the signal \cite{Sekatski2019}. In fact, a sensor array of $N+1$ sensors allows one to be insensitive to $N$ noise sources with different spatial dependences, and sense one specific signal. Notice that this insensitivity only refers to noise sources with different spatial dependence; clearly a fluctuating constant noise field still jeopardizes the sensing of a constant signal field. However, in situations with several sources whose field strengths decay with some distance dependence $r^{-\alpha}$ (a rather typical situation in many physical set-ups), these fields are linearly independent whenever the sources are located at different positions and the fields are sampled at fixed sensor positions using such a sensor network. Hence a DFS and lock-in to a specific signal can be constructed. This holds true under generic conditions, and the existence of a DFS is thus typical rather than exceptional.
In \cite{Sekatski2019} such a sensing scheme was introduced and analyzed in the so-called Fisher regime, where the parameter $\varphi$ to be sensed is already approximately known, and multiple repetitions of the same experiment are considered. In this case the optimal state for sensing is given by a GHZ-type state, which in the noiseless case is just a superposition of two eigenstates of the signal Hamiltonian or generator $\hat{G}$ with minimal and maximal eigenvalue, respectively. The achievable accuracy is given by the quantum Fisher information (QFI) \cite{Braunstein1994, Giovannetti2006, Toth2014}. In a scenario with multiple noise sources, a two-dimensional DFS that contains two states with different eigenvalues and a certain spectral distance $\Delta$ can be generically constructed, and hence the above mentioned features can be achieved. This ensures that Heisenberg scaling, i.e. a quadratic enhancement over the best classical protocol, can be obtained even in the presence of additional noise sources.
The situation is different in the so-called Bayesian regime, where the unknown parameter is specified by a (broad) probability distribution and only a single or few measurements can be performed. In this case, it is known that probe states supported on multiple eigenstates with different eigenvalues w.r.t. the signal Hamiltonian are required to achieve Heisenberg scaling \cite{Berry2000,Chiribella2004,Sekatski2017singlequbit}. Therefore, not only the spectral range $\Delta$ but also the number of different eigenvalues $L$ covered by a probe state is important for quantum metrology in the Bayesian regime, as we will further investigate in this paper.
In quantum sensor networks, it is often possible to maximize either the spectral range or the number of levels of a generator, by e.g. placing the sensors at appropriate positions, but not both simultaneously. Thus, it is important to know how $\Delta$ and $L$ influence the precision in different scenarios, as we discuss in this paper. Moreover, in noisy scenarios it is not the total number $L$ that matters, but the effective number of levels within the decoherence-free subspace. For a single-shot scenario, i.e. when considering only a single run of preparing a probe state, letting it evolve and then measuring the resulting state, the number of available levels $L$ is the crucial quantity as long as the measurement time can be freely chosen and is not considered to be a resource. The longer the evolution time, the larger the required number of levels. However, the spectral range $\Delta$ enters in the required evolution time, as the strength of the signal is proportional to $\Delta$. Hence, for a fixed time, both $\Delta$ and $L$ are important. In this paper, we introduce different methods to create a large number of linearly spaced levels within the decoherence-free subspace, with different trade-offs between increasing the effective number of levels $L$ and the maximal achievable spectral range $\Delta$. Depending on the exact situation, as discussed in the first part of this paper, we can then choose a corresponding method to maximally increase $L$ or $\Delta$, or to share the provided resources to increase both simultaneously.
The main results of this paper can be summarized as follows:
(i) We analyze the effect of number of levels $L$ and spectral range $\Delta$ for Bayesian metrology with flat prior.
(ii) We provide a general way to construct multi-dimensional decoherence free subspaces with quantum sensor networks for spatially correlated scalar signals.
(iii) We show how to measure specific signals with a particular spatial dependence and a given prior, being completely insensitive to noise sources with a different spatial dependence.
The paper is organized as follows: First, we introduce the setup and summarize our results from \cite{Sekatski2019} in \Sec{sec:background}. Then, we start our investigation by discussing different measurement scenarios in the Bayesian regime and the influence of $\Delta$ and $L$ on the precision in \Sec{sec:L_vs_D}. Subsequently, we describe methods to create effective linear spectra within the decoherence-free subspace by either increasing the internal degrees of freedom of the sensors (\Sec{sec:spectra}) or by changing the positions of the different sensors (\Sec{sec:position}). Finally, we summarize our results in \Sec{sec:conclusion} by comparing the different situations and methods.
\section{Setting and background\label{sec:background}}
In the following, we investigate methods to achieve maximal precision for estimating the unknown field strength $\omega$ of a global field $B_0(r)=\omega f_0(r)$ with given spatial distribution $f_0(r)$. For this purpose, we consider quantum sensor networks with $J$ sensors located at positions $r_j$. The time evolution of each local sensor is described by the local operator $\hat{Z}_j$, equal to the sum of the spin-$z$ operators (with eigenvalues $\pm 1/2$) of all qubits located at $r_j$. The unknown phase $\varphi_0=\omega t$ is generated by the global generator
\begin{equation}
\hat{G}_0 = \sum\limits_{j=1}^J f_0(r_j)\hat{Z}_j \label{eq:G}
\end{equation}
via the time evolution $U=\exp(-it \omega \hat{G}_0)$. Throughout this paper, we investigate situations where additional noise sources are present. These noise sources are described via similar generators $\hat{G}_k$ with $1\leq k\leq K$ but with different spatial distributions $f_k(r_j)$. Strictly speaking, we assume that the vectors $\mathbf{f}_k=(f_k(r_1),\cdots, f_k(r_J))$ are linearly independent. The state $\rho$ of the quantum sensor network after evolving for a time $t$ is given by
\begin{equation}
\int \exp\left(-\ensuremath{{\mkern1mu\mathrm{i}\mkern1mu}}\sum\limits_{k=0}^K \varphi_k\hat{G}_k\right)\rho \exp\left(\ensuremath{{\mkern1mu\mathrm{i}\mkern1mu}}\sum\limits_{k=0}^K \varphi_k\hat{G}_k\right) \text{d}\varphi_1 \cdots \text{d}\varphi_K.
\end{equation}
As a consequence, the coherence between two spin eigenstates $\mathbf{s}=(s_1,\cdots, s_J)$ with $\hat{Z}_j\ket{s_j}=s_j\ket{s_j}$ is destroyed whenever there exists at least one $k>0$ with
\begin{equation}
\mathbf{f}_k(\mathbf{s}-\mathbf{s}')\neq 0
\end{equation}
preventing us from obtaining information about the unknown phase $\varphi_0$.
Thus, the optimal probe state consists of a superposition of eigenstates $\mathbf{s}$ which are all orthogonal to $\lbrace \mathbf{f}_k \rbrace$ for $1\leq k\leq K$, as we have demonstrated in \cite{Sekatski2019}. A priori, the components $s_j$ can only take on integer multiples of $1/2$, which prevents us from creating spin vectors $\mathbf{s}$ orthogonal to $\lbrace \mathbf{f}_k\rbrace$ in certain cases. However, we can circumvent this restriction by adding dynamical control. Here, all spins at a corresponding site are switched at an intermediate time $t_j$, leading to effective spin components $s_j$ equal to non-integer multiples of $1/2$. In general, such orthogonal spin vectors can be created whenever there are more probes than noise sources, $J>K$.
Optimal probe states in the Fisher regime (narrow prior, many measurements) consist of the superposition of the two effective spin eigenstates $\ket{\pm \mathbf{s}}$ which maximize the absolute value of the scalar product $\mathbf{s}\mathbf{f}_\perp$. Here, $\mathbf{f}_\perp$ denotes the component of $\mathbf{f}_0$ which is orthogonal to $\lbrace \mathbf{f}_k\rbrace$ with $1\leq k\leq K$. However, the total number of distinct spin eigenstates on which the probe state is supported sets a limit on the amount of information on the field strength that can be gathered by the probe in a single run \cite{holevo1973}. Thus, in the Bayesian regime, effective intermediate levels with $|\mathbf{s}\mathbf{f}_\perp|<\max|\mathbf{s}\mathbf{f}_\perp|$ also play an important role, as we will discuss in the next sections.
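To make this concrete, consider a minimal example with an illustrative choice of fields: $J = 2$ sensors, a single constant noise source with $\mathbf{f}_1 = (1,1)$, and a gradient-like signal with $\mathbf{f}_0 = (1,-1)$. The effective spin vector $\mathbf{s} = (1/2,-1/2)$ satisfies $\mathbf{s}\mathbf{f}_1 = 0$, so coherences between $\ket{\mathbf{s}}$ and $\ket{-\mathbf{s}}$ are immune to the noise, while $\mathbf{s}\mathbf{f}_\perp = \mathbf{s}\mathbf{f}_0 = 1 \neq 0$, so the superposition $(\ket{\mathbf{s}} + \ket{-\mathbf{s}})/\sqrt{2}$ still picks up the signal phase $\varphi_0$.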
\section{Spectral range versus number of levels\label{sec:L_vs_D}}
Let us first concentrate on achievable precisions for parameter estimation in the Bayesian regime without noise. In general, the precision of the estimation of $\omega$ depends on the spectral range $\Delta=\Gamma_\text{max}-\Gamma_\text{min}$, given by the difference of the maximal and minimal eigenvalue $\Gamma$ of $\hat{G}_0$, and on the number of levels $L$ of the generator $\hat{G}_0$. Both of them depend on the spatial distribution of the sensors. Thus, generators with a linear spectrum but different spectral ranges and numbers of levels can be realized by rearranging the local sensors. Usually, $\Delta$ and $L$ cannot be maximized simultaneously. As a consequence, it is important to know how the precision scales with $\Delta$ and $L$. Therefore, we investigate this scaling for a couple of exemplary scenarios in this section before we turn to the achievable effective $\Delta$ and $L$ for noisy distributed sensing in the next section.
\subsection{Single-shot estimation }
We will start our investigations in the Bayesian regime, where we assume flat priors and single-shot estimation. The extremal case of a flat prior is a uniformly distributed unknown phase $\varphi=\omega t$ with prior $p(\varphi)=1/(2\pi)$ for $0\leq \varphi<2\pi$. For this situation, Berry and Wiseman \cite{Berry2000} determined the optimal probe state for a generator
\begin{equation}
\hat{G}_\text{BW}=\sum\limits_{j=1}^N \hat{z}_j.
\end{equation}
with linear spectrum. Here, $\hat{z}_j$ denotes the spin-$z$ operator of a single qubit, with eigenvalues $\pm 1/2$. Berry and Wiseman proved that the phase $\varphi$ can be determined with a precision of
\begin{equation}
\langle (\hat{\varphi}-\varphi)^2\rangle \approx \frac{\pi^2}{N^2} \label{eq:BW}
\end{equation}
with a single-shot measurement and an $N$-qubit state. The generator $\hat{G}_\text{BW}$ has a linear spectrum with $L=N+1$ different eigenvalues and a spectral range of $\Delta=N$, leading to the spectral decomposition
\begin{equation}
\hat{G}_\text{BW}=\sum\limits_{\mu=1}^{N+1}(\mu+c)\ket{\mu}\bra{\mu}
\end{equation}
with $c=-N/2-1$ being an irrelevant constant which we will neglect from now on.
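As a quick check of this counting, take $N = 2$: with $c = -2$, the eigenvalues $\mu + c$ for $\mu = 1,2,3$ are $-1, 0, 1$, i.e. $L = 3 = N+1$ equally spaced levels spanning a spectral range $\Delta = 2 = N$.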
Here, the number of levels $L$ and the spectral range $\Delta$ are proportional to the number of qubits $N$. However, this is not necessarily the case for global fields $B(r)$ with arbitrary spatial dependence $f(r)$ and different positionings of the local sensors. Therefore, we investigate in the following how $L$ and $\Delta$ influence the optimal precision separately. To this end, we generalize the results of Berry and Wiseman \cite{Berry2000} to frequency estimation and to generators with a rescaled linear spectrum
\begin{eqnarray}
\hat{G}_0&=&\frac{\Delta}{L-1}\sum\limits_{\mu=1}^{L}\mu\ket{\mu}\bra{\mu}\\
&=&\frac{\Delta}{L-1}\hat{G}_\text{BW}
\end{eqnarray}
where $L$ and $\Delta$ can be varied independently.
The time evolution determined by $\exp[-i\omega t \hat{G}_0]$ is equivalent to $\exp[-i\varphi\hat{G}_\text{BW}]$ with $\varphi=(\omega t \Delta)/(L-1)$.
We assume that the frequency $\omega$ is uniformly distributed in $0\leq \omega < W_0$. Letting the system evolve for
\begin{equation}
t_1=\frac{2\pi}{W_0}\frac{L-1}{\Delta} \label{eq:time}
\end{equation}
leads to a uniform distribution of $\varphi$ with $0\leq \varphi < 2\pi$.
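Indeed, with this choice, $\varphi = \omega t_1 \frac{\Delta}{L-1} = \frac{2\pi \omega}{W_0}$, so the interval $0 \leq \omega < W_0$ is mapped one-to-one onto $0 \leq \varphi < 2\pi$.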
As a consequence, we can achieve a precision of
\begin{equation}
\langle(\hat{\omega}-\omega)^2\rangle =\frac{\langle(\hat{\varphi}-\varphi)^2\rangle}{|\partial_\omega \varphi|^2}\approx \frac{ W_0^2}{4L^2}
\end{equation}
determined by \eq{eq:BW}. As a result, only the number of levels $L$ is important for the precision in a single-shot experiment if the measurement time can be chosen appropriately. However, the time $t_1$ needed to achieve this precision scales inversely with the spectral range $\Delta$. Therefore, maximizing the number of levels is only optimal when the interaction time $t_1$ can be chosen arbitrarily. In general, however, not only the number of qubits but also time is a resource, as we investigate in the next section.
\subsection{Multi-shot estimation}
In this section, we investigate scenarios where the total interaction time $T$ is fixed. Here, we assume that $T$ can be split between different measurements. A basic approach would be to repeat the measurement described in the previous section $\nu=T/t_1$ times without updating the prior. Since the precision scales with $1/\nu$, we finally arrive at
\begin{equation}
\langle(\hat{\omega}-\omega)^2\rangle \sim \frac{W_0^2 t_1}{4L^2T} \sim \frac{W_0 }{2T L\Delta}.
\end{equation}
As a result, the optimal precision scales inversely with the product of the number of levels $L$ and the spectral range $\Delta$ of the generator $\hat{G}_0$.
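Explicitly, inserting $t_1$ from \eq{eq:time} gives $\frac{W_0^2 t_1}{4L^2T} = \frac{\pi(L-1)W_0}{2L^2\Delta T} \approx \frac{\pi W_0}{2TL\Delta}$, so the ``$\sim$'' above only hides a constant of order $\pi$.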
A better measurement scheme would update the prior $p(w)$ after each measurement and adapt the probe state, evolution time and measurement for each run.
Assuming that the frequency distribution stays flat in each round, we can now define a sequence of interaction times and widths $(t_k;W_k)$ with
\begin{equation}
W_k = W_0(2L)^{-k}
\end{equation}
and
\begin{equation}
t_k = \frac{L-1}{ \Delta}\frac{2\pi}{W_{k-1}}
= \frac{2\pi(L-1)}{W_0\Delta}(2L)^{k-1}.
\end{equation}
As a consequence, the total interaction time after $n$ measurement rounds is given by
\begin{equation}
T =\sum\limits_{k=1}^n t_k = \frac{2\pi(L-1)}{W_0\Delta}\frac{(2L)^{n}-1}{2L-1}
\approx \frac{\pi}{\Delta W_0}(2L)^n.
\end{equation}
Thus, the maximal number of estimation rounds is upper bounded by
\begin{equation}
(2L)^n\leq\frac{\Delta W_0}{\pi}T
\end{equation}
for a fixed interaction time $T$. Therefore, the maximal achievable precision is upper bounded by
\begin{equation}
W_T = W_n = W_0(2L)^{-n} \geq\frac{\pi }{T\Delta }
\end{equation}
suggesting that in such an adaptive scheme only the spectral range has an effect on the
scaling of the precision with time.
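For illustration, take $L = 2$, i.e. GHZ-type probes: the widths shrink as $W_k = W_0\, 4^{-k}$, the total interaction time grows as $T \approx \pi 4^{n}/(\Delta W_0)$, and eliminating $n$ indeed reproduces $W_T \approx \pi/(T\Delta)$. A larger $L$ reduces the number of rounds needed to reach a given width, but leaves this $1/(T\Delta)$ scaling unchanged.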
\subsection{Single-shot, fixed time estimation\label{sec:single_shot}}
In the two previous subsections, we assumed that it is possible to arbitrarily choose and split the interaction time. However, often only short interaction times are available in real-world estimation problems. In addition, preparing a good probe state and performing measurements also takes time, which in some cases exceeds the actual interaction time.
As a consequence, there exist many scenarios where only a single-shot estimation with fixed interaction time is possible. In this case, our estimation problem falls into the regime of Bayesian frequency estimation \cite{Macieszczak2014,Sidhu2019}. For Gaussian prior distributions $p(\omega)$, the precision of the updated distribution after the measurement is given by \cite{Macieszczak2014}
\begin{equation}
\langle(\hat{\omega}-\omega)^2\rangle=W_1^2 = W_0^2\left(1-W_0^2\cdot F(\bar{\rho},\hat{G}_0t)\right).
\end{equation}
Here, $F$ denotes the quantum Fisher information, $W_0^2$ the variance of the prior and $\bar{\rho}$ the prior-weighted density operator with matrix elements
\begin{eqnarray}
\bar{\rho}_{n,m}&=& \int \text{d}\omega \;c_nc_m^\ast \exp\left[-\ensuremath{{\mkern1mu\mathrm{i}\mkern1mu}} \omega t \frac{\Delta}{L-1}(n-m)\right] p(\omega)\\
&=& c_nc_m^\ast \exp\left[-\frac{t^2W_0^2\Delta^2}{2(L-1)^2}(n-m)^2\right],\label{eq:rho}
\end{eqnarray}
given in the eigenbasis of $\hat{G}_0$.
The optimization of the probe state is in general non-trivial and often only possible with numerical methods and iterative algorithms \cite{Demkowicz2011,Macieszczak2014}. However, we can adapt some of the results of \cite{Macieszczak2014} by rescaling the dimensionless time parameter
\begin{equation}
\tau= tW_0 \rightarrow tW_0 \frac{\Delta}{L}.
\end{equation}
For $tW_0\Delta \ll 1$, all off-diagonal terms in \eq{eq:rho} survive and thus a GHZ-like probe state is optimal. In this case, the variance reduction factor is given by
\begin{equation}
\frac{W_1^2}{W_0^2}=1-t^2W_0^2\Delta^2 \exp\left(-t^2W_0^2\Delta^2\right).
\end{equation}
As a consequence, the precision is mainly determined by the spectral range $\Delta$ alone as long as $tW_0\Delta \ll 1$.
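As a small numerical aside, the variance reduction factor above can be scanned over the dimensionless combination $x=tW_0\Delta$; within this expression the strongest reduction occurs at $x=1$, where the factor equals $1-1/e\approx 0.632$. The following Python sketch simply evaluates the stated formula:
\begin{verbatim}
import numpy as np

# Scan the variance reduction factor 1 - x^2 exp(-x^2), x = t*W0*Delta.
x = np.linspace(1e-3, 3.0, 10000)
reduction = 1 - x**2 * np.exp(-x**2)

i = np.argmin(reduction)
print(f"optimal x = t*W0*Delta ~ {x[i]:.3f}")
print(f"minimal W1^2/W0^2 ~ {reduction[i]:.3f}  (1 - 1/e ~ 0.632)")
\end{verbatim}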
For $tW_0\Delta \gg 1$, only off-diagonal elements in \eq{eq:rho} with $n\approx m$ survive provided $tW_0\Delta/L$ is of the order of unity. Then, the number of different levels is important and states similar to the Berry-Wiseman state \cite{Berry2000} are optimal.
In the intermediate regime, states with a structure interpolating between the GHZ state and the Berry-Wiseman state are optimal. Again, we can adapt results from \cite{Macieszczak2014} to our situation. A fixed number of levels $L$ in our case corresponds to a fixed number of atoms $N$ in \cite{Macieszczak2014} with $L=2N$. Thus, for a fixed $L$ with $2\leq L\leq 40$, a ratio of $0.5\leq tW_0 \Delta/(L-1)\leq 1$ is optimal, as can be seen from Fig.~2 in \cite{Macieszczak2014}, suggesting that $L$ and $\Delta$ should be increased simultaneously in this regime if possible. However, a larger number of levels $L$ always leads to a higher precision provided the ratio $\Delta/L$ is fixed.
For $tW_0\Delta/L\gg 1$, no off-diagonal elements survive and phase estimation is no longer possible. In this case, a shorter interaction time should be chosen.
\section{Creating linear spectra with local multilevel systems in noisy environments\label{sec:spectra}}
The advantage of quantum metrology is severely limited by the influence of noise. In the worst case scenario, the advantage shrinks to a constant factor \cite{Fujiwara2008,Escher2011,Escher2012,Kolodynski2012, Sekatski2017PRX,Sekatski2017quantummetrology}. However, the quadratic improvement of quantum metrology can be maintained in certain situations by using e.g. error correction or fast control \cite{Sekatski2017PRX, Sekatski2017quantummetrology,Dur_2014,Arrad_2014,Kessler_2014,Sekatski_2016,Zhou2018,Altenburg2016,Landini2014}. In \cite{Sekatski2019}, we have described how to protect global parameter estimation from noise sources with given spatial distributions by designing appropriate probe states. These states consisted of superpositions of effective energy levels of the generator $\hat{G}_0$ within a decoherence-free subspace. With these probe states, it is possible to achieve the same precision scaling as in the noiseless case.
We mainly concentrated in \cite{Sekatski2019} on the Fisher regime and thus on probe states consisting of a superposition of only two orthogonal states. However, the maximal achievable precision in the Bayesian regime crucially depends on the number of levels as we have discussed in the previous section.
It is in general a difficult task to find optimal probe states and precision limits in the Bayesian regime. Previous works \cite{Berry2000, Demkowicz2011, Macieszczak2014,Sidhu2019,Sekatski2017singlequbit} mainly concentrated on generators $\hat{G}_0$ with equally spaced levels. Therefore, we will also concentrate on this regime and investigate in the following methods to create effective linear spectra within the decoherence-free subspace. All previous results \cite{Berry2000, Demkowicz2011, Macieszczak2014,Sidhu2019,Sekatski2017singlequbit} as well as our considerations from \Sec{sec:L_vs_D} can then be adapted to probe states based solely on these effective spectra.
In this section, we concentrate on methods based on a fixed number of sensors at fixed positions and variable internal degrees of freedom. In the next section, we will concentrate on methods based on sensors with fixed internal degrees of freedom but variable positioning.
\subsection{One-dimensional orthogonal subspace}
Our goal is to determine the global phase $\varphi=t\omega$ generated by $\hat{G}_0$, \eq{eq:G}, with $0\leq \omega \leq W_0$ without being disturbed by global phase noise generated by $\lbrace \hat{G}_k\rbrace$ with $1\leq k\leq K$ and different spatial distributions $f_k(r_j)$.
The spatial distribution of each generator $\hat{G}_k$ can be described by the vector $\mathbf{f}_k=(f_k(r_1),\cdots,f_k(r_J))^T$ for $0\leq k\leq K$. In the following, we investigate a situation with one signal source and $K$ noise sources which are linearly independent, meaning that their corresponding spatial vectors $\mathbf{f}_k$ are linearly independent.
The state of our sensor network can be described by time averaged spin vectors
\begin{equation}
\mathbf{s}=\langle \hat{\mathbf{Z}} \rangle_t = (\langle \hat{Z}_1\rangle_t, \cdots , \langle \hat{Z}_J\rangle_t)^T
\end{equation}
determined by the time averaged expectation values $\langle \hat{Z}_j\rangle_t$ of the local operators $\hat{Z}_j$.
In the following, we assume that $\mathbf{s}$ describes time-averaged eigenstates (compare \cite{Sekatski2017singlequbit, Sekatski2019}). This means that a system in state $\mathbf{s}$ is at all times in an energy eigenstate; however, the eigenstate might change due to spin flips during the coherent time evolution generating the phase $\varphi=\omega t$. The coherence between two different effective spin vectors $\mathbf{s}$ and $\mathbf{r}$ is preserved if \cite{Sekatski2019}
\begin{equation}
\mathbf{f}_k^T(\mathbf{s}-\mathbf{r})=0 \quad \text{for all } 1\leq k\leq K
\end{equation}
and the effective signal strength is given by
$
\mathbf{f}_\perp^T(\mathbf{s}-\mathbf{r}).
$
Here, $\mathbf{f}_\perp$ denotes the component of the signal $\mathbf{f}_0$ which is orthogonal to all noise vectors $\mathbf{f}_k$.
The subspace orthogonal to $\text{span}\lbrace \mathbf{f}_k\rbrace$ with $1\leq k\leq K$ is one-dimensional if our sensor network consists of sensors at only $J=K+1$ different positions.
In this case, the optimal probe state consists of a superposition of spin states $\mathbf{s}$ parallel to $\mathbf{f}_\perp$ (compare with \cite{Sekatski2019}). To create non-integer multiples of $1/2$, we use intermediate spin flips such that the time average of the spin is given by
$
\mathbf{s}=\langle \hat{\mathbf{Z}} \rangle_t \parallel \mathbf{f}_\perp.
$
In the following, we investigate methods to create an effective linear spectrum described by $\mathbf{s}$ within the decoherence-free subspace. We assume that the positions of all sensors are fixed and that each sensor consists of a quantum system with $n_j$ linearly spaced energy levels with energies $\lbrace E_m=m\rbrace$ and $-n_j/2 \leq m \leq n_j/2$.
One possibility to create superpositions of effectively linearly spaced levels is to use equivalent local sensors with $n_j=n$ energy levels and create the superposition state
\begin{equation}
\ket{\psi}=\sum\limits_{m=-n/2}^{n/2} \bigotimes_{j=1}^J \ket{\text{sign}(f_\perp^j)m}_j.
\end{equation}
For each local system $j$, we time the local spin flips such that the average energy of the level $m=+1/2$ is given by
\begin{equation}
\langle 1/2 \rangle _j=\frac{|f_\perp^j|}{2f_\perp^\text{max}}
\end{equation}
where $f_\perp^j$ denotes the component $j$ of $\mathbf{f}_\perp$ and $f_\perp^\text{max}$ the maximal component of $\mathbf{f}_\perp$.
As a consequence, each energy level $m$ is mapped to $m |f_\perp^j|/f_\perp^\text{max}$ and we arrive at the effective probe state
\begin{equation}
\ket{\psi_\text{eff}}=\sum\limits_{m=-n/2}^{n/2} \bigotimes_{j=1}^J \ket{m \frac{ f_\perp^j}{f_\perp^\text{max}}}_j = \sum\limits_{m=-n/2}^{n/2} \ket{m \frac{\mathbf{f}_\perp}{f_\perp^\text{max}}}\label{eq:probestate}
\end{equation}
which consists of a superposition of $n$ states with effective spins $\mathbf{s}_m \parallel \mathbf{f}_\perp$. Thus, they all lie in the same decoherence-free subspace.
However, a superposition state with equal level spacing and spectral range can also be achieved with fewer resources if $n\cdot |f_\perp^j|/f_\perp^\text{max}\leq (n-1)$ for some $j$. In this case, we use systems of dimension $\lceil n|f_\perp^j|/f_\perp^\text{max}\rceil$ for each local system. However, at least one system still has dimension $n$. We use this system as a control system for controlled spin flips on all the other systems to create the effective superposition state given in \eq{eq:probestate}. In general, it is also possible to create superpositions of $n$ levels with only single qubits for each sensor. In this case, an auxiliary system with $n$ levels which is insensitive to all fields (signal and noise) is needed to control the spin flips (see \cite{Sekatski2017singlequbit}).
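The construction can be illustrated numerically. The following sketch uses an arbitrary example signal vector and two noise vectors (all values are assumptions made purely for illustration), computes $\mathbf{f}_\perp$, builds the effective spin vectors $\mathbf{s}_m = m\,\mathbf{f}_\perp/f_\perp^\text{max}$ and checks that all differences of effective spin vectors are orthogonal to the noise vectors while their projections onto $\mathbf{f}_\perp$ are equally spaced:
\begin{verbatim}
import numpy as np

# Example: J = 3 sensor positions, one signal vector f0 and K = 2
# linearly independent noise vectors (all values are arbitrary).
f0 = np.array([1.0, 0.5, -0.2])
F_noise = np.array([[1.0, 1.0, 1.0],      # f1: global noise
                    [0.3, -0.4, 0.9]])    # f2: second noise source

# Component of f0 orthogonal to the span of the noise vectors.
Q, _ = np.linalg.qr(F_noise.T)
f_perp = f0 - Q @ (Q.T @ f0)

# Effective spin vectors s_m = m * f_perp / max|f_perp|.
n = 4
levels = np.arange(-n / 2, n / 2 + 1)
S = np.outer(levels, f_perp / np.max(np.abs(f_perp)))

# Differences of effective spins lie in the decoherence-free subspace
print(np.allclose(F_noise @ (S[1:] - S[:-1]).T, 0))   # True
# ... and their projections onto f_perp are equally spaced.
print(np.round(S @ f_perp, 6))
\end{verbatim}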
\subsection{Multi-dimensional orthogonal subspace}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{gen_multilevel_final}
\end{center}
\caption{Visualization of the states of a two qubit system and their projection onto the perpendicular signal component $\mathbf{f}_\perp$ (black arrow). The projections of the four spin states $(\pm s_\text{eff},\pm 1/2)$ (red dots) lead to an equally spaced level structure optimal for quantum metrology in the Bayesian regime. The superposition of the two states $(1/2,1/2)$ and $(-1/2,-1/2)$ is optimal for quantum metrology in the Fisher regime. However, their projections together with the projections of $(1/2,-1/2)$ and $(-1/2,1/2)$ do not lead to equally spaced levels. }
\label{fig:multilevel}
\end{figure}
There are automatically more levels available in the decoherence-free subspace if the space orthogonal to $\text{span}\lbrace \mathbf{f}_k\rbrace$ with $1\leq k\leq K$ is multi-dimensional. This is possible if there exist $J>K+1$ different sensor positions. For example, in the appendix of \cite{Sekatski2019}, an example of a three qubit system is discussed, where all states of the form $\ket{\pm 1/2,\pm1/2,s_3}$ with arbitrary but fixed $s_3$ lie within the same decoherence-free subspace. In this case, the states $\ket{\mathbf{s}}$ used for the superposition probe state need not be parallel to $\mathbf{f}_\perp$. However, the projections of these states onto $\mathbf{f}_\perp$ are not necessarily equidistant; see for example the projections of the states $\ket{\pm 1/2,\pm 1/2,s_3}$ onto $\mathbf{f}_\perp$ as depicted in \fig{fig:multilevel} (the state $s_3$ of the third qubit was neglected in this figure to simplify the presentation). Again, we can use dynamically controlled spin flips to solve this problem. In this case, we decrease the spin of one of the systems from $1/2$ to $s_\text{eff}$ in such a way that
\begin{equation}
\left(\begin{array}{c}s_\text{eff}\\1/2\end{array}\right) \cdot \mathbf{f}_\perp = 3 \left(\begin{array}{c}-s_\text{eff}\\1/2\end{array}\right) \cdot \mathbf{f}_\perp.
\end{equation}
In this way, the projections of the states $\ket{\pm s_\text{eff}, \pm 1/2}$ onto $\mathbf{f}_\perp$ are linearly spaced. To obtain more levels, it is enough to increase the number of linearly spaced energy levels of one of the systems. In our example, we can get $2n$ ``equally'' spaced states within the decoherence-free subspace. This method can be generalized to higher dimensional decoherence-free subspaces.
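For the two-qubit example, the condition above can be solved in closed form: writing $\mathbf{f}_\perp=(f_1,f_2)^T$, it yields $s_\text{eff}=f_2/(4f_1)$. The following sketch verifies, for an arbitrarily chosen example $\mathbf{f}_\perp$, that the four projections are then equally spaced:
\begin{verbatim}
import numpy as np

# Choose s_eff such that the four states (+-s_eff, +-1/2) project onto
# equally spaced values along f_perp; f_perp is an arbitrary example.
f1, f2 = 0.8, 0.6
s_eff = f2 / (4 * f1)

states = [(s, z) for s in (+s_eff, -s_eff) for z in (+0.5, -0.5)]
proj = sorted(s * f1 + z * f2 for s, z in states)
print(np.round(proj, 6))                              # equally spaced
print(np.allclose(np.diff(proj), np.diff(proj)[0]))   # True
\end{verbatim}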
As a result, we can generate linear multi-level spectra within the decoherence-free subspace for Bayesian parameter estimation by using well-timed spin flips. The maximal achievable spectral range $\Delta$ as well as the number of levels $L$ scale with the number of local energy levels $n$, similar to the noiseless case. In addition, the dimension $d=J-K$ of the space orthogonal to all noise sources can also be used to generate effective linear spectra, again by using well-timed spin flips.
\subsection{Arbitrary effective spectrum}
We have concentrated on generating effective linear spectra w.r.t. the signal-generating Hamiltonian so far, as such spectra are typically used in a Bayesian estimation scenario with a flat prior distribution. However, there are many other metrological scenarios as well, and the optimal states and optimal effective energy spectra vary from case to case. Hence, we also discuss a general method to obtain arbitrary effective spectra within a DFS using dynamical control. Once one has constructed a two-dimensional DFS with eigenstates $|v^+\rangle,|v^-\rangle$ and eigenvalues $\lambda^\pm=\pm\Delta/2$, where $\Delta$ is the spectral range, one can obtain a multi-dimensional DFS with degenerate eigenvalues by simply placing more sensors (or a higher dimensional system) at each sensor position. Similarly, adding auxiliary systems that do not take part in the sensing process has a similar effect. We assume in the following that each eigenstate is $k$-fold degenerate, $|v_k^\pm\rangle = |v^\pm\rangle|k\rangle$, with eigenvalues $\lambda_k^\pm = \lambda^\pm$. By performing a controlled switch between the eigenstates $|v^+_k\rangle$ and $|v_k^-\rangle$ at appropriate times, one can generate effective eigenvalues $\tilde\lambda_k^+, \tilde\lambda_k^-$ with arbitrary values $0 \leq \tilde\lambda_k^+ \leq \Delta/2$ and $\tilde\lambda_k^-=-\tilde\lambda_k^+$. This allows one to produce an arbitrary symmetric spectrum. An arbitrary asymmetric spectrum can be obtained by mixing each of the eigenstates separately with an effective zero-energy state. Notice that effective zero-energy levels can be generated by using two other auxiliary levels. An alternative is to use just the positive part of the spectrum, which, however, halves the spectral range.
A similar method works to modify a given linearly spaced spectrum $\{\pm \lambda_k\}$. By adding degeneracies (e.g. using auxiliary states or levels that do not take part in the sensing process), one can either mix pairs of levels $\pm \lambda_k$ with effective zero-energy states, thereby moving both levels toward zero energy, or mix two different energy levels $\lambda_{k_1},\lambda_{k_2}$, which moves one energy up and the other down (but eventually also changes the spectral range).
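As a minimal illustration of the controlled-switch idea, suppose a state spends a fraction $p$ of the interaction time in $|v^+\rangle$ (eigenvalue $+\Delta/2$) and $1-p$ in $|v^-\rangle$ (eigenvalue $-\Delta/2$). The time-averaged effective eigenvalue is then $(2p-1)\Delta/2$, so any target value in $[-\Delta/2,\Delta/2]$ can be realized by choosing $p$ accordingly. The target spectrum in the sketch below is an arbitrary example:
\begin{verbatim}
import numpy as np

# Effective eigenvalue from switching between +Delta/2 and -Delta/2:
#   lam_eff = p*(Delta/2) + (1 - p)*(-Delta/2) = (2p - 1)*Delta/2.
Delta = 2.0

def switch_fraction(lam_target):
    """Time fraction spent in |v+> realizing a target eigenvalue."""
    return 0.5 + lam_target / Delta

targets = np.array([-1.0, -0.4, 0.1, 0.7])    # arbitrary target values
p = switch_fraction(targets)
print(np.round(p, 3))                                   # time fractions
print(np.allclose((2 * p - 1) * Delta / 2, targets))    # True
\end{verbatim}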
\section{Creating linear spectra within the decoherence-free subspace by varying the spatial distributions \label{sec:position}}
\begin{table}
\begin{tabular}{|l|c|c|}
\hline
spatial distribution & $\Delta$ & $L$ \\ \hline
2-point & $N$ & $N/2$ \\ \hline
linear & $\approx N/4$ & $\approx N^2/4$ \\ \hline
exponential & $\approx 2$ & $2^{N/2}$ \\ \hline
\end{tabular}
\caption{Summary of spectral range $\Delta$ and number of levels $L$ for gradient estimation with $N$ qubits for different spatial distributions within the subspace protected from global phase noise. }
\label{tab:gradient}
\end{table}
In the previous section, we investigated how to create effectively linearly spaced levels assuming a fixed number of sensors at fixed positions. There, we increased the number of levels $L$ as well as the spectral range $\Delta$ simultaneously by increasing the internal degrees of freedom of the local sensors.
However, the number of available levels $L$ and the maximal achievable spectral range $\Delta$ are strongly influenced by the positioning of the different sensors. Thus, we can increase $L$ or $\Delta$ just by varying the spatial distribution of our sensors without using additional resources such as additional qubits to increase the internal degrees of freedom of the sensors. In general, increasing one will lead to a decrease of the other. Thus, the trade-off between $L$ and $\Delta$ needs to be carefully balanced depending on the actual situation, as discussed in \Sec{sec:L_vs_D}.
To be fully flexible, we present here different constructions to achieve states with up to exponentially many effective energy levels, at the price of a (linearly) reduced spectral range. In contrast to \Sec{sec:spectra}, we now assume that each sensor is described by a single qubit. Our results can be generalized to sensors with more internal degrees of freedom by combining the methods from this section and \Sec{sec:spectra}.
We start by concentrating on gradient estimation with the generator
\begin{equation}
\hat{G}_0=\sum\limits_j r_j \hat{Z}_j,
\end{equation}
with normalized positions $-1/2\leq r_j\leq 1/2$. Our goal is to determine the global phase $\varphi=t\omega$ generated by $\hat{G}_0$ with $0\leq \omega < W_0$ without being disturbed by global phase noise generated by
\begin{equation}
\hat{G}_1 = \sum\limits_j \hat{Z}_j.
\end{equation}
In the Fisher scenario, it is optimal to place $N/2$ qubits at $r_j=\pm 1/2$, respectively, because we achieve in this way the maximal possible spectral range of $\Delta= N$ \cite{Altenburg2017}. However, we obtain only $L=N/2$ different eigenvalues for $\hat{G}_0$. In the following, we discuss different spatial arrangements of our sensors to generate linear spectra with different combinations of $L$ and $\Delta$ which can help to optimize global parameter estimation in the Bayesian regime.
\subsection{Linear spacing}
For simplicity, we assume that the number of qubits $N$ is even. In this case, positioning $N$ sensors with linear spacing leads to
\begin{equation}
r_{\pm j} = \pm \frac{j-1/2}{N-1} \quad , \quad 1\leq j \leq N/2.
\end{equation}
The maximal eigenvalue $\Gamma_\text{max}$ is achieved if all spins with positive $r_j$ point up and all others point down, leading to
\begin{equation}
\Gamma_\text{max}= 2\cdot \frac{1}{2}\sum\limits_{j=1}^{N/2} \frac{j-1/2}{N-1}.
\end{equation}
In a similar way, we find $\Gamma_\text{min}= -\Gamma_\text{max}$.
Thus the spectral range is given by
\begin{equation}
\Delta = \Gamma_\text{max} - \Gamma_\text{min} = \frac{N^2}{4(N-1)}\approx \frac{N}{4}.
\end{equation}
Similar considerations for $N$ odd lead to the same scaling $\Delta\approx N/4$. The smallest energy change is achieved if the spins situated at $r_{\pm 1}$ are changed. Both of these spins need to be switched simultaneously to stay in the subspace protected from global field noise. This leads to a minimal energy change of $\delta = 1/(N-1)$ and, as a result, to a maximal number of energy levels of $L=\Delta/\delta\approx N^2/4$. Consequently, increasing the number of qubits leads to a similar scaling of the spectral range as in the Fisher regime while we get a quadratic improvement in the number of levels.
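These scaling relations can be verified by brute force for small $N$. The following sketch enumerates all eigenvalues of $\hat{G}_0$ within the zero-magnetization subspace (which is protected from global phase noise) for linearly spaced positions:
\begin{verbatim}
import numpy as np
from itertools import product

# Eigenvalues of G0 = sum_j r_j Z_j in the zero-magnetization subspace
# for linearly spaced positions r_{+-j} = +-(j - 1/2)/(N - 1).
N = 8
j = np.arange(1, N // 2 + 1)
r = np.concatenate([-(j[::-1] - 0.5), j - 0.5]) / (N - 1)

eigs = set()
for spins in product([-0.5, 0.5], repeat=N):
    if sum(spins) == 0:               # stay in the protected subspace
        eigs.add(round(float(np.dot(r, spins)), 10))

eigs = sorted(eigs)
print(f"range = {eigs[-1] - eigs[0]:.4f}  "
      f"(N^2/(4(N-1)) = {N**2 / (4 * (N - 1)):.4f})")
print(f"levels L = {len(eigs)},  min spacing = {min(np.diff(eigs)):.4f}")
\end{verbatim}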
\subsection{Exponential spacing}
The maximal number of levels with equidistant spacing $L=2^{N/2}$ is achieved if the particles are placed at
\begin{equation}
r_{\pm j}=\pm \frac{1}{2} \frac{1}{2^{j-1}}\quad ,\quad 1\leq j \leq N/2
\end{equation}
where we again took into account that only states within the protected subspace are of interest. In this case, the maximal and minimal eigenvalues are given by
\begin{equation}
\Gamma_\text{max/min}=\pm \frac{2}{4}\sum\limits_{j=1}^{N/2} \frac{1}{2^{j-1}} = \pm\left(1-\frac{1}{2^{N/2}}\right)
\end{equation}
leading to a spectral range of $\Delta \approx 2 $ for large $N$. As a consequence, the achievable spectral range $\Delta$ is limited in this case and cannot be enhanced above a certain threshold by increasing the number of qubits. However, the precision depends only on $L$ for single-shot estimation if the interaction time $t$ is large enough such that $t W_0 \Delta \gg 1$ (see \Sec{sec:single_shot}). In this case, it is not necessary to increase $\Delta$ and we profit from the exponential scaling of $L$.
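Again, a brute-force check is instructive. Building the spectrum from the logical pairs $(j,-j)$, each pair contributes $\pm 2^{-j}$ to the eigenvalue within the protected subspace, and the following sketch confirms that the resulting $2^{N/2}$ levels are equidistant with a spectral range approaching $2$:
\begin{verbatim}
import numpy as np
from itertools import product

# Each logical pair (j, -j) with r_{+-j} = +-2^{-j} contributes +-2^{-j}
# to the eigenvalue of G0 within the noise-protected subspace.
N = 10
M = N // 2
contrib = 0.5 ** np.arange(1, M + 1)          # 2^{-j}, j = 1..M

eigs = sorted(np.dot(s, contrib) for s in product([-1, 1], repeat=M))
gaps = np.diff(eigs)
print(f"levels L = {len(eigs)} = 2^{M}")
print(f"equidistant: {np.allclose(gaps, gaps[0])}, spacing = {gaps[0]:.6f}")
print(f"spectral range = {eigs[-1] - eigs[0]:.6f}  (-> 2 for large N)")
\end{verbatim}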
\subsection{Arbitrary functions}
The considerations above can be generalized to arbitrary generators
\begin{equation}
\hat{G}_0=\sum\limits_j f(r_j) \hat{Z}_j.
\end{equation}
Again, our goal is to construct probe states which are insensitive to global phase noise generated by $\hat{G}_1=\sum_j \hat{Z}_j$. However, we no longer demand that the positioning of the sensors itself is linear or exponential, but rather that the resulting field strengths $f(r_j)$ follow such a pattern when hopping from one sensor to another. In this way, we can generate similar level structures as in the case of gradient estimation.
To achieve linear spacing, the sensors need to be placed at positions $r_j$ such that
\begin{equation}
f(r_j)=f_j=a \frac{j-1/2}{N-1}+b.
\end{equation}
To construct a probe state which is insensitive to global field noise, we determine the component $\mathbf{f}_\perp$ of $\mathbf{f}$ which is orthogonal to the vector $\mathbf{f}_0=(1,\cdots,1)^T$ describing global field noise \cite{Sekatski2019}. This component is given by
\begin{eqnarray}
f_\perp^j= f(r_j)-\frac{\sum\limits_{j=1 }^N f(r_j)}{N}=a \frac{j-1/2-N/2}{N-1}
\end{eqnarray}
and is antisymmetric such that $f_\perp^j =- f_\perp^{N+1-j}$. The largest eigenvalue of $\hat{G}_0$ is achieved if all spins for $1\leq j \leq N/2$ point down and all others point up. The state with the smallest eigenvalue is obtained by flipping all spins, leading to a spectral range of
\begin{equation}
\Delta = 2\cdot2\cdot \frac{1}{2} \sum\limits_{j=N/2+1}^N a \frac{j-1/2-N/2}{N-1}= a \frac{N^2}{4(N-1)}.
\end{equation}
The smallest change within the protected subspace is achieved if the two middle spins ($j=N/2$ and $j=N/2+1$) are switched, leading to a level spacing of $\delta=a/(N-1)$ and in total $L=N^2/4$ levels, similar to the case of gradient estimation with linearly positioned sensors.
For arbitrary functions $f(r_j)$, it is also possible to create $L=2^{N/2}$ equidistant levels within a decoherence-free subspace with $N$ qubits, as we will demonstrate in the following. So far, we have always used the fact that two qubits with opposite spin form a two-dimensional decoherence-free subspace. That is, always two qubits form one logical qubit with
\begin{eqnarray}
\ket{+}_{L_j}&=&\ket{+}_j\ket{-}_{-j} \\
\ket{-}_{L_j}&=&\ket{-}_j\ket{+}_{-j}
\end{eqnarray}
leading to
\begin{eqnarray}
\hat{G}_1\ket{\pm}_{L_j}&=&0\\
\hat{G}_0\ket{\pm}_{L_j}&=&\pm(f(r_j)-f(r_{-j}))\ket{\pm}_{L_j}.
\end{eqnarray}
Here, $\ket{\pm}$ denotes a spin-eigenstate with the spin pointing up or down, respectively.
To create $2^{N/2}$ levels within the decoherence-free subspace, we need $N/2$ independent pairs $(j,-j)$ of sensors with
\begin{equation}
f(r_j)-f(r_{-j})= \frac{a}{2^{j-1}} \quad, \quad 1\leq j\leq N/2.
\end{equation}
The maximal and minimal eigenvalues are then given by
\begin{equation}
\Gamma_\text{max/min}=\pm \frac{1}{2}\sum\limits_{j=1}^{N/2} \frac{a}{2^{j-1}}=\pm a(1-\frac{1}{2^{N/2}})
\end{equation}
leading to a spectral range of $\Delta \approx 2a$ for large $N$. Finding the positions $r_{\pm j}$ is straightforward if $f$ is continuous and an inverse function $f^{-1}$ is known ($f^{-1}$ need not be unique). The pairs $(j,-j)$ can be freely chosen since the function $f_0(r)$
describing the global noise is constant. The optimal strategy is to choose $r_1$ such that $f(r_1)$ is equal to the maximum of $f(r)$ within the area of allowed sensor positions and $r_{-1}$ denotes the position of the minimum. All other sensor positions are then consecutively defined via
\begin{equation}
f(r_{\pm j})= \frac{1}{2}\left(f_\text{max}+f_\text{min}\pm \frac{f_\text{max}-f_\text{min}}{2^{j-1}}\right).
\end{equation}
As a result, we can achieve an exponential precision scaling for the estimation of a global field with arbitrary spatial dependence $f(r)$ in the presence of global phase noise in the Bayesian regime.
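To make the placement rule concrete, the following sketch numerically inverts an assumed monotone example profile $f(r)=\tanh(3r)$ on $[-1/2,1/2]$ (the profile and all parameters are arbitrary choices) and places the sensor pairs such that $f(r_j)-f(r_{-j})=(f_\text{max}-f_\text{min})/2^{j-1}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Place sensor pairs for an assumed monotone profile f(r) so that
# f(r_j) - f(r_{-j}) = (f_max - f_min)/2^{j-1},  r in [-1/2, 1/2].
f = lambda r: np.tanh(3 * r)            # example profile (assumption)
f_min, f_max = f(-0.5), f(0.5)

N = 8
pos = {}
for j in range(1, N // 2 + 1):
    for sign in (+1, -1):
        target = 0.5 * (f_max + f_min
                        + sign * (f_max - f_min) / 2 ** (j - 1))
        pos[sign * j] = brentq(lambda r: f(r) - target, -0.5, 0.5)

for j in range(1, N // 2 + 1):
    gap = f(pos[j]) - f(pos[-j])
    print(f"j={j}: r_+ = {pos[j]:+.4f}, r_- = {pos[-j]:+.4f}, "
          f"gap = {gap:.6f}")
\end{verbatim}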
\section{Conclusion\label{sec:conclusion}}
In this paper, we first discussed the influence of the spectral range $\Delta$ and the number of levels $L$, covered by a probe state and defined by a generator $\hat{G}$ of an unknown phase $\varphi=\omega t$, on the precision of estimating $\omega$. The optimal precision is solely determined by the spectral range $\Delta$ if the interaction time $t$ between the unknown field $B=\omega f_0(r)$ with strength $\omega$ and the sensor network is very small such that $t\ll 1/(W_0 \Delta)$. Here, $W_0$ denotes the width of the prior of $\omega$. This is also the case if the interaction time can be split into multiple measurements with arbitrarily small interaction times. In this case, the information gain achieved by probe states based on multi-level states is compensated by longer interaction times for each single measurement (see \eq{eq:time}). Thus, it is possible to either perform a few longer measurements providing more information, due to a larger $L$, or many short measurements, each providing only a single bit of information if $L=2$. However, the total amount of available information stays constant and the precision $\langle(\hat{\omega}-\omega)^2\rangle $
depends solely on the spectral range $\Delta$.
However, in many situations we are limited to single-shot estimation because preparation and measurement times exceed the available interaction time. Then, the number of levels $L$ becomes more and more important as $tW_0\Delta$ grows.
As a consequence, it is optimal to invest the given resources, e.g. the number of available qubits $n$, in different ways depending on the given estimation situation. Here, we also took the influence of different noise sources with given spatial distributions $f_k(r)$ into account. To generate a maximal spectral range $\Delta$, it is optimal to put as many qubits as possible at a position with maximal effective signal strength $f_\perp^\text{max}$. In this case, $\Delta$ scales linearly with the number of qubits at this position. However, we need sensors at a minimum of $J=K+1$ different positions if $K$ linearly independent noise sources are present. This reduces the number of qubits at these positions dramatically.
To create a large number of levels $L$, it is optimal to place each qubit at a different position. Depending on the spatial distribution of the sensors, we can maximize $L$ by sacrificing $\Delta$: in this case, we can get an exponential scaling of $L$ with the number of qubits $n$, while $\Delta$ is limited by a constant. In other cases, it is optimal to increase $L$ and $\Delta$ simultaneously, as discussed in \Sec{sec:single_shot}. In such situations, it is e.g. possible to achieve quadratic scaling of $L\sim n^2$ and still linear scaling of $\Delta\sim n$.
\section*{Acknowledgment}
This work was supported by the Austrian Science Fund (FWF) through project P30937-N27.
\bibliographystyle{apsrev4-1}
\section{Introduction}
\label{Introduction}
Deep models have become the de-facto approach for prediction in many applications like image classification (e.g. \cite{krizhevsky2012imagenet}) and machine translation (e.g. \cite{bahdanau2014neural,sutskever2014sequence}) and further seem poised to advance prediction in real-world domains \cite{miotto2016deep,gulshan2016development,ghassemi2017predicting}.
However, many practitioners still are reluctant to adopt deep models because their predictions are difficult to interpret. Without interpretability, humans are unable to incorporate their domain knowledge and effectively audit predictions.
In this work, we shall seek a specific form of interpretability known as \emph{human-simulability}. A human-simulable model is one in which a human user can ``take in input data together with the parameters of the model and in reasonable time step through every calculation required to produce a prediction'' \cite{lipton2016interpretability}.
For example, small decision trees with only a few nodes are easy for humans to simulate and thus understand. Human-simulability is valuable in many domains. In particular, despite advances in deep learning for clinical decision support (e.g. \cite{miotto2016deep,choi2016doctor,che2015deep}), the clinical community remains skeptical (and rightfully so) of machine learning systems \cite{chen2017machine}. The black box nature of neural networks prevents the checks-and-balances and quality control that we expect from healthcare providers.
Meanwhile, a simulable model would enable clinicians to audit predictions easily: they can manually inspect changes to outputs under perturbed inputs, check substeps against their expert knowledge, and reason about external factors influencing prediction like systemic bias in the data.
Similar needs for simulability exist in many decision-critical domains such as disaster response or recidivism prediction.
Despite the appeal and need for human-simulability, many popular models are not simulable. Even simple deep models like multi-layer perceptrons with a few dozen units can have far too many parameters and connections for a human to easily step through (successive matrix multiplications quickly become difficult to think about). Richer families of neural networks such as those for sequences are essentially impossible for humans to simulate. However, with added non-linearities and many more free parameters, these rich families often allow for significantly more accurate predictions than a small decision tree.
Thus, the primary question we consider in this work is the following: \emph{Is it possible for a powerful model such as a deep network to be human-simulable, or at least frequently human-simulable?}
Simulability is a rather strict definition of interpretability as it requires full transparency in prediction. As such, current work on the interpretability of black-box models struggles to balance being both simulable and faithful to the model.
For instance, \citeauthor{craven1996extracting} \shortcite{craven1996extracting} train decision trees that mimic the predictions of a fixed, pre-trained neural network. Other post-hoc interpretations typically evaluate the sensitivity of predictions to local perturbations of inputs or the input gradient \cite{ribeiro2016should,selvaraju2017grad,adler2016auditing,lundberg2016unexpected,erhan2009visualizing}. While post-hoc interpretations come in many sophisticated forms---others include \cite{singh2016programs}, who use programs to explain a model's predictions as a post-hoc step, and \cite{lakkaraju2016interpretable}, who learn decision sets based on a learned model---it is difficult to simplify the complex logic of an unregularized neural network to a simulable (simple) tree, set, or program. As a result, many of these methods only explain local behavior or a lower resolution (noisy) depiction of global logic. In general, the problem of distilling the decision function of a trained and unregularized neural network to a simple family of decision functions is somewhat ill-posed: unregularized neural networks have no incentive to be simulable or to satisfy any other notion of human-interpretability.
Instead, they will learn complex decision boundaries fit to succeed at the target task. Trying to enforce interpretability post-hoc must understandably make strong assumptions that over-simplify the model's logic.
In contrast, we begin with the observation that, since it is well-known that deep models often have multiple optima of similar predictive accuracy \cite{Goodfellow-et-al-2016}, one might hope to directly find ``more interpretable'' minima with equal predictive accuracy. In other words, if we consider interpretability from the very start, i.e. add an ``interpretability term'' in the objective function, it might be possible to train neural networks to be both performant and simulable. In general, however, the field of \emph{optimizing} deep models for interpretability remains largely nascent. In this vein, \citeauthor{ross2017right} \shortcite{ross2017right} penalize input sensitivity to features marked as less relevant, while \citeauthor{lei2016rationalizing} \shortcite{lei2016rationalizing} train deep models that make predictions from text and simultaneously highlight contiguous subsets of words, called a ``rationale,'' to justify each prediction. Unfortunately, while both works optimize deep models to expose relevant features, these lists of features alone are not sufficient to \emph{simulate} the prediction. We draw a stark distinction between explanation and simulation: the former may describe interpretable features whereas the latter requires defining both features and a procedure for translating them into output. In the following, we introduce two contributions: we first discuss how to optimize deep models to expose prediction logic (not just features) using decision trees, and second, how to generalize this method to incorporate human prior knowledge.
\paragraph{Tree Regularization}
To optimize for interpretability, we must define an objective function that finds deep models that are both accurate and simulable.
To do this, we introduce the notion of \emph{tree-regularization.} Specifically, we define a novel model-complexity penalty function that favors model optima whose decision boundaries can be well-approximated by small decision trees. In effect, this penalizes models that would require many calculations to simulate predictions. Similar to many popular regularizers such as L2 or L1, the tree regularizer is a function on the weights of the neural network. Several of our technical contributions surround making this regularizer differentiable such that it is compatible with stochastic gradient descent.
Experimentally, we first exemplify how this technique can be used to train simple multi-layer perceptrons to have tree-like decision boundaries. We then focus on time-series applications and show that gated recurrent unit (GRU) models trained with strong tree-regularization reach a high-accuracy-at-low-complexity sweet spot that is not possible with any strength of L1 or L2 regularization. Furthermore, we will show that the decision trees (produced during training) can be used as tools for human simulation -- they act as distillations of the deep model and can be given to domain experts. Choosing several real-world applications, we demonstrate these features of our approach on a speech recognition task and two medical treatment prediction tasks for patients with sepsis in the intensive care unit (ICU) and for patients with human immunodeficiency virus (HIV). Throughout, we also show that standalone decision trees as a baseline are noticeably less accurate than our tree-regularized deep models.
\paragraph{Granularity of Explanation}
Thus far, we have implicitly assumed that there exists an optima for a deep model that is simulable while maintaining high performance. For many domains, this may not be true -- we may rely on the complexity of a deep model where any strong regularization greatly increases error. In such cases, it may not be possible to have a model that is both accurate and well-approximated by a simple decision tree. To remedy this, we consider \textit{regional} explanations that constrain the model independently across a partitioning of the input space. Coincidentally, this form of explanation is consistent with those of humans, whose models are typically context-dependent \cite{miller2018explanation}. For example, physicians in the intensive care unit do not expect treatment rules to be the same across different categories of patients. Constraining each region to be interpretable allows the deep model more flexibility than a global constraint, while still revealing prediction logic that can generalize to nearby inputs (in contrast to works on local explanation---\cite{ribeiro2016should,selvaraju2016grad,ross2017right}---which cannot indicate whether the same logic revealed for an input $x$ can be used for nearby inputs $x'$, an ambiguity that can lead to mistaken assumptions and poor decisions). In other words, we assume that even the most complex decision boundaries can be decomposed into an ensemble of simpler regional boundaries, each of which can be well-approximated by a decision tree. Furthermore, in many domains like medicine, human experts have very good intuitions for how to partition the input space. For example, an intensivist may care for patients in the surgical unit differently than patients in (non)-surgical units. By generalizing tree regularization to support regions, we can incorporate prior knowledge from domain experts to train simulable models.
While a straightforward conceptual leap, optimizing for simulable explanations across many regions poses a difficult technical challenge, facing issues with differentiability, efficiency, and a delicate balance of constraints between regions of varying size and complexity. In the methods, we will describe a computationally tractable and reliable approach to do so. Specifically, we show how to jointly train a deep model that both has high accuracy and is regionally simulable, and introduce innovations for stability in optimization. We first present a few synthetic experiments to build intuition and then, revisiting the clinical domain, we demonstrate that \textit{regional tree regularization} achieves better performance while learning a much simpler decision function than any other regularizer.
\section{Related work}
\label{Related_Work}
\paragraph{Global Interpretability}
Given a \emph{trained} black box model, many approaches exist to explain what the model has learned. Works such as \cite{mordvintsev2015inceptionism} expose the features a representation encodes but not the logic. \cite{amir2018highlights,kim2014bayesian} provide an informative set of examples that summarize the system. Model distillation compresses a source network into a smaller target neural network \cite{frosst2017distilling}. However, even a small neural model may not be interpretable. Activation maximization of neural networks \cite{montavon2018methods} tries to find input patterns that produce the maximum response for a quantity of interest. However, a set of input patterns is not necessarily adequate to simulate a model's predictions. Similarly, Layerwise Relevance Propagation \cite{binder2016layer,bach2015pixel} produces a heatmap of information relevant for prediction based on aggregating the weights of a neural network. Again, learning a heatmap of the important information for predicting outcomes does not always enable human simulability, since we cannot necessarily step through each calculation that produces a decision.
\paragraph{Local Interpretability}
In contrast, local approaches provide explanation for a specific input. \citeauthor{ribeiro2016should} \shortcite{ribeiro2016should} show that using the weights of a sparse linear model, one can explain the decisions of a black box model in a small area near a fixed data point. This captures the intuition that even nonlinear functions are locally linear. Similarly, instead of a linear model, \citeauthor{singh2016programs} \shortcite{singh2016programs} and \citeauthor{koh2017understanding} \shortcite{koh2017understanding} output a simple program or an influence function, respectively. Other approaches have used input gradients (which can be thought of as infinitesimal perturbations) to characterize the local space \cite{maaten2008visualizing,selvaraju2016grad}. However, the notion of a local region in these works is both very small and often implicit; it does not match with human notions of contexts \cite{miller2018explanation}: a user may have difficulty knowing when local explanations apply and how they generalize to nearby inputs.
\paragraph{Optimizing for Interpretability}
While there is little work on optimizing models for interpretability, there are some related threads. The first is \emph{model compression}, which trains smaller models that perform similarly to large, black-box models (e.g. \cite{buciluǎ2006model,hinton2015distilling,balan2015bayesian,han2015learning}).
Other efforts specifically train very sparse networks via L1 penalties \cite{zhang2016l1} or even \emph{binary} neural
networks \cite{tang2017train,rastegari2016xnor} with the goal of faster computation. Edge and node regularization is commonly used to improve prediction accuracy \cite{drucker1992improving,ochiai2017automatic}, and recently \citeauthor{hu2016harnessingLogic} \shortcite{hu2016harnessingLogic} improve prediction accuracy by training neural networks so that predictions match a small list of known domain-specific first-order logic rules. Sometimes, these regularizations---which all smooth or simplify decision boundaries---\textit{can} have the effect of also improving interpretability. However, there is no guarantee that they will do so; we emphasize that specifically \emph{training} deep models to have easily simulable decision boundaries is (to our best knowledge) novel.
\section{Background and Models}
\label{Background}
We consider supervised learning tasks given datasets of $N$ labeled examples, $\mathcal{D} = \{(\mathbf{x}_n, \mathbf{y}_n)\}_{n=1}^N$, where each example (indexed by $n$) has an input feature vector $\mathbf{x}_n \in \mathcal{X}^P$ and a target output vector $\mathbf{y}_n \in \mathcal{Y}^Q$. $P$ and $Q$ are the dimensionalities. For example, we will sometimes write $\mathbf{x}_n = [x_{n}(1), ..., x_{n}(p)]$, using $(\cdot)$ to indicate indexing into the vector. We shall assume the targets $\mathbf{y}_n$ are binary, though it is simple to extend to other types. When modeling time-series, each example sequence $n$ contains $T_n$ timesteps indexed by $t$ which each have a feature vector
$\mathbf{x}_{nt}$ and an output $\mathbf{y}_{nt}$. Formally, we write: $\mathbf{x}_n = (\mathbf{x}_{n1} \ldots \mathbf{x}_{nT_n})$ and $\mathbf{y}_n = (\mathbf{y}_{n1} \ldots \mathbf{y}_{nT_n})$. Each value $\mathbf{y}_{nt}$ could be a prediction about the next timestep (e.g. the character at time $t+1$) or some other task-related annotation (e.g. if the patient became septic at time $t$).
We will primarily consider two kinds of deep models: multi-layer perceptrons and recurrent neural networks. That said, our approach is compatible with any architecture.
\paragraph{Multi-Layer Perceptrons.}
A multi-layer perceptron (MLP) makes predictions $\mathbf{\hat{y}}_n$ of the target $\mathbf{y}_n$ via a function $f: \mathcal{X}^P \times \Theta \rightarrow \mathcal{Y}^Q$ such that $\hat{\mathbf{y}}_n = f(\mathbf{x}_n; \theta)$, where the vector $\theta \in \Theta$ represents all parameters of the network. Given a data set $\mathcal{D}$, our goal is to learn the optimal parameters $\theta^*$ to minimize the objective
\begin{align}
\theta^* = \arg\min_{\theta \in \Theta} \sum_{n=1}^N \mathcal{L}( \mathbf{y}_n , \mathbf{\hat{y}}_n ) + \lambda \Psi(\theta)
\label{eqn:orig_loss}
\end{align}
For binary targets $\mathbf{y}_n$, the logistic loss (binary cross entropy) is an effective choice for $\mathcal{L}(\cdot)$. The regularization term $\Psi(\theta)$ can represent L1, L2 penalties (e.g. \cite{drucker1992improving,Goodfellow-et-al-2016,ochiai2017automatic}) or our new family of regularizers.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/gru_sketch.pdf}
\caption{GRU}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.55\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/gru_hmm_sketch.pdf}
\caption{GRU-HMM}
\end{subfigure}
\caption{Architecture diagrams for (a) gated recurrent units (GRU) and (b) a GRU and hidden markov model (HMM) hybrid. The orange triangle indicates the output used in surrogate training for tree regularization.}
\label{fig:what-we-regularize}
\end{figure}
\paragraph{Recurrent Neural Networks with Gated Recurrent Units.}
A recurrent neural network (RNN) takes as input an arbitrary length sequence $\mathbf{x}_n = (\mathbf{x}_{n1} \ldots \mathbf{x}_{nT_n})$ and produces a ``hidden state'' sequence $\mathbf{h}_n = (\mathbf{h}_{n1} \ldots \mathbf{h}_{nT_n})$ of the same length as the input. Each hidden state vector at timestep $t$ represents a location in a (possibly low-dimensional) ``state space'' with $K$ dimensions: $\mathbf{h}_{nt} \in \mathbb{R}^K$ ($K$ is often chosen as a hyperparameter). RNNs perform
sequential \emph{nonlinear} embedding of the form $\mathbf{h}_{nt} = f(\mathbf{x}_{nt}, \mathbf{h}_{nt-1}; \theta)$ in hope that the state space location $\mathbf{h}_{nt}$ is a useful summary statistic for making predictions of the target $\mathbf{y}_{nt}$ at timestep $t$. As written, $f: \mathcal{X}^P \times \mathbb{R}^K \times \Theta \rightarrow \mathbb{R}^K$ is called a \textit{transition} function parameterized by $\theta \in \Theta$. Many different variants of the transition function architecture have been proposed to solve the challenge of capturing long-term dependencies. In this paper, we use gated recurrent units (GRUs) \cite{cho2014gru}, which are simpler than other alternatives such as long short-term memory units (LSTMs) \cite{hochreiter1997long}. While GRUs are convenient, any differentiable RNN architecture is compatible with our new tree-regularization approach.
As review, we describe the evolution of a single GRU sequence, dropping the sequence index $n$ for readability. The GRU transition function $f$ produces the state vector $\mathbf{h}_{t} \in \mathbb{R}^K$ from the previous state $\mathbf{h}_{t-1}$ and an input vector $\mathbf{x}_t$, via the following feed-forward architecture:
\begin{align}
\textup{output state}: \mathbf{h}_{t} &= (1 - \mathbf{z}_{t}) \odot \mathbf{h}_{t-1} + \mathbf{z}_{t} \odot \mathbf{\tilde{h}}_{t}
\\
\textup{candidate state}:
\mathbf{\tilde{h}}_{t} &= \textup{tanh}( \mathbf{V}_h \mathbf{x}_t + \mathbf{U}_h
(\mathbf{r}_{t} \odot \mathbf{h}_{t-1}) )
\\
\textup{update gate}:
\mathbf{z}_{t} &= \sigma( \mathbf{V}_z \mathbf{x}_t + \mathbf{U}_z \mathbf{h}_{t-1} )
\\
\textup{reset gate}:
\mathbf{r}_{t} &= \sigma(\mathbf{V}_r \mathbf{x}_t + \mathbf{U}_r \mathbf{h}_{t-1})
\end{align}
The internal network nodes include candidate state gates $\mathbf{\tilde{h}}$, update gates $\mathbf{z}$ and reset gates $\mathbf{r}$ which have the same cardinality as the state vector $\mathbf{h}$. Reset gates allow the network to forget past state vectors when set near zero via the logistic sigmoid nonlinearity $\sigma(\cdot)$, which critically adds a multiplicative expressivity to this model class. Update gates allow the network to either pass along the previous state vector unchanged or use the new candidate state vector instead. This architecture is diagrammed in Figure~\ref{fig:what-we-regularize}.
The predicted probability of the binary target $\mathbf{y}_t$ for timestep $t$ is a sigmoid transformation of the state at time $t$, $\mathbf{\hat{y}}_t = \sigma(\mathbf{w}^T \mathbf{h}_t)$.
Here, the weight vector $\mathbf{w} \in \mathbb{R}^K$ represents the parameters of this individual output layer. We denote the parameters for the entire GRU-RNN model as $\theta = (\mathbf{w}, \mathbf{U}_h, \mathbf{U}_z, \mathbf{U}_r, \mathbf{V}_h, \mathbf{V}_z, \mathbf{V}_r) \in \Theta$, concatenating all component parameters. We can train GRU-RNN timeseries models (hereafter often just called GRUs) via the following loss minimization objective, which shares many similarities with the MLP's loss (Eqn.~\ref{eqn:orig_loss}):
\begin{align}
\theta^* = \arg\min_{\theta \in \Theta} \sum_{n=1}^N \sum_{t=1}^{T_n} \mathcal{L}( \mathbf{y}_{nt}, \mathbf{\hat{y}}_{nt}) + \lambda \Psi(\theta)
\label{eqn:gru_loss}
\end{align}
where again $\Psi(\theta)$ defines a regularization cost, and $\theta^*$ represents the optimal parameters.
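For concreteness, a minimal numpy sketch of the GRU recurrence defined above is given below; the randomly initialized matrices stand in for trained parameters $\theta$ and the input sequence is random, so the sketch only illustrates the data flow, not a trained model.
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Random parameters stand in for trained weights theta.
rng = np.random.default_rng(0)
P, K = 5, 3                                   # input / state dimensions
Vh, Vz, Vr = (rng.normal(0, 0.1, (K, P)) for _ in range(3))
Uh, Uz, Ur = (rng.normal(0, 0.1, (K, K)) for _ in range(3))
w = rng.normal(0, 0.1, K)                     # output layer weights

def gru_step(x_t, h_prev):
    z = sigmoid(Vz @ x_t + Uz @ h_prev)               # update gate
    r = sigmoid(Vr @ x_t + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Vh @ x_t + Uh @ (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde             # output state

h = np.zeros(K)
for x_t in rng.normal(size=(4, P)):           # a length-4 input sequence
    h = gru_step(x_t, h)
    print(f"y_hat = {sigmoid(w @ h):.4f}")    # predicted probability
\end{verbatim}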
\paragraph{Hidden Markov Models with Stochastic Gradient Descent.} Besides recurrent neural networks, hidden Markov models (HMMs) are another class of sequence models commonly used to describe stochastic processes. Often, as with RNNs, we are given a sequence of $T_n$ observed variables $\mathbf{x}_n = (\mathbf{x}_{n1} \ldots \mathbf{x}_{nT_n})$, and wish to derive a sequence of $T_n$ latent (or hidden) variables $\mathbf{s}_n = (\mathbf{s}_{n1} \ldots \mathbf{s}_{nT_n})$. We assume each latent variable $\mathbf{s}_{nt}$ can take one of $K$ discrete states. In practice, these latent variables can be interpreted as an unsupervised clustering over the observed sequence. For our purposes, one can view the HMM as a stochastic RNN (with added noise), making it a probabilistic generative model. To be tractable, the HMM makes a set of simplifying assumptions. The free parameters of an HMM define a \textit{prior}, $p(\mathbf{s}_{n0})$, the probability distribution over $K$ states for timestep 0; a \textit{transition matrix}, $p(\mathbf{s}_{nt}|\mathbf{s}_{n,t-1})$, which specifies a probability distribution over states for timestep $t$ given the state at timestep $t-1$; and an \textit{emission matrix}, $p(\mathbf{x}_{nt}|\mathbf{s}_{nt})$, which specifies a probability distribution over (possibly continuous) observations at timestep $t$ given only the latent state at timestep $t$. Critically, this setup makes the Markov assumption -- all information required to make a decision at timestep $t$ is present at timestep $t-1$.
In our setting, we also have a sequence of known outputs, $\mathbf{y}_n = (\mathbf{y}_{n1} \ldots \mathbf{y}_{nT_n})$. In some sense, we are interested not in the latent states themselves but in using them to classify an observation into an output. If we decide upfront to specify a simple classifier on top of the latent variables (such as logistic regression), then we can explicitly write the joint distribution over latents, observations, and outputs as:
\begin{equation}
p(\mathbf{x}_n, \mathbf{y}_n, \mathbf{s}_n) = p(\mathbf{s}_{n0};\phi)\prod_{t=1}^T p(\mathbf{s}_{nt}|\mathbf{s}_{n,t-1};\phi)p(\mathbf{x}_{nt}|\mathbf{s}_{nt};\phi)p(\mathbf{y}_{nt}|\sigma(\sum_{k} w_k f(\mathbf{s}_{nt})))
\end{equation}
where $\phi$ are the parameters specifying the prior, transition, and emission probabilities; $\{ w_k \}_{k=1}^K$ are the parameters used in logistic regression; $f(\mathbf{s}_{nt}) = p(\mathbf{s}_{nt}|\mathbf{s}_{n,t-1}, \mathbf{x}_{nt};\phi)$ is the posterior distribution over states at timestep $t$; and $\sigma$ represents a sigmoid function. Therefore, we can train the HMM with stochastic gradient descent using the objective:
\begin{equation}
\theta^* = \arg \max_{\theta \in \Theta} p(\mathbf{x}_n, \mathbf{y}_n, \mathbf{s}_n)
\end{equation}
where $\theta = \{\phi, w_1, ..., w_K \}$ contains all trainable parameters from a high-dimensional parameter space $\Theta$. In other words, because we only desire maximum-a-posteriori (MAP) inference, we never need to sample from any of the distributions and can therefore differentiate this objective with standard techniques. Note that this is quite similar to the forward pass in the forward-backward algorithm.
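To sketch how such an objective can be computed with ordinary array operations, the snippet below performs a forward-style pass that maintains the belief $f(\mathbf{s}_{t})$ and accumulates the log-likelihood of observations and targets. All parameter values are arbitrary initializations, and the Gaussian emission and the exact bookkeeping are simplifying assumptions made for illustration rather than a faithful reproduction of our training code.
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
K = 3
pi = np.full(K, 1.0 / K)                  # prior over states
A = rng.dirichlet(np.ones(K), size=K)     # A[i, j] = p(s_t=j | s_{t-1}=i)
mu = np.array([-1.0, 0.0, 1.0])           # per-state Gaussian means
w = rng.normal(0, 0.5, K)                 # logistic-regression weights

def emission(x):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

x = rng.normal(size=6)                    # observed sequence
y = rng.integers(0, 2, size=6)            # binary targets

log_lik, b = 0.0, pi
for t in range(len(x)):
    pred = b @ A if t > 0 else pi         # predictive state distribution
    b = pred * emission(x[t])
    log_lik += np.log(b.sum())            # log p(x_t | x_{<t})
    b = b / b.sum()                       # belief f(s_t)
    p_y = sigmoid(w @ b)                  # p(y_t = 1 | belief)
    log_lik += y[t] * np.log(p_y) + (1 - y[t]) * np.log(1 - p_y)

print(f"log-likelihood ~ {log_lik:.3f}")
\end{verbatim}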
\paragraph{Modeling the Residuals of a Hidden Markov Model}
One strength of the HMM is that it is a fairly interpretable model. Often, the discrete latent states have contextual meaning such that we can analyze the predictions of the HMM as conditioned completely on its state. However, for complex domains, discrete states (even for large $K$) might not be able to fully capture the true decision function, resulting in high prediction error. One option is to add a recurrent neural network, which is known to be high-performing but uninterpretable, to model the residual errors when predicting the target outputs using the HMM belief (latent) states. If we can properly penalize the complexity of the deep model, then high quality predictions do not come at the price of a less interpretable model. In practice, the GRU and HMM can be trained jointly where the parameters of each model are kept independent. We call this model a \textit{GRU-HMM} and use it in several experiments. Figure~\ref{fig:what-we-regularize}(b) recaps the model architecture.
\section{(Decision) Tree-Regularization}
As presented in Eqns.~\ref{eqn:orig_loss} and \ref{eqn:gru_loss}, the regularizer $\Psi(\theta)$ is arbitrary. Common choices include $L_2$ norms to manage the sizes of $\theta$ and $L_1$ norms to manage the sparsity of $\theta$. We now come to our core contribution: we replace $\Psi(\theta)$ with a novel \emph{tree-regularizer}, denoted $\Omega(\theta)$, that encourages the model $\theta$ to be \emph{simulable}. Specifically, we shall encourage our deep models to be well-approximated by (small) decision trees. For clarity, we refer to the deep neural network that we are trying to regularize as the \textit{target neural model} or target network.
To do so, we first fit a binary decision tree which \textit{accurately} reproduces the target network's thresholded binary predictions $\mathbf{\hat{y}}_{n}$ given input $\mathbf{x}_n$. The accuracy parameter is always kept fixed, so that the tree is forced to model the network well. Next, we penalize the network based on the complexity of the learnt tree: a simple decision function can be explained with only a few branches whereas a complex function may need exceedingly large trees. With this in mind, we quantify complexity as the \emph{average decision path length} (shorthand APL)---the average number of decision nodes that must be touched to make a prediction for an input $\mathbf{x}_n$ (i.e. the number of nodes from root to leaf). We compute the \emph{average} with respect to some designated reference dataset of example inputs $\mathcal{D} = \{\mathbf{x}_n\}$ from the training set. Thus, our regularizer is
\begin{equation}
\Omega(\theta) \triangleq \text{APL}(\{\mathbf{x}_n\}_{n=1}^N, f(\cdot; \theta), h)
\label{eqn:gapl}
\end{equation}
where the APL function is detailed in Algorithm~\ref{alg:true_tree_regularization}; $f(\cdot; \theta)$ represents the neural model; $h$ is a hyperparameter for training decision trees that controls the minimum number of training examples needed to define a leaf node. This definition of APL generalizes when the input data represents a timeseries. Algorithm~\ref{alg:true_tree_regularization} requires two subroutines, \textsc{TrainTree} and \textsc{PathLength}. Firstly, \textsc{TrainTree} trains a binary decision tree to accurately reproduce the provided labeled examples $\{\mathbf{x}_n, \mathbf{\hat{y}}_n \}$ (recall $\mathbf{\hat{y}}_n = f(\mathbf{x}_n; \theta)$). For this we use the \texttt{DecisionTree} module distributed in Python's scikit-learn \cite{scikit-learn}, which fits a tree by maximizing information gain with Gini impurity. Generally, the runtime cost of this module scales superlinearly with the number of examples $N$ and linearly with the number of features $F$, for a total complexity of $O(FN\log N)$. In practice, we found that with $N = 1000$ and $F=10$, fitting a decision tree takes 15.3 microseconds. These trees can give probabilistic predictions at each leaf. Next, \textsc{PathLength} counts how many nodes are traversed from the root to the leaf that produces the prediction for a specific input in the provided decision tree (this is done programmatically by storing traversals).
We consider average path length a good proxy for simulability because human simulation requires stepping through every calculation required to make a prediction. Average path length (APL) exactly counts the number of true-or-false boolean calculations needed to make an average prediction, assuming the model is a binary decision tree. In contrast, a metric such as the total number of nodes might penalize more accurate trees that have short paths for most examples but need more involved logic for a few outliers. While APL is a sensible choice, a few technical innovations are required to optimize it efficiently.
\begin{algorithm}[!t]
\caption{Average-Path-Length (APL) Cost Function}
\begin{algorithmic}[1]
\Require{
\Statex $f(\cdot; \theta)$ : binary prediction function, with parameters $\theta$
\Statex $\mathcal{D} = \{ \mathbf{x}_n \}_{n=1}^N$ : reference dataset with $N$ examples
\Statex $h$ : minimum number of samples required to be a leaf node; a higher $h$ regularizes the tree, resulting in a smaller tree
}
\Function{\textup{APL}}{$\{\mathbf{x}_n\}, f(\cdot; \theta), h$}
\State $\mbox{tree} \gets \textsc{TrainTree}( \{ \mathbf{x}_n, f(\mathbf{x}_n, \theta) \}_{n=1}^N)$
\State \Return $\frac{1}{N} \sum_{n} \textsc{PathLength}(\mbox{tree}, \mathbf{x}_n)$
\EndFunction
\end{algorithmic}
\label{alg:true_tree_regularization}
\end{algorithm}
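As a concrete illustration, the following Python sketch computes the true APL of Algorithm~\ref{alg:true_tree_regularization} with scikit-learn; \texttt{predict\_fn} is a hypothetical wrapper returning the target network's thresholded binary predictions:
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def average_path_length(X, predict_fn, h):
    """True APL of a fixed model f(.; theta) on reference data X."""
    y_hat = predict_fn(X)  # thresholded binary predictions of the network
    tree = DecisionTreeClassifier(min_samples_leaf=h)
    tree.fit(X, y_hat)     # TrainTree: mimic the network's labels
    # decision_path marks, for each example, every node visited from the
    # root to its leaf; summing each row is exactly PathLength.
    path_lengths = tree.decision_path(X).sum(axis=1)
    return float(np.mean(path_lengths))
\end{verbatim}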
\paragraph{Making Tree Regularization Differentiable}
Training decision trees is not differentiable, and thus the tree regularization loss $\Omega(\theta)$ from Equation~\ref{eqn:gapl} is not differentiable with respect to the network parameters $\theta$ (unlike standard regularizers such as $L_1$ or $L_2$). While one could resort to derivative-free optimization techniques \cite{audet2016blackbox}, e.g. search algorithms, gradient descent remains an extremely fast and robust way of training neural networks \cite{Goodfellow-et-al-2016}.
A key technical contribution of our work is introducing and training a \emph{surrogate} regularization function
$\hat{\Omega}: \Theta \rightarrow \mathbb{R}^+$ that maps each parameter vector $\theta \in \Theta$ of the target neural model to an \emph{estimate} of the APL. Our approximate function $\hat{\Omega}$ is implemented as a standalone multi-layer perceptron network and is critically \emph{differentiable}. Let the vector $\xi \in \Xi$ denote the trainable parameters of this MLP surrogate. We can train $\hat{\Omega}$ to be a good estimator by minimizing a squared error
loss function:
\begin{align}
\min_{\xi \in \Xi} \textstyle \sum_{j=1}^J (\Omega( \theta_j ) - \hat{\Omega}( \theta_j, \xi ) )^2 + \epsilon || \xi ||_{2}^2
\label{eqn:surrogate_loss}
\end{align}
where each $\theta_j$ is an instance of the \emph{entire} set of parameters for the target neural model, $\epsilon > 0$ is a regularization strength, and we assume we have a dataset of $J$ known parameter vectors and their associated true APLs: $\mathcal{D}^{\theta} = \{\theta_j, \Omega(\theta_j) \}_{j=1}^J$. This dataset can be assembled using the candidate parameter vectors obtained every gradient step while training our target neural model $f(\cdot, \theta)$. Importantly, one can train the surrogate function $\hat{\Omega}$ in parallel with our network. In Figure~\ref{fig:tricks}(a), we show evidence that our surrogate predictor $\hat{\Omega}(\cdot)$ tracks the true average path length as we train the target predictor $f(\cdot, \theta)$.
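A minimal PyTorch sketch of the surrogate and the fit in Equation~\ref{eqn:surrogate_loss} is given below; \texttt{thetas} (a $J \times D$ float tensor of flattened parameter vectors) and \texttt{apls} (their true APLs) are hypothetical names, and Adam's \texttt{weight\_decay} stands in for the $\epsilon ||\xi||_2^2$ penalty:
\begin{verbatim}
import torch
import torch.nn as nn

def make_surrogate(dim, hidden=25):
    # Small MLP mapping a flattened parameter vector to a nonnegative APL.
    return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, 1), nn.Softplus())

def fit_surrogate(surrogate, thetas, apls, epsilon=1e-4, steps=500):
    opt = torch.optim.Adam(surrogate.parameters(), weight_decay=epsilon)
    for _ in range(steps):
        opt.zero_grad()
        pred = surrogate(thetas).squeeze(-1)
        loss = ((pred - apls) ** 2).sum()  # squared error over J examples
        loss.backward()
        opt.step()
    return surrogate
\end{verbatim}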
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/regressor_plot_parabola.pdf}
\caption{Surrogate APL vs True APL}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/weight2node_tricks.pdf}
\caption{Effect of Restarts and Augmentation}
\end{subfigure}
\caption{\emph{(a)} True average path lengths (yellow) and surrogate estimates $\hat{\Omega}$ (green) across many iterations of network parameter
training (on 2D Parabola). \emph{(b)} Effects of parameter augmentation and random restarts (retraining): the blue line shows the true APL of the decision tree at each epoch; all other lines show the predicted APL from the surrogate MLP. By augmenting and restarting, we significantly improve the ability of the surrogate model to track changes in the ground truth.}
\label{fig:tricks}
\end{figure}
\paragraph{Training the Surrogate Loss} In this section, we describe a few more considerations that improve surrogate quality. First, even moderately-sized neural models can have parameter vectors $\theta$ with thousands of dimensions. Our labeled dataset for surrogate training---$\{ \theta_j, \Omega(\theta_j) \}_{j=1}^J$---gains only one example $\theta_j$ from each target network training iteration. Even with small batch sizes (more gradient steps), this dataset grows slowly. Thus, in early iterations, we have only a few examples from which to learn a good surrogate function $\hat{\Omega}(\theta)$. We resolve this challenge by \emph{augmenting} our training set with additional examples: we randomly sample weight vectors $\theta$ and calculate the true APL $\Omega(\theta)$, and we also perform several random restarts (initializing parameters with different random seeds) on the unregularized target network and use those weights in our training set.
A second challenge arises later in training: as the model parameters $\theta$ shift away from their initial values, parameters from earlier in optimization may no longer be relevant for characterizing the current decision function of the target neural model. In practice, this is a function of the learning rate: a large step size quickly renders older parameters irrelevant for training the surrogate. To address this, for each epoch we use examples only from the past $E$ iterations, where $E$ is chosen empirically. Conveniently, using examples from a fixed window of iterations also speeds up training. Figure~\ref{fig:tricks}(b) compares the importance of these heuristics for efficient and accurate training---empirically, data augmentation for stabilizing surrogate training allows us to scale to neural networks with hundreds of nodes. MLPs and GRUs of this size are already sufficient for many real problems, such as those we encounter in healthcare domains.
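The sliding window itself is simple to implement; a minimal sketch (with an assumed window size $E$) is:
\begin{verbatim}
from collections import deque

E = 25                    # window size in iterations (assumed value)
window = deque(maxlen=E)  # oldest (theta, apl) pairs fall off automatically

def record(theta, apl):
    window.append((theta, apl))  # surrogate is refit on list(window)
\end{verbatim}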
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=0.60\linewidth]{v1/parabola_data_generating.pdf}
\caption{2D Parabola}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{v1/parabola_auc_vs_node.pdf}
\caption{Prediction vs Complexity for many $\lambda$}
\end{subfigure}
\caption{\emph{(a)} 2D parabola dataset. The black line shows the true decision boundary; the gray lines define areas where noise is added. \emph{(b)} A comparison of APL versus AUC for many regularizers. In the small average path length regime (0-5), tree-regularization produces models with higher AUC than L$_1$ or L$_2$.}
\label{fig:parabola}
\end{figure}
\section{Demonstration: A Tree-Regularized MLP and RNN}
We start by exploring two simple domains intended to build intuition for the tree regularization method. We first test the regularizer on MLPs in a two-dimensional classification task followed by a second prediction task with sequential data.
\paragraph{Tree-Regularized MLP: Noisy Parabola}
We first consider a binary classification task as a demonstration. We call this task the \textit{2D Parabola problem} because, as Figure~\ref{fig:parabola}(a) shows, the training data consists of 2D input points whose two-class decision boundary is roughly shaped like a parabola. The true decision function is defined by $y = 5(x-0.5)^{2} + 0.4$.
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\begin{subfigure}[b]{\linewidth}
\includegraphics[width=\linewidth]{v1/l1_parabola_decision_functions.pdf}
\caption{Decision Boundaries with L1 regularization\\}
\label{fig:decision_function:l1}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\includegraphics[width=\linewidth]{v1/l2_parabola_decision_functions.pdf}
\caption{Decision Boundaries with L2 regularization\\}
\label{fig:decision_function:l2}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\includegraphics[width=\linewidth]{v1/tree_parabola_decision_functions.pdf}
\caption{Decision Boundaries Tree regularization}
\label{fig:decision_function:tree}
\end{subfigure}
\caption{Decision boundaries (black lines) have qualitatively different shapes
for different regularization schemes as regularization strength $\lambda$ increases. We color each prediction as true positive (red), true negative (yellow), false negative (green), and false positive (blue). The L$_1$ boundary appears sharper, the L$_2$ boundary more rounded, and the tree-regularized boundary axis-aligned.}
\label{fig:parabola2}
\end{wrapfigure}
We sampled 500 input points $\mathbf{x}_n$ uniformly within the unit square $[0,1] \times [0,1]$ and labeled those above the decision function as positive. To make it easy for models to overfit to more complex decision boundaries, we flipped the labels of 10\% of the points in a region near the boundary. A random 30\% of the points were held out for testing. For the classifier, we train a 3-layer MLP with 100 first-layer nodes, 100 second-layer nodes, and 10 third-layer nodes. This MLP is intentionally overly expressive to encourage overfitting and expose the impact of different forms of regularization: our proposed tree regularization $\Psi(\theta) = \hat{\Omega}(\theta)$, an L$_2$ penalty on the weights $\Psi(\theta) = ||\theta||_2$, and an L$_1$ penalty on the weights $\Psi(\theta) = ||\theta||_1$. For each regularization function, we train models at many different regularization strengths $\lambda$, chosen to explore the full range of decision boundary complexities possible under each technique. For tree regularization, we model the surrogate $\hat{\Omega}(\theta)$ with a 1-hidden-layer MLP with 25 units. The surrogate is intentionally chosen to be small, with few parameters. In practice, we bias towards simpler surrogate networks to ensure faster training; additionally, an overly complex surrogate would no longer preserve interpretability. The objective in Equation~\ref{eqn:orig_loss} was optimized via Adam gradient descent \cite{kingma2014adam} using a batch size of 100 and a learning rate of 1e-3 for 250 epochs. These hyperparameters were set via cross validation using grid search.
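For concreteness, a single tree-regularized gradient step might look like the following PyTorch sketch, where \texttt{net} is the target MLP, \texttt{surrogate} is the differentiable APL estimator from above, and \texttt{lam} is the regularization strength (all hypothetical names); the optimizer updates only \texttt{net}, so the surrogate stays fixed during this step:
\begin{verbatim}
import torch
import torch.nn.functional as F

def train_step(net, surrogate, optimizer, x, y, lam):
    # x: inputs, y: float targets in {0, 1}.
    optimizer.zero_grad()
    logits = net(x).squeeze(-1)
    # Flatten all network parameters into one vector theta.
    theta = torch.cat([p.reshape(-1) for p in net.parameters()])
    # Loss = prediction loss + lambda * estimated APL.
    loss = F.binary_cross_entropy_with_logits(logits, y) \
           + lam * surrogate(theta).squeeze()
    loss.backward()  # gradients flow through the differentiable surrogate
    optimizer.step()
    return float(loss)
\end{verbatim}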
To evaluate model simulability, we use APL. Since Algorithm~\ref{alg:true_tree_regularization} can compute the APL for \textit{any} fixed deep model given its parameters, we use it to measure decision boundary complexity under any regularization, including L$_1$ or L$_2$. Figure~\ref{fig:parabola}(b) shows each trained model as a single point in a 2D fitness space: the x-axis measures model complexity with APL, and the y-axis measures AUC (area under the ROC curve) prediction performance. These results show that simple L$_1$ or L$_2$ regularization does \emph{not} produce models with both low complexity and good
predictions at \emph{any} value of the regularization strength $\lambda$. As expected, large $\lambda$ values for L$_1$ and L$_2$ only produce far-too-simple linear decision boundaries with poor accuracies. In contrast, our proposed tree regularization directly optimizes the MLP to have simple tree-like boundaries at high $\lambda$ values, which can still yield good predictions. The panels of Figure~\ref{fig:parabola2} show these boundaries. Our tree regularization is uniquely able to create \textit{axis-aligned} functions, because decision trees by definition parameterize functions with axis-aligned splits. Critically, these axis-aligned functions require very few nodes yet are more effective than their L$_1$ and L$_2$ counterparts.
\paragraph{Tree-Regularized GRU: Signal-and-noise HMM}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.18\linewidth}
\includegraphics[width=0.76\linewidth]{v1/hmm2_tree/gru_tree_1}
\label{fig:hmm2:tree:1}
\caption{GRU $\lambda = 1$}
\end{subfigure}
\begin{subfigure}[b]{0.23\linewidth}
\includegraphics[width=0.8\linewidth]{v1/hmm2_tree/gru_tree_1000}
\label{fig:hmm2:tree:1000}
\caption{GRU $\lambda = 1\,000$}
\end{subfigure}
\begin{subfigure}[b]{0.28\linewidth}
\includegraphics[width=\linewidth]{v1/hmm2_gru.pdf}
\caption{GRU}
\label{fig:toy:gru:plot}
\end{subfigure}
\begin{subfigure}[b]{0.28\linewidth}
\includegraphics[width=\linewidth]{v1/hmm2_gruhmm.pdf}
\caption{GRU-HMM}
\label{fig:toy:gruhmm:plot}
\end{subfigure}
\caption{
\emph{Toy Signal-and-Noise HMM Task:}
\emph{(a)-(b)} Decision trees trained to mimic predictions of GRU models at different regularization strengths $\lambda$; as expected, increasing $\lambda$ decreases the size of the learned trees. Decision tree (b) suggests the model learns to predict positive output (blue) if and only if ``x[0] == 1 and s[3] == 1 and s[4] == 0''.
This simple description is consistent with the true rule used to generate labels for our dataset: assign a positive label only if the first dimension is on (x[0] == 1) and the first state is active (the emission probability vector for this state is [.5 .5 .5 .5 0 $\ldots$]). \emph{(c,d)} Tree regularization produces simpler models (as measured by APL) with higher prediction quality (AUC) across a range of regularization strengths $\lambda$ for the GRU (c) and GRU-HMM (d).
}
\label{fig:results:toy-signal-and-noise-hmm}
\end{figure}
Next, we analyze the performance of tree regularization on synthetic timeseries data. We generated a toy dataset of $N=100$ sequences, each with $T=50$ timesteps. Each timestep has a data vector $\mathbf{x}_{nt}$ of 14 binary features and a single binary output label $\mathbf{y}_{nt}$. The data comes from two separate HMM processes. First, a ``signal'' HMM generates the first 7 data dimensions from 5 well-separated states. Second, an independent ``noise'' HMM generates the remaining 7 data dimensions from a different set of 5 states. The transition and emission matrices for both HMMs are shown in Fig.~\ref{fig:toy-hmm}; the probabilities were chosen to be difficult for a freshly trained HMM to learn. Each timestep's output label $\mathbf{y}_{nt}$ is produced by a rule involving \emph{both} the signal HMM's generated observations and the signal HMM's hidden state: the target is 1 at timestep $t$ only if the first signal state is active and the first observation is turned on. We deliberately designed the generation process so that neither logistic regression on the inputs $\mathbf{x}_n$ alone nor a GRU model that makes predictions from hidden states alone can perfectly separate this data.
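A minimal NumPy sketch of this generative process is shown below; the matrices \texttt{pi}, \texttt{A}, \texttt{E} for each HMM are those of Fig.~\ref{fig:toy-hmm}, and the uniform initial-state distribution is an assumption on our part:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_hmm(T, pi, A, E):
    # pi: initial state probs, A: transitions, E: Bernoulli emission probs.
    s = np.zeros(T, dtype=int)
    x = np.zeros((T, E.shape[1]), dtype=int)
    s[0] = rng.choice(len(pi), p=pi)
    for t in range(T):
        if t > 0:
            s[t] = rng.choice(A.shape[1], p=A[s[t - 1]])
        x[t] = rng.random(E.shape[1]) < E[s[t]]  # binary emissions
    return s, x

def sample_sequence(T, signal, noise):
    s_sig, x_sig = sample_hmm(T, *signal)  # signal = (pi, A, E)
    _, x_noise = sample_hmm(T, *noise)
    x = np.concatenate([x_sig, x_noise], axis=1)  # 14 binary features
    # Label is 1 only if the first signal state is active AND x[0] == 1.
    y = ((s_sig == 0) & (x_sig[:, 0] == 1)).astype(int)
    return x, y
\end{verbatim}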
\begin{figure}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\[
\begin{psmallmatrix}
.5 & .5 & .5 & .5 & 0 & 0 & 0 \\
.5 & .5 & .5 & .5 & .5 & 0 & 0 \\
.5 & .5 & .5 & 0 & .5 & 0 & 0 \\
.5 & .5 & .5 & 0 & 0 & .5 & 0 \\
.5 & .5 & .5 & 0 & 0 & 0 & .5
\end{psmallmatrix}
\]
\caption{Signal: Emission}
\label{fig:emission:hmm}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\[
\begin{psmallmatrix}
.7 & .3 & 0 & 0 & 0 \\
.5 & .25 & .25 & 0 & 0 \\
0 & .25 & .5 & .25 & 0 \\
0 & 0 & .25 & .25 & .5 \\
0 & 0 & 0 & .5 & .5
\end{psmallmatrix}
\]
\caption{Signal: Transition}
\label{fig:transition:hmm}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\[
\begin{psmallmatrix}
.5 & .5 & .5 & 0 & 0 & 0 & 0 \\
0 & .5 & .5 & .5 & 0 & 0 & 0 \\
0 & 0 & .5 & .5 & .5 & 0 & 0 \\
0 & 0 & 0 & .5 & .5 & .5 & 0 \\
0 & 0 & 0 & 0 & .5 & .5 & .5
\end{psmallmatrix}
\]
\caption{Noise: Emission}
\label{fig:emission:hmm2}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\[
\begin{psmallmatrix}
.2 & .2 & .2 & .2 & .2 \\
.2 & .2 & .2 & .2 & .2 \\
.2 & .2 & .2 & .2 & .2 \\
.2 & .2 & .2 & .2 & .2 \\
.2 & .2 & .2 & .2 & .2
\end{psmallmatrix}
\]
\caption{Noise: Transition}
\label{fig:transition:hmm2}
\end{subfigure}
\caption{Emission (5 states vs 7 features) and transition probabilities for the signal HMM (a, b) and noise HMM (c, d). We emphasize that to output 1, the signal HMM must be in state 1 and the first input feature must be 1.}
\label{fig:toy-hmm}
\end{figure}
As with the MLP, each regularizer (tree, L$_2$, L$_1$) is applied to the output node of the GRU across a range of strength parameters $\lambda$ (see the orange triangle in Figure~\ref{fig:what-we-regularize}). In training, we used 25 hidden dimensions for GRU models and 5 states for the HMM component of the GRU-HMM. All other choices are identical to the 2D Parabola setting.
Figure~\ref{fig:results:toy-signal-and-noise-hmm} compares the performance of regularized GRU and GRU-HMM models on the signal-and-noise HMM dataset. Since we can no longer easily visualize the decision boundary, we rely on plots like Figure~\ref{fig:results:toy-signal-and-noise-hmm}(c,d) to measure regularization effectiveness. Many of the same patterns from the 2D Parabola experiments emerge here: tree-regularized GRU models achieve much higher (held-out) AUC at lower APL. Further, L$_1$ and L$_2$ are quite unreliable at high regularization strengths, doing worse than a decision tree at low APL. All regularized models converge to the same performance as APL approaches 0 (random choice) and infinity (unregularized). Additionally, we include results for the GRU-HMM (d), whose performance is lower bounded by the performance of a standalone HMM (notice the scale of the y-axis). However, as before, tree regularization on the ``GRU component'' of the GRU-HMM quickly reaches near-maximum performance with small APL (around 5). We hypothesize this is largely due to the compactly expressive nature of axis-aligned decision boundaries. Finally, Figure~\ref{fig:results:toy-signal-and-noise-hmm}(a,b) shows two ``distilled'' decision trees that approximate the deep model in the last epoch of training. We can see that for small regularization strengths (a), the distilled tree is large and difficult to interpret. For larger strengths (b), the tree recovers the true generative process: predict positive output if and only if ``x[0] == 1 and s[3] == 1 and s[4] == 0''. The first component (x[0] == 1) represents the first observation being 1; the second component (s[3] == 1 and s[4] == 0) represents the first state being active (recall that the emission distribution for this state is [.5 .5 .5 .5 0 $\ldots$]). A decision tree like this can be given to a human to help describe what mappings the deep model has learned. Critically, smaller decision trees are very easy to simulate.
\section{Applications: Real-World Timeseries Data}
Having explored a few synthetic environments, we now evaluate the tree regularizer on several real-world timeseries models in speech recognition and two sectors of healthcare. For each experiment below, we compare a tree-regularized GRU with an identical GRU regularized with L$_1$ or L$_2$. We also include a decision tree baseline, where a tree classifier is fit directly on the observations. Additionally, we compare the GRU results with GRU-HMM performance to gauge any benefits of residual training. For optimization, we use Adam with a learning rate of 1e-3, a batch size of 256, a decision tree hyperparameter $h=1000$, 300 training epochs, surrogate datasets of size $J=100$, and surrogate retraining every 25 steps. As above, we measure performance with AUC and simulability with APL for all models. Before sharing results, we briefly describe each task and domain.
\subsection{Tasks}
We tested our approach on several real-world tasks: predicting medical outcomes of hospitalized septic patients, HIV therapy outcome prediction, and predicting stop phoneme groups from a selection of English speech recordings. To normalize scales, we independently standardized input features via z-scoring. Like in the demonstrations above, we compare tree regularization to L$_1$ and L$_2$ baselines. Additionally, we compare a tree-regularized deep network to a decision tree classifier.
\begin{itemize}
\item
\emph{Sepsis Critical Care (ICU)}: We study timeseries data for 11\,786 septic ICU patients from the public MIMIC III dataset \cite{johnson2016mimiciii}. We observe at each hour (timestep) $t$ a data vector $\textbf{x}_{nt}$ of 35 vital signs and lab results as well as a label vector $\textbf{y}_{nt}$ of 5 binary outcomes. Hourly data $\textbf{x}_{nt}$ measures continuous input features such as respiration rate (RR), blood oxygen levels (paO$_{2}$), fluid levels, and more. Hourly binary labels $\textbf{y}_{nt}$ include whether the patient died in hospital, whether the patient died after 90 days, and if mechanical ventilation was applied. Models are trained to predict all 5 output dimensions concurrently from one shared embedding. The average sequence length is 15 hours. 7\,070 patients are used in training, 1\,769 for validation, and 2\,947 for test.
\item
\emph{HIV Therapy Outcome (HIV)}: We make use of the EuResist Integrated Database \cite{euresist} for 53\,236 patients diagnosed with HIV. We consider 4-6 month intervals (corresponding to hospital visits) as time steps. Each data vector $\textbf{x}_{nt}$ has 40 features, including blood counts, viral load measurements, and lab results. Each output vector $\textbf{y}_{nt}$ has 15 binary labels, including whether a therapy was successful in reducing viral load to below detection limits, if therapy caused CD4 blood cell counts to drop to dangerous levels (indicating AIDS), or if the patient suffered adherence issues to medication. The average sequence length is 14 steps. 37\,618 patients are used for training, 7\,632 for validation, and 7\,986 for testing.
\item
\textit{Phonetic Speech (TIMIT)}: Timeseries data containing broadband recordings of 630 speakers of eight major dialects of American English reading ten phonetically rich sentences \cite{garofolo1993timit}. Each sentence contains time-aligned phonetic transcriptions of 60 phonemes. We focus on the problem of distinguishing stop phonemes (those that stop the flow of air, such as ``b'', ``d'', or ``g'') from non-stops. Each timestep has one binary output $\textbf{y}_{nt}$ indicating whether a stop phoneme occurs or not. There are 26 continuous features for each input vector $\textbf{x}_{nt}$, representing the Mel-frequency cepstral coefficients and derivatives of the acoustic signal. There are 6\,303 sequences, which we split into 3\,697 for training, 925 for validation, and 1\,681 for testing. The average sequence length is 614 tokens.
\end{itemize}
\subsection{Results and Analysis}
The results on ICU, HIV, and TIMIT share many consistent characteristics. We summarize these experiments by analyzing common patterns and highlighting a few takeaways.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{v1/sepsis_grutree_0.pdf}
\caption{Hospital Mortality}
\label{fig:results:sepsis:gru:mortality:trace_plots}
\end{subfigure}
\begin{subfigure}[b]{0.25\linewidth}
\includegraphics[width=0.8\linewidth,height=3cm]{v1/sepsis_tree/sepsis_gru_tree_dim_0}
\caption{Hospital Mortality}
\label{fig:results:sepsis:gru:mortality:tree}
\end{subfigure}
\begin{subfigure}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{v1/sepsis_grutree_1.pdf}
\caption{90-day Mortality}
\label{fig:results:sepsis:gru:90mortality:trace_plots}
\end{subfigure}
\begin{subfigure}[b]{0.25\linewidth}
\includegraphics[width=0.8\linewidth,height=3cm]{v1/sepsis_tree/sepsis_gru_tree_dim_1}
\caption{90-day Mortality}
\label{fig:results:sepsis:gru:90mortality:tree}
\end{subfigure}
\begin{subfigure}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{v1/sepsis_grutree_2.pdf}
\caption{Mech. Vent.}
\label{fig:results:sepsis:gru:mechvent:trace_plots}
\end{subfigure}
\begin{subfigure}[b]{0.22\linewidth}
\includegraphics[width=0.8\linewidth]{v1/sepsis_tree/sepsis_gru_tree_dim_2}
\caption{Mech. Vent.}
\label{fig:results:sepsis:gru:mechvent:tree}
\end{subfigure}
\begin{subfigure}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{v1/sepsis_grutree_4.pdf}
\caption{Max Vaso.}
\label{fig:results:sepsis:gru:vaso:trace_plots}
\end{subfigure}
\begin{subfigure}[b]{0.22\linewidth}
\includegraphics[width=0.8\linewidth]{v1/sepsis_tree/sepsis_gru_tree_dim_4}
\caption{Max Vaso.}
\label{fig:results:sepsis:gru:vaso:tree}
\end{subfigure}
\caption{
\emph{SEPSIS task} -- Study of different regularizers for a GRU model with 100 states, trained to jointly predict 5 binary outcomes. Panels \emph{(a,c,e,g)} show AUC vs. APL for 4 of the 5 outcomes; in all cases, tree regularization provides higher accuracy in the target regime of low-complexity decision trees. Panels \emph{(b,d,f,h)} show the associated decision trees for $\lambda = 2\,000$; these were found to be clinically interpretable by an ICU clinician.
}
\label{fig:results:sepsis}
\end{figure}
\begin{figure}[h!]
\centering{
\begin{subfigure}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{v1/timit_grutree.pdf}
\caption{TIMIT ``Stop"}
\label{fig:timit:gru:trace_plots}
\end{subfigure}
\begin{subfigure}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{v1/HIV/CD4_below200_GRU.pdf}
\caption{HIV: CD4$^+$}
\label{fig:hiv:cd4}
\end{subfigure}
\begin{subfigure}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{v1/HIV/adherence_gru.pdf}
\caption{HIV Therapy}
\label{fig:hiv:therapy}
\end{subfigure}
\begin{subfigure}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{v1/HIV/adherence_nosamples.pdf}
\caption{HIV Therapy}
\label{fig:hiv:adherence}
\end{subfigure}}
\caption{\emph{TIMIT and HIV tasks} -- Study of different regularizers for a GRU model with 75 states. Panels \emph{(a)-(c)} are tradeoff curves showing how predictive power and decision-tree complexity evolve with increasing strength of L$_1$, L$_2$, or tree regularization in both TIMIT (stop phoneme prediction) and HIV (CD4$^+ \leq 200$ cells/ml and therapy adherence prediction). The TIMIT task has only one binary outcome. For the HIV task, the GRU is trained to jointly predict 15 binary outcomes, of which 2 are shown here in Panels \emph{(b)-(c)}. The decision tree associated with HIV adherence is shown in \emph{(d)}.}
\label{fig:results:timit}
\end{figure}
\paragraph{Tree-regularized models have fewer nodes than other forms of regularization.}
Across tasks, we see that in the target regime of small decision trees (low APLs), our proposed regularization achieves higher prediction quality (higher AUCs). In the signal-and-noise HMM task, tree regularization (green line in Figure~\ref{fig:results:toy-signal-and-noise-hmm}(d)) achieves AUC
values near 0.9 when its trees have an average path length of 10. Similar models with L$_1$ or L$_2$ regularization reach this AUC only with trees that are nearly double in complexity (APL over 25). On both the SEPSIS (Figure~\ref{fig:results:sepsis}) and TIMIT (Figure~\ref{fig:timit:gru:trace_plots}) tasks, we see considerable gains in accuracy over other regularizers---AUC differences of 0.05 to 0.15---for path lengths of 20-30. On the HIV task in Figure~\ref{fig:hiv:cd4}, we see AUC differences of between 0.03 and 0.15 for path lengths of 10-15. Similarly, on the other HIV outcomes in Figures~\ref{fig:hiv:therapy}-\ref{fig:hiv:adherence}, we see AUC differences of between 0.03 and 0.09 for path lengths of 20-30. These gains are particularly useful in determining how to administer subsequent therapies. More specifically, in domains where human-simulability is required, these increases in accuracy in the small-complexity regime can mean the difference between models that provide value on
a task and models that are unusable, either because their performance is too poor or they are uninterpretable.
We emphasize that across all tasks, standalone decision trees (marked by yellow dots in the line plots) cannot reach this high-accuracy, low-complexity sweet spot, suggesting that tree-regularized networks retain nonlinear expressivity that standalone trees lack.
\begin{table}
\parbox{.45\linewidth}{
\begin{tabular}{ l | c}
\toprule
Dataset & Fidelity \\
\midrule
signal-and-noise HMM & 0.8762 \\
SEPSIS (In-Hospital Mortality) & 0.8144\\
SEPSIS (90-Day Mortality) & 0.8845\\
SEPSIS (Mech. Vent.) & 0.9008\\
SEPSIS (Median Vaso.) & 0.9166\\
SEPSIS (Max Vaso.) & 0.9260\\
HIV (CD4$^{+}$ below 200) & 0.8426 \\
HIV (Therapy Success) & 0.8761 \\
HIV (Mortality) & 0.9318\\
HIV (Poor Adherence) & 0.9014 \\
HIV (AIDS Onset) & 0.9344\\
TIMIT & 0.8477\\
\bottomrule
\end{tabular}
\caption{Fidelity of predictions from our trained deep GRU and its corresponding decision tree. Fidelity is defined as the percentage of test examples on which the prediction made by a tree agrees with the deep model \cite{craven1996extracting}.}
\label{table:fidelity}
}
\hfill
\parbox{.45\linewidth}{
\begin{tabular}{ l | l | c}
\toprule
Dataset & Model & Epoch Time\\
\midrule
SEPSIS & HMM & $589.8 \pm 24.1$ \\
SEPSIS & GRU & $822.3 \pm 11.2$ \\
SEPSIS & GRU-HMM & $1666.9 \pm 147.0$ \\
SEPSIS & GRU$^\ddagger$ & $2015.1 \pm 388.1$ \\
SEPSIS & GRU-HMM$^\ddagger$ & $2443.7 \pm 351.2$ \\
TIMIT & HMM & $1668.9 \pm 126.9$ \\
TIMIT & GRU & $2116.8 \pm 438.8$ \\
TIMIT & GRU-HMM & $3207.2 \pm 651.9$ \\
TIMIT & GRU$^\ddagger$ & $3977.0 \pm 812.1$ \\
TIMIT & GRU-HMM$^\ddagger$ & $4601.4 \pm 805.9$ \\
\bottomrule
\end{tabular}
\caption{Training time for a single epoch in seconds on a single Intel Core i5 CPU. The ($\ddagger$) symbol denotes tree regularization. The times for tree-regularized models include surrogate training expenses. If we retrain the surrogate only sparingly, this extra cost is amortized to nearly negligible.}
\label{table:runtime}
}
\end{table}
\paragraph{Our learned decision-tree-like boundaries are interpretable.} Recall that a consequence of tree regularization is a distillation of the deep model as a decision tree. Across all tasks, these trees, which mimic the predictions of tree-regularized deep models, are small enough to simulate by hand and help users grasp the model's nonlinear prediction logic. We have already seen this to be the case for the signal-and-noise HMM task. Similarly, in Figure~\ref{fig:results:sepsis}, we show
decision trees for two sepsis prediction tasks. We consulted a clinical expert on sepsis treatment, who noted that the trees helped him understand what the models might be doing and thus determine if he would trust the deep model. For example, he said that using FiO$_{2}$, RR, CO$_{2}$ and paO$_{2}$ to predict need for mechanical ventilation (Figure~\ref{fig:results:sepsis:gru:mechvent:tree}) was sensible, as these all measure breathing quality.
In contrast, the in-hospital mortality tree (Figure~\ref{fig:results:sepsis:gru:mortality:tree}) predicts that some young patients with no organ failure have high mortality rates while other young patients with organ failure have low mortality. These counter-intuitive results led to hypotheses about how uncaptured variables impact the
training process. Such reasoning would not be possible from
simple sensitivity analyses of the deep model. Moreover, our distilled trees for HIV, such as the one in Figure~\ref{fig:hiv:adherence}, are also interpretable. We observe that the baseline viral load and the number of prior treatment lines are crucial factors in predicting whether a patient will suffer adherence issues. This is consistent with several medical studies which show that patients with higher viral loads at baseline tend to have faster disease progression, and hence have to take several drug cocktails to potentially combat resistance. This typically makes it more difficult for these patients to adhere to the medication.
\paragraph{Practical runtimes for tree regularization are less than twice that of simpler L2.}
While our tree-regularized GRU with 10 states takes 3\,977 seconds per epoch on TIMIT, an equivalent L$_2$-regularized GRU takes 2\,116 seconds per epoch. Thus, our new method costs less than twice the baseline \emph{even when the path-length surrogate is serially computed}. Because the surrogate $\hat{\Omega}$ will in general be a much smaller model than the target neural model, we expect one could obtain much smaller per-epoch times by parallelizing the creation of $(\theta,\Omega(\theta))$ training pairs and the surrogate training. Additionally, the 3\,977 seconds includes the time needed to train the surrogate. In practice, we do this sparingly, only once every 25 epochs, yielding an amortized per-epoch cost of 2\,191 seconds. More exhaustive runtime results, with standard deviations over 10 epochs, are given in Table~\ref{table:runtime}.
\begin{figure}[!h]
\centering
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/sparse_trees/figuref1a}
\caption{$\frac{7}{10}$ Runs}
\label{}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/sparse_trees/figuref1b}
\caption{$\frac{2}{10}$ Runs}
\label{}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/sparse_trees/figuref1c}
\caption{$\frac{1}{10}$ Runs}
\label{}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/sparse_trees/figuref2a}
\caption{}
\label{}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/sparse_trees/figuref2b}
\caption{}
\label{}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/sparse_trees/figuref2d}
\caption{}
\label{}
\end{subfigure}
\caption{\emph{(a-c)} Decision trees from 10 independent runs on the signal-and-noise HMM dataset with $\lambda = 1000.0$. Seven of the ten runs resulted in a tree of the same structure. The other three are similar, having additional subtrees but sharing the same splits and features. \emph{(d-f)} Similar experiment with $\lambda = 0.01$. Low regularization causes high variance in tree size and shape. Sub-figures (d-f) show three of many variations.}
\label{fig:results:stable:test}
\end{figure}
\paragraph{Decision trees are stable over multiple optimization runs.}
When tree regularization is strong (high $\lambda$), the decision trees trained to match the predictions of deep models are stable. For both the signal-and-noise and SEPSIS tasks, multiple runs from different random restarts produce trees of nearly identical shape and size, perhaps differing by a few nodes. This stability is crucial to building trust in our method. On the signal-and-noise task ($\lambda = 1\,000$), 7 of 10 independent runs with random initializations resulted in trees of exactly the same structure, and the remaining runs produced similar trees sharing the same splits and features. On the other hand, with weak regularization (small $\lambda$), variability in the distilled decision trees is high. See Figure~\ref{fig:results:stable:test} for example trees under strong (a-c) and weak (d-f) regularization.
\paragraph{Target neural models are faithful to decision trees.} \textit{Fidelity} is defined by \cite{craven1996extracting} as the percentage of examples on which the predictions of the target network and the decision tree agree. Thus, fidelity measures how faithful the deep network is to the distilled tree. A fidelity of 1 would indicate perfect agreement, in which the neural network has learned exactly the axis-aligned boundaries of a tree. In some sense, a fidelity of 1 is undesirable, as we hope the deep network can exploit nonlinearity on the examples that a simulable tree would struggle with. Table~\ref{table:fidelity} shows that fidelity is high but not perfect, ranging from 0.81 to 0.93 across datasets.
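As a sketch, fidelity can be computed in one line given the distilled tree and a hypothetical \texttt{predict\_fn} for the network:
\begin{verbatim}
import numpy as np

def fidelity(tree, predict_fn, X_test):
    # Fraction of test examples where tree and deep model agree.
    return float(np.mean(tree.predict(X_test) == predict_fn(X_test)))
\end{verbatim}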
\begin{figure}[h!]
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/dataset_results/hmm2_gruhmm.pdf}
\caption{SNR {\tiny 20+5 states}}
\label{fig:2hmm:gruhmm:plot}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/sepsis_gruhmmtree_0.pdf}
\caption{Mortality {\tiny 50+50}}
\label{fig:sepsis:mortality:gruhmm:plot}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/sepsis_gruhmmtree_2.pdf}
\caption{Mech. Vent. {\tiny 50+50}}
\label{fig:sepsis:vent:gruhmm:plot}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{v1/timit_results/timit_gruhmmtree.pdf}
\caption{TIMIT ``Stop" {\tiny 50+25}}
\label{fig:timit:gruhmm:plot}
\end{subfigure}
\caption{
Fitness curves for the GRU-HMM, showing prediction quality (AUC) vs. complexity (APL) across a range of regularization strengths $\lambda$. Captions show the number of HMM states plus the number of GRU states. See Figures~\ref{fig:results:sepsis} and \ref{fig:results:timit} to compare these GRU-HMM numbers to simpler GRU and decision tree baselines.
}
\label{fig:results:gruhmm}
\end{figure}
\paragraph{The deep residual GRU-HMM can achieve high AUC with less complexity.}
In Figure~\ref{fig:results:gruhmm}, we show the performance of jointly training the residual model, GRU-HMM, which combines an HMM with a tree-regularized GRU to improve its predictions. Here, the ideal APL is zero, indicating only the HMM makes predictions (only the GRU output node is regularized). For small APLs, the GRU-HMM substantially improves the original HMM's predictions \emph{and} has simulability gains over earlier GRUs. On the mechanical ventilation task, the GRU-HMM requires an APL of only 28 to reach AUC of 0.88, while the GRU alone with the same number of states requires a path length of 60 to reach the same AUC. This suggests that jointly-trained deep residual models may provide even better interpretability.
\section{Regionally Faithful Explanations with Expert Priors}
\label{sec:treereg}
Global summaries such as L$_1$, L$_2$, or even tree regularization as presented above face a tough trade-off between human-simulability and faithfulness to the underlying model. For instance, if we require a minimum fidelity of 0.95, it simply may not be possible to fit a faithful decision tree that is also human-simulable. In our experiments so far we have been fortunate, but there is little guarantee that such a tree must exist. More generally, for a complex enough domain (or for particularly difficult examples), it is again unreasonable to assume that a decision tree can be small, bushy, and performant. In such a case, tree regularization of a deep network may not be able to find a good compromise between accuracy and complexity. To get the best of both worlds, we need a \textit{finer-grained} definition of interpretability. Doing so may expose a new wealth of minima with high AUC and low APL (i.e., powerful yet simulable).
In this extension, we take advantage of the fact that domain experts may already have notions about how regions of the input space operate differently. For example, a clinical intensivist may already cognitively consider patients in the surgical intensive care unit (ICU) as different from patients in the cardiac ICU. Analogously, biologists may be happy with different models for classifying diseases in deciduous versus coniferous plants. In fact, this way of partitioning thinking into independent compartments is a very general phenomenon. Cognitive science literature tells us that people build context-dependent models of the world; they do not expect the same rule to apply in all circumstances \cite{miller2018explanation}.
Using this intuition, we divide the input space into exclusive regions. We assume that this division is available \emph{a priori} via domain knowledge; indeed, this is a good opportunity to inject human beliefs into training the model. Formally, this translates into $R$ exclusive regions $\mathcal{X}_1, \ldots, \mathcal{X}_R$, where $\cup_{r=1}^R \mathcal{X}_r \subseteq \mathcal{X}^P$. We denote the observed dataset belonging to region $r$ as $X_r \triangleq \{\textbf{x}_n : \textbf{x}_n \in \mathcal{X}_r\}$. We then apply a \textit{regionally-faithful regularization} that encourages the target neural model to be ``simple'' in \emph{every} region (where a region corresponds to a human context). This partitioning of the input space allows a regularized neural model to approximate very complex decision boundaries with simple components in each region, thereby remaining simulable. We emphasize that our regional explanations are distinct from local explanations (e.g.~\cite{ribeiro2016should}): the latter concerns itself with behavior within an $\epsilon$-ball around a single data point $\mathbf{x}_n$ and makes no claims about general behavior across data points. In contrast, \emph{regional} explanations are faithful over an entire region $\mathcal{X}_r$.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=0.7\linewidth]{v2/toy2/groundtruth.pdf}
\caption{True}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=0.7\linewidth]{v2/toy2/globaltree.pdf}
\caption{Global}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=0.7\linewidth]{v2/toy2/localtree.pdf}
\caption{Local}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=0.7\linewidth]{v2/toy2/regiontree.pdf}
\caption{Regional}
\end{subfigure}
\caption{We show the differences between global (b), local (c), and
regional (d) tree regularization using a synthetic classification
task. (a) shows the true decision boundary. Red and green points
represent the training dataset. Lightly colored areas represent
regions. In (b), the model is over-regularized and ignores underlying structure.
In (c), regions are made as small as possible to simulate
locality---resulting in highly variable rules for nearby points.
Regional tree regularization (d) provides an interpretable middle
ground.}
\label{fig:toy2}
\end{figure}
As a preview, Figure~\ref{fig:toy2} highlights the distinctions between global, local, and regional tree regularization on a two-dimensional toy dataset where the true decision boundary is divided in half at $x=0.4$. We see that global explanations (b) lack information about the input space and have to choose from a large set of possible solutions, converging to a different boundary. On the other hand, local explanations (c) produce simple boundaries around each data point but fail to capture global relationships, resulting in a complex overall decision function. Finally, regional explanations (d) over two regions divided at 0.4 share the benefits of (b) and (c), converging to the true boundary.
\begin{algorithm}[!t]
\caption{Pruned Average-Path-Length (APL) Cost Function}
\begin{algorithmic}[1]
\Require{
\Statex $f(\cdot; \theta)$: discrete prediction function, with parameters $\theta$
\Statex $\{ \mathbf{x}_i \}_{i=1}^{N}$: a set of $N$ input examples
\Statex $N_{\text{train}}$: number of examples to use for training
\Statex $h$: minimum number of samples required to be a leaf node
}
\Function{APL}{$\{ \mathbf{x}_i \}_{i=1}^{N}, f(\cdot; \theta), h$}
\State $\hat{\mathbf{y}}_i = f(\mathbf{x}_{i}, \theta)$, $\forall i \in \{1, 2, \ldots N\}$
\State $T = \textsc{TrainTree}( \{ \mathbf{x}_{i}, \hat{\mathbf{y}}_i \}_{i=1}^{N_{\text{train}}})$
\State $T = \textsc{PruneTree}(T, \{ \mathbf{x}_{i}, \hat{\mathbf{y}}_i \}_{i=N_{\text{train}}}^N)$
\State \Return $\text{mean}( \{ \textsc{GetDepth}(T, \mathbf{x}_i) \}_{i=1}^N )$
\EndFunction
\end{algorithmic}
\label{algorithm:2}
\end{algorithm}
\subsection{Regional Tree Regularization Objective}
We now formally introduce regional tree regularization, which requires that the target neural model $f(\cdot; \theta)$ be well-approximated by a separate compact decision tree in \emph{every} region. To distinguish the two, we rename the tree regularizer presented above \emph{global tree regularization}. Regionally simple decision boundaries are particularly hard to achieve with global tree regularization, as the global APL metric may allow some human-relevant regions to be complex as long as most are simple. In particular, global tree regularization has an incentive to ``ignore'' simpler regions in order to minimize the regularization term (i.e. trivially predicting a single label). In many contexts, this behavior is undesirable. For example, if a clinician splits his/her patients by severity of illness, regularizing for simple global explanations can completely ignore a group of patients, rendering the machine learning system useless. To address this, we define our regional tree regularization as follows. First, let the APL for region $r$ be:
\begin{align}
\Omega^{\texttt{regional}}_{r}(\theta)
&\triangleq \textup{APL}(\mathcal{X}_r, f(\cdot; \theta))
\\
\Omega^{\texttt{global}}(\theta)
&\triangleq \textup{APL}(\mathcal{X}^P, f(\cdot; \theta))
\label{eqn:region-apl}
\end{align}
where the average path length, $\textup{APL}$ can be computed with Algorithm~\ref{algorithm:2} (note that the target network and its parameters $\theta$ are the same for all regions $r$, meaning a strong sharing of parameters across regions). For all future instances of computing APL, we use Algorithm~\ref{algorithm:2}, not Algorithm~\ref{alg:true_tree_regularization}. We will elaborate on this distinction later. Note that $\Omega^{\texttt{global}}(\theta)$ is equivalent to global tree regularization as presented above. Next, to ensure that some regions cannot be made simple at the expense of others, we penalize only the most complex region:
\begin{equation}
\Omega^{\texttt{regional}}(\theta)
\triangleq \texttt{max}_r (\{ \Omega^{\texttt{regional}}_{r}(\theta) \}_{r=1}^R)
\label{eqn:argmax-apl}
\end{equation}
in other words, an L$_0$-style norm over $\{\Omega_r\}$. This choice produces significantly different (and desirable) behavior compared to simply using, for example, the L$_1$ norm (or sum) over $\{\Omega_r\}$. Regularizing the sum of $\Omega_r$ is equivalent to regularizing the APL of a global tree that first branches by region. In contrast, as a nonlinear regularizer, the L$_0$ variant keeps \emph{all} regions simple (i.e. low APL) while not penalizing regions that are already simple. We show an example of this effect in Figure~\ref{fig:toyexp3}: (a) shows a toy dataset with two regions (split by the black line); the left has a simple decision boundary dividing the region in half, while the right has a more complex boundary. (b) and (c) then show two minima using L$_1$ regional tree regularization. In both cases, one of the regions collapses to a trivial decision boundary (predicting a single label everywhere) to minimize the overall sum of APLs. On the other hand, since the L$_0$ variant is sparse, simple regions are not included in the objective, resulting in a more ``balanced'' regularization between regions (see (d) and (e)).
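A minimal sketch of the regional regularizer, reusing the \texttt{average\_path\_length} function from earlier, is given below; \texttt{region\_ids} is a hypothetical array assigning each example to one of the $R$ expert-defined regions (every region is assumed non-empty):
\begin{verbatim}
import numpy as np

def regional_apl(X, region_ids, predict_fn, h, mode="max"):
    # One APL per region r, then max (L0-style) or sum (L1-style)
    # across regions.
    apls = np.array([
        average_path_length(X[region_ids == r], predict_fn, h)
        for r in np.unique(region_ids)
    ])
    return apls.max() if mode == "max" else apls.sum()
\end{verbatim}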
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.19\linewidth}
\includegraphics[width=0.8\linewidth]{v2/toy_2side/gt.pdf}
\caption{True}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\includegraphics[width=0.8\linewidth]{v2/toy_2side/l1_min1.pdf}
\caption{L$_1$}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\includegraphics[width=0.8\linewidth]{v2/toy_2side/l1_min2.pdf}
\caption{L$_1$}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\includegraphics[width=0.8\linewidth]{v2/toy_2side/l0_min1.pdf}
\caption{L$_0$}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\includegraphics[width=0.8\linewidth]{v2/toy_2side/l0_min2.pdf}
\caption{L$_0$}
\end{subfigure}
\caption{An L$_1$ penalty on per-region APLs can over-penalize, resulting in an entire region with far too simple predictions. Subplots (b) and (c) show results from two different initializations using the L$_1$ norm, while (d) and (e) show the same using the L$_0$ norm.}
\label{fig:toyexp3}
\end{figure}
However, gradient descent on Equation~\ref{eqn:argmax-apl} poses several challenges. For example, both $\Omega_r$ and the $\max$ function are non-differentiable. In the following, we describe how we address these challenges, as well as concerns over optimization stability.
\begin{algorithm}[b]
\caption{\textsc{SparseMax For Regional Tree Reg.}}
\begin{algorithmic}[1]
\Require{
\Statex $\mathbf{\hat{\Omega}} = \{\hat{\Omega}^{\texttt{regional}}_r\}_{r=1}^R$: APL for each of $R$ regions
}
\Function{$\textsc{SparseMax}$}{$\mathbf{\hat{\Omega}}$}
\State Sort $\mathbf{\hat{\Omega}}$ in decreasing order, so that $\mathbf{\hat{\Omega}}[i] \geq \mathbf{\hat{\Omega}}[j]$ whenever $i \leq j$
\State $k = \max \{ r \in [1, R] | (1 + r\mathbf{\hat{\Omega}}[r]) > \sum_{i \leq r} \mathbf{\hat{\Omega}}[i] \}$
\State $\tau = k^{-1}(-1 + \sum_{i \leq k} \mathbf{\hat{\Omega}}[i])$
\State $\mathbf{p} = \{p_r\}_{r=1}^R$ where $p_r = \max\{\mathbf{\hat{\Omega}}_r - \tau, 0 \}$
\State \Return $\mathbf{p}$
\EndFunction
\end{algorithmic}
\label{algorithm:3}
\end{algorithm}
\subsection{Gradient-based optimization with SparseMax}
Gradient-based optimization of our proposed regularizer in Equation~\ref{eqn:argmax-apl} is challenging because the $\texttt{max}$ operator is not differentiable. Further, common differentiable approximations like $\texttt{softmax}$ are dense (they include non-zero contributions from all regions), which makes it difficult to focus on the most complex regions as $\texttt{max}$ does (using a dense approximation of $\texttt{max}$ would suffer from the same problems as using an L$_1$ norm). Instead, we use the recently-proposed $\textsc{SparseMax}$ transformation~\cite{martins2016softmax},
which can focus on the most problematic regions (setting others to zero contribution) while remaining smooth and differentiable almost everywhere. Intuitively, $\textsc{SparseMax}$ corresponds to a Euclidean projection of an input vector $\mathbf{\hat{\Omega}}$ with $R$ entries (one APL per region) to an $R$-length vector $\mathbf{p}$ of non-negative entries that sums to one (i.e. the ($R-1$)-dimensional probability simplex).
When the projection lands on a boundary in the simplex (which is likely), then the resulting vector will be sparse.
Efficient implementations of this projection are well-known~\cite{duchi2008efficient} (see Algorithm~\ref{algorithm:3}), as are Jacobians for automatic differentiation~\cite{martins2016softmax}.
We refer to using $\textsc{SparseMax}$ as L$_0$ regional tree regularization (we call using the sum of the APLs L$_1$ regional tree regularization).
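For reference, a minimal NumPy sketch of the projection in Algorithm~\ref{algorithm:3}, following \cite{martins2016softmax}, is:
\begin{verbatim}
import numpy as np

def sparsemax(omega):
    # Project the vector of per-region APLs onto the probability simplex.
    z = np.sort(omega)[::-1]          # sort in decreasing order
    cssv = np.cumsum(z)               # running sums of the sorted entries
    k_arr = np.arange(1, len(z) + 1)
    support = 1 + k_arr * z > cssv    # entries kept in the support
    k = k_arr[support][-1]
    tau = (cssv[k - 1] - 1) / k       # threshold
    return np.maximum(omega - tau, 0.0)  # sparse, nonnegative, sums to 1
\end{verbatim}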
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{v2/diagram.pdf}
\caption{Illustration of L$_0$ regional tree regularization. Each round contains three trees representing regions. Light gray indicates regions given zero probability by $\textsc{SparseMax}$. Over the three rounds, different regions are given priority while other regions are given no weight. The ability to disregard regions of low complexity makes for smoother learning.}
\end{figure*}
\subsection{Differentiable Regional Tree Regularization Loss $\hat{\Omega}_r$}
The regional APL $\Omega^{\texttt{regional}}_r(\theta)$ is not differentiable, as derivatives cannot flow through CART (the common method for training decision trees). To circumvent this, we again employ surrogate loss functions $\hat{\Omega}^{\texttt{regional}}_{r}: \Theta \rightarrow \mathbb{R}^+$ that map a parameter vector $\theta \in \Theta$ to an \textit{estimate} of $\Omega^{\texttt{regional}}_{r}(\theta)$, the APL in region $r$. This process is identical to global tree regularization but restricted to observations lying in region $r$. Each surrogate $\hat{\Omega}^{\texttt{regional}}_r$ has its own parameters $\phi_r$. Specifically, we fit each $\hat{\Omega}^{\texttt{regional}}_{r}(\theta)$ by minimizing a mean squared error loss,
\begin{equation}
\min_{\phi_r} \sum_{j=1}^{J} (\Omega^{\texttt{regional}}_{r}(\theta_{j}) - \hat{\Omega}^{\texttt{regional}}_{r}(\theta_{j}, \phi_r))^{2}
\label{eqn:opt:local}
\end{equation}
for all $r=1, ..., R$ where $\theta_j$ is sampled from a dataset of $J$ known parameter vectors and their true APLs: $\mathcal{D}^{\theta}_{r} = \{\theta_j, \Omega^{\texttt{regional}}_{r}(\theta_{j})\}_{j=1}^{J}$. This dataset can be assembled using the candidate $\theta$ vectors obtained over $J$ gradient steps while training the target model $f(\cdot, \theta)$. For $R$ regions, we curate one such dataset for each surrogate model.
The ability of each surrogate to stay faithful depends on many factors. For global tree regularization above, we used a fairly simple strategy for training a surrogate and found it sufficient; when there are multiple surrogates to maintain, however, more sophistication is needed to keep the gradients accurate and the variance low. We describe these innovations in the next section.
\subsection{Innovations for Optimization Stability}
\label{sec:global:improve}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.16\linewidth}
\includegraphics[width=\linewidth]{v2/deterministic/rand_tree_0.pdf}
\caption{Random:1}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\includegraphics[width=\linewidth]{v2/deterministic/rand_tree_1.pdf}
\caption{Random:2}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\includegraphics[width=\linewidth]{v2/deterministic/rand_tree_2.pdf}
\caption{Random:3}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\includegraphics[width=\linewidth]{v2/deterministic/det_tree_0.pdf}
\caption{Fixed:1}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\includegraphics[width=\linewidth]{v2/deterministic/det_tree_1.pdf}
\caption{Fixed:2}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\includegraphics[width=\linewidth]{v2/deterministic/det_tree_2.pdf}
\caption{Fixed:3}
\end{subfigure}
\caption{\emph{(a-c)} Decision trees using randomized training; \emph{(d-f)} decision trees using deterministic training. Note that randomized training leads to very different optima.}
\label{fig:deterministic}
\end{figure}
Optimizing multiple surrogate networks is a delicate operation. We found that depending on hyperparameters, the regional surrogates were unable to accurately predict the APL, causing regularization to fail. Further, repeated runs also often found different minima, making regional tree regularization feel unreliable. In short, it presents a much more difficult technical challenge than training a single surrogate as in global tree regularization. Below, we list optimization innovations that are essential to stabilize training, identify consistent minima, and get good APL prediction---all of which enabled robust regional tree regularization.
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\begin{tabular}{l c c}
\toprule
Experiment & Mean MSE & Max MSE \\
\midrule
No data aug. & 0.069 & 0.987 \\
With data aug. & 0.015 & 0.298 \\
\hline
Randomized & 0.116 & 1.731 \\
Deterministic & 0.024 & 0.371 \\
\bottomrule
\end{tabular}
\caption{Comparison of the average and max mean squared error (MSE) between surrogate predictions and true average path lengths over 500 epochs. Non-deterministic training and a lack of data introduce large errors.}
\label{table:tricks}
\end{wrapfigure}
\paragraph{Data augmentation makes for a robust surrogate.}
Especially for regional explanations, relatively small changes in the underlying model can mean large changes for the pattern in a specific region. As such, the surrogates need to be retrained frequently (e.g. every 50 gradient steps). The practice used in global tree regularization---computing the true APL for a dataset $\mathcal{D}^\theta$ of the most recent $\theta$---is insufficient to learn the mapping from a thousand-dimensional weight vector to the APL. Using stale (very old) $\theta$ from previous epochs, however, would train the surrogate on outdated information. Earlier heuristics, such as random restarts or arbitrarily sampled random weights, introduced more noise than signal. Thus, we supplement the dataset with randomly sampled weight vectors \textit{from the convex hull defined by the recent weights}. Specifically, to generate a new $\theta$, we sample from a Dirichlet distribution with $J$ categories and form a new parameter vector as a convex combination of the elements in $\mathcal{D}^{\theta}$. For each of these samples, we compute its true APL to train the surrogate. Table~\ref{table:tricks} shows that this reduces error.
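A minimal sketch of this augmentation, with \texttt{true\_apl} a hypothetical wrapper around Algorithm~\ref{algorithm:2}, is:
\begin{verbatim}
import numpy as np

def augment_surrogate_data(recent_thetas, true_apl, n_samples, alpha=1.0):
    # recent_thetas: (J, D) array of the J most recent weight vectors.
    recent_thetas = np.asarray(recent_thetas)
    J = recent_thetas.shape[0]
    rng = np.random.default_rng()
    weights = rng.dirichlet(alpha * np.ones(J), size=n_samples)  # (n, J)
    thetas = weights @ recent_thetas     # convex combinations in the hull
    apls = np.array([true_apl(t) for t in thetas])  # label with true APL
    return thetas, apls
\end{verbatim}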
\paragraph{Decision trees should be pruned.}
Given a dataset $\mathcal{D}$, even with a fixed seed, there are many decision trees that can fit $\mathcal{D}$: one can always add additional subtrees that predict the same label as the parent node, thereby not affecting performance. This invariance again introduces difficulty in learning a surrogate model. To remedy this, we use \textit{reduced error pruning}, which removes any subtree that does not affect performance as measured on a portion of $\mathcal{D}$ not used in $\textsc{TrainTree}$. Note that line 4 in Algorithm~\ref{algorithm:2} is not in the original tree regularization algorithm. Intuitively, pruning collapses the set of possible trees describing a single classifier to a singleton.
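scikit-learn does not implement reduced-error pruning directly; as a stand-in under that caveat, the following sketch selects a cost-complexity pruning level (\texttt{ccp\_alpha}) on the held-out portion of the data, which plays the role of \textsc{PruneTree}'s pruning set in Algorithm~\ref{algorithm:2}:
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pruned_apl(X, predict_fn, n_train, h):
    y_hat = predict_fn(X)
    X_tr, y_tr = X[:n_train], y_hat[:n_train]
    X_pr, y_pr = X[n_train:], y_hat[n_train:]   # pruning set
    path = DecisionTreeClassifier(min_samples_leaf=h) \
        .cost_complexity_pruning_path(X_tr, y_tr)
    best_tree, best_acc = None, -np.inf
    for alpha in path.ccp_alphas:               # ascending: more pruning
        t = DecisionTreeClassifier(min_samples_leaf=h,
                                   ccp_alpha=alpha).fit(X_tr, y_tr)
        acc = t.score(X_pr, y_pr)
        if acc >= best_acc:  # prefer the most pruned tree at equal accuracy
            best_tree, best_acc = t, acc
    paths = best_tree.decision_path(X).sum(axis=1)
    return float(np.mean(paths))
\end{verbatim}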
\paragraph{Decision trees should be trained deterministically.}
CART is a common algorithm for training a decision tree. However, it has poor complexity in the number of features, as it enumerates over all unique values per dimension. To scale efficiently, many open-source implementations (e.g. Scikit-Learn \cite{pedregosa2011scikit}) randomly sample a small subset of features. As such, independent training runs can produce different decision trees of varying APL. For tree regularization, unexplained variance in APL makes the surrogate difficult to train, since the mapping from model parameters to APL is no longer well-defined. The error is compounded when there are many surrogates. To remedy this, we fix the random seed that governs the choice of features. As an example, Figure~\ref{fig:deterministic} shows the high variance of decision boundaries from a randomized treatment of fitting decision trees (a-c) on a very sparsely sampled dataset, leading to higher error in surrogate predictions (Table~\ref{table:tricks}). Setting the seed removes this variance.
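In scikit-learn, the sketch below suffices: fixing \texttt{random\_state} makes repeated fits on the same data deterministic (the \texttt{max\_features} value shown is an assumed illustration):
\begin{verbatim}
from sklearn.tree import DecisionTreeClassifier

# Deterministic tree fitting: a fixed random_state pins down the feature
# subsampling, so the same (X, y) always yields the same tree and APL.
tree = DecisionTreeClassifier(min_samples_leaf=1000,
                              max_features="sqrt",
                              random_state=0)
\end{verbatim}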
\paragraph{A large learning rate will lead to thrashing.}
As mentioned before, with many regions, small changes in the deep model can already have large effects on a region. If the learning rate is large, each gradient step can lead to a dramatically different decision boundary than the previous one, and the function that each surrogate must learn is effectively no longer continuous. Empirically, we found large learning rates to lead to \textit{thrashing}: oscillating between high and low APL, where the surrogate is effectively memorizing the APL from the last epoch (with poor generalization to new $\theta$).
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.15\linewidth}
\includegraphics[width=\linewidth]{v2/toy_many/gt.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.15\linewidth}
\includegraphics[width=\linewidth]{v2/toy_many/noreg.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.15\linewidth}
\includegraphics[width=\linewidth]{v2/toy_many/no_aug.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.15\linewidth}
\includegraphics[width=\linewidth]{v2/toy_many/no_prune.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.15\linewidth}
\includegraphics[width=\linewidth]{v2/toy_many/bad_lr.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.15\linewidth}
\includegraphics[width=\linewidth]{v2/toy_many/tricks.pdf}
\caption{}
\end{subfigure}
\caption{\emph{(a)} Ground truth decision boundary with 25 regions; green represents positive labels. \emph{(b)} Minima with no regularization. \emph{(c)} Minima with no data augmentation. \emph{(d)} Minima with no pruning or determinism in training trees. \emph{(e)} Minima with a bad learning rate. \emph{(f)} Minima using the optimization innovations. Colored patches represent regions.}
\label{fig:toyexp4}
\end{figure}
These optimization innovations are crucial for learning with regional tree regularization. Without them, optimization is very unstable, resulting in undesirable minima. Figure~\ref{fig:toyexp4} shows a few examples in a synthetic dataset: without data augmentation (c), there are not enough examples to fully train each surrogate, resulting in poor estimates of $\Omega^{\texttt{regional}}$, so that we converge to the same minima as with no regularization (b); without pruning and fixed seeds, the path lengths vary due to randomness in fitting a decision tree, which can lead to over- or under-estimating the true APL. As shown in (d), this leads to strange decision boundaries. Finally, (e) shows the effect of a large learning rate, which leads to thrashing, resulting in a trivial decision boundary in an effort to minimize the loss. Only with the optimization innovations (f) do we converge to a properly regularized decision boundary.
\begin{figure*}[t!]
\centering
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[width=\textwidth]{v2/toy/training.pdf}
\caption{$\mathcal{D}^{\text{train}}$}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[width=\textwidth]{v2/toy/none.pdf}
\caption{No Reg.}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[width=\textwidth]{v2/toy/l2.pdf}
\caption{L2}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[width=\textwidth]{v2/toy/global.pdf}
\caption{Global Tree}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[width=\textwidth]{v2/toy/local.pdf}
\caption{L$_1$ Reg. Tree}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[width=\textwidth]{v2/toy/sparse.pdf}
\caption{L$_0$ Reg. Tree}
\end{subfigure}
\caption{Synthetic data with a sparse training set \emph{(a)}; the test set is densely sampled without noise. Due to sparsity, the division into five rectangles is not trivial to uncover from \emph{(a)}. \emph{(b--f)} show contours of decision functions learned with varying regularizations. Only the regional tree regularized model captures the vertical structure of the five regions, leading to high accuracy.}
\label{fig:toy}
\end{figure*}
\section{Demonstration: Five Rectangles Dataset}
\label{sec:toy}
To build intuition, we present experiments in a toy setting: We define a ground-truth classification function composed of five rectangles (height of 0.5 and width of 1) in $\mathbb{R}^2$ concatenated along the x-axis to span the domain of $[0, 5]$. The first three rectangles are centered at $y=0.4$ (shifted slightly downwards) while the remaining two rectangles are centered at $y=0.6$ (shifted slightly upwards). The training dataset is intended to be sparse, containing only 250 points with the labels of 5\% of points randomly flipped to introduce noise and encourage overfitting. In contrast, the test dataset is densely sampled without noise. This is intended to model real-world settings where regional structure is only partially observable from an empirical dataset. It is exactly in these contexts that prior knowledge can be helpful.
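For concreteness, data resembling this description can be generated as follows. This is a sketch: the convention that points inside a rectangle are labeled positive, and the exact sampling, are assumptions that may differ in detail from what produced the figures.
\begin{verbatim}
# Five rectangles of width 1 and height 0.5 spanning x in [0, 5]; the first
# three are centered at y = 0.4, the last two at y = 0.6.
import numpy as np

rng = np.random.RandomState(0)
centers = np.array([0.4, 0.4, 0.4, 0.6, 0.6])

def label(x, y):
    c = centers[np.clip(x.astype(int), 0, 4)]    # rectangle index = floor(x)
    return (np.abs(y - c) <= 0.25).astype(int)   # inside the rectangle?

X_train = rng.uniform([0, 0], [5, 1], size=(250, 2))   # sparse training set
y_train = label(X_train[:, 0], X_train[:, 1])
flip = rng.rand(250) < 0.05                            # 5% label noise
y_train[flip] = 1 - y_train[flip]
\end{verbatim}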
\begin{table}[h!]
\centering
\begin{tabular}{l c c}
\toprule
Model & Test Acc. & Test APL \\
\midrule
Unregularized & 0.8296 & 17.9490 \\
L2 ($\lambda=0.001$) & 0.8550 & 16.1130 \\
Global Tree ($\lambda=1$) & 0.8454 & 6.3398 \\
L$_1$ Regional Tree ($\lambda=0.1$) & 0.9168 & 10.1223 \\
L$_0$ Regional Tree ($\lambda=0.1$) & 0.9308 & 8.1962 \\
\bottomrule
\end{tabular}
\caption{Classification performance on a toy demonstration with varying regularizations. The reported test APL is averaged over APLs in each of the five regions.}
\label{table:toy}
\end{table}
Figure~\ref{fig:toy} shows the learned decision boundary with (b) no regularization, (c) L2 regularization, (d) global tree regularization, and (e,f) regional tree regularization. As global regularization is restricted to penalizing all data points evenly, it fails to find a happy medium between being too complex and too simple. In other words, increasing the regularization strength quickly causes the target neural model to collapse from a complex nonlinear decision boundary to a single axis-aligned boundary. As shown in (d), this fails to capture any structure imposed by the five rectangles\footnote{It might be possible to capture the true structure (in a simple domain such as this) with very careful tuning of the hyperparameters in global tree regularization. However, this is difficult to do consistently and regional tree regularization presents a much easier solution.}. Similarly, if we increase the strength of L2 regularization even slightly from (c), the model collapses to the trivial solution of predicting entirely one label. Only regional tree regularization (e,f) is able to model the up-and-down curvature of the true decision function. With high $\lambda$, L$_0$ regional tree regularization produces a more axis-aligned decision boundary than its L$_1$ equivalent, primarily because we can regularize complex regions more harshly without collapsing simpler regions. Knowledge of the region divisions provides a model with prior information about underlying structure in the data; we should expect that, with such information, a regionally regularized model can better prevent itself from over- or underfitting. We train for 500 epochs with a learning rate of 4e-3 and a minibatch size of 32, retraining the surrogate function every epoch (one loop over the full training dataset) and sampling 1000 weights from the convex hull each time. Decision trees were trained with $h=1$. Table~\ref{table:toy} compares metrics between the different regularizations: although regional tree regularization is slightly more complex than global tree regularization, it comes with a large increase in accuracy.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[width=\textwidth]{v2/uci/legend.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/uci/bank.pdf}
\caption{Bank}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/uci/gamma.pdf}
\caption{Gamma}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/uci/adult.pdf}
\caption{Adult}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/uci/wine.pdf}
\caption{Wine}
\end{subfigure}
\caption{(a-d) Comparison of regularizers (L2, global tree, regional tree) on four datasets from the UCI repository. Each subfigure plots the average APL over 5 regions (computed on a held-out test set) against the test F1 score. The ideal model has high accuracy and low APL, i.e. lies toward the upper left of each plot. In each setting, regional tree regularized models are able to find more low APL minima than global explanations and consistently achieve the highest performance at low APL. In contrast, the performance of global tree and L2 regularization quickly decays as the regularization strength increases.}
\label{fig:uci}
\end{figure*}
\section{Application: UC Irvine Prediction Tasks}
Having seen a synthetic dataset, we transition to more realistic machine learning settings. Without loss of generality, we focus on feedforward networks (MLPs); the same ideas of regional explanation using decision trees can be trivially extended to sequential models (like the GRU used above) or convolutional models. For the experiments below, we set the target neural model to a 6-layer MLP whose layers have 128, 128, 128, 64, 64, and $Q$ units respectively, where the final layer contains a node for each output dimension. We use leaky ReLU nonlinearities between layers. Each surrogate remains a very shallow MLP.
\subsection{Evaluation Metrics}
We wish to compare models with global and regional explanations. However, given $\theta\in\Theta$, $\Omega^{\texttt{regional}}(\theta)$ and $\Omega^{\texttt{global}}(\theta)$ are not directly comparable: subtly, the APL of a global tree is often an overestimate for data points in a single region. To reconcile this, for any globally regularized model, we separately compute $\Omega^{\texttt{regional}}(\theta)$ as an evaluation criterion. In this context, $\Omega^{\texttt{regional}}$ is used only for evaluation; it does not appear in the objective nor training. We do the same for baseline models, L2 regularized models, and unregularized models. From this point on, if we refer to average path length (e.g. Test APL, APL, path length) outside of the objective, we are referring to the evaluation metric, $\Omega^{\texttt{regional}}(\theta)$.
\subsection{Datasets}
We apply regional tree regularization to a suite of four popular machine learning datasets from the UC Irvine repository \cite{Dua:2017}.
We briefly provide context for each dataset and show results comparing the effectiveness of the regularization methods. To showcase the wide applicability of regional regularization, we choose a generic method for defining regions: we use $\mathcal{D}$ to fit a $k$-means clustering model with $k=5$. Each example $\mathbf{x}_n \in \mathcal{D}$ is then assigned a region index, $s_n \in \{1,2,3,4,5\}$, and we define $X_r = \{ \mathbf{x}_n | s_n = r \} \subseteq \mathcal{X}^P$, as sketched below.
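In code, this region assignment is a few lines (a sketch, assuming the examples $\mathbf{x}_n$ are stacked in a NumPy array \texttt{X}):
\begin{verbatim}
# Generic region definition: cluster D with k-means (k=5) and use cluster
# membership as the region index s_n.
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=5, random_state=0).fit(X)
regions = kmeans.labels_ + 1                     # s_n in {1, ..., 5}
X_r = {r: X[regions == r] for r in range(1, 6)}  # one subset per region
\end{verbatim}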
\vspace{1.4mm}
\noindent\textbf{Bank Marketing} (Bank): 45,211 rows collected from marketing campaigns for a bank \cite{moro2014data}. $\mathbf{x}_n$ has 17 features describing a recipient of the campaign (age, education, etc). There is one binary output indicating whether the recipient subscribed.
\vspace{1.4mm}
\noindent\textbf{MAGIC Gamma Telescope} (Gamma): 19,020 samples from a simulator of high energy Gamma particles in a Cherenkov telescope. There are 11 input features for afterimages of photon pulses, and one binary output discriminating between signal and background.
\vspace{1.4mm}
\noindent\textbf{Adult Income} (Adult): 48,842 data points with 14 input features (age, sex, etc.), and a binary output indicating if an individual's income exceeds \$50,000 per year \cite{kohavi1996scaling}.
\vspace{1.4mm}
\noindent\textbf{Wine Quality} (Wine): 4,898 examples describing wine from Portugal. Each row has a quality score from 0 to 10 and eleven variables based on physicochemical tests for acidity, sugar, pH, etc. We binarize the target where a positive label indicates a score of at least 5.
\vspace{1.4mm}
In each dataset, the target neural model is trained for 500 epochs with a learning rate of 1e-4 using Adam \cite{kingma2014adam} and a minibatch size of 128. We train under 20 different $\lambda$ between 0.0001 and 10.0. We do not use early stopping, to preserve overfitting effects. We use 250 samples from the convex hull and retrain every 50 gradient steps. We set $C=25$ for Wine and $C=100$ otherwise. Figure~\ref{fig:uci} (a-d) compare L2, global tree, and regional tree regularization with varying strengths. The points plotted show minima from 3 independent runs. We include three baselines: an unregularized model, a decision tree trained on $\mathcal{D}$, and a set of trees with one for each region (we call this a regional decision tree). For the baseline trees, we vary $h$, where a higher $h$ yields a more regularized decision tree.
\subsection{Results}
Some patterns are apparent. First, an unregularized model (black) does poorly due to overfitting to a complex decision boundary, as the neural network is over-parameterized. Second, we find that L2 is \textit{not} a desirable regularizer for simulatability, as it is unable to find many minima in the low APL region (see Gamma, Adult, and Wine under roughly 5 APL). Any increase in regularization strength quickly causes the target neural model to decay to an F1 score of 0, in other words, to one that predicts a single label. We see similar behavior with global tree regularization, suggesting that finding low complexity minima is challenging under global constraints. Third, regional tree regularization achieves the highest test accuracy in all datasets. We find that in the lower APL area, regional explanations surpass global explanations in performance. For example, in Bank, Gamma, Adult, and Wine, we can see this at 3-6, 4-7, 5-8, and 3-4 APL respectively. This suggests, as in the toy example, that it is easier to regularize groups than the entire input space as a whole. In fact, unlike global regularization, models constrained regionally are able to reach a wealth of minima in the low APL area. Lastly, we note that with high regularization strengths, regional tree regularization mostly converges in performance to that of regional decision trees, which is sensible as the neural network prioritizes distillation over performance.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[width=\textwidth]{v2/uci/legend.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/sofa/results0.pdf}
\caption{SOFA: Vaso.}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/sofa/results2.pdf}
\caption{SOFA: Sedation}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/sofa/results3.pdf}
\caption{SOFA: Vent.}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/sofa/results4.pdf}
\caption{SOFA: Renal}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/careunit/results0.pdf}
\caption{Careunit: Vaso.}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/careunit/results2.pdf}
\caption{Careunit: Sedation}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/careunit/results3.pdf}
\caption{Careunit: Vent.}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/careunit/results4.pdf}
\caption{Careunit: Renal}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/sofa/trees/mechvent_low_sofa.pdf}
\caption{Low SOFA: Vent.}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/sofa/trees/mechvent_high_sofa.pdf}
\caption{High SOFA: Vent.}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/sofa/trees/sedation_low_sofa.pdf}
\caption{Low SOFA: Sedation}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/sepsis/sofa/trees/sedation_high_sofa.pdf}
\caption{High SOFA: Sedation}
\end{subfigure}
\caption{Comparison of regularization methods on the Critical Care dataset. Each output represents a form of medication given in the ICU (e.g. vasopressor, sedation, mechanical ventilation, and renal replacement therapy). Each subfigure compares APL and test accuracy. \emph{(a-d)} compute APL based on three regions defined using SOFA scores; \emph{(e-h)} instead compute APL on five regions, one for each careunit (e.g. medical vs. surgical ICU). In each experiment, regional tree regularization finds the best performing models at low complexity. Finally, \emph{(i-l)} show distilled decision trees (split by SOFA) that best approximate a regionally regularized target neural model with a low APL and good test accuracy. As confirmed by a physician in the ICU, the distilled trees are simulable and capture statistical nuances specific to a region.}
\label{fig:sepsis}
\end{figure}
\section{Application: Sepsis (ICU)}
We revisit the Sepsis Critical Care dataset, only this time we apply regional tree regularization and compare to other regularizers, including global tree regularization.
\paragraph{APL for multiple outputs.} Previous datasets had only 1 binary output while Critical Care has 5. Fortunately, the definition of APL generalizes: compute the APL for each output dimension, and take the sum as the measure of complexity. This requires fitting $Q \times R$ trees.
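A sketch of this generalization, where \texttt{fit\_tree\_and\_apl} is an assumed helper that fits a pruned tree to one output column and returns its APL:
\begin{verbatim}
# Complexity for multi-output models: one tree per output dimension within a
# region, summed. Repeating this over all R regions requires Q x R trees.
def regional_apl(X_r, outputs_r, fit_tree_and_apl):
    # outputs_r: array of shape (n_r, Q), one column per binary output
    return sum(fit_tree_and_apl(X_r, outputs_r[:, q])
               for q in range(outputs_r.shape[1]))
\end{verbatim}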
\paragraph{Defining regions.} We explore two methods of defining regions, both suggested by ICU physicians. The first defines three regions by sequential organ failure assessment (SOFA) score, a summary statistic that has historically been used for predicting ICU mortality. Using $\mathcal{D}$, the groups are defined as: more than one standard deviation below the mean, within one standard deviation of the mean, and more than one standard deviation above the mean. Intuitively, each group should encapsulate a very different type of patient. The second method groups patients by their careunit into five regions: MICU (medical), SICU (surgical), TSICU (trauma surgical), CCU (cardiac non-surgical), and CSRU (cardiac surgical). Again, patients who undergo surgery should behave differently than those with less-invasive operations.
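The SOFA grouping is straightforward to express (a sketch; \texttt{sofa} is an assumed per-patient array of scores):
\begin{verbatim}
# Three regions: more than one standard deviation below the mean, within one
# standard deviation of the mean, and more than one standard deviation above.
import numpy as np

mu, sd = sofa.mean(), sofa.std()
region = np.where(sofa < mu - sd, 0,          # low SOFA
         np.where(sofa > mu + sd, 2, 1))      # high SOFA vs. middle band
\end{verbatim}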
\paragraph{Regularization results.} Figure~\ref{fig:sepsis} compares different regularization schemes against baseline models for SOFA regions (a-d) and careunit regions (e-h). Overall, the patterns we discussed in the UCI datasets are consistent in this application. We especially highlight the inability (across the board) of global explanations to find many low complexity solutions. For example, in Figure~\ref{fig:sepsis} (a,c,e), the minima from global constraints stay very close to the unregularized minima. In other cases (f, g), global regularization finds very poor optima: reaching low accuracy with high APL. In contrast, regional regularization consistently finds a good compromise between complexity and performance. In each subfigure, we can point to a span of APL at which the pink curve is much higher than all others. These results are from three runs, each with 20 different strengths.
\paragraph{Distilled decision trees.} A consequence of tree regularization is that every minimum is associated with a set of trained trees. We can extract the trees that best approximate the target neural model and rely on them for explanation. Figure~\ref{fig:sepsis} (i,j) show an example of two trees predicting ventilation, plucked from a low APL, high AUC minimum of a regional tree regularized model. We note that the compositions of the trees are different, suggesting that each captures a decision function biased to a region. Moreover, while Figure~\ref{fig:sepsis} (i) mostly predicts 0, Figure~\ref{fig:sepsis} (j) mostly predicts 1; this agrees with our intuition that SOFA scores are correlated with risk of mortality. Figure~\ref{fig:sepsis} (k,l) show similar findings for sedation. If we were to capture this behavior with a single decision tree, we would either lose granularity or be left with a very large tree.
\paragraph{Feedback from physicians.} We presented a set of 9 distilled trees from regional tree regularized models (1 for each output and SOFA region) to an expert intensivist for interpretation. Broadly, he found the regions beneficial, as they allowed him to connect the model to his cognitive categories of patients---including those unlikely to need interventions. He verified that for predicting ventilation, GCS (mental status) should have been a key factor, and for predicting vasopressor use, the logic supported cases when vasopressors would likely be used versus other interventions (e.g. fluids if urine output is low). He was also able to make requests: for example, he asked if the effect of oxygen could have been a higher branch in the tree to better understand its effects on ventilation choices, and, noticing the similarities between the sedation and ventilation trees, pointed out that they were correlated and suggested defining new regions by both SOFA and ventilation status.\newline
\noindent We highlight that this kind of reasoning about what the model is learning and how it can be improved is very valuable. Very few notions of interpretability in deep models offer the level of granularity and simulatability that regional tree explanations do.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[width=\textwidth]{v2/uci/legend.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/hiv/mortality.pdf}
\caption{Immunity: Mortality}
\label{mortalsubfig}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/hiv/AIDS_onset.pdf}
\caption{Immunity: AIDS Onset}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/hiv/adherence.pdf}
\caption{Immunity: Adherence}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{v2/hiv/therapy_success.pdf}
\caption{Immunity: Viral Suppression }
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{v2/hiv/mortality_highCD4_tree.pdf}
\caption{High Immunity: Mortality}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{v2/hiv/mortality_medCD4_tree.pdf}
\caption{Mid Immunity: Mortality}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{v2/hiv/mortality_lowCD4_tree.pdf}
\caption{Low Immunity: Mortality}
\end{subfigure}
\caption{Comparison of regularization methods on 15 output dimensions of the HIV dataset (4 of which are shown). Each subfigure compares APL and test accuracy. Subfigures (a-d) base the metric on four regions corresponding to the level of immunosuppression (abbreviated to immunity) at baseline (e.g. $<$200 cells/mm$^3$). Subfigures (e-g) show distilled decision trees (split by degrees of immunity) that best approximate a regionally regularized target neural model with a low APL.}
\label{fig:hiv}
\end{figure}
\section{Application: EuResist (HIV)}
We again revisit the HIV dataset to compare global and regional explanations.
\paragraph{Defining regions in HIV.} We define regions based on the advice of medical experts. This is performed using a patient's degree of immunosuppression at baseline (known as CDC staging). These groups are defined as: $< $200 cells/mm$^3$, 200 - 300 cells/mm$^3$, 300 - 500 cells/mm$^3$ and $>$500 cells/mm$^3$ \cite{world2005interim}. This choice of regions should characterize patients based on the initial severity of their infection; the lower the initial cell count, the more severe the infection.
\paragraph{Regularization results.} Figure~\ref{fig:hiv} compares different regularization schemes against baseline models across levels of immunosuppression. Overall, regional tree regularization produces more accurate predictions and provides simpler explanations across all outputs. For the case of predicting patient mortality in Fig~\ref{mortalsubfig}, we tend to find more suitable optima across different patient groupings and can provide better regional explanations for these patients as a result. Here, we observe that patients with lower levels of immunosuppression tend to have lower risk of mortality. We also observe that patients with lower immunity at baseline are more likely to progress to AIDS. Similar inferences can be made for the other outputs. In each subfigure, we reiterate that there is a span of APL at which the pink curve is much higher than all others.
\paragraph{Distilled decision trees.} We extract decision trees that approximate the target model at multiple minima and use these as explanations. Fig~\ref{fig:hiv} (e-g) show three trees taken from low APL, high AUC minima of a regional tree regularized model. Again, the trees look significantly different, each reflecting the decision function in a particular region. In particular, we observe that lower levels of immunity at baseline are associated with higher viral loads (lower viral suppression) and higher risk of mortality.
\paragraph{Feedback from physicians.} The trees were shown to a physician specializing in HIV treatment. He was able to simulate the model's logic, and confirmed our observations about relationships between viral loads and mortality. In addition, he noted that when patients have lower baseline immunity, the trees for mortality contain several more drugs. This is consistent with medical knowledge, since patients with lower immunity tend to have more severe infections, and require more aggressive therapies to combat drug resistance.
\section{Analysis for Regional Tree Regularization}
We now summarize a few important outcomes from the regional experiments:
\paragraph{The most effective minima are found in the low APL, high AUC regime.} The ideal model is one that is highly performant and simulable. This translates to high F1/AUC scores at low-to-medium APL: too large an APL would be hard for an expert to understand, while too small an APL would be too restrictive, leaving no benefit from using a deep model. Across all experiments, we see that L$_0$ regional regularization is most adept at finding low APL and high AUC minima.
\paragraph{Global and local regularization are two extreme forms of regional regularization.}
If $R=1$, the full training dataset is contained in a single region, enforcing global explainability. If $R=N$, then every data point $\mathbf{x}_n \in \mathcal{D}$ has its own region i.e. local explainability.
\begin{table}[h!]
\centering
\begin{tabular}{c c c c c c c}
\toprule
& Bank & Gamma & Adult & Wine & Crit. Care & HIV \\
\midrule
Fidelity & 0.892 & 0.881 & 0.910 & 0.876 & 0.900 & 0.897 \\
\bottomrule
\end{tabular}
\caption{Fidelity is the percentage of examples on which the prediction made by a tree agrees with the deep model \cite{craven1996extracting}. }
\label{table:fidelity}
\end{table}
\paragraph{Regularized deep models outperform trees.} Comparing regional tree-regularized models and regional decision trees, the former reach much higher AUC at equal APL.
\paragraph{Regional tree regularization produces regionally faithful decision trees.} Table~\ref{table:fidelity} shows the fidelity of a deep model to its distilled tree. A score of 1.0 indicates that both models learned the same decision function. With a fidelity of around 89\%, the regularized model is ``simple'' in most cases, but can take advantage of deep nonlinearity on difficult examples.
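For reference, the fidelity numbers in Table~\ref{table:fidelity} correspond to a computation of roughly this form (a sketch; \texttt{deep\_predict} stands in for a forward pass of the deep model, and the 0.5 threshold is an assumption for binary outputs):
\begin{verbatim}
# Fidelity: fraction of examples on which the distilled tree agrees with the
# deep model's predicted label.
import numpy as np

def fidelity(tree, deep_predict, X_test):
    deep_labels = (deep_predict(X_test) > 0.5).astype(int)
    return np.mean(tree.predict(X_test) == deep_labels)
\end{verbatim}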
\paragraph{Regional tree regularization is not computationally expensive.}
Over 100 trials on Sepsis, an L2 model takes $2.393 \pm 0.258$ sec.\ per epoch; a global tree model takes $5.903 \pm 0.452$ sec.\ per epoch plus $21.422\pm0.619$ sec.\ to (1) draw 1000 samples from the convex hull, (2) compute the APL for $\mathcal{D}^\theta$, and (3) train a surrogate model for 100 epochs; a regional tree model takes $6.603\pm0.271$ sec.\ per epoch plus $39.878\pm0.512$ sec.\ for (1), (2), and training 5 surrogates. The increase in base cost is due to the extra forward pass through $R$ surrogate models to predict APL. The surrogate costs are customizable depending on the size of $\mathcal{D}^\theta$, the number of training epochs, and the frequency of re-training. If $R$ is large, we need not re-train every surrogate each time; the choice of which regions to prioritize can be treated as a bandit problem.
\paragraph{Distilled decision trees are interpretable by domain experts.} We asked physicians in Critical Care and HIV to analyze the distilled decision trees from regional regularization. They were able to quickly understand the learned decision function per region, suggest improvements, and verify the logic.
\paragraph{Optimizing surrogates is much faster and more stable than gradient-free methods.} We tried alternative optimization methods that do not require differentiating through training a decision tree: (1) estimating gradients by perturbing inputs, and (2) search algorithms like Nelder-Mead. However, we found these methods to be either unreasonably expensive or easily stuck in local minima, depending on initialization.
\paragraph{Sparsity over regions is important.} We experimented with different ``dense" norms: L$_1$, L$_2$, and a softmax approximation to L$_0$, all of which faced issues where regions with simpler decision boundaries a priori were over-regularized to trivial decision functions. Only with L$_0$ (i.e. \texttt{sparsemax}) did we avoid this problem. As a consequence, in toy examples, we observe that \texttt{sparsemax} finds minima with more axis-aligned boundaries. In real world studies, we find \texttt{sparsemax} to lead to better performance in low/mid APL regimes.
\section{Conclusion}
Interpretability is a bottleneck preventing widespread acceptance of deep learning. We have introduced a family of novel tree-regularization techniques that encourages the complex decision boundaries of any differentiable model to be well-approximated by human-simulable
functions, allowing domain experts to quickly understand and approximately compute what the model is doing. Overall, our training procedure is robust and efficient. Across three complex, real-world domains (HIV treatment, sepsis treatment, and human speech processing) our tree-regularized models provide gains in prediction accuracy in the regime of simpler, human-simulatable
models. Finally, we showed how to extend tree regularization to more region-specific approximations of a loss, where experts can add prior knowledge about the structure of their domain. More broadly, our general training procedure could apply tree-regularization or other procedure-regularization to a wide class of popular models, helping us move beyond sparsity toward models humans can easily simulate and thus trust.
\section{\large{Author Bios}}\label{sec:bio}
The authors have extensive experience in the field of web-scale search and recommendation systems, and in particular, in applying data mining, machine learning, and information retrieval techniques in the talent search domain. They have built and deployed multiple generations of machine learning models and systems for real-time, low latency applications such as talent search and recommendations at LinkedIn. They have published extensively in venues such as SIGIR, KDD, WWW, WSDM, and CIKM, and also presented tutorial/industry talks about their work.
Sahin Cem Geyik is part of the AI team at LinkedIn, focusing on personalized recommendations across several LinkedIn Talent Solutions products. He received his Ph.D. degree in Computer Science from Rensselaer Polytechnic Institute in 2012, and has authored papers in top-tier conferences and journals such as KDD, INFOCOM, IEEE TMC, and IEEE TSC.
Qi Guo is part of the AI team at LinkedIn, where he applies machine learning for LinkedIn Talent Solutions products. He received his M.S. degree in Robotics from Carnegie Mellon University in 2016. He has published at IJCAI.
Bo Hu is part of the AI team at LinkedIn, where he works on relevance for LinkedIn Talent Solutions products. He received his Ph.D. degree in Computer Science from Simon Fraser University in 2014, and has authored papers in top-tier conferences and journals such as RecSys, ICDM, IEEE TKDE, and ACM TOIS.
Cagri Ozcaglar is part of the AI team at LinkedIn, where he works on relevance for LinkedIn Talent Solutions products. He received his Ph.D. degree in Computer Science from Rensselaer Polytechnic Institute in 2012, and has authored papers in top-tier conferences and journals such as IEEE BIBM, IEEE TNBS, BMC Genomics, and Mathematical Biosciences.
Ketan Thakkar is a relevance engineer at LinkedIn Talent Solutions and works on improving search relevance on LinkedIn search stack. Previously, he was part of Microsoft's Bing search relevance team working on improving relevance for Bing search and ads. He received his M.S. degree in Information Technology from Bentley University in 2010.
Xianren Wu is part of the AI team at LinkedIn, leading the candidate recommendation relevance efforts within LinkedIn Talent Solutions. He previously co-founded and was the director of R\&D for GageIn Inc. He received his Ph.D. degree in Electrical Engineering from U.C. Santa Cruz in 2008, and has authored papers in top-tier conferences such as WWW and CIKM.
Krishnaram Kenthapadi is part of the AI team at LinkedIn, where he leads the fairness and privacy modeling efforts across different LinkedIn applications. Previously, he was a Researcher at Microsoft Research Silicon Valley. He received his Ph.D. degree in Computer Science from Stanford University in 2006. He has published 35+ papers, filed 125+ patents, and received the CIKM best case studies paper award, the SODA best student paper award, and the WWW best paper award nomination.
{\em About LinkedIn}: Founded in 2003, LinkedIn connects the world's professionals to make them more productive and successful. LinkedIn operates the world's largest professional network on the Internet with more than 500 million members in over 200 countries and territories. The company has a diversified business model with revenues coming from talent solutions, marketing solutions, and premium subscription products. See \url{https://press.linkedin.com/about} for more information.
\section{\large{Talent Search and Recommendation: Practical Challenges}}\label{sec:talentsearch}
\let\thefootnote\relax\footnote{\\ ~ \\ ~ \large \textbf{This paper has been accepted for publication at ACM SIGIR 2018.}}The LinkedIn Talent Solutions business contributes around 65\% of LinkedIn's annual revenue, and provides tools for job providers to reach out to potential candidates and for job seekers to find suitable career opportunities. LinkedIn's job ecosystem has been designed as a platform to connect job providers and job seekers, and to serve as a marketplace for efficient matching between potential candidates and job openings. A key mechanism to help achieve these goals is the \emph{LinkedIn Recruiter} product, which enables recruiters to search for relevant candidates and obtain candidate recommendations for their job postings.
We highlight a few unique information retrieval, system, and modeling challenges associated with talent search and recommendation systems:
\begin{enumerate}
\item The underlying query to the talent search system could be quite complex, combining several structured fields (such as canonical title(s), canonical skill(s), company name) and unstructured fields (such as free-text keywords). Depending on the application, the query could either consist of an explicitly entered query text and selected facets (talent search), or be implicit in the form of a job opening, or ideal candidate(s) for a job (talent recommendations). Our goal is to determine a ranked list of most relevant candidates in real-time among hundreds of millions of structured candidate profiles. Consequently, robust standardization, efficient indexing, candidate selection, and multi-pass scoring/ranking systems are essential \cite{galene_engine, thucS16ltr}.
\item Unlike traditional search and recommendation systems which solely focus on estimating how relevant an item is for a given query, the talent search domain requires mutual interest between the recruiter and the candidate in the context of the job opportunity. In other words, we require not just that a candidate shown must be relevant to the recruiter's query, but also that the candidate contacted by the recruiter must show interest in the job opportunity. Hence, it is crucial to use appropriate metrics (e.g., the likelihood of a candidate receiving an inMail (message) from the recruiter and also answering with a positive response) for model optimization as well as for online A/B testing, taking into account the fact that certain ideal metrics (e.g., the likelihood of a candidate receiving a job offer and accepting it) may either be unavailable or delayed \cite{Ramanath_2018, thuc15pes}.
\item Quite often, the recruiter or the hiring manager may not be able to express their hiring needs in the form of a search query (or even a job posting), since this often requires deep domain knowledge, as well as significant time and manual effort to come up with the best search criteria (e.g., which skills are relevant for a specific role that the recruiter is looking to fill). To address this challenge, it is desirable to support search based on ideal candidate(s) \cite{Thuc16}, and online learning of recruiter preferences within a search session based on their instantaneous response to recommended candidates \cite{Geyik_2018}.
\end{enumerate}
In this talk, we will present how we formulated and addressed the above problems, the overall system design and architecture, the challenges encountered in practice, and the lessons learned from the production deployment of these systems at LinkedIn. By presenting our experiences of applying techniques at the intersection of recommender systems, information retrieval, machine learning, and statistical modeling in a large-scale industrial setting and highlighting the open problems, we hope to stimulate further research and collaborations within the SIGIR community.
\section{\large{Overview of Talent Search and Recommendation Systems Developed and Deployed at LinkedIn}}
We next briefly describe the overall architecture of LinkedIn's talent search and recommendation engine, highlighting the key components (Figure~\ref{fig:RecruiterSearchInfra}). Our system can be subdivided into an online system for serving most relevant candidate results and an offline workflow for updating different machine learned models (described in greater detail in Figure~\ref{fig:recruiter-search-offline-pipeline}). Our presentation covers the architecture choices, modeling design decisions, and the practical lessons learned.
\begin{figure} [!h]
\centering
\includegraphics[width=3.3in]{recruiter_architecture_diagram.pdf}
\caption{Architecture of LinkedIn's talent search and recommendation engine.}
\label{fig:RecruiterSearchInfra}
\end{figure}
{\em Online system architecture}: First, the recruiter's search request (either as explicitly entered query, or implicit in the form of job opening / ideal candidate(s)), along with the recruiter and session context, is transformed into a complex query combining structured fields (e.g., canonical title(s) / skill(s), company name, region) and unstructured text keywords, and issued to LinkedIn's {\em Galene} search engine \cite{galene_engine}. A candidate set of results is then retrieved from the search index based on the criteria specified, and then ranked in multiple passes using machine learned scoring models of varying complexity \cite{thucS16ltr, Ramanath_2018, thuc15pes, Thuc16, Geyik_2018}. The search result set, along with the features used by the ranking model, are logged for later use in model training. Finally, the front-end server gets the top ranked candidates, renders the result page, and logs recruiter interactions. The underlying search index is updated in near real-time to reflect changes in LinkedIn member data.
{\em Offline modeling pipeline}: Our offline system periodically trains the ranking models using recruiter usage logs \cite{thucS16ltr, Ramanath_2018, thuc15pes, Thuc16, Geyik_2018}. The training data is generated from recruiter interactions (and candidate responses to recruiter messages) over the search results displayed. As the member data can change over time, we also log computed features along with search results, instead of generating the features during model training. The offline modeling pipeline is designed to support ease of feature engineering, incorporation of different types of machine learning models, and experimentation agility.
\begin{figure} [!h]
\centering
\includegraphics[width=3.3in]{recruiter-search-offline-pipeline.png}
\caption{Offline modeling pipeline for LinkedIn's talent search and recommendation engine.}
\label{fig:recruiter-search-offline-pipeline}
\end{figure}
\section{Introduction}
\label{INTRODUCTION}
The usual approach to extra dimensions in string theory is to consider a direct product,
\eqn{Direct}{
ds_{10}^2 = -dt^2 + d\vec{x}^2 + d\tilde{s}_6^2 \,,
}
where $d\tilde{s}_6^2$ is, for example, the metric of a Calabi-Yau three-fold, as in \cite{Candelas:1985en}. This metric has the symmetries of Minkowski space, ${\bf R}^{3,1}$: translation invariance in the $t$ and $\vec{x} = (x^1,x^2,x^3)$ directions, and also rotation and boost invariance.
A large literature has grown up on the study of ``warped compactifications'' of string theory: for a review, see \cite{Douglas:2006es}. A common way to write the metric ansatz is
\eqn{Warped}{
ds_{10}^2 = e^{2A(y)} \left[ -dt^2 + d\vec{x}^2 \right] +
e^{-2A(y)} d\tilde{s}_6^2 \,,
}
where $y$ denotes, collectively, the coordinates involved in the metric $d\tilde{s}_6^2$. This is usually thought to be the most general ansatz consistent with the symmetries of ${\bf R}^{3,1}$.
It is less than straightforward to construct solutions of string theory of the form \eno{Warped} where the extra-dimensional manifold is compact, because of a constraint from the strong energy condition \cite{Gibbons:1984kp,Maldacena:2000mw}:\footnote{If we define $\tilde{T}_{MN} = T_{MN} - {1 \over d-2} g_{MN} T^L_L$ in a $d$-dimensional theory, then the strong energy condition says that $\tilde{T}_{MN} \xi^M \xi^N \geq 0$ when $\xi^M$ is timelike or null. Assuming the Einstein equations, $R_{MN} = \kappa_d^2 \tilde{T}_{MN}$, hold, this means that $R_{MN} \xi^M \xi^N \geq 0$. In particular, $R^{00} \geq 0$ when the metric is diagonal.}
\eqn{StrongCondition}{
R^{00} = \tilde\square A \geq 0 \,,
}
where $\tilde\square$ is the laplacian built from the metric $d\tilde{s}_6^2$. The problem is that the integral of $\tilde\square A$ over the compact manifold vanishes, which can only happen if the inequality is saturated everywhere, meaning that the function is harmonic. And a harmonic function on a compact manifold must be constant. A resolution is to use orientifold planes: for example, O3-planes, as in \cite{Verlinde:1999fy}. Near an O3-plane, the geometry is ill-defined: formally, $e^{-4A}$ becomes negative.
One could consider a more general ansatz:
\eqn{TimeWarp}{
ds_{10}^2 = e^{2A(y)} \left[ -h(y) dt^2 + d\vec{x}^2 \right] + ds_6^2 \,.
}
This has all the same symmetries as before, except for boost invariance. Boost invariance is quite well established experimentally (for a review, see for example \cite{Mattingly:2005re}), so one might dismiss the ansatz \eno{TimeWarp} as obviously unacceptable. But suppose, for some reason, we are constrained to live at a particular value of $y$, call it $y_*$; or perhaps we are restricted to some narrow range of values of $y$ close to $y_*$ where $h$ is nearly constant. Then we would perceive the world to be boost-invariant (or nearly so), with a speed of light $c = \sqrt{h(y_*)}$. If $h(y) > h(y_*)$ away from $y_*$, then a particle that can propagate into the extra dimensions can appear to move superluminally from the point of view of an observer at $y_*$, in the sense that in a coordinate time $\Delta t$, the particle could propagate significantly further than $\sqrt{h(y_*)} \Delta t$ without violating causality in the extra-dimensional geometry.
The ideas in the previous paragraph have a long history, which I will trace only partially here. Metrics of approximately the form \eno{TimeWarp} were discussed as early as \cite{Kaelbermann:1998hu}, with a qualitative hint arising even in \cite{Rubakov:1983bb}; and the special case $A=0$ was treated in \cite{Visser:1985qm}. Time dependent versions were studied in \cite{Chung:1999xg,Youm:2001sw} in an effort to use a variable speed of light to solve cosmological problems without inflation, along the lines of \cite{Moffat:1992ud,Albrecht:1998ir}. Further work in the direction of a variable speed of light has been reviewed in \cite{Magueijo:2003gj}, and the extensive topic of brane world cosmology has been reviewed in \cite{Langlois:2002bb}. Five-dimensional metrics similar to \eno{TimeWarp} were termed ``asymmetrically warped spacetimes'' in \cite{Csaki:2000dm}, where, in the spirit of earlier work \cite{Randall:1999vf,Kiritsis:1999tx,Alexander:1999cb,Bowcock:2000cq}, the examples of five-dimensional AdS-Schwarzschild and Reissner-Nordstr\"om-AdS were discussed. It was noted in \cite{Csaki:2000dm} that one needs $w < -1$ on the Planck brane. The need for a violation of the null energy condition was later demonstrated more generally \cite{Cline:2001yt}, using an argument which I extend in this paper. Special cases of \eno{TimeWarp} where $A=0$ were discussed in \cite{Dubovsky:2001fj}, and also in \cite{Deffayet:2001aw} as part of an approach to the cosmological constant problem using the model of \cite{Dvali:2000hr}. Some general constraints on asymmetrically warped string theory constructions were considered in \cite{Frey:2003jq}. More explicit string theory constructions have been considered, for example in \cite{Ganor:2006ub}.
Returning to the ansatz \eno{TimeWarp}: It's difficult to arrange for non-constant $h(y)$ over a compact extra-dimensional manifold because of a constraint arising from the null energy condition:\footnote{The null energy condition says $T_{MN} \xi^M \xi^N \geq 0$ for all null vectors $\xi^M$. Assuming the Einstein equations hold, this means $R_{MN} \xi^M \xi^N \geq 0$. In particular, $-R^0_0+R^1_1 \geq 0$ when the metric is diagonal.}
\eqn{NullCondition}{
4h^2 e^{-2A} (-R^0_0 + R^1_1) =
-3\tilde{g}^{mn} \partial_m h \partial_n h +
\tilde\square (h^2) \geq 0 \,.
}
Integrating over the compact manifold (and supposing $e^{-2A}$ is for some reason well-defined everywhere), one would be forced by the inequality to conclude that $h$ is constant. This argument is in the same spirit as \cite{Cline:2001yt}, but it generalizes more easily to (almost) any dimension, as we will see in section~\ref{BULK}. In contrast to the situation described by \eno{StrongCondition}, I do not know an explicit string theory construction that would evade the no-go argument based on \eno{NullCondition}. However, one may temporarily ignore this argument by considering non-compact extra dimensions. This amounts to turning off gravity, because the wave-function of the four-dimensional graviton is non-normalizable in the extra dimensions.
The simplest sort of non-compact asymmetrically warped geometry is just an extra-dimensional black brane. For example, the near-extremal D3-brane has a metric of the form \eno{TimeWarp} with
\eqn{Dthree}{
d\tilde{s}_6^2 = e^{-2A(y)} \left(
{dy^2 \over h(y)} + y^2 d\Omega_5^2 \right) \,,
}
where now $y$ is a single real variable, $d\Omega_5^2$ is the metric on a unit $S^5$, and
\eqn{FoundH}{
e^{-4A(y)} = 1 + {L^4 \over y^4} \qquad\qquad
h(y) = 1 - {y_0^4 \over y^4} \,.
}
A difficulty is that the existence of a regular horizon at $y=y_0$ is associated with a finite Hawking temperature,
\eqn{HawkingT}{
T = {1 \over \pi y_0} \left( 1 + {L^4 \over y_0^4} \right)^{-1/2}
\,.
}
Following \cite{Kiritsis:1999tx}, we might imagine our world as a brane at some fixed value of $y$ in the geometry \eno{Dthree}. Certainly, such a construction would lead to an observed speed of light which is slower than what can be attained far from the D3-branes. But the finite temperature \eno{HawkingT} makes the construction seem less interesting, because it means we are not describing the ground state. Likewise, as emphasized in \cite{Creminelli:2001tc}, the $AdS_5$-Schwarzschild and $AdS_5$-Reissner-Nordstr\"om geometries considered in \cite{Csaki:2000dm} do not describe ground states, but instead finite temperature states of a strongly coupled conformal theory. An exception is the extremal $AdS_5$-Reissner-Nordstr\"om solution, which has zero temperature; but it still has a macroscopic Bekenstein-Hawking entropy, meaning that it doesn't describe a single physical state, but instead a large ensemble of states. This may not be fatal, but entropic solutions do not seem to me the best of starting points when seeking to describe the vacuum.
I wish to consider in this paper a subclass of asymmetrically warped solutions which, by requirement, have no temperature and no entropy. I will refer to such backgrounds as ``time warps.'' A simple way to construct one is to cut off a black brane background above the horizon, as was indeed considered in \cite{Csaki:2000dm,Cline:2003xy}. I will instead work with five-dimensional variants of the geometries found in \cite{Gubser:2008wz} in asymptotically $AdS_4$ geometries, using the abelian Higgs model coupled to gravity. This model, first considered in \cite{Gubser:2008px}, has the following action:
\eqn{Action}{
S_{\rm bulk} = {1 \over 16\pi G_{D+1}} \int d^{D+1} x \, \sqrt{g}
{\cal L}_{\rm bulk} \,,
}
where
\eqn{Lagrangian}{
{\cal L}_{\rm bulk} = R - {1 \over 4} F_{\mu\nu}^2 -
|(\partial_\mu - i q A_\mu) \psi|^2 - V(|\psi|) \,.
}
For appropriate choices of $V(|\psi|)$ and $q$, the classical equations of motion following from \eno{Action} admit superconducting black hole solutions \cite{Gubser:2008px,Hartnoll:2008vx,Hartnoll:2008kx,Gubser:2008pf}, which spontaneously break the $U(1)$ gauge symmetry in the bulk. See \cite{Gubser:2005ih} for earlier work on superconducting black holes; \cite{Hartnoll:2007ih,Hartnoll:2007ip} for earlier work on the possible relation between black holes in $AdS_4$ and phases of superconducting materials; \cite{Basu:2008st,Herzog:2008he}, among others, for discussion of variants on this type of solution and dual superfluids; and \cite{Sachdev:2008ba} for an overview and further references.
The focus on four-dimensional anti-de Sitter space in the superconducting black hole literature has been driven by the interest in strongly coupled $2+1$-dimensional conformal field theories, which has been driven in turn by the hope of explaining phenomena in thin-film or layered superconductors in terms of quantum critical points.\footnote{An exception is \cite{Horowitz:2008bn}, which deals with five-dimensional geometries. Also there have been studies of superconducting black holes based on the Einstein-Yang-Mills lagrangian in four \cite{Gubser:2008zu,Gubser:2008wv,Roberts:2008ns} and higher \cite{Manvelyan:2008sv} dimensions.} In this paper I will focus instead on five-dimensional geometries, because they provide a minimal example of non-compact time warp geometries that include a copy of ${\bf R}^{3,1}$. By minimal, I mean that there is only one extra dimension, and the field content appears to be the minimal one that can support a time warp. I will find the geometries using the same strategy as in \cite{Gubser:2008wz}. And as in \cite{Gubser:2008wz}, the time warp geometries I find are domain walls with conformal invariance in the ultraviolet (UV) and infrared (IR) regions of the bulk, but different speeds of light as measured by a fixed coordinate system on the boundary. One therefore expects that correlation functions interpolate between a conformally invariant form in the ultraviolet, and a different conformally invariant form---characterized by a different speed of light---in the infrared.
The rest of this paper is organized as follows. In section~\ref{BULK} I exhibit the time warp geometry which I focus on. In section~\ref{CORRELATORS} I calculate the spectral measure of two-point correlators for a massive scalar propagating in the time warp geometry. I relate this spectral measure to unparticle phase space and explain how time warp effects could manifest themselves in the decay of a heavy particle into a light visible particle plus unparticle stuff. An appendix contains some additional technical detail on the computation of two-point functions. In section~\ref{BRANE}, I consider what it would take to ``compactify'' the asymptotically $AdS_5$ time warp geometries using a Planck brane construction.\footnote{``Compactify'' is a slight misnomer here because after introducing the Planck brane, the geometry is still non-compact in the infrared, as in \cite{Randall:1999vf}. It is like a true compactification in that there is a finite five-dimensional volume below any finite four-volume element on the Planck brane.} As one might anticipate based on the no-go argument discussed around \eno{NullCondition}, I am forced to entertain a theory on the brane which violates the null energy condition. In section~\ref{GRAVITON}, I show that it is possible to obtain a four-dimensional infrared-massless graviton by including a wrong-sign Einstein-Hilbert term in the action of the Planck brane. Altogether, it must be admitted that the Planck brane construction is peculiar.
\section{Solutions in the bulk}
\label{BULK}
I will construct a solution to the classical equations of motion of the Abelian Higgs model in $AdS_5$, starting with the ansatz
\eqn{AdSansatz}{
ds_5^2 = e^{2A(r)} \left[ -h(r) dt^2 + d\vec{x}^2 \right] +
e^{2B(r)} dr^2 \,.
}
$B(r)$ parametrizes the gauge freedom of choosing different radial variables. Let us pass to the gauge $B=-{1 \over 2} \log h$, where the equations of motion and constraints are simplest to state. Rotational invariance in the $\vec{x}$ directions forces $A_i=0$ for $i=1,2,3$. Let's also assume $A_r=0$ (which is a gauge choice) and use $\Phi$ to denote $A_0$. Finally, let's restrict attention to solutions where $\psi$ is real. The equations of motion are
\begin{subequations}\label{eomsZero}
\begin{eqnarray}
A'' &=& -{1 \over 3} |\psi'|^2 -
{e^{-2A} \over 3h^2} q^2 \Phi^2 |\psi|^2 \label{eomsZeroA} \\
h'' + 4 A' h' &=& e^{-2A} \left( \Phi'^2 +
{2 \over h} q^2 \Phi^2 |\psi|^2 \right) \label{eomsZeroB} \\
\Phi'' + 2 A' \Phi' &=& {2 \over h} q^2 \Phi |\psi|^2 \label{eomsZeroC} \\
\psi'' + \left( 4A' + {h' \over h} \right) \psi' &=&
-{e^{-2A} \over h^2} q^2 \Phi^2 \psi +
{1 \over h} {\partial V \over \partial\psi^*} \,. \label{eomsZeroD}
\end{eqnarray}
\end{subequations}
The constraint coming from the $G_{rr}$ Einstein equation is
\eqn{constraint}{
24 h^2 A'^2 + 6 hh' A' + e^{-2A} h\Phi'^2 - 2h^2 |\psi'|^2 -
2e^{-2A} q^2 \Phi^2 |\psi|^2 + 2 h V = 0 \,.
}
An additional first-order equation can be extracted by noticing that the quantity
\eqn{NoetherCharge}{
Q = e^{4A} (h' - e^{-2A} \Phi \Phi')
}
is a constant if the equations \eno{eomsZero} and \eno{constraint} are obeyed. Assuming that the infrared geometry is asymptotically $AdS_5$ amounts to assuming that $Q=0$. This is because both $h'$ and $\Phi'$ must go to zero in the infrared, while $h$ remains non-zero and $\Phi$ is bounded. Solutions with a regular horizon typically have $Q \neq 0$. So $Q=0$ is a sort of extremality condition.\footnote{The existence of the conserved charge \eno{NoetherCharge} and its relevance to extremal solutions were pointed out to me by A.~Nellore. In a more general gauge where $B$ is left as an unspecified function of $r$ but $A_r$ is still constrained to vanish,
\eqn{NoetherChargeGeneral}{
Q = {e^{4A-B} \over \sqrt{h}} (h' - e^{-2A} \Phi \Phi') \,.
}
It may seem puzzling that $Q$ isn't invariant under shifts of $\Phi$ by a constant, given that such shifts preserve the more general gauge just described. However, after such a shift, the complex scalar $\psi$ would have a time-dependent phase. Requiring $\psi$ to be time-independent thus disallows shifts of $\Phi$ by a constant---unless $\psi=0$. If in fact $\psi=0$, then there is an additional conserved quantity, ${e^{2A-B} \over \sqrt{h}} \Phi'$. Shifting $\Phi$ by an additive constant causes $Q$ to change by some multiple of this quantity.}
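As a check, conservation of $Q$ follows directly from the equations of motion: differentiating \eno{NoetherCharge} gives
\eqn{Qprime}{
 Q' = e^{4A} \left[ h'' + 4A'h' - e^{-2A} \left( \Phi\Phi'' +
  2A' \Phi\Phi' + \Phi'^2 \right) \right] \,,
}
which vanishes upon substituting \eno{eomsZeroB} and \eno{eomsZeroC}.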
Already from \eno{eomsZeroA} and \eno{eomsZeroB} one can see the no-go theorem of \cite{Cline:2001yt} at work, as well as an application of the $c$-theorem argument of \cite{Freedman:1999gp}. In brief: the left-hand side of \eno{eomsZeroA} is proportional to $R^0_0 - R^r_r$, so the right-hand side has to be negative according to the null energy condition. Thus $A$ is superharmonic as a function of $r$, which means it can't have a minimum unless one introduces an additional matter source. The left-hand side of \eno{eomsZeroB} is proportional to $R^1_1 - R^0_0$, so the right-hand side has to be positive, again according to the null energy condition. Thus $h$ is subharmonic with respect to the line element $d\tilde{s}^2 = e^{-8A} dr^2$ in the $r$ direction, indicating that it cannot have a maximum without some additional matter source as long as $A$ is well defined. As argued in \cite{Gubser:2008wz}, \eno{eomsZeroA} and \eno{eomsZeroB} also imply that for backgrounds which are asymptotically anti-de Sitter, the effective speed of light $\sqrt{h(r)}$ increases from the infrared to the ultraviolet.\footnote{There appears to be some tension between \eno{StrongCondition}, which indicates that $A$ is subharmonic in a ten-dimensional compactification, and \eno{eomsZeroA}, which shows that it is superharmonic in five. To understand the situation more comprehensively, consider a $D$-dimensional ansatz
\eqn{DDimension}{
ds_D^2 = e^{2A} (-h dt^2 + d\vec{x}^2) + e^{2B} d\tilde{s}_{D-4}^2
\,,
}
where $A$, $B$, and $h$ depend only on the $D-4$ coordinates of $d\tilde{s}^2_{D-4}$. Setting $h=1$, $D \neq 6$, and $B = {4 \over 6-D} A$, one finds
\eqn{StrongD}{
-e^{2B} R^0_0 = \tilde\square A \,,
}
and the strong energy condition says this should be positive. Both the strong energy condition and the null energy condition are satisfied by the stress tensor following from \eno{Lagrangian}. Indeed, if $h=1$ and $D=5$, $A$ is superharmonic with respect to the metric $dr^2$ and subharmonic with respect to the metric $e^{8A} dr^2$.
Using the same ansatz \eno{DDimension}, assuming $D \neq 6$, and setting $B = {4 \over 6-D} A$, one finds that
\eqn{NullAgain}{
4h^2 e^{2B} (-R^0_0 + R^1_1) = -3 \tilde{g}^{mn} \partial_m h
\partial_n h + \tilde\square (h^2) \,.
}
So the argument around \eno{NullCondition} survives essentially unchanged for dimensions $D \neq 6$.
It is notable that both lines of argument discussed in this footnote have no force for $D=6$. It would be interesting to consider six-dimensional time warps in more detail.}
Unfortunately, I don't know a choice of the scalar potential that leads to analytically tractable equations of motion, even in the presence of the extremality constraint $Q=0$. So I will solve them numerically for particular choices of parameters. Specifically, I will choose
\eqn{VChoice}{
V = -{12 \over L^2} + m^2 |\psi|^2 + {u \over 2} |\psi|^4 \,,
}
with $m^2 < 0$ and $u>0$. With this choice, the infrared geometry is itself anti-de Sitter, but with a different radius, $L_{\rm IR}$, determined through the equation $V(\psi_{\rm IR}) = -12/L_{\rm IR}^2$, where $\psi_{\rm IR} = \sqrt{-m^2/u}$ is the $U(1)$ symmetry-breaking extremum of the potential. The infrared copy of $AdS_5$ signals emergent conformal symmetry: emergent in the sense that it arises only in the infrared limit of the dual field theory. It was speculated in \cite{Gubser:2008wz} that scalar potentials which have no minima lead to emergent Lorentz symmetry in the infrared, rather than emergent conformal symmetry. Evidence in favor of this conjecture has appeared in \cite{Gubser:2008pf}.
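Since $V(\psi_{\rm IR}) = -{12 \over L^2} - {m^4 \over 2u}$ at the symmetry-breaking extremum, the infrared radius can be written in closed form as
\eqn{LIRclosed}{
 {1 \over L_{\rm IR}^2} = {1 \over L^2} + {m^4 \over 24\, u} \,;
}
for the parameters \eno{ExampleParameters} used below, this gives $L_{\rm IR}/L = \sqrt{24/25} \approx 0.98$.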
For numerical work, I found that the most convenient gauge is $B=0$, instead of the gauge $B = -{1 \over 2} \log h$ that I used in \eno{eomsZero}-\eno{NoetherCharge}. In the $B=0$ gauge, $A''$ does not have a definite sign. However, $A' = 1/L$ in the UV and $A' = 1/L_{\rm IR}$ in the IR\@.
\begin{figure}
\centerline{\includegraphics[width=4.5in]{example.eps}}
\caption{(Color online.) A time warp geometry for the choice of parameters~\eno{ExampleParameters}, in the gauge $B=0$. Solid colored curves are from numerics; dashed black curves are infrared asymptotics; and dotted black curves are ultraviolet asymptotics.}\label{EXAMPLE}
\end{figure}
The ultraviolet dimension of the operator dual to $\psi$ is
\eqn{DualDim}{
\Delta_\psi^{\rm UV} = 2 + \sqrt{4 + m^2 L^2} \,.
}
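For the value $m^2 L^2 = -2$ chosen in \eno{ExampleParameters} below, \eno{DualDim} gives $\Delta_\psi^{\rm UV} = 2 + \sqrt{2} \approx 3.41$.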
Provided $-4 \leq m^2 L^2$, the asymptotically $AdS_5$ geometry is stable \cite{Breitenlohner:1982bm,Breitenlohner:1982jf}. If also $m^2 L^2 \leq 0$, then one does not have to stipulate boundary conditions on $\psi$ at the conformal boundary. However, one may do so, and in studying solutions to \eno{eomsZero}, I generally did: I required $\psi \propto e^{-\Delta_\psi^{\rm UV} A}$ rather than $\psi \propto e^{(\Delta_\psi^{\rm UV}-4)A}$. My choice corresponds to requiring that the breaking of the $U(1)$ symmetry associated with the phase of $\psi$ is spontaneous rather than explicit in the dual conformal theory.\footnote{It is well understood \cite{Klebanov:1999tb} that for the range $-4 \leq m^2 L^2 < -3$, one can replace $\Delta_\psi^{\rm UV} \to 4-\Delta_\psi^{\rm UV}$; this corresponds to a new CFT, from which the original can be recovered from a renormalization group flow triggered by double-trace terms. It is even possible to make sense of more general boundary conditions on $\psi$ in terms of multi-trace operators \cite{Witten:2001ua}.} With this choice (or with any definite choice of boundary conditions on $\psi$), according to the same reasoning as in \cite{Gubser:2008wz}, there can be at most discretely many solutions with $h$ nowhere vanishing. In instances where I was able to find more than one solution, the one with no nodes in $\psi$ had the smallest value of $h$ in the UV.\footnote{For example, the choice of parameters \eno{ExampleParameters} leads to one solution with $h_{\rm UV} \approx 2.74$, which I will describe in some detail, and another with $h_{\rm UV} \approx 34.0$, in which $\psi$ has a single node.} I will assume that solutions with no nodes (or as few nodes as possible) in $\psi$ are preferred; however, this is just an assumption. In figure~\ref{EXAMPLE}, I exhibit the solution I found for the following choice of parameters:
\eqn{ExampleParameters}{
qL = 3 \qquad m^2 L^2 = -2 \qquad uL^2 = 4 \,.
}
In the solution I found for this choice of parameters, the speed of light is about $1.7$ times faster in the ultraviolet part of the geometry (large positive $r$) than in the infrared part (large negative $r$). With the choice of parameters \eno{ExampleParameters}, it happens that $L_{\rm IR}$ and $L$ are quite close together: $L_{\rm IR}/L \approx 0.98$.
It is possible to generate series expansions in the infrared and the ultraviolet for the solution I have described. In the infrared,
\begin{subequations}\label{IRasymp}
\begin{eqnarray}
A &=& {r \over L_{\rm IR}} + \ldots \label{IRasympA} \\
h &=& 1 +
{\Delta^{\rm IR}_\Phi-2 \over \Delta^{\rm IR}_\Phi-3} \Phi_1^2
e^{2(\Delta_\Phi^{\rm IR}-3) r/L_{\rm IR}} + \ldots
\label{IRasymph} \\
\Phi &=& \Phi_1 e^{(\Delta_\Phi^{\rm IR}-2)r/L_{\rm IR}} +
\ldots \label{IRasympPhi} \\
\psi &=& \psi_{\rm IR} +
\psi_1 e^{(\Delta_\psi^{\rm IR}-4)r/L_{\rm IR}} +
\ldots \label{IRasymppsi} \,,
\end{eqnarray}
\end{subequations}
where $\ldots$ denotes terms that are exponentially smaller in the infrared than the ones shown. Here
\eqn{DeltaIRdefs}{
\Delta_\Phi^{\rm IR} &= 1 + \sqrt{1 + 2 q^2
\psi_{\rm IR}^2 L_{\rm IR}^2} \cr
\Delta_\psi^{\rm IR} &= 2 + \sqrt{4 +
(m^2 + 3u\psi_{\rm IR}^2) L_{\rm IR}^2} \,,
}
and the solution I exhibited has
\eqn{FoundIRcoefs}{
\Phi_1 = 1 \qquad \psi_1 = 0.168 \,.
}
(A scaling symmetry, $x^m \to \lambda x^m$ while $r \to r - L_{\rm IR} \log\lambda$ and $\Phi \to \Phi/\lambda$, allows one to set $\Phi_1=1$ provided it is non-zero.) In the ultraviolet,
\begin{subequations}\label{UVasymp}
\begin{eqnarray}
A &=& {r \over L} + a_1 - {p_1 p_2 \over 16 h_{\rm UV}}
e^{-4r/L} + \ldots \label{UVasympA} \\
h &=& h_{\rm UV} + {p_1 p_2 \over 2} e^{-4r/L} + \ldots
\label{UVasymph} \\
\Phi &=& p_1 + p_2 e^{-2r/L} + \ldots \label{UVasympPhi} \\
\psi &=& s_2 e^{-\Delta_\psi^{\rm UV} r/L} +
\ldots \label{UVasymppsi} \,,
\end{eqnarray}
\end{subequations}
where $\ldots$ indicates terms which are exponentially smaller in the ultraviolet than the ones shown. The solution I exhibited has
\eqn{FoundUVcoefs}{
a_1 = -0.0963 \qquad
h_{\rm UV} = 2.74 \qquad
p_1 = 1.86 \qquad
p_2 = -1.18 \qquad
s_2 = 0.955 \,.
}
The asymptotics shown in figure~\ref{EXAMPLE} are based on \eno{IRasymp} and~\eno{UVasymp}, but in some cases evaluated to higher orders and with greater numerical precision.
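As a concrete illustration of how such solutions can be generated, here is a minimal shooting sketch. It is an illustrative reconstruction, not the code actually used for figure~\ref{EXAMPLE}: it integrates \eno{eomsZero} in the gauge $B = -{1 \over 2}\log h$, seeds the infrared data from \eno{IRasymp} with $\Phi_1 = 1$ and with $h$ fixed by the extremality condition $Q=0$, and scans the single shooting parameter $\psi_1$ until the disallowed ultraviolet mode $\psi \propto e^{(\Delta_\psi^{\rm UV}-4)A}$ vanishes. Since the figure is drawn in the gauge $B=0$, the radial profiles differ by a reparametrization, but gauge-invariant outputs such as $h_{\rm UV}$ should agree.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

L, q, m2, u = 1.0, 3.0, -2.0, 4.0           # (ExampleParameters), units L = 1
psi_ir = np.sqrt(-m2/u)
L_ir = 1.0/np.sqrt(1.0/L**2 + m2**2/(24.0*u))
dP = 1.0 + np.sqrt(1.0 + 2.0*q**2*psi_ir**2*L_ir**2)      # Delta_Phi^IR
dp = 2.0 + np.sqrt(4.0 + (m2 + 3.0*u*psi_ir**2)*L_ir**2)  # Delta_psi^IR
duv = 2.0 + np.sqrt(4.0 + m2*L**2)                        # Delta_psi^UV

def rhs(r, y):
    A, Ap, h, hp, Phi, Phip, psi, psip = y   # (eomsZero), psi real
    e = np.exp(-2.0*A)
    return [Ap, -psip**2/3.0 - e*q**2*Phi**2*psi**2/(3.0*h**2),
            hp, -4.0*Ap*hp + e*(Phip**2 + 2.0*q**2*Phi**2*psi**2/h),
            Phip, -2.0*Ap*Phip + 2.0*q**2*Phi*psi**2/h,
            psip, -(4.0*Ap + hp/h)*psip - e*q**2*Phi**2*psi/h**2
                  + (m2*psi + u*psi**3)/h]

def slow_mode(psi1, r0=-8.0, r1=10.0):
    """Coefficient of the disallowed UV falloff of psi, for IR data psi1."""
    P = np.exp((dP - 2.0)*r0/L_ir)           # Phi seed, with Phi_1 = 1
    S = psi1*np.exp((dp - 4.0)*r0/L_ir)      # deviation of psi from psi_IR
    hp0 = np.exp(-2.0*r0/L_ir)*P*(dP - 2.0)/L_ir*P  # Q = 0: h' = e^{-2A} Phi Phi'
    y0 = [r0/L_ir, 1.0/L_ir, 1.0 + hp0*L_ir/(2.0*(dP - 3.0)), hp0,
          P, (dP - 2.0)/L_ir*P, psi_ir + S, (dp - 4.0)/L_ir*S]
    sol = solve_ivp(rhs, (r0, r1), y0, rtol=1e-10, atol=1e-12)
    A, psi = sol.y[0, -1], sol.y[6, -1]
    return psi*np.exp((4.0 - duv)*A)

for psi1 in np.linspace(0.05, 0.35, 7):      # bisect the sign change by hand;
    print(psi1, slow_mode(psi1))             # the quoted root is psi_1 = 0.168
\end{verbatim}
If all is well, the scan brackets a sign change near the quoted $\psi_1 \approx 0.168$, after which ordinary bisection converges quickly; monitoring the constraint \eno{constraint} along the integration provides a useful independent accuracy check.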
\section{Green's functions and unparticles}
\label{CORRELATORS}
If we think of the UV speed of light as the ordinary one of daily experience, then the asymptotically anti-de Sitter geometry can be interpreted as describing a medium with a definite index of refraction. Low-energy signals pass through the infrared part of the geometry, where they can only go at a fraction of the UV speed of light.
A more tantalizing possibility is that the physics of our world can be described as the infrared physics in a time warp geometry. Then the speed of light that we measure is the infrared speed of light. We are, in effect, caught in the refractive medium. If only we could pass through the domain wall and get into the UV part of the geometry, then we could move superluminally from the perspective of an infrared observer.
As a first step toward exploring the phenomenology of time warps, I consider in this section how relativistic kinematics is altered for the field theory dual to the type of non-compact time warp constructed in section~\ref{BULK}. Because such a geometry is asymptotically $AdS_5$, one can extract two-point correlators for operators in the dual field theory. This field theory is strongly coupled, so it is not straightforward to make a comparison with perturbative quantum field theory. What is straightforward is to make a connection with recent ideas about ``unparticle physics'' \cite{Georgi:2007ek}, which is the possibility that an approximately conformal, strongly coupled sector---the unparticles---will be discovered through high-energy collisions, for example at the LHC\@. In section~\ref{SPECTRAL}, without reference to time warps, I review the connection between the imaginary part of the two-point Green's functions in a conformal field theory and the phase space for unparticles. Then, in section~\ref{PHASEMOD}, I calculate in a specific example how this phase space gets modified by time warp effects.
\subsection{Spectral measure and unparticle phase space}
\label{SPECTRAL}
First let's recall how multi-particle phase space measure is related to the imaginary part of an appropriate Green's function. Let $\phi$ be a canonically normalized, free, massless, real scalar in four dimensions. For any integer $n \geq 1$, the operator ${\cal O} = \phi^n$ has dimension $\Delta = n$. Its time-ordered Green's function is
\eqn{GF}{
G_F(x) &\equiv -i \langle 0 | T \left\{ {\cal O}(x) {\cal O}(0)
\right\} | 0 \rangle \,.
}
It's an exercise in free field theory to verify that the phase space measure for $n$ outgoing $\phi$ particles, collectively carrying four-momentum $k_\mu = (-\omega,\vec{k})$, is
\eqn{PhaseMeasure}{
d\Phi(k) = -2\theta(\omega) \Im G_F(k) {d^4 k \over (2\pi)^4} \,.
}
Because the right hand side is essentially the spectral measure for the Green's function $G_F$, it makes sense to use precisely the same expression for ``unparticle phase space'' even when ${\cal O}$ has no construction in terms of free fields. In such a case, the only constraint on the dimension $\Delta$ is the unitarity bound $\Delta \geq 1$. Using \eno{PhaseMeasure} for general conformal field theories is part of the proposal of \cite{Georgi:2007ek}, although it was phrased a little differently there.
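As a quick check of conventions (assuming the mostly-plus signature implicit in writing $k_\mu = (-\omega,\vec{k})$ and $k^2 = -\omega^2 + \vec{k}^2$), take $n=1$: then ${\cal O} = \phi$, $\Delta = 1$, and the free propagator gives $G_F(k) = -1/(k^2 - i\epsilon)$, so $\Im G_F(k) = -\pi\delta(k^2)$ and \eno{PhaseMeasure} reduces to
\eqn{OneParticleCheck}{
 d\Phi(k) = 2\pi\, \theta(\omega)\, \delta(k^2)\, {d^4 k \over (2\pi)^4}
 = {d^3 k \over (2\pi)^3\, 2|\vec{k}|} \,,
}
which is the standard Lorentz-invariant one-particle phase space measure.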
Let's now review the computation of $\Im G_F(k)$ from a dual geometry which is asymptotically $AdS_5$ with radius $L$. I will assume that ${\cal O}$ is dual to a real, minimally coupled scalar $\phi$ in five dimensions. To quadratic order, its lagrangian is
\eqn{Lphi}{
{\cal L}_\phi = -{1 \over 2} (\partial\phi)^2 -
{1 \over 2} m_\phi^2 \phi^2 \,,
}
up to a prefactor which I will not try to track. The computation of imaginary parts of real-time two-point Green's functions is familiar from literature (for example \cite{Das:1996wn,Klebanov:1997kc,Gubser:1997cm,Gubser:1997se}) predating AdS/CFT \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}: it hinges on the identification of a conserved flux. Related computations were revisited in \cite{Son:2002sd} and shown to follow from the original AdS/CFT prescription upon appropriate use of Schwinger-Keldysh contours \cite{Herzog:2002pc}. See \cite{Gubser:2008yx,Gubser:2008sz} for a recent application to the computation of bulk viscosity which is fairly similar to the calculation of interest here. The standard procedure is to find solutions to the equations of motion following from \eno{Lphi} of the form
\eqn{phiAnsatz}{
\phi(t,\vec{x},r) = e^{-i\omega t + i\vec{k} \cdot \vec{x}}
f_k(r) \,,
}
where $f_k(r)$ is required to satisfy appropriate boundary conditions in the infrared---to be discussed below. In general, the functions $f_k(r)$ are complex. When they are, $f_k^*(r)$ satisfies the same radial equation that $f_k(r)$ does, because all the coefficients in the radial equation are real functions of $r$. As a consequence of Abel's identity, the flux
\eqn{FluxDef}{
{\cal F}_k \equiv L h e^{4A-B} \Im f_k^* \partial_r f_k
}
is conserved (meaning independent of $r$). The imaginary part of the two-point function of ${\cal O}$ is then evaluated as
\eqn{GpEval}{
\Im G_F(k) = \lim_{r \to \infty}
K_{\cal O} \left( {L \over 2} \right)^{4-2\Delta}
e^{2(\Delta-4)A} {{\cal F}_k \over |f_k(r)|^2} \,,
}
where $K_{\cal O}$ is a positive, dimensionless prefactor related to how the lagrangian \eno{Lphi} is normalized.
In pure $AdS_5$, where $h=1$ and $A = r/L$, one straightforwardly finds
\eqn{AdSGF}{
f_k(r) = \left\{ \seqalign{\span\TR & \qquad\span\TT}{
e^{-2r/L} H_{\Delta-2}^{(1)}(L \sqrt{\omega^2-\vec{k}^2}
e^{-r/L}) & for $\omega^2 > \vec{k}^2$ \cr
e^{-2r/L} K_{\Delta-2}(L \sqrt{\vec{k}^2-\omega^2}
e^{-r/L}) & for $\omega^2 < \vec{k}^2$ \,.
} \right.
}
These solutions satisfy the infrared boundary conditions appropriate for computing the time-ordered propagator $G_F$: infalling when $\omega > |\vec{k}|$; outgoing when $\omega < -|\vec{k}|$; and decaying rather than growing in the infrared when $\omega^2 < \vec{k}^2$. Plugging \eno{AdSGF} into \eno{GpEval}, one obtains
\eqn{AdSResult}{
\Im G_F(k) = -{2\pi K_{\cal O} \over \Gamma(\Delta-2)^2}
(\omega^2-\vec{k}^2)^{\Delta-2} \theta(\omega^2-\vec{k}^2) \,.
}
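The flux prescription \eno{phiAnsatz}-\eno{GpEval} is easy to validate numerically against the closed form \eno{AdSResult}. The following sketch is an illustrative check of my own, with $K_{\cal O} = 1$, $L = 1$, and $m_\phi^2 L^2 = -2$ for definiteness: it integrates the pure-$AdS_5$ radial equation $f'' + (4/L) f' + (\kappa^2 e^{-2r/L} - m_\phi^2) f = 0$, $\kappa^2 \equiv \omega^2 - \vec{k}^2 > 0$ (which follows from \eno{Lphi} in the background $h=1$, $A=r/L$, $B=0$), starting from the infalling Hankel seed \eno{AdSGF}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import hankel1, h1vp, gamma

L, m2 = 1.0, -2.0                         # units L = 1; probe mass m_phi^2
Delta = 2.0 + np.sqrt(4.0 + m2*L**2)
nu = Delta - 2.0
omega, k = 1.0, 0.3
kappa = np.sqrt(omega**2 - k**2)          # timelike momentum

def rhs(r, y):
    f, fp = y
    return [fp, -(4.0/L)*fp - (kappa**2*np.exp(-2.0*r/L) - m2)*f]

r0, r1 = -4.0, 8.0                        # deep IR to near-boundary
z0 = L*kappa*np.exp(-r0/L)
f0 = np.exp(-2.0*r0/L)*hankel1(nu, z0)    # infalling seed, (AdSGF)
fp0 = -np.exp(-2.0*r0/L)/L*(2.0*hankel1(nu, z0) + z0*h1vp(nu, z0))
sol = solve_ivp(rhs, (r0, r1), [f0, fp0], rtol=1e-11, atol=1e-13)

f, fp = sol.y[0, -1], sol.y[1, -1]
flux = L*np.exp(4.0*r1/L)*np.imag(np.conj(f)*fp)        # (FluxDef), h = 1
ImG = (L/2.0)**(4.0 - 2.0*Delta) \
      * np.exp(2.0*(Delta - 4.0)*r1/L)*flux/abs(f)**2   # (GpEval), K = 1
exact = -2.0*np.pi/gamma(Delta - 2.0)**2*kappa**(2.0*Delta - 4.0)
print(ImG, exact)                         # the two numbers should agree
\end{verbatim}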
\subsection{Time warp modification of unparticle phase space}
\label{PHASEMOD}
The relation \eno{PhaseMeasure} equates phase space with spectral measure. The same relation can be used in the context of a time warp. The only difference is that the spectral measure can no longer be expressed solely in terms of $-\omega^2 + \vec{k}^2$; instead, it is a function of $\omega$ and $|\vec{k}|$ separately. The purpose of the current section is to examine the spectral measure and explain how it affects a decay process that involves unparticles.
In a time warp geometry which is asymptotically $AdS_5$ in both the UV and IR, one expects the following asymptotic forms for small and large $\omega$, respectively:
\eqn{LimitingForms}{
\Im G_F^{\rm IR}(k) &= -{2\pi K_{\cal O}^{\rm IR} \over
\Gamma(\Delta_\phi^{\rm IR}-2)^2}
(\omega^2-\vec{k}^2)^{\Delta_\phi^{\rm IR}-2}
\theta(\omega^2-\vec{k}^2) \cr
\Im G_F^{\rm UV}(k) &= -{2\pi K_{\cal O}^{\rm UV} \over
\Gamma(\Delta_\phi^{\rm UV}-2)^2}
(\omega^2/h_{\rm UV}-\vec{k}^2)^{\Delta_\phi^{\rm UV}-2}
\theta(\omega^2/h_{\rm UV}-\vec{k}^2) \,.
}
The UV dimension $\Delta_\phi^{\rm UV}$ is given simply by the formula
\eqn{DeltaPhiUV}{
\Delta_\phi^{\rm UV} = 2 + \sqrt{4 + m_\phi^2 L^2} \,.
}
To calculate the IR dimension $\Delta_\phi^{\rm IR}$ one must replace $L$ by the radius $L_{\rm IR}$ of the infrared copy of $AdS_5$. The dimensionless parameter $K_{\cal O}^{\rm UV}$ is related to how the lagrangian \eno{Lphi} is normalized, just as $K_{\cal O}$ was in the discussion above. $K_{\cal O}^{\rm IR}$ is a dimensionless multiple of $K_{\cal O}^{\rm UV} L^{2(\Delta_\phi^{\rm IR}-\Delta_\phi^{\rm UV})}$, and it can be calculated once the background geometry is known.
Note that the condition for a momentum to be UV-timelike is $\omega^2/h_{\rm UV}-\vec{k}^2 > 0$. Because $h_{\rm UV}>1$, a UV-timelike momentum is necessarily IR-timelike; but an IR-timelike momentum may be UV-timelike or UV-spacelike. This is because the momentum $k_\mu = (-\omega,\vec{k})$ is a covariant vector, i.e.~a $1$-form. Thus $(k^2)_{\rm UV} = -\omega^2/h_{\rm UV} + \vec{k}^2$, while $(k^2)_{\rm IR} = -\omega^2 + \vec{k}^2$. The opposite conclusion would be reached for contravariant vectors like an infinitesimal displacement $dx^\mu$: if it is UV-timelike, it can be either IR-timelike or IR-spacelike. The latter possibility is what one would mean by an infinitesimal faster-than-light displacement. It makes sense that the spectral measure of Green's functions occupies a narrower light-cone in momentum space in the UV limit than in the IR, because causal trajectories in the UV limit occupy a broader light-cone in real space. The retarded Green's function $G_R(t,\vec{x})$ must be non-zero over the broader position-space light-cone defined by the UV speed of light, but at large separations, I expect that $G_R$ is very attenuated outside the narrower position-space light-cone defined by the IR speed of light. Colloquially: You can go faster than light, but perhaps not for long.
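To state the distinction concretely: a displacement with $1 < |d\vec{x}/dt| < \sqrt{h_{\rm UV}}$ is timelike with respect to the UV light-cone but spacelike with respect to the IR one, while a $1$-form $k_\mu$ with $1 < \omega/|\vec{k}| < \sqrt{h_{\rm UV}}$ is IR-timelike but UV-spacelike.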
To examine how the Green's function interpolates between the limiting behaviors shown in \eno{LimitingForms}, one may consider a dimensionless phase space modification factor:
\eqn{WarpRatio}{
W(k) \equiv {\Im G_F(k) \over \Im G_F^{\rm IR}(k)} \,.
}
$W(k)$ is to be evaluated only for infrared-timelike momenta. One should find $W(k) \to 1$ as $\omega \to 0$. For large $\omega$, according to \eno{LimitingForms}, one should find
\eqn{WarpRatioUV}{
W(k) \propto {(\omega^2/h_{\rm UV}-\vec{k}^2)^{\Delta_\phi^{\rm UV}
-2} \over (\omega^2-\vec{k}^2)^{\Delta_\phi^{\rm IR}-2}}
\theta(\omega^2/h_{\rm UV}-\vec{k}^2) \,.
}
\begin{figure}
\centerline{\includegraphics[width=4in]{wmod.eps}}
\caption{(Color online.) The phase space ratio $W(k)$ defined in \eno{WarpRatio}, for values of parameters discussed in the main text. The vertical green line shows where UV-null momenta lie.}\label{WMOD}
\end{figure}
In figure~\ref{WMOD}, I show some numerical evaluations of $W(k)$ for the time warp geometry exhibited in figure~\ref{EXAMPLE} and for $m_\phi^2 L^2 = -\sqrt{10}$, corresponding to a dual operator with UV dimension $\Delta_\phi^{\rm UV} \approx 2.92$ and IR dimension $\Delta_\phi^{\rm IR} \approx 2.98$. All indications from this figure, as well as further numerical studies, are that $W(k)$ does interpolate smoothly between $1$ in the IR and \eno{WarpRatioUV} in the UV\@. In appendix~\ref{GREENS} I give some further details on the computation of two-point functions.
With the modification factor $W(k)$ in hand, we can reconsider the process $t \to u + {\cal U}$ analyzed in \cite{Georgi:2007ek}.\footnote{There is no special reason to consider top and up quarks: any decay of a heavy visible-sector particle to a light particle plus unparticle stuff would serve as well.} In order to make an explicit analysis, I use the $W(k)$ shown in figure~\ref{WMOD}, as well as the specific value $\Delta_\phi^{\rm IR} \approx 2.98$. Also, I assume that all violations of infrared Lorentz invariance arise from $W(k)$. That is, I assume that the $u$ quark propagates at the infrared speed of light, no matter what its energy; and I assume that the relevant coupling can be written as
\eqn{DecayCoupling}{
{\cal L}_{\rm int} = i {\lambda \over
\Lambda^{\Delta_\phi^{\rm IR}}} \bar{u} \gamma_\mu
(1-\gamma_5) t \, \partial^\mu {\cal O} +
\hbox{h.c.} \,,
}
where $\Lambda$ is some high scale related to the mass of messenger fields that are integrated out to obtain \eno{DecayCoupling}. The differential decay rate, expressed as a positive measure on phase space, is
\eqn{DecayRate}{
d\Gamma = {\overline{|{\cal M}|^2} \over 2m_t}
(2\pi)^4 \delta^4(k_t - k_u - k_{\cal U})
d\Phi_u(k_u) d\Phi_{\cal U}(k_{\cal U}) \,,
}
where
\eqn{PhaseSpaces}{
d\Phi_u(k_u) &= \theta(\omega_u) 2\pi \delta(k_u^2) \cr
d\Phi_{\cal U}(k_{\cal U}) &=
A_{\cal U} \theta(\omega_{\cal U})
\theta(\omega_{\cal U}^2 - \vec{k}_{\cal U}^2)
(\omega_{\cal U}^2 -
\vec{k}_{\cal U}^2)^{\Delta_\phi^{\rm IR} - 2}
W(k_{\cal U}) \,.
}
The whole setup is just as in \cite{Georgi:2007ek} except for the factor $W(k_{\cal U})$ in \eno{PhaseSpaces}. To obtain the distribution of up quark energies, we evaluate
\eqn{UpEnergies}{
{m_t \over \Gamma} {d\Gamma \over dE_u} \equiv
{m_t \over \Gamma} \int d\Gamma \, \delta(E_u - \omega_u)
\propto {E_u^2 \over m_t^2} \left[ 1 - {2E_u \over m_t}
\right]^{\Delta_\phi^{\rm IR}-2}
\theta\left( {m_t \over 2} - E_u \right)
W(k_{\cal U}) \,,
}
where to find the last expression one must note that $\overline{|{\cal M}|^2} \propto E_u$. If $m_t \gg 1/L$, then we can combine \eno{WarpRatioUV} and \eno{UpEnergies} to get
\eqn{UpHigh}{
{m_t \over \Gamma} {d\Gamma \over dE_u} \propto
{E_u^2 \over m_t^2}
\left[ 1 - {2E_u \over m_t} - (h_{\rm UV}-1) {E_u^2 \over m_t^2}
\right]^{\Delta_\phi^{\rm UV}-2}
\theta\left( {m_t \over 1 + \sqrt{h_{\rm UV}}} - E_u \right) \,.
}
The main qualitative feature of \eno{UpHigh} is that the up quark energy spectrum stops at an energy $m_t / (1 + \sqrt{h_{\rm UV}})$, lower than the usual $m_t / 2$. This is a direct consequence of the narrower momentum-space light-cone in which the dominant ultraviolet contribution to the unparticle Green's function lies. To appreciate this point without going through amplitudes explicitly, note that $\omega_{\cal U} = m_t - E_u$ and $|\vec{k}_{\cal U}| = |\vec{k}_u| = E_u$, so the condition that $p_\mu^{\cal U} = (-\omega_{\cal U},\vec{k}_{\cal U})$ is UV-timelike becomes, in two equivalent forms,
\eqn{WhenTimelike}{
\omega_{\cal U} &\geq \sqrt{h_{\rm UV}} |\vec{k}_{\cal U}| \cr
m_t - E_u &\geq \sqrt{h_{\rm UV}} E_u \,.
}
The second of these is clearly equivalent to $E_u \leq m_t / (1 + \sqrt{h_{\rm UV}})$. In figure~\ref{PHASECUTOFF} I show how the energy distribution \eno{UpEnergies} interpolates between the infrared limit, where $W(k_{\cal U}) = 1$, and the ultraviolet limit \eno{UpHigh}. Evidently, there are at least two difficulties in using a ``signal'' of the type I have described to detect the presence of time warp effects: first, the curves with large $m_t L$ could easily be confused with standard kinematics with a lower value of $m_t$; and second, the curves with moderate $m_t L$ may be difficult to distinguish from unparticle effects with $\Delta$ slightly larger than $3$. An optimal circumstance would be to have large $m_t L$ and an independent determination of $m_t$. (By $m_t$, I mean now the mass of some heavy particle with a decay like $t \to u + {\cal U}$---it doesn't have to be the top, nor does $u$ have to be an up quark, just some light visible sector particle.)
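The limiting spectrum \eno{UpHigh} and its endpoint are simple enough to evaluate directly. Here is a minimal numerical sketch (illustrative only), in units $m_t = 1$, using the values $h_{\rm UV} \approx 2.74$ and $\Delta_\phi^{\rm UV} \approx 2.92$ quoted above; the resulting curve can be compared with the large-$m_t L$ curves of figure~\ref{PHASECUTOFF}.
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

h_uv, d_uv = 2.74, 2.92
E = np.linspace(0.0, 0.5, 501)            # up quark energy in units of m_t
bracket = np.clip(1.0 - 2.0*E - (h_uv - 1.0)*E**2, 0.0, None)
spectrum = E**2 * bracket**(d_uv - 2.0)   # (UpHigh); theta enforced by clip
spectrum /= trapezoid(spectrum, E)        # unit area, as in the figure below
print(1.0/(1.0 + np.sqrt(h_uv)))          # endpoint: about 0.377 m_t
\end{verbatim}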
\begin{figure}
\includegraphics[width=6in]{PhaseCutoff.eps}
\caption{(Color online.) The distribution of energies $E_u$ for the $u$ quark in the process $t \to u + {\cal U}$, where the unparticle stuff has infrared dimension $\Delta_\phi^{\rm IR} = 2.98$ and the time warp modifications are from the factor $W(k)$ plotted in figure~\ref{WMOD}. Different curves come from different choices of the dimensionless parameter $m_t L$. Each curve is normalized to have unit area under it.}\label{PHASECUTOFF}
\end{figure}
\section{The Planck brane}
\label{BRANE}
Adding a Planck brane means adding a term to the action:
\eqn{FullAction}{
S = S_{\rm bulk} + S_{\rm brane} \,,
}
where
\eqn{Sbrane}{
S_{\rm brane} = {1 \over 16\pi G_5} \int d^4 x \, \sqrt{h}
{\cal L}_{\rm brane} \,,
}
and $h_{mn}$ is the induced metric on the brane. This extra term results in additional terms in the equations of motion which are distributions supported at the position of the brane, say at $r=0$. For purposes of calculation, it is convenient to think of two mirror-image copies of the bulk geometry separated by the Planck brane. This can be thought of as the ``upstairs'' picture of a ${\bf Z}_2$ orbifold acting as $r \to -r$. Such a picture is well-motivated from string theory \cite{Polchinski:1995df,Horava:1995qa,Horava:1996ma,Lukas:1998yy,Lukas:1998tt} and has been widely considered following \cite{Randall:1999vf}. Indeed, most of the earlier works on asymmetrically warped geometries summarized in section~\ref{INTRODUCTION} employ a Planck brane. Some of the results of this section are quite standard: for example the junction conditions on the metric components are special cases of more general relations derived in \cite{Binetruy:1999ut}. However, in contrast to \cite{Csaki:2000dm}, I assign positive parity to the ${\bf R}^{3,1}$ components of $A_\mu$ under the ${\bf Z}_2$ orbifold symmetry and negative parity to $A_r$.
For simplicity, I will work in the following axial gauge throughout this section:
\eqn{AxialGauge}{
ds_5^2 = g_{mn} dx^m dx^n + dr^2 \qquad
A = A_m dx^m \,,
}
where $m$ and $n$ run from $0$ to $3$. The metric coefficients $g_{mn}$, the gauge field components $A_m$, and the scalar $\psi$ may depend on $r$ as well as $x^m$. It will be convenient to define a unit normal $n_\mu dx^\mu = dr$ to the brane, where, as usual, $\mu$ runs over all five dimensions of the bulk. Then one may express the induced metric as a $5 \times 5$ tensor as $h_{\mu\nu} = g_{\mu\nu} - n_\mu n_\nu$. In the following, I will pass freely between four- and five-dimensional forms of tensors on the brane whose components in the $\mu=5$ direction vanish.
The equations of motion resulting from \eno{FullAction} are
\eqn{BraneEOMs}{
G_{\mu\nu} &= {1 \over 2} T_{\mu\nu}^{\rm bulk} +
{1 \over 2} T_{\mu\nu}^{\rm brane} \delta(r) \cr
\nabla_\mu F^{\mu\nu} &= J^\nu_{\rm bulk} +
J^\nu_{\rm brane} \delta(r) \cr
D_\mu D^\mu \psi &= {\partial V \over \partial \psi^*} +
j^{\rm brane} \delta(r) \,,
}
where
\eqn{BulkSources}{
T_{\mu\nu}^{\rm bulk} &= 2 D_\mu \psi^* D_\nu \psi -
|D\psi|^2 g_{\mu\nu} - V g_{\mu\nu} +
F_{\mu\alpha} F_\nu{}^\alpha -
{1 \over 4} g_{\mu\nu} F_{\alpha\beta}^2 \cr
J_\mu^{\rm bulk} &=
iq (\psi^* \partial_\mu \psi - \psi \partial_\mu \psi^*) +
2q^2 A_\mu |\psi|^2 \cr
D_\mu \psi &\equiv (\partial_\mu - i q A_\mu) \psi
}
and the brane sources are defined by
\eqn{BraneSources}{
\delta S_{\rm brane} =
{1 \over 16\pi G_5}
\int d^4 x \, \sqrt{h} \left[ {1 \over 2} \delta h^{mn}
T_{mn}^{\rm brane} - \delta A^m J_m^{\rm brane} -
\delta\psi^* j^{\rm brane} - \delta\psi j^{\rm brane,*}
\right] \,.
}
Assuming $g_{mn}$, $A_m$ and $\psi$ are smooth functions of $r$ except for jumps in the first derivatives at $r=0$, one can extract the brane source terms by integrating \eno{BraneEOMs} over a small interval around $r=0$: for example,
\eqn{TbraneInt}{
T_{\mu\nu}^{\rm brane} = 2 \lim_{\epsilon \to 0+}
\int_{-\epsilon}^\epsilon dr \, G_{\mu\nu} \,,
}
where I've omitted $T_{\mu\nu}^{\rm bulk}$ because it only involves first derivatives, which have no singular part. One easily finds
\eqn{CovariantPlanck}{
T_{\mu\nu}^{\rm brane} &=
4 \left[ K_{\mu\nu} - K h_{\mu\nu} \right]_{0^-} \cr
J_\mu^{\rm brane} &= -2 \left[ F_{r\mu} \right]_{0^-} \cr
j^{\rm brane} &= -2 \left[ \partial_r \psi \right]_{0^-} \,,
}
where in general,
\eqn{EvalZeroMinus}{
\left[ f(r) \right]_{0^-} \equiv \lim_{r \to 0^-} f(r) \,,
}
and the extrinsic curvature is
\eqn{KmnDef}{
K_{\mu\nu} = -h_\mu{}^\rho \nabla_\rho n_\nu = -{1 \over 2}
{\partial h_{\mu\nu} \over \partial r} \,.
}
The middle expression in \eno{KmnDef} is the defining equation for $K_{\mu\nu}$, and the last expression is a result of the gauge choice \eno{AxialGauge}. $K$ denotes the trace $g^{\mu\nu} K_{\mu\nu}$. $K_{\mu r} = 0$ for all $\mu$, and as a consequence, $T^{\rm brane}_{\mu r} = 0$.
For brevity, let's define
\eqn{BraneDefs}{
\epsilon = -T_0^{0,{\rm brane}} \qquad
p = {1 \over 3} T_i^{i,{\rm brane}} \qquad
\rho = J_0^{\rm brane} \qquad
j = j^{\rm brane} \,,
}
where $i$ runs from $1$ to $3$. Plugging the ansatz \eno{AdSansatz} into \eno{CovariantPlanck} leads to
\eqn{PlanckProperties}{
\epsilon = \left[ 12 A' \right]_{0^-} \qquad
p = \left[ -12 A' - {2h' \over h} \right]_{0^-} \qquad
\rho = \left[ -2\Phi' \right]_{0^-} \qquad
j = \left[ -2\psi' \right]_{0^-} \,.
}
Thus we see that the Planck brane must carry electric charge under the gauge field $A_\mu$, and its stress tensor must also break Lorentz invariance.\footnote{If I had assigned negative parity to $\Phi$, as in \cite{Csaki:2000dm}, then the boundary condition at the Planck brane would be $\Phi=0$. This cannot be reconciled with the requirement that $\Phi \to 0$ in the infrared part of the geometry. Perhaps some variant of the bulk solution could accommodate non-zero $\Phi$ in the infrared. But it would be impossible to recover Lorentz symmetry in the infrared with both $\Phi$ and $\psi$ non-zero there, because the conserved current $J_\mu = iq\, \psi^* \overleftrightarrow\partial_\mu \psi + 2 q^2 A_\mu \psi^* \psi$ coupled to $A_\mu$ has $J_0 \neq 0$. If $\psi$ is zero in the infrared, then non-zero $\Phi$ is possible, but there is still some potential difficulty: the gauge field $A = \Phi dt$ is ill-defined as a 1-form at a horizon where $g^{tt} \to \infty$.}
Provided $A'>0$ and $h'>0$, \eno{PlanckProperties} implies that the Planck brane has an equation of state with $w \equiv {p \over \epsilon} < -1$. This is a violation of the null energy condition---essentially the same one as found in \cite{Csaki:2000dm} in the absence of a charged scalar. Some such violation was inevitable because of the argument due to \cite{Cline:2001yt} and outlined in section~\ref{BULK}: $h$ cannot have a maximum as long as the null energy condition is obeyed and $A$ is well defined. To examine this in more detail, consider cutting off the geometry shown in figure~\ref{EXAMPLE} with a Planck brane, not at $r=0$, but at some radius $r=r_*$ which is positive enough that we can use the ultraviolet asymptotics \eno{UVasymp}. Then I find that
\eqn{FoundW}{
w = -1 + {p_1 p_2 \over 3 h_{\rm UV}} e^{-4r_*/L} + \ldots \,,
}
where $\ldots$ indicates terms that are even more exponentially suppressed at large $r_*$. We do have $w<-1$ since $p_1 p_2 < 0$; but it is notable that as $r_*$ increases, $w$ gets exponentially close to $-1$. So, by this measure, we don't have to violate the null energy condition by much.
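Concretely, with $p_1 p_2 \approx -2.19$ and $h_{\rm UV} \approx 2.74$ from \eno{FoundUVcoefs}, placing the brane at $r_* = 3L$ gives $w \approx -1 - 1.6 \times 10^{-6}$ at the order shown in \eno{FoundW}.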
For the sake of exhibiting a definite construction, let's consider how one might accommodate \eno{PlanckProperties} using gauged phantoms on the brane. The first step, largely following \cite{ArmendarizPicon:1999rj}, is to assume a brane lagrangian of the form
\eqn{BraneL}{
{\cal L}_{\rm brane} = f(X) - V_{\rm brane}(|\psi|) \,,
}
where
\eqn{Xdef}{
X \equiv -|D_m \psi|^2
}
and $f(X)$ is some smooth function. The gauge-covariant derivative $D_m$ is the same as the one in \eno{BulkSources}. Assuming that $D_i\psi = 0$ for $i=1,2,3$ and that $\partial_0 \psi = 0$, one readily obtains
\eqn[c]{FoundSources}{
X = -h^{00} q^2 \Phi^2 |\psi|^2 \qquad
\epsilon = 2X f'(X) - f(X) + V_{\rm brane}(|\psi|) \qquad
p = f(X) - V_{\rm brane}(|\psi|) \cr
\rho = 2 f'(X) q^2 \Phi |\psi|^2 \qquad
j = h^{00} q^2 \Phi^2 \psi f'(X) +
{\partial V_{\rm brane} \over \partial\psi^*} \,.
}
The expressions \eno{FoundSources} satisfy a first law constraint:
\eqn{FirstLawBrane}{
\epsilon + p + h^{00} \rho \Phi = 0 \,.
}
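Indeed, \eno{FoundSources} gives $\epsilon + p = 2 X f'(X)$, while $h^{00} \rho \Phi = 2 f'(X)\, q^2 h^{00} \Phi^2 |\psi|^2 = -2 X f'(X)$ by the definition \eno{Xdef}, so the sum vanishes identically.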
The same first law constraint holds for the bulk relations \eno{PlanckProperties} once one imposes
\eqn{ExtremalID}{
h' = e^{-2A} \Phi \Phi' \,,
}
which is what one gets by demanding that the Noether charge \eno{NoetherChargeGeneral} vanishes.\footnote{In section~\ref{INTRODUCTION} I remarked that a simple way to construct a time warp geometry---that is, a geometry where time has a different warp factor from space, but there is no temperature or entropy associated with the extra-dimensional geometry---is to start with a black brane geometry and cut it off above the horizon, as done for example in \cite{Csaki:2000dm,Cline:2003xy}. With the exception of extremal Reissner-Nordstr\"om, the bulk geometries considered in these works do not obey the extremality condition \eno{ExtremalID}. So the bulk geometry will demand an equation of state on the brane that is different from \eno{FirstLawBrane}. The difference is essentially a $Ts$ term, where $T$ is the temperature and $s$ is the entropy density that the horizon would have had if it were present. It seems to me a non-trivial difficulty to construct a theory on the brane that will accommodate the equation of state that the bulk demands without involving non-zero entropy and temperature. For this reason, it is not clear that the infrared cutoff is a satisfactory construction.
The Planck brane in \cite{Csaki:2000dm} is required to have an equation of state $\epsilon + p < 0$, just as I found in \eno{FoundW}. It was not demonstrated, however, that such an equation of state actually arises from the dynamics on the brane, decoupled as it is from the $U(1)$ gauge field when $A_m$ has odd parity. This is in contrast to \eno{FoundSources}-\eno{FirstLawBrane}, where the equation of state is seen to arise explicitly from the gauged phantom construction, and to be a consequence more generally of the first law of thermodynamics.}
Suppose we start with a bulk solution with $Q=0$ and want to tailor the functions $f(X)$ and $V_{\rm brane}(|\psi|)$ so that the Planck brane ``fits'' onto the bulk solution at a specified radius, $r=r_*$. ``Fitting'' means that \eno{PlanckProperties} and \eno{FoundSources} are consistent at the specified radius. In light of \eno{FirstLawBrane}-\eno{ExtremalID}, we need only demand that the last three equations of \eno{PlanckProperties} are consistent with the last three of \eno{FoundSources}. From these requirements, we can extract the following conditions:
\eqn{fConditions}{
X &= \left[ {e^{-2A} q^2 \Phi^2 |\psi|^2 \over h} \right]_{r_*^-}
= {q^2 e^{-2a_1} p_1^2 s_2^2 \over h_{\rm UV}}
e^{-2 (1+\Delta_\psi^{\rm UV}) r_*/L} + \ldots \cr
f(X) - V_{\rm brane}(|\psi|) &=
\left[ -12 A' - {2h' \over h} \right]_{r_*^-}
= -{12 \over L} + {p_1 p_2 \over h_{\rm UV} L} e^{-4r_*/L}
+ \ldots \cr
X f'(X) &= \left[ -{h' \over h} \right]_{r_*^-}
= {2 p_1 p_2 \over h_{\rm UV} L} e^{-4r_*/L} + \ldots \cr
\psi^* {\partial V_{\rm brane} \over \partial\psi^*} &=
\left[ -{h' \over h} - 2 \psi^* \psi'
\right]_{r_*^-} = {2 p_1 p_2 \over h_{\rm UV} L}
e^{-4r_*/L} +
{2 s_2^2 \Delta_\psi^{\rm UV} \over L}
e^{-2\Delta_\psi^{\rm UV} r_*/L} + \ldots \,,
}
where $\ldots$ indicates terms which are subleading in the UV expansions \eno{UVasymp} compared to the ones shown. What the equations \eno{fConditions} mean is that $f(X)-V_{\rm brane}(|\psi|)$, $X f'(X)$, and $\psi^* \partial V_{\rm brane} / \partial\psi^*$ are required to take on the values indicated for the particular value of $X$ indicated, and for $\psi$ evaluated on the brane. These conditions say nothing about the global shape of $f$ as a function of $X$ or of $V_{\rm brane}$ as a function of $|\psi|$. But the third equation in \eno{fConditions} indicates that $f'(X) < 0$. This is the characteristic feature of phantoms, and it could also have been anticipated by noting that $w = -1 + 2X f'(X) / \epsilon < -1$.
Having $f'(X) < 0$ raises questions about stability and/or unitarity: see for example \cite{Cline:2003gs}. However, it seems likely that a combination of positive $f''(X)$, positive curvature for $V_{\rm brane}(|\psi|)$, and augmentation of ${\cal L}_{\rm brane}$ by appropriate higher derivative terms, as in \cite{Creminelli:2006xe}, would lead to a stable construction. There is no guarantee, of course, that the requisite properties of ${\cal L}_{\rm brane}$ are reasonable, in the sense of being in the ballpark of what one might obtain from studying an actual string theory compactification. Gauged phantoms are probably not the only option for an appropriate Planck brane construction. Certainly, other ways of violating the null energy condition have been discussed, as has the link between null energy violations and superluminal motion. For a few points of entry into the large and diverse literature on these topics, see \cite{Alcubierre:1994tu,ArmendarizPicon:2000ah,Sahni:2002dx,Dubovsky:2005xd,Buchbinder:2007ad,Marvel:2008uh}.
\section{Gravitons}
\label{GRAVITON}
Having passed from an asymptotically $AdS_5$ geometry to a geometry with a UV cutoff---through the admittedly {\it ad hoc} introduction of gauged phantoms on the Planck brane---the next question to ask is how gravity works from a four-dimensional perspective. An all-but-necessary condition for physically reasonable gravity is the existence of a spin two particle which propagates at the infrared speed of light, at least when it is not too energetic. The purpose of this section is to ask what we have to do to get such a four-dimensional graviton. The answer is simple to state: we must add a wrong-sign Einstein-Hilbert term to the lagrangian on the brane, so that it reads
\eqn{CompleteLbrane}{
{\cal L}_{\rm brane} = \eta {}^{(4)}R + f(X) -
V_{\rm brane}(|\psi|) \,,
}
where ${}^{(4)}R$ is the four-dimensional Ricci scalar. By wrong-sign I mean that $\eta$ has to be negative, so this is somewhat different from the proposal of \cite{Dvali:2000hr}. Moreover, $\eta$ needs to be tuned to a certain value, close to $-L$, in order to make the four-dimensional graviton appear with the desired infrared dispersion relation, $\omega = |\vec{k}|$.
A plane wave of gravitons moving in the $x^1$ direction and polarized in the $x^2$-$x^3$ direction can be described by the following perturbation of the metric \eno{AdSansatz}:
\eqn{PerturbedMetric}{
ds_5^2 = e^{2A(r)} \left[ -h(r) dt^2 + d\vec{x}^2 +
2 e_{23}(r) \cos(\omega t - kx) dx^2 dx^3 \right] +
dr^2 \,,
}
where as in \eno{AxialGauge} I use an axial gauge. Treating the perturbation to linear order, and accounting for the Planck brane action \eno{CompleteLbrane}, one finds that the perturbation obeys the equation
\eqn{ETwoThree}{
e_{23}'' + \left[ 4A'+{h' \over 2h} \right] e_{23}' +
e^{-2A} \left[ 1 + \eta \delta(r) \right]
\left( {\omega^2 \over h} - k^2 \right) e_{23} = 0 \,,
}
where primes denote $d/dr$ and I have assumed, for now, that the Planck brane is at $r=0$. If $h=1$ and $\eta=0$, as in \cite{Randall:1999vf}, then the solution describing a graviton is $e_{23}=1$, with the on-shell requirement $\omega^2 = k^2$.
To treat the general case with $h \neq 1$ and $\eta \neq 0$, one must first extract the correct boundary condition at the Planck brane. As with the boundary conditions \eno{CovariantPlanck}, this is done by integrating the equation of motion \eno{ETwoThree} over a small region including $r=0$. The result is
\eqn{BDgrav}{
2 \left[ e_{23}' \right]_{0^-} =
\eta \left[ e^{-2A} \left( {\omega^2 \over h} -
k^2 \right) e_{23} \right]_{0^-} \,.
}
My aim is to find out what $\eta$ we should choose in order to have a graviton that travels at the infrared speed of light. That is, I simply require that the on-shell condition is $\omega^2 = k^2$. Imposing this restriction, one can solve \eno{BDgrav} for $\eta$ to find
\eqn{FoundEta}{
\eta = -{2 \over \omega^2} \left[ e_{23}' / e_{23} \over
e^{-2A} \left( 1 - {1 \over h} \right) \right]_{0^-} \,.
}
The correct boundary condition in the infrared is that $e_{23}$ approaches $1$: its infrared behavior is then the same as the graviton found in \cite{Randall:1999vf}. It is very likely that the value of $\eta$ one extracts from \eno{FoundEta} depends on $\omega$, which means that we can't make gravitons with arbitrary energy all travel with the infrared speed of light. We must be satisfied instead with the choice of $\eta$ that makes gravitons with $\omega L \ll 1$ IR-lightlike.\footnote{Perhaps by introducing on the Planck brane a series of higher derivative terms in the curvature one could obtain a dispersion relation for four-dimensional gravitons of the form $\omega^2 = k^2$ with corrections suppressed by any desired positive even integer power of $\omega L$. Even if this is possible, each successive term would presumably have a coefficient that needs to be tuned to a particular value.} To calculate this value, let's solve \eno{ETwoThree} with $\omega^2 = k^2$ perturbatively in small $\omega$ by expanding
\eqn{ExpandE}{
e_{23} = \phi_0 + \omega^2 \phi_2 + \omega^4 \phi_4 + \ldots \,.
}
At zeroth order in $\omega$, the equation of motion \eno{ETwoThree} becomes
\eqn{ZeroethOrderPhi}{
\phi_0'' + \left[ 4A' + {h' \over 2h} \right] \phi_0' = 0 \,,
}
and the Planck brane boundary condition \eno{BDgrav} becomes $\big[\phi_0'\big]_{0^-} = 0$. The solution of \eno{ZeroethOrderPhi} satisfying this boundary condition is $\phi_0 = 1$. At the next order in $\omega$, the equation \eno{ETwoThree} becomes
\eqn{LeadingCorrection}{
\phi_2'' + \left[ 4A' + {h' \over 2h} \right] \phi_2' =
e^{-2A} \left( 1 - {1 \over h} \right)
}
away from $r=0$. This equation is readily integrated. Plugging the result into \eno{FoundEta}, one sees that
\eqn{EtaResult}{
\eta = - \left[ {2 \over e^{2A} \sqrt{h} \left( 1 - {1 \over h}
\right)} \right]_{0^-}
\int_{-\infty}^0 dr \, e^{2A} \sqrt{h} \left( 1 - {1 \over h}
\right) \,.
}
If we wish the Planck brane to be at some radius other than $0$, say $r=r_*$, then we should evaluate the prefactor in \eno{EtaResult} at $r_*^-$ and change the upper limit of integration to $r_*$. Assuming that $r_*$ is well into the ultraviolet region, where $A \approx r/L + a_1 + \ldots$ and $h \approx h_{\rm UV}$, one sees that the dominant contribution to the integral in \eno{EtaResult} comes from the ultraviolet region. Plugging these approximate forms into \eno{EtaResult}, one finds
\eqn{ApproxEta}{
\eta \approx -L \,,
}
as claimed at the beginning of this section. Using the numerical solution exhibited in figure~\ref{EXAMPLE}, and setting $r_* = 3L$, one finds $-\eta/L \approx 0.998$.
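Given a discretized background, \eno{EtaResult} reduces to a single quadrature. The following minimal sketch is illustrative, with a smoothed-step stand-in for $h(r)$ rather than the actual numerical background; the arrays are assumed to be given in the proper-distance gauge $B=0$ of \eno{AxialGauge}, so a background generated in the gauge $B = -{1 \over 2}\log h$ would first need the measure converted via $dr \to h^{-1/2}\, dr$.
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

def eta_from_background(r, A, h):
    """Quadrature for (EtaResult); the Planck brane sits at r_* = r[-1]."""
    w = np.exp(2.0*A)*np.sqrt(h)*(1.0 - 1.0/h)
    return -2.0/w[-1]*trapezoid(w, r)

r = np.linspace(-12.0, 3.0, 4001)            # r_* = 3 L, with L = 1
h = 1.0 + 1.74/(1.0 + np.exp(-2.0*r))        # toy profile: h_UV = 2.74
A = r                                        # A ~ r/L in the toy background
print(eta_from_background(r, A, h))          # close to -L = -1
\end{verbatim}
The ultraviolet dominance of the integral is visible here: the answer is insensitive to the details of the transition region, up to corrections suppressed by $e^{-2r_*/L}$.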
\section{Discussion}
\label{DISCUSSION}
The main idea of a time warp compactification is that particles could travel faster than the observed speed of light if they can propagate through some region of an extra-dimensional spacetime where time is gravitationally blue-shifted. There are some serious obstacles to realizing this idea. First, there are the constraints from the null energy condition, as discussed in sections~\ref{INTRODUCTION} and~\ref{BULK}, extending the arguments of \cite{Cline:2001yt}. Second, we have seen in section~\ref{GRAVITON} that it is non-trivial to obtain a four-dimensional, spin-two, infrared-massless graviton. And third, we must remember that there are stringent experimental limits on violations of Lorentz invariance.
As an example of the experimental limits, consider vacuum Cerenkov effects, as discussed in \cite{Coleman:1998ti,Jacobson:2001tu} for the case of photon emission, and in \cite{Moore:2001bv,Cline:2003xy} for the case of graviton emission. For electrons, the bound quoted in \cite{Coleman:1998ti} is $(c_e-c_\gamma)/c_\gamma \lsim 5 \times 10^{-13}$, where $c_\gamma$ is the speed of light and $c_e$ is the limit on the speed of electrons. Observation of primary protons in cosmic rays with energies of up to $10^{20}\,{\rm eV}$ indicates an even tighter bound for protons, $(c_p-c_\gamma)/c_\gamma \lsim 10^{-23}$. Can we possibly get away with a construction like the one exhibited in figure~\ref{EXAMPLE}, in which the maximum speed varies across the extra dimension by a factor of $1.7$?
The approach considered in section~\ref{CORRELATORS}, where Lorentz violation originates entirely from a strongly coupled unparticle sector, may provide a way to evade the existing constraints on Lorentz violations with unparticle Green's functions modified on a scale $1/L$ comparable to the $\rm TeV$ scale. Such modifications are in principle discoverable at the LHC, for example through the unusual kinematic constraints illustrated in figure~\ref{PHASECUTOFF}. It should be noted, however, that I did not try to quantify the extent to which virtual unparticles might communicate Lorentz violations into visible sector propagators and couplings.
One could also entertain the possibility that the visible sector itself is dual to a time warp geometry. It's hard to see how to accommodate this idea without setting the scale $1/L$ where Green's functions are modified quite high. The idea that boost invariance is an emergent infrared symmetry is an old one, dating back at least to \cite{Chadha:1982qq}. But the experimental constraints on high-energy modifications of dispersion relations are pretty tight: see for example \cite{Jacobson:2001tu}. Nevertheless, it seems to me significant that one can start with a generally covariant theory, spontaneously break Lorentz invariance at a high scale and, through quite an explicit extra-dimensional construction, recover it in the infrared.
A notable feature of time warp geometries is that the speed of light, $\sqrt{h(r)}$, varies exponentially slowly both in the infrared and the ultraviolet, as a function of proper distance $r$ in the fifth dimension: see the asymptotic expressions~\eno{IRasymph} and~\eno{UVasymph}. This suggests the possibility of generating extremely small but non-zero differences between the maximum attainable velocities of different Standard Model particles by having them propagate on branes at significantly different locations deep in the infrared part of the geometry. If a particle comes from a string stretched between two branes, then its maximum attainable velocity is dictated by the brane that is deeper in the infrared, as explained in \cite{Peeters:2006iu,Herzog:2006gh,Liu:2006nn,Chernicoff:2006hi,Mateos:2007vn,Ejaz:2007hg,Argyres:2008eg,CasalderreySolana:2008ne} in a finite-temperature setting. But can one arrange for appropriate couplings among particles on branes separated in such a way?
The time warp geometry I constructed is just one example based on the simplest possible lagrangian. A diverse collection of other solutions, with remarkably variable $h_{\rm UV}$, can be found just by varying the parameters in this lagrangian. See for example \cite{Gubser:2008pf}, where, apparently, an exponentially large $h_{\rm UV}$ was achieved by varying $q$ over a modest range. Although the lagrangian I use is not taken directly from a string theory construction, the ingredients are generic enough that I certainly expect that it, or something with qualitatively similar solutions, can be embedded into string theory constructions. More generally, one could try to support a time warp geometry with different combinations of fields. Interesting field combinations include $B_p = dt \wedge \omega_{p-1}$, where $B_p$ is a $p$-form gauge potential and $\omega_{p-1}$ is a $(p-1)$-form on the extra dimensions; or perhaps some fermion bilinear like $\bar\psi \gamma^1 \gamma^2 \gamma^3 \psi$. In most circumstances, I expect that a violation of the null energy condition would be necessary in order to achieve a static geometry. Orientifold planes violate the null energy condition, but in a way that allows overall, boost-invariant warping of the geometry, not time warping. Recall that the constraints on $h$ came from considering the combination $R^0_0 - R^1_1$, where the $1$ direction is one of the usual dimensions of space; but orientifold planes extended over ${\bf R}^{3,1}$ are, by themselves, Lorentz-invariant, so they do not contribute to $R^0_0 - R^1_1$. Perhaps one could arrange for some Casimir effect to generate $w < -1$ on the Planck brane; or perhaps some higher derivative terms in the bulk would loosen the constraints.
I have left a number of issues unexplored. Here is a partial list:
\begin{itemize}
\item Although I have speculated that a stable configuration, including a violation of the null energy condition on the Planck brane, could be achieved, I have by no means demonstrated this explicitly. To demonstrate stability, one would presumably have to start by studying the coupled perturbations of all the bulk and brane fields---already a non-trivial problem.
\item The key feature of the bulk geometry is the $SO(3,1)$ symmetry that emerges in the infrared. One could plausibly arrange other emergent symmetries. For example, if a scalar runs in the infrared to an extremum of a potential where some gauge symmetry is restored, then this symmetry could be fairly described as emergent. Without some cutoff like the Planck brane, a gauge symmetry in the bulk corresponds to a global symmetry on the boundary. It might be instructive to see how (approximate) gauge invariance results from an appropriate cutoff.
\item Although I have shown that one can contrive to have a spin-two graviton which propagates at the infrared speed of light, I did not show that the low-energy physics includes standard Einstein gravity. I expect that perturbations of non-transverse-traceless components of the metric mix with other fields, and it is a matter of calculation to find out how they affect low-energy four-dimensional physics.
\item If there really is more-or-less standard gravity in the four-dimensional effective lagrangian, then black hole physics provides another interesting set of questions. Can one see part way inside a black hole with particles that propagate faster than the infrared speed of light?
\item Time warp geometries with more than one extra dimension might offer some novel possibilities. For example, one might try to evade the null energy constraints by considering an extra-dimensional geometry which is non-compact due to a finite-volume ``spike'' along which $h$ increases without ever reaching a maximum.
\item I considered only the simplest interface between time warps and unparticle physics. A host of related calculations could be revisited, either with some rough constraints in mind (like the ultraviolet ``un-shell'' condition $\omega_{\cal U}^2/h_{\rm UV} - \vec{k}_{\cal U}^2 \geq 0$), or with some more precise evaluations of Green's functions in hand.
\item The conditions \eno{fConditions} on $X$, $f(X)$, and $V_{\rm brane}(|\psi|)$ at the Planck brane involve exponentially small quantities and, most likely, some fine-tuning; also the choice of $\eta$ in section~\ref{GRAVITON} seems to be a fine-tuning. Fine-tuning might be hard to avoid altogether, but I would not be surprised if the example I studied explicitly is far from the most natural time warp construction.
\end{itemize}
I hope to report on some of these issues in the future.
\section*{Acknowledgments}
I thank J.~Khoury, A.~Nellore, S.~Pufu, F.~Rocha, L.~Senatore, P.~Steinhardt, and A.~Yarom for useful discussions. This work was supported in part by the Department of Energy under Grant No.\ DE-FG02-91ER40671 and by the NSF under award number PHY-0652782.
\clearpage
\section*{}
Functions entering into polarized cross sections:\footnote{The
coefficients corresponding to $\langle{\mathcal O}_8^{\psi'}
(^1S_0)\rangle$ production can be obtained from Ref.~\cite{ChoLeibov2}
and will not be repeated here.}
$ q \bar q \to \psi^{(\lambda)} g $:
\begin{mathletters}\label{qq}
\begin{eqnarray}
&&A_{qq}[^3S_1^{(8)}] =
\frac{4 \alpha_s^3 \pi^2}
{81 M^3 \hat s^2}\,
\frac{\left( 4 (\hat t^2 + \hat u^2) - \hat t \hat u \right)
(\hat s^2 - 2 \hat t \hat u + M^4)}
{\hat t \hat u (\hat s - M^2)^2}, \nonumber\\
&&B_{qq}[^3S_1^{(8)}] =
-\frac{16 \alpha_s^3 \pi^2}
{81 M \hat s^2}\,
\frac{\left( 4 (\hat t^2 + \hat u^2) - \hat t \hat u \right)}
{\hat t \hat u (\hat s - M^2)^2}, \\
&&C_{qq}[^3S_1^{(8)}] =
-\frac{16 \alpha_s^3 \pi^2}
{81 M \hat s^2}\,
\frac{\left( 4 (\hat t^2 + \hat u^2) - \hat t \hat u \right)}
{\hat t \hat u (\hat s - M^2)^2}, \nonumber\\
&&D_{qq}[^3S_1^{(8)}] = 0,\nonumber \\
&& \nonumber \\
&&A_{qq}[^3P_J^{(8)}] =
\frac{80 \alpha_s^3 \pi^2}
{27 M^3 \hat s^2}\,
\frac{\hat s^2 - 2 \hat t \hat u + 3 M^4}
{\hat s (\hat s - M^2)^2}, \nonumber \\
&&B_{qq}[^3P_J^{(8)}] =
-\frac{640 \alpha_s^3 \pi^2}
{27 M \hat s^2}\,
\frac{\hat t \hat u + \hat u M^2 - M^4}
{\hat s^2 (\hat s - M^2)^3}, \\
&&C_{qq}[^3P_J^{(8)}] =
-\frac{640 \alpha_s^3 \pi^2}
{27 M \hat s^2}\,
\frac{\hat t \hat u + \hat t M^2 - M^4}
{\hat s^2 (\hat s - M^2)^3}, \nonumber \\
&&D_{qq}[^3P_J^{(8)}] =
\frac{640 \alpha_s^3 \pi^2}
{27 M \hat s^2}\,
\frac{\hat s^2 + \hat s M^2 - 2 \hat t \hat u}
{\hat s^2 (\hat s - M^2)^3}, \nonumber
\end{eqnarray}
\end{mathletters}
$ g q \to \psi^{(\lambda)} q $:
\begin{mathletters}\label{gq}
\begin{eqnarray}
&&A_{gq}[^3S_1^{(8)}] =
-\frac{\alpha_s^3 \pi^2}
{54 M^3 \hat s^2}\,
\frac{\left( 4 (\hat s^2 + \hat u^2) - \hat s \hat u \right)
(\hat t^2 - 2 \hat s \hat u + M^4)}
{\hat s \hat u (\hat t - M^2)^2}, \nonumber\\
&&B_{gq}[^3S_1^{(8)}] =
\frac{2 \alpha_s^3 \pi^2}
{27 M \hat s^2}\,
\frac{\left( 4 (\hat s^2 + \hat u^2) - \hat s \hat u \right)}
{\hat s \hat u (\hat t - M^2)^2}, \\
&&C_{gq}[^3S_1^{(8)}] =
\frac{4 \alpha_s^3 \pi^2}
{27 M \hat s^2}\,
\frac{\left( 4 (\hat s^2 + \hat u^2) - \hat s \hat u \right)}
{\hat s \hat u (\hat t - M^2)^2}, \nonumber \\
&&D_{gq}[^3S_1^{(8)}] =
\frac{4 \alpha_s^3 \pi^2}
{27 M \hat s^2}\,
\frac{\left( 4 (\hat s^2 + \hat u^2) - \hat s \hat u \right)}
{\hat s \hat u (\hat t - M^2)^2}, \nonumber \\
&& \nonumber \\
&&A_{gq}[^3P_J^{(8)}] =
-\frac{10 \alpha_s^3 \pi^2}
{9 M^3 \hat s^2}\,
\frac{\hat t^2 - 2 \hat s \hat u + 3 M^4}
{\hat t (\hat t - M^2)^2}, \nonumber \\
&&B_{gq}[^3P_J^{(8)}] =
\frac{80 \alpha_s^3 \pi^2}
{9 M \hat s^2}\,
\frac{\hat s \hat u + \hat u M^2 - M^4}
{\hat t^2 (\hat t - M^2)^3}, \\
&&C_{gq}[^3P_J^{(8)}] =
\frac{80 \alpha_s^3 \pi^2}
{9 M \hat s^2}\,
\frac{\hat t + M^2}
{\hat t^2 (\hat t - M^2)^2}, \nonumber \\
&&D_{gq}[^3P_J^{(8)}] =
\frac{80 \alpha_s^3 \pi^2}
{9 M \hat s^2}\,
\frac{\hat t^2 - M^2(2 \hat s + \hat t)}
{\hat t^2 (\hat t - M^2)^3}, \nonumber
\end{eqnarray}
\end{mathletters}
$ g g \to \psi^{(\lambda)} g $:\footnote{We have introduced the
variable $\hat z = \sqrt{\hat t \hat u}$ to simplify some of the
coefficients.}
\begin{mathletters}\label{gg}
\begin{eqnarray}
&&A_{gg}[^3S_1^{(1)}] =
\frac{10 \alpha_s^3 \pi^2 M}
{81 \hat s^2}\,
\frac{\hat s^2(\hat s - M^2)^2 +
\hat t \hat u\ (\hat s \hat t + \hat t \hat u + \hat u \hat s - \hat s^2)}
{(\hat s - M^2)^2 (\hat t - M^2)^2 (\hat u - M^2)^2} \nonumber \\
&&B_{gg}[^3S_1^{(1)}] =
-\frac{20 \alpha_s^3 \pi^2 M^3}
{81 \hat s^2}\,
\frac{(\hat s^2 + \hat t^2)}
{(\hat s - M^2)^2 (\hat t - M^2)^2 (\hat u - M^2)^2}, \\
&&C_{gg}[^3S_1^{(1)}] =
-\frac{20\alpha_s^3 \pi^2 M^3}
{81 \hat s^2}\,
\frac{(\hat s^2 + \hat u^2)}
{(\hat s - M^2)^2 (\hat t - M^2)^2 (\hat u - M^2)^2}, \nonumber \\
&&D_{gg}[^3S_1^{(1)}] =
-\frac{40\alpha_s^3 \pi^2 M^3}
{81 \hat s^2}\,
\frac{\hat s^2}
{(\hat s - M^2)^2 (\hat t - M^2)^2 (\hat u - M^2)^2}, \nonumber \\
&& \nonumber \\
&&A_{gg}[^3S_1^{(8)}] =
\frac{\alpha_s^3 \pi^2}
{36 M^3 \hat s^2}\,
\frac{\left[ 27 (\hat s^2 - \hat t \hat u - M^2 \hat s) + 19 M^4 \right]}
{(\hat s - M^2)^2 (\hat t - M^2)^2 (\hat u - M^2)^2} \nonumber \\
&& \qquad\qquad\qquad \times
\left[
\hat s^2(\hat s - M^2)^2 +
\hat t \hat u\ (\hat s \hat t + \hat t \hat u + \hat u \hat s - \hat s^2)
\right], \nonumber \\
&&B_{gg}[^3S_1^{(8)}] =
-\frac{\alpha_s^3 \pi^2}
{18 M \hat s^2}\,
\frac{\left[ 27 (\hat s^2 - \hat t \hat u - M^2 \hat s) + 19 M^4 \right]
(\hat s^2 + \hat t^2)}
{(\hat s - M^2)^2 (\hat t - M^2)^2 (\hat u - M^2)^2}, \\
&&C_{gg}[^3S_1^{(8)}] =
-\frac{\alpha_s^3 \pi^2}
{18 M \hat s^2}\,
\frac{\left[ 27 (\hat s^2 - \hat t \hat u - M^2 \hat s) + 19 M^4 \right]
(\hat s^2 + \hat u^2)}
{(\hat s - M^2)^2 (\hat t - M^2)^2 (\hat u - M^2)^2}, \nonumber \\
&&D_{gg}[^3S_1^{(8)}] =
-\frac{\alpha_s^3 \pi^2}
{9 M \hat s^2}\,
\frac{\left[ 27 (\hat s^2 - \hat t \hat u - M^2 \hat s) + 19 M^4 \right] \hat s^2}
{(\hat s - M^2)^2 (\hat t - M^2)^2 (\hat u - M^2)^2}, \nonumber
\end{eqnarray}
\eject
\begin{eqnarray}
A_{gg}[^3P_J^{(8)}] &=&
\frac{5 \alpha_s^3 \pi^2}
{M^3 \hat s^2}
\biggl\{
M^2 \hat s^3 (\hat s - M^2)^3
(\hat s^4 - 2 M^2 \hat s^3 + 7 M^4 \hat s^2 - 6 M^6 \hat s + 3 M^8)
\nonumber \\
&& \qquad\quad\,
+ \hat s^2 \hat z^2 (\hat s - M^2) (\hat s^6 - 8 M^2 \hat s^5
+ 23 M^4 \hat s^4 - 50 M^6 \hat s^3 + 56 M^8 \hat s^2 \nonumber \\
&& \qquad\qquad\qquad\qquad\quad\quad
- 31 M^{10} \hat s + 6 M^{12} )
\nonumber \\
&& \qquad\quad\,
- \hat s \hat z^4 (4 \hat s^6 - 9 M^2 \hat s^5 + 31 M^4 \hat s^4
- 71 M^6 \hat s^3 + 77 M^8 \hat s^2 - 34 M^{10} \hat s + 6 M^{12})
\nonumber \\
&& \qquad\quad\,
+ \hat z^6 (6 \hat s^5 + 4 M^2 \hat s^4 + 20 M^4 \hat s^3 - 33 M^6 \hat s^2
+ 22 M^8 \hat s - 3 M^{10})
\nonumber \\
&& \qquad\quad\,
- 2 \hat z^8 (2 \hat s^3 + 2 M^2 \hat s^2 + 5 M^4 \hat s - 3 M^6)
\nonumber \\
&& \qquad\quad\,
+ \hat z^{10} (\hat s - M^2)
\biggr \}\bigg/
\left (\hat s \hat z^2 (\hat s - M^2)^3 (M^2 \hat s + \hat z^2)^3 \right ),
\nonumber \\
B_{gg}[^3P_J^{(8)}] &=&
-\frac{5 \alpha_s^3 \pi^2}
{M \hat s^2} \biggl\{
4 \hat u^5 (M^2 - \hat u)^7 \nonumber \\
&&\qquad\qquad\,
- \hat t \hat u^3 (M^2 - \hat u)^4
(M^8 - 7 M^6 \hat u + 42 M^4 \hat u^2 - 52 M^2 \hat u^3 + 24 \hat u^4)
\nonumber \\
&& \qquad\qquad\,
+ \hat t^2 \hat u^2 (M^2 - \hat u)^3
(2 M^{10} - M^8 \hat u - 39 M^6 \hat u^2 + 152 M^4 \hat u^3
- 166 M^2 \hat u^4
\nonumber \\
&& \qquad\qquad\qquad\qquad\qquad\quad\ \,
+ 68 \hat u^5)
\nonumber \\
&& \qquad\qquad\,
- \hat t^3 \hat u (M^2 - \hat u)^2
(M^{12} + 9 M^{10} \hat u + 2 M^8 \hat u^2 - 134 M^6 \hat u^3
+ 361 M^4 \hat u^4 \nonumber \\
&& \qquad\qquad\qquad\qquad\qquad\quad\ \,
- 339 M^2 \hat u^5 + 116 \hat u^6)
\nonumber \\
&& \qquad\qquad\,
+ \hat t^4 \hat u (M^2 - \hat u)
(11 M^{12} + 9 M^{10} \hat u + 16 M^8 \hat u^2 - 274 M^6 \hat u^3
+ 589 M^4 \hat u^4 \nonumber \\
&& \qquad\qquad\qquad\qquad\qquad\quad
- 471 M^2 \hat u^5 + 128 \hat u^6)
\nonumber \\
&& \qquad\qquad\,
+ \hat t^5 (M^2 - \hat u)
(4 M^{12} - 51 M^{10} \hat u + 2 M^8 \hat u^2 - 36 M^6 \hat u^3
+ 282 M^4 \hat u^4 \nonumber \\
&& \qquad\qquad\qquad\qquad\qquad\ \
- 329 M^2 \hat u^5 + 80 \hat u^6)
\nonumber \\
&& \qquad\qquad\,
- \hat t^6 (20 M^{12} - 129 M^{10} \hat u + 94 M^8 \hat u^2 - 12 M^6 \hat u^3
+ 150 M^4 \hat u^4 - 147 M^2 \hat u^5
\nonumber \\
&& \qquad\qquad\qquad
+ 8 \hat u^6)
\nonumber \\
&& \qquad\qquad\,
+ 8 \hat t^7 (5 M^{10} - 19 M^8 \hat u + 6 M^6 \hat u^2 + 6 M^4 \hat u^3
- 3 M^2 \hat u^4 + 5 \hat u^5)
\nonumber \\
&& \qquad\qquad\,
- 8 \hat t^8
(5 M^8 - 11 M^6 \hat u - 2 M^4 \hat u^2 + 7 M^2 \hat u^3 - 5 \hat u^4)
\nonumber \\
&& \qquad\qquad\,
+ 20 \hat t^9 (M^2 - \hat u)^2 (M^2 + \hat u)
\nonumber \\
&& \qquad\qquad\,
- 4 \hat t^{10} (M^4 - \hat u^2)
\biggr \}\bigg/
\left ( \hat s^2 \hat t^2 \hat u^2
(\hat s - M^2)^3 (\hat t - M^2)^3 (\hat u - M^2)^3
\right ),
\nonumber \\
C_{gg}[^3P_J^{(8)}] &=&
B_{gg}[^3P_J^{(8)}]|_{\hat t \leftrightarrow \hat u}, \\
D_{gg}[^3P_J^{(8)}] &=&
\frac{10 \alpha_s^3 \pi^2}
{M \hat s^2} \biggl\{
4 M^2 \hat s^6 (\hat s - M^2)^5
\nonumber \\
&& \qquad\quad\ \,
- M^2 \hat s^4 \hat z^2 (\hat s - M^2)^2 (22 \hat s^3 - 38 M^2 \hat s^2
+ 19 M^4 \hat s - 4 M^6)
\nonumber \\
&& \qquad\quad\ \,
- 2 \hat s^3 \hat z^4 (\hat s^5 - 22 M^2 \hat s^4 + 62 M^4 \hat s^3
- 62 M^6 \hat s^2 + 27 M^8 \hat s - 4 M^{10})
\nonumber \\
&& \qquad\quad\ \,
+ \hat s^2 \hat z^6 (2 \hat s^4 - 17 M^2 \hat s^3 + 66 M^4 \hat s^2
- 31 M^6 \hat s + 8 M^8 )
\nonumber \\
&& \qquad\quad\ \,
+ 2 \hat s \hat z^8 (3 \hat s^3 - 6 M^2 \hat s^2 - 3 M^4 \hat s + 2 M^6)
\nonumber \\
&& \qquad\quad\ \,
- 2 \hat s \hat z^{10} (5 \hat s - 3 M^2) + 4 \hat z^{12}
\biggr \} \bigg/
\left (\hat s^2 \hat z^4 (\hat s - M^2)^3 (M^2 \hat s + \hat z^2)^3
\right ).
\nonumber
\end{eqnarray}
\end{mathletters}
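The $\hat t \leftrightarrow \hat u$ crossing symmetry relating the $B$ and $C$ coefficients, stated above for $C_{gg}[^3P_J^{(8)}]$, also holds for the color-singlet $^3S_1$ pair and provides a cheap consistency check of any transcription. A minimal sketch (Python; the sample phase-space point is hypothetical, chosen only to satisfy $\hat s+\hat t+\hat u=M^2$) reads:
\begin{verbatim}
import math

def denom(s, t, u, M):
    return (s - M**2)**2 * (t - M**2)**2 * (u - M**2)**2

def B_gg(s, t, u, M, a):   # B coefficient, gg 3S1(1), from above
    return -20*a**3*math.pi**2*M**3/(81*s**2) * (s**2 + t**2) \
        / denom(s, t, u, M)

def C_gg(s, t, u, M, a):   # C coefficient, gg 3S1(1), from above
    return -20*a**3*math.pi**2*M**3/(81*s**2) * (s**2 + u**2) \
        / denom(s, t, u, M)

M, a, s, t = 3.1, 0.25, 30.0, -8.0   # hypothetical sample point
u = M**2 - s - t
assert math.isclose(C_gg(s, t, u, M, a), B_gg(s, u, t, M, a))
\end{verbatim}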
\section{Introduction}
\label{intro}
Finding new localized wave solutions, studying their dynamics in nonlocal integrable equations and obtaining new integrable equations from nonlocal reductions are active areas of research in the study of integrable systems. Recently, in \cite{1}, Ablowitz and Musslimani introduced a reverse space nonlocal nonlinear Schr\"{o}dinger (NNLS) equation to describe wave propagation in a nonlocal medium. A special property associated with this equation is the existence of $\cal{PT}$-symmetry when the self-induced potential obeys the $\cal{PT}$-symmetry condition \cite{2}. The presence of a nonlocal field as well as a $\cal{PT}$-symmetric complex potential makes the nonlocal equations a particularly interesting subject. Various recent studies have shown that the analysis of the NNLS equation and its variants is of both physical and mathematical interest \cite{3b}-\cite{8a}. Further, only a few investigations on the dynamics of solitons in the coupled version of the NNLS equation have been reported in the literature \cite{57}-\cite{8a}. In particular, a breathing one-soliton solution has been constructed for the following nonlocal Manakov equation through the inverse scattering transform \cite{8a},
\bea
iq_{j,t}(x,t)+q_{j,xx}(x,t)&+&2\sum_{l=1}^{2}q_l(x,t)q_l^{*}(-x,t)q_{j}(x,t)=0, ~j=1,2.
\label{1.1a}
\eea
However, the soliton develops a singularity in finite time at $x=0$. The above equation is a vector generalization of the reverse space NNLS equation. As pointed out above, Eq. (\ref{1.1a}) also possesses the self-induced potential $V(x,t)=2\sum_{l=1}^{2}q_l(x,t)q_l^{*}(-x,t)$ and $\cal{PT}$-symmetry, since the latter obeys the $\cal{PT}$-symmetric condition. In the first part of this accompanying work \cite{8b}, we have constructed the general soliton solution for Eq. (\ref{1.1a}) and its augmented version
\bea
iq_{j,t}^*(-x,t)-q_{j,xx}^*(-x,t)&-&2\sum_{l=1}^{2}q_l^{*}(-x,t)q_l(x,t)q_{j}^{*}(-x,t)=0, ~j=1,2.
\label{1.1b}
\eea
by bilinearizing them in a non-standard way. The obtained soliton solutions are in general non-singular \cite{8b}. In the present second part we study the dynamics of the two-soliton solution reported in the first part \cite{8b}. Before proceeding further, we first summarize the results presented in \cite{8b}.
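As a numerical aside, the $\cal{PT}$ symmetry of the self-induced potential, $V^{*}(-x,t)=V(x,t)$ with $V(x,t)=2\sum_{l=1}^{2}q_l(x,t)q_l^{*}(-x,t)$, is easy to verify on a symmetric grid; a minimal sketch (Python, with hypothetical field profiles standing in for actual solutions) reads:
\begin{verbatim}
import numpy as np

x = np.linspace(-10, 10, 401)      # symmetric grid: x -> -x is reversal
q1 = (1.0 + 0.5j)/np.cosh(x - 1.0) * np.exp(1j*0.3*x)   # test profiles
q2 = (0.5 - 1.0j)/np.cosh(x + 2.0) * np.exp(-1j*0.2*x)

# V(x) = 2 * sum_l q_l(x) q_l*(-x); q_l(-x) realized by index reversal
V = 2*(q1*np.conj(q1[::-1]) + q2*np.conj(q2[::-1]))
print(np.allclose(np.conj(V[::-1]), V))   # True: V*(-x) = V(x)
\end{verbatim}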
To construct the soliton solution of Eq. (\ref{1.1a}), we also augment Eq. (\ref{1.1b}) given above in the bilinear process. Since we have treated the fields $q_l(x,t)$ and $q_l^*(-x,t)$, $l=1,2$, appearing in the nonlocal nonlinearity as independent fields, we introduce two auxiliary functions, namely $s^{(1)}(-x,t)$ and $s^{(2)}(-x,t)$, in the non-standard bilinearization. By solving the resulting bilinear equations systematically, we first derived the non-degenerate one-soliton solution. From this soliton solution, we deduced the degenerate one-soliton solution and then the solutions which already exist in the literature. We also constructed the degenerate two-soliton solution for Eq. (\ref{1.1a}). As a continuation of the first part \cite{8b}, in the present work we study their dynamics.
We show that there exist three types of shape changing collisions in (\ref{1.1a}) for two specific parametric conditions, where the first shape changing collision is similar to the one that arises in the case of the local Manakov equation \cite{6a}.
The second type of shape changing collision is similar to the one that occurs in the mixed CNLS equation \cite{7a2}. Besides these two collision scenarios, we also observe a third type of collision which is a variant of the second type of shape changing collision and has not been observed in any local 2-CNLS equation. By carrying out the asymptotic analysis for the fields $q_j(x,t)$ and $q_j^*(-x,t)$ in a novel way, we deduce the conservation equation and the expressions for the phase shift and relative separation distances for all three collisions. More surprisingly, in the new types of shape changing collisions, the difference in quasi-intensity of the two modes of a soliton before collision is not equal to the difference in quasi-intensity of the same after collision. However, the total quasi-intensity of the solitons before collision is equal to the total quasi-intensity of the solitons after collision in both the modes. In the other type of collision the total quasi-intensity of the individual solitons, as well as the total quasi-intensity of the solitons before and after collision in both the modes, is conserved. Finally, by tuning the imaginary parts of the wave numbers we unearth a new type of localized resonant wave pattern that arises during the first and second types of collision processes. We also demonstrate the existence of bright soliton bound states in the nonlocal Manakov equation.
The outline of the paper is as follows. In section 2, by performing an asymptotic analysis, we investigate three types of shape changing collisions via intensity redistribution. Our aim here is to calculate the total energy of the solitons in both the modes, as well as phase shifts and relative separation distances. In Sec. 3, we explain the observation of localized resonant pattern creation during the collision of degenerate two solitons. In this section, we also demonstrate the occurrence of bound states that occur between the interaction of degenerate two solitons. We present our conclusions in Sec. 4.
\section{Asymptotic analysis of degenerate two bright nonlocal soliton solution: Shape changing or switching collision}
Differing from the local case, we perform the asymptotic analysis on the degenerate two-soliton solutions of Eq. (\ref{1.1a}) and Eq. (\ref{1.1b}). To perform the asymptotic analysis, we rewrite the two-soliton solution of (\ref{1.1a}) as a nonlinear superposition of two one-solitons. The resultant expressions are similar to the form given in Eqs. (15a)-(15b) in \cite{8b} but differ in amplitudes and phases.
As far as Eqs. (\ref{1.1a}) and (\ref{1.1b}) are concerned, one can identify different types of shape changing collisions. In particular, here we point out three interesting cases, which we designate as Type-I, Type-II and a variant of Type-II collisions.
\subsection{Type-I shape changing collision}
We visualize Type-I collision for the following choice, namely
\bea
\bar{k}_{1R}<0,~\bar{k}_{2R}>0,~\bar{k}_{1R}<\bar{k}_{2R},~\bar{k}_{1I},~\bar{k}_{2I}>0,~\bar{k}_{1I}<\bar{k}_{2I}\nonumber,\\
k_{1R}>0,~k_{2R}<0,~k_{1R}>k_{2R},~k_{1I},~k_{2I}>0,~k_{1I}<k_{2I} \label{2s.1a},
\eea
where $k_j=k_{jR}+ik_{jI}$, $\bar{k}_j=\bar{k}_{jR}+i\bar{k}_{jI}$, $j=1,2$. For this choice, the nonlocal solitons exhibit a shape changing collision similar to the one that occurs in the local Manakov equation \cite{6a}. We call this type of collision a local Manakov type collision or Type-I collision. For the parametric restrictions (\ref{2s.1a}) the solitons $S_1$ and $S_2$ are well separated initially. The variables $\xi_{jR}$ and $\bar{\xi}_{jR}$ in the two nonlocal solitons behave asymptotically as (i) $\xi_{1R}$, $\bar{\xi}_{1R}\sim 0$, $\xi_{2R}$, $\bar{\xi}_{2R}\rightarrow \pm\infty$ as $t\rightarrow\pm\infty$ (soliton 1 ($S_1$)) and (ii) $\xi_{2R}$, $\bar{\xi}_{2R}\sim 0$, $\xi_{1R}$, $\bar{\xi}_{1R}\rightarrow \mp\infty$ as $t\rightarrow\pm\infty$ (soliton 2 ($S_2$)). Here the variables $\xi_{jR}$ and $\bar{\xi}_{jR}$ are the real parts of the wave variables $\xi_j$ and $\bar{\xi}_{j}$ and are equal to $-k_{jI}(x+2k_{jR}t)$ and $-\bar{k}_{jI}(x-2\bar{k}_{jR}t)$, respectively.
\subsection{Type-II shape changing collision and its variant}
The nonlocal solitons exhibit another interesting collision scenario for the following parametric condition:
\bea
\bar{k}_{1R}>0,~\bar{k}_{2R}<0,~\bar{k}_{1R}>\bar{k}_{2R},~\bar{k}_{1I},~\bar{k}_{2I}>0,~\bar{k}_{1I}<\bar{k}_{2I}\nonumber\label{2s.1b},\\
k_{1R}<0,~k_{2R}>0,~k_{1R}<k_{2R},~k_{1I},~k_{2I}>0,~k_{1I}<k_{2I}.
\eea
For this choice, the nonlocal Manakov equation admits a collision similar to the one that occurs in the local mixed CNLS equation \cite{7a2}. To differentiate this second type of collision from the earlier one pointed out in the previous paragraph, we call this collision a nonlocal mixed CNLS like collision or Type-II collision. For the parametric restriction given in (\ref{2s.1b}) the wave variables $\xi_{jR}$ and $\bar{\xi}_{jR}$ behave asymptotically as (i) $\xi_{1R}$, $\bar{\xi}_{1R}\sim 0$, $\xi_{2R}$, $\bar{\xi}_{2R}\rightarrow \mp\infty$ as $t\rightarrow\pm\infty$ (soliton 1 ($S_1$)) and (ii) $\xi_{2R}$, $\bar{\xi}_{2R}\sim 0$, $\xi_{1R}$, $\bar{\xi}_{1R}\rightarrow \pm\infty$ as $t\rightarrow\pm\infty$ (soliton 2 ($S_2$)).
For the same parametric restriction chosen in (\ref{2s.1b}) we also come across another new type of shape changing collision, which we call a variant of the Type-II collision and which has not been observed in any local $2$-CNLS equation. Thus in the nonlocal Manakov equation we observe three types of shape changing collisions, whereas in the local Manakov equation we come across only one type of shape changing collision \cite{6a}. We perform the asymptotic analysis for all three types of shape changing collisions. Our results show that the asymptotic analyses carried out on the Type-II collision and its variant match each other.
\subsection{Asymptotic forms in Type-I shape changing collision}
Now we can check that the parametric choice given in (\ref{2s.1a}) for the Type-I collision leads to the following asymptotic forms.
\underline{(i) Before Collision:} ($t\rightarrow-\infty$)\\
\hspace{1.5cm}
In the limit $t\rightarrow-\infty$, the two-soliton solution reduces to the following two independent one-soliton solutions:\\
(a) \underline{Soliton 1:} ($\xi_{1R}$, $\bar{\xi}_{1R})\sim 0$, $\xi_{2R}$, $\bar{\xi}_{2R}\rightarrow -\infty$
\bes
\bea
q_j(x,t)=\frac{A_j^{1-}(k_1+\bar{k}_1)\e^{\frac{(\bar{\xi}_{1R}-\xi_{1R})}{2}+i\frac{(\bar{\xi}_{1I}-\xi_{1I})}{2}}}{2i[\cosh(\chi_{jR}^{1-})\cos(\chi_{jI}^{1-})+i\sinh(\chi_{jR}^{1-})\sin(\chi_{jI}^{1-})]},
\eea
where $A_j^{1-}=\frac{i}{(k_1+\bar{k}_1)}\e^{\rho_j^1-\frac{\theta_j^{1-}}{2}}$,~$\rho_j^1=\ln\al_1^{(j)}$,~$\theta_j^{1-}=\Del_1^{(j)}-\rho_j^1$,~
$\chi_{jR}^{1-}=\frac{\xi_{1R}+\bar{\xi}_{1R}+\theta_{jR}^{1-}}{2}$,\\$\chi_{jI}^{1-}=\frac{\xi_{1I}+\bar{\xi}_{1I}+\theta_{jI}^{1-}}{2}$,~$j=1,2$. Here, the subscript $j$ ($=1,2$) represents the modes $q_1$ and $q_2$, respectively, and the superscript denotes soliton 1 at $t\rightarrow-\infty$. The parameters $A_j^{1-}$ and $\theta_{jR}^{1-}$ denote the amplitude and phase of soliton 1 in both the components before collision. The corresponding asymptotic analysis of the fields $q_j^{*}(-x,t)$ yields the following expression,
\bea
q_j^{*}(-x,t)=\frac{\hat{A}_j^{1-}(k_1+\bar{k}_1)\e^{\frac{-(\bar{\xi}_{1R}-\xi_{1R})}{2}-i\frac{(\bar{\xi}_{1I}-\xi_{1I})}{2}}}{2i[\cosh(\hat{\chi}_{jR}^{1-})\cos(\hat{\chi}_{jI}^{1-})+i\sinh(\hat{\chi}_{jR}^{1-})\sin(\hat{\chi}_{jI}^{1-})]},\label{as}
\eea
where
$\hat{A}_j^{1-}=\frac{i}{(k_1+\bar{k}_1)}\e^{\hat{\rho}_j^1-\frac{\hat{\theta}_j^{1-}}{2}}$,~$\hat{\rho}_j^1=\ln\ba_1^{(j)}$,~$\hat{\theta}_j^{1-}=\ga_1^{(j)}-\hat{\rho}_j^1$,~$\hat{\chi}_{jR}^{1-}=\frac{\xi_{1R}+\bar{\xi}_{1R}+\hat{\theta}_{jR}^{1-}}{2}$,\\$\hat{\chi}_{jI}^{1-}=\frac{\xi_{1I}+\bar{\xi}_{1I}+\hat{\theta}_{jI}^{1-}}{2}$,~$j=1,2$.
In Eq. (\ref{as}), the `hat' denotes quantities associated with the field $q_j^{*}(-x,t)$.\\
(b) \underline{Soliton 2:} ($\xi_{2R}$, $\bar{\xi}_{2R})\sim 0$, $\xi_{1R}$, $\bar{\xi}_{1R}\rightarrow +\infty$
\bea
q_j(x,t)=\frac{A_j^{2-}(k_2+\bar{k}_2)\e^{\frac{(\bar{\xi}_{2R}-\xi_{2R})}{2}+i\frac{(\bar{\xi}_{2I}-\xi_{2I})}{2}}}{2i[\cosh(\chi_{jR}^{2-})\cos(\chi_{jI}^{2-})+i\sinh(\chi_{jR}^{2-})\sin(\chi_{jI}^{2-})]},
\eea
where
$A_j^{2-}=\frac{i}{(k_2+\bar{k}_2)}\e^{\Del_7^{(j)}-\del_{11}-\frac{\theta_j^{2-}}{2}}$,~$\theta_j^{2-}=\mu_4^{(j)}-\Del_7^{(j)}$,~$\chi_{jR}^{2-}=\frac{\xi_{2R}+\bar{\xi}_{2R}+\theta_{jR}^{2-}}{2}$,~$\chi_{jI}^{2-}=\frac{\xi_{2I}+\bar{\xi}_{2I}+\theta_{jI}^{2-}}{2}$,~$j=1,2$. In the above, the amplitude and phase of soliton 2 before collision are represented by $A_j^{2-}$ and $\theta_{jR}^{2-}$, respectively. Here, the superscript denotes soliton 2 before collision. In the same limit, the asymptotic expression for $q_j^{*}(-x,t)$ turns out to be
\bea
q_j^{*}(-x,t)=\frac{\hat{A}_j^{2-}(k_2+\bar{k}_2)\e^{\frac{-(\bar{\xi}_{2R}-\xi_{2R})}{2}-i\frac{(\bar{\xi}_{2I}-\xi_{2I})}{2}}}{2i[\cosh(\hat{\chi}_{jR}^{2-})\cos(\hat{\chi}_{jI}^{2-})+i\sinh(\hat{\chi}_{jR}^{2-})\sin(\hat{\chi}_{jI}^{2-})]},
\eea
where
$\hat{A}_j^{2-}=\frac{i}{(k_2+\bar{k}_2)}\e^{\ga_7^{(j)}-\del_{11}-\frac{\hat{\theta}_j^{2-}}{2}}$,~$\hat{\theta}_j^{2-}=\varphi_4^{(j)}-\ga_7^{(j)}$,~$\hat{\chi}_{jR}^{2-}=\frac{\xi_{2R}+\bar{\xi}_{2R}+\hat{\theta}_{jR}^{2-}}{2}$,~$\hat{\chi}_{jI}^{2-}=\frac{\xi_{2I}+\bar{\xi}_{2I}+\hat{\theta}_{jI}^{2-}}{2}$,~$j=1,2$.
\ees\\
\underline{(ii) After Collision:} ($t\rightarrow+\infty$)\\
\hspace{1.5cm}
In the limit $t\rightarrow+\infty$, the two-soliton solution reduces to the following two one-soliton solutions:\\
(a) \underline{Soliton 1:} ($\xi_{1R}$, $\bar{\xi}_{1R})\sim 0$, $\xi_{2R}$, $\bar{\xi}_{2R}\rightarrow +\infty$
\bes
\bea
q_j(x,t)=\frac{A_j^{1+}(k_1+\bar{k}_1)\e^{\frac{(\bar{\xi}_{1R}-\xi_{1R})}{2}+i\frac{(\bar{\xi}_{1I}-\xi_{1I})}{2}}}{2i[\cosh(\chi_{jR}^{1+})\cos(\chi_{jI}^{1+})+i\sinh(\chi_{jR}^{1+})\sin(\chi_{jI}^{1+})]},
\eea
where
$A_j^{1+}=\frac{i}{(k_1+\bar{k}_1)}\e^{\mu_1^{(j)}-\del_{14}-\frac{\theta_j^{1+}}{2}}$,~$\theta_j^{1+}=\mu_5^{(j)}-\mu_1^{(j)}$,~$\chi_{jR}^{1+}=\frac{\xi_{1R}+\bar{\xi}_{1R}+\theta_{jR}^{1+}}{2}$,~$\chi_{jI}^{1+}=\frac{\xi_{1I}+\bar{\xi}_{1I}+\theta_{jI}^{1+}}{2}$,~$j=1,2$. Here, the quantities $A_j^{1+}$ and $\theta_{jR}^{1+}$ define the amplitude and phase of the soliton 1 after collision. In the superscript of the above expressions $1+$ denotes the soliton 1 at $t\rightarrow+\infty$.
\bea
q_j^*(-x,t)=\frac{\hat{A}_j^{1+}(k_1+\bar{k}_1)\e^{\frac{-(\bar{\xi}_{1R}-\xi_{1R})}{2}-i\frac{(\bar{\xi}_{1I}-\xi_{1I})}{2}}}{2i[\cosh(\hat{\chi}_{jR}^{1+})\cos(\hat{\chi}_{jI}^{1+})+i\sinh(\hat{\chi}_{jR}^{1+})\sin(\hat{\chi}_{jI}^{1+})]},
\eea
where $\hat{A}_j^{1+}=\frac{i}{(k_1+\bar{k}_1)}\e^{\varphi_1^{(j)}-\del_{14}-\frac{\hat{\theta}_j^{1+}}{2}}$,~$\hat{\theta}_j^{1+}=\varphi_5^{(j)}-\varphi_1^{(j)}$,~$\hat{\chi}_{jR}^{1+}=\frac{\xi_{1R}+\bar{\xi}_{1R}+\hat{\theta}_{jR}^{1+}}{2}$,~$\hat{\chi}_{jI}^{1+}=\frac{\xi_{1I}+\bar{\xi}_{1I}+\hat{\theta}_{jI}^{1+}}{2}$,~$j=1,2$.
(b) \underline{Soliton 2:} ($\xi_{2R}$, $\bar{\xi}_{2R})\sim 0$, $\xi_{1R}$, $\bar{\xi}_{1R}\rightarrow -\infty$
\bea
q_j(x,t)=\frac{A_j^{2+}(k_2+\bar{k}_2)\e^{\frac{(\bar{\xi}_{2R}-\xi_{2R})}{2}+i\frac{(\bar{\xi}_{2I}-\xi_{2I})}{2}}}{2i[\cosh(\chi_{jR}^{2+})\cos(\chi_{jI}^{2+})+i\sinh(\chi_{jR}^{2+})\sin(\chi_{jI}^{2+})]},
\eea
where $A_j^{2+}=\frac{i}{(k_2+\bar{k}_2)}\e^{\rho_j^2-\frac{\theta_j^{2+}}{2}}$,~$\rho_j^2=\ln\al_2^{(j)}$,~$\theta_j^{2+}=\Del_4^{(j)}-\rho_j^2$,
$\chi_{jR}^{2+}=\frac{\xi_{2R}+\bar{\xi}_{2R}+\theta_{jR}^{2+}}{2}$,\\$\chi_{jI}^{2+}=\frac{\xi_{2I}+\bar{\xi}_{2I}+\theta_{jI}^{2+}}{2}$,~$j=1,2$. The amplitude and phase of soliton 2 in the nonlinear Schr\"{o}dinger field after collision are represented by $A_j^{2+}$ and $\theta_j^{2+}$, respectively.
\bea
q_j^*(-x,t)=\frac{\hat{A}_j^{2+}(k_2+\bar{k}_2)\e^{\frac{-(\bar{\xi}_{2R}-\xi_{2R})}{2}-i\frac{(\bar{\xi}_{2I}-\xi_{2I})}{2}}}{2i[\cosh(\hat{\chi}_{jR}^{2+})\cos(\hat{\chi}_{jI}^{2+})+i\sinh(\hat{\chi}_{jR}^{2+})\sin(\hat{\chi}_{jI}^{2+})]},
\eea \ees
where $\hat{A}_j^{2+}=\frac{i}{(k_2+\bar{k}_2)}\e^{\hat{\rho}_j^2-\frac{\hat{\theta}_j^{2+}}{2}}$,~$\hat{\rho}_j^2=\ln\ba_2^{(j)}$,~$\hat{\theta}_j^{2+}=\ga_4^{(j)}-\hat{\rho}_j^2$,~ $\hat{\chi}_{jR}^{2+}=\frac{\xi_{2R}+\bar{\xi}_{2R}+\hat{\theta}_{jR}^{2+}}{2}$,\\$\hat{\chi}_{jI}^{2+}=\frac{\xi_{2I}+\bar{\xi}_{2I}+\hat{\theta}_{jI}^{2+}}{2}$,~$j=1,2$.
The explicit forms of the various constants appearing in the above asymptotic expressions are given in Appendix A.
Similarly, we can calculate the asymptotic forms for the Type-II collision and its variant. However, to avoid excessive detail, we do not present their explicit forms but only demonstrate the typical cases numerically.
From the above asymptotic forms of the solitons $S_1$ and $S_2$, we conclude that a definite intensity redistribution has occurred among the modes of the nonlocal solitons, which can be identified from the amplitude changes of the solitons $S_1$ and $S_2$. During the collision process, the phases of the solitons have also changed. The conservation of the total energy (or intensity) of the solitons is yet another quantity which characterizes these three shape changing collisions. The energy conservation realized in the Type-II collision and its variant is entirely different from that of the local mixed CNLS equation. In order to exhibit the intensity redistribution among the modes of the nonlocal solitons from the asymptotic forms,
we calculate the explicit expressions for the amplitudes and phases of the solitons. The explicit expressions for all the quantities appearing in the asymptotic forms are given in Appendix A.
\subsection{Intensity redistribution}
In this subsection, first we demonstrate how the intensity redistribution and conservation of energy occur between the solitons in the Type-I, Type-II and variant of Type-II collisions. To demonstrate this, we begin our analysis with the asymptotic forms obtained in the previous sub-section.
\subsubsection{Intensity redistribution in Type-I collision}
In the Type-I collision, the analysis reveals that the amplitudes of the solitons $S_1$ and $S_2$ change from $\frac{(k_1+\bar{k}_1)A_j^{1-}}{2i}$ and $\frac{(k_2+\bar{k}_2)A_j^{2-}}{2i}$ to $\frac{(k_1+\bar{k}_1)A_j^{1+}}{2i}$ and $\frac{(k_2+\bar{k}_2)A_j^{2+}}{2i}$, $j=1,2$, respectively, due to the collision. Similarly, the amplitudes of the fields $q_j^{*}(-x,t)$, $j=1,2$, also change during the evolution process, from $\frac{(k_1+\bar{k}_1)\hat{A}_j^{1-}}{2i}$ and $\frac{(k_2+\bar{k}_2)\hat{A}_j^{2-}}{2i}$ to $\frac{(k_1+\bar{k}_1)\hat{A}_j^{1+}}{2i}$ and $\frac{(k_2+\bar{k}_2)\hat{A}_j^{2+}}{2i}$, $j=1,2$. Here, the $A_j^{i\pm}$ are the polarization vectors of the $i$th soliton. This is because of the energy sharing interaction that occurs between them.
In the Type-I collision, the quasi-intensity (quasi-power) of the soliton $S_1$ in the first mode $q_1(x,t)$ is shared with the soliton $S_1$ in the $q_2(x,t)$ mode, and the same kind of intensity sharing occurs between the modes of the soliton $S_2$ as well. This in turn confirms that the intensity redistribution occurs between the modes. Even though the intensity redistribution occurs among the solitons present in the modes $q_1(x,t)$ and $q_2(x,t)$, the total energy of the individual solitons is conserved, which can be confirmed from
\bes
\bea
A_1^{1-}\cdot\hat{A}_1^{1-}+A_2^{1-}\cdot\hat{A}_2^{1-}=A_1^{1+}\cdot\hat{A}_1^{1+}+A_2^{1+}\cdot\hat{A}_2^{1+}=1,\\
A_1^{2-}\cdot\hat{A}_1^{2-}+A_2^{2-}\cdot\hat{A}_2^{2-}=A_1^{2+}\cdot\hat{A}_1^{2+}+A_2^{2+}\cdot\hat{A}_2^{2+}=1,
\eea \ees
where the explicit forms of $A_j^{i\pm}$, $i,j=1,2$, are given in Appendix A. In the above, subscripts denote the modes while superscripts represent the soliton number. The above conservation form reveals the fact that the total quasi-intensity of each individual soliton is conserved.
In the local case, the total energy of each soliton is calculated by adding the absolute squares of the amplitudes of the individual modes of the solitons \cite{6a}. Even though the amplitudes of both the co-propagating solitons are altered after the interaction, the total energy does not vary and is conserved.
In addition to the above, the total energy of the solitons is also conserved. This can be verified by the following conservation form
\bea
&&A_1^{1-}\cdot\hat{A}_1^{1-}+A_2^{1-}\cdot\hat{A}_2^{1-}+A_1^{2-}\cdot\hat{A}_1^{2-}+A_2^{2-}\cdot\hat{A}_2^{2-}\nonumber\\
&=&A_1^{1+}\cdot\hat{A}_1^{1+}+A_2^{1+}\cdot\hat{A}_2^{1+}+A_1^{2+}\cdot\hat{A}_1^{2+}+A_2^{2+}\cdot\hat{A}_2^{2+}
=2.\label{int1}
\eea
Eq. (\ref{int1}) confirms that the total energy of the solitons $S_1$ and $S_2$ before collision is equal to the total energy of the solitons after collision.
The change in amplitude of each of the solitons in both the components can be evaluated by introducing the transition amplitude $T_j^{l}=\frac{A_j^{l+}}{A_j^{l-}}$, where $A_j^{l+}$ is the amplitude of the $l$-th soliton in the $j$-th component after collision and $A_j^{l-}$ is the amplitude of the same soliton in the corresponding mode before collision. To calculate the intensity exchange among the modes of the solitons we multiply the transition amplitude $T_j^l$ by the transition amplitude $\hat{T}_j^{l}=\frac{\hat{A}_j^{l+}}{\hat{A}_j^{l-}}$ of the field $q_j^{*}(-x,t)$, where $\hat{A}_j^{l+}$ and $\hat{A}_j^{l-}$ are the amplitudes of the solitons of the field $q_j^{*}(-x,t)$ in each mode after and before collision, respectively.
This definition differs from the local Manakov case, in which one multiplies $T_j^l$ by its own complex conjugate $T_j^{l*}$ to get $|T_j^{l}|^2$ \cite{7a}.
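A minimal numerical sketch of these conservation laws and transition products (Python; the amplitude arrays are hypothetical placeholders for the Appendix A expressions, indexed as [mode $j$, soliton $l$]) reads:
\begin{verbatim}
import numpy as np

Am  = np.array([[0.8+0.1j, 0.5-0.2j], [0.4+0.3j, 0.9+0.0j]])  # A_j^{l-}
Ap  = np.array([[0.7-0.2j, 0.6+0.1j], [0.5+0.2j, 0.8-0.1j]])  # A_j^{l+}
Ahm = np.array([[0.9-0.1j, 0.4+0.2j], [0.3-0.3j, 0.8+0.1j]])  # hat A_j^{l-}
Ahp = np.array([[0.8+0.2j, 0.5-0.1j], [0.4-0.2j, 0.7+0.1j]])  # hat A_j^{l+}

def total(A, Ahat):
    return (A*Ahat).sum()     # sum over both modes and both solitons

# for the amplitudes of a true solution both totals equal 2
print(total(Am, Ahm), total(Ap, Ahp))
T_That = (Ap/Am)*(Ahp/Ahm)    # intensity-exchange measure, entrywise
print(T_That)                 # all entries equal 1 for elastic collisions
\end{verbatim}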
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\linewidth]{2soliton_manakto_type_collision_q1.eps}~~~~\includegraphics[width=0.3\linewidth]{2soliton_manakto_type_collision_q2.eps}
\caption{Type-I shape changing collision in the CNNLS equation: (a) and (b) show the local Manakov type energy sharing collision, plotted for the parametric values $k_{1}=0.5+i0.8$, $\bar{k}_1=-0.5+i0.8$, $k_{2}=-2+i$, $\bar{k}_{2}=2+i$, $\al_{1}^{(1)}=1+i$, $\al_{2}^{(1)}=1.5+i$, $\al_{1}^{(2)}=0.5+i$, $\al_2^{(2)}=2+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{2}^{(1)}=-1.5-i$, $\ba_{1}^{(2)}=-0.5-i$ and $\ba_2^{(2)}=2-i$.}
\label{fig2}
\end{figure}
The intensity exchange between the solitons $S_1$ and $S_2$ due to Type-I collision is defined by
\ben
T_j^{l}\cdot\hat{T}_j^{l}=\frac{A_j^{l+}}{A_j^{l-}}\cdot\frac{\hat{A}_j^{l+}}{\hat{A}_j^{l-}},~~l,j=1,2 \label{i3},
\een
where all the quantities in the expression (\ref{i3}) are given in Appendix A. By suitably fixing the parameters, we can make the right hand side of the above expression equal to one. For this special parametric choice, we come across a pure elastic (shape preserving) collision. For all other parametric values there occurs a change in the amplitudes of the solitons, which leads to the shape changing collision. As in the local CNLS case, one can make one of the transition amplitudes vanish by suitably fixing the values of the parameters. In this case, the intensity of one of the solitons in one of the modes becomes zero.
\subsubsection{Intensity redistribution in Type-II collision and its variant}
In the Type-II collision process the amplitudes of the solitons also change in both the fields $q_j(x,t)$ and $q_j^*(-x,t)$. In this collision scenario, the quasi-intensity of soliton $S_2$ is enhanced in both the modes while the quasi-intensity of soliton $S_1$ is suppressed. This collision scenario is entirely different from the one that occurs in the Type-I collision. A remarkable feature of the Type-II collision is that the difference in quasi-intensity between the two modes of an individual soliton is not conserved, so that
\ben
{A}_1^{l-}\cdot\hat{A}_1^{l-}-A_2^{l-}\cdot\hat{A}_2^{l-}\neq A_1^{l+}\cdot \hat{A}_1^{l+}-A_2^{l+}\cdot\hat{A}_2^{l+},~l=1,2.
\label{c1}
\een
From Eq. (\ref{c1}), we infer that the difference in quasi-intensity of soliton $S_1$ between the two modes before collision is not equal to the same after collision. This is true for soliton $S_2$ as well. In the local mixed CNLS equation this energy difference turns out to be the same before and after collision \cite{7a2}. The free parameters that appear in the degenerate nonlocal two-soliton solution (28a)-(28c) given in \cite{8b} do allow a similar kind of shape changing collision to the one that happens in the case of the local mixed CNLS equation.
In the Type-II shape changing collision, the total intensity of the solitons $S_1$ and $S_2$ in both the components before collision is equal to the total intensity of the solitons $S_1$ and $S_2$ after collision, that is
\bea
&&A_1^{1-}\cdot\hat{A}_1^{1-}+A_2^{1-}\cdot\hat{A}_2^{1-}+A_1^{2-}\cdot\hat{A}_1^{2-}+A_2^{2-}\cdot\hat{A}_2^{2-}\nonumber\\
&&=A_1^{1+}\cdot\hat{A}_1^{1+}+A_2^{1+}\cdot\hat{A}_2^{1+}+A_1^{2+}\cdot\hat{A}_1^{2+}+A_2^{2+}\cdot\hat{A}_2^{2+}=2.\label{c2}
\eea
The intensity exchange between the solitons in the Type-II collision can also be calculated by defining the transition amplitudes, in this case $T_j^{l}\cdot\hat{T}_j^{l}=\frac{A_j^{l+}}{A_j^{l-}}\cdot\frac{\hat{A}_j^{l+}}{\hat{A}_j^{l-}}$, $l,j=1,2$. The special case in which the right hand side becomes one produces a shape preserving elastic collision.
In addition to the above Type-II collision, we also observe a variant of it. In this variant of the Type-II collision, the intensity of soliton $S_1$ is suppressed in both the modes, whereas the intensity of soliton $S_2$ is suppressed in the $q_1$ mode and enhanced in the $q_2$ mode. This collision scenario is entirely different from the previous collision processes and has not been encountered in any local $2$-CNLS equation. The variant of the Type-II collision also obeys the non-conservation and conservation relations (\ref{c1}) and (\ref{c2}), respectively.
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\linewidth]{2soliton_mixed_like_collision_q1.eps}~~~~\includegraphics[width=0.3\linewidth]{2soliton_mixed_like_collision_q2.eps}
\caption{Type-II shape changing collision: (a) and (b) show the mixed CNLS like shape changing collision, drawn for the parametric values $k_{1}=-0.5+i0.8$, $\bar{k}_1=0.5+i0.8$, $k_{2}=2+i$, $\bar{k}_{2}=-2+i$, $\al_{1}^{(1)}=1+i$, $\al_{2}^{(1)}=1.5+i$, $\al_{1}^{(2)}=0.5+i$, $\al_2^{(2)}=2+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{2}^{(1)}=-1.5-i$, $\ba_{1}^{(2)}=-0.5-i$, $\ba_2^{(2)}=2-i$.}
\label{fig3}
\end{figure}
We recall here that the general form of Eq. (\ref{1.1a}), with nonlinearity coefficients $\sigma_l$, corresponds to three different equations, namely the nonlocal versions of (i) the Manakov equation, (ii) the defocusing CNLS equation and (iii) the mixed CNLS equation, depending upon the signs of the $\sigma_l$. It is noted that the shape changing collision that occurs in the local Manakov system differs from the one that occurs in the local mixed CNLS system \cite{7a2}. For example, the shape changing collision that occurs in the mixed coupled NLS equation can be viewed as an amplification process in which a signal (say soliton 1) is amplified using a pump wave (say soliton 2) without any external amplification medium and without any creation of noise; such a process does not exist in the local Manakov case \cite{6a}. Very surprisingly, the nonlocal Manakov equation simultaneously admits both types of shape changing collisions mentioned above, that is, the one that occurs in the 2-CNLS equation and the other that occurs in the mixed coupled NLS equation. This type of collision has not been observed in any other $(1+1)$-dimensional nonlocal integrable system.
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\linewidth]{type-III_collision_q1.eps}~~~~\includegraphics[width=0.3\linewidth]{type-III_collision_q2.eps}
\caption{A variant of Type-II shape changing collision: (a) and (b) represent the intensity switching collision plotted for the parametric values $k_{1}=-1.5 + i0.8$, $\bar{k}_1=1 +i0.8$, $k_{2}=2+i$, $\bar{k}_{2}=-2+i$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1.5+i$, $\al_{1}^{(2)}=0.5+i$, $\al_2^{(2)}=2+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-0.5-i$, $\ba_{2}^{(1)}=-1.5-i$ and $\ba_2^{(2)}=2-i$.}
\label{fig4}
\end{figure}
In Figs. 2, 3 and 4, we have demonstrated the shape changing collisions that occur in (\ref{1.1a}) for $\sigma_l=+1$. The local Manakov type shape changing collision that occurs in the system (\ref{1.1a}) is illustrated in Figs. 2a-2b, whereas in Figs. 3a-3b the shape changing collision that occurs in (\ref{1.1a}), as in the case of the mixed CNLS equation, is shown. The variant of the Type-II intensity switching collision is illustrated in Figs. 4a-4b. These three figures also reveal that, besides the changes in amplitudes, changes also occur in the phase shifts and relative separation distances. In the following, we calculate these changes.
\subsection{Phase shifts}
During the collision process, another important quantity, namely the phase, is also altered. The phase essentially identifies the position of a soliton. The change in phase can be calculated from the expressions already obtained. The initial phase of the soliton $S_1$ ($=\frac{\theta_{jR}^{1-}}{2(k_{1I}+\bar{k}_{1I})}$) changes to $\frac{\theta_{jR}^{1+}}{2(k_{1I}+\bar{k}_{1I})}$. Similarly, the initial phase of the soliton $S_2$ ($=\frac{\theta_{jR}^{2-}}{2(k_{2I}+\bar{k}_{2I})}$) changes to $\frac{\theta_{jR}^{2+}}{2(k_{2I}+\bar{k}_{2I})}$. Therefore, the phase shift suffered by the soliton $S_1$ in both the modes during the collision is
\bes
\ben
\Phi_1=\frac{1}{2(k_{1I}+\bar{k}_{1I})}\ln\frac{|\rho_{12}\bar{\rho}_{12}(\varrho_1\Gamma_{11}\Gamma_{22}-\varrho_2\nu_1\nu_2-\varrho_3\Gamma_{12}\Gamma_{21})|}{|\Gamma_{11}\Gamma_{22}\kappa_{21}\kappa_{12}|}.
\een
Similarly the phase shift suffered by the soliton $S_2$ is
\ben
\Phi_2=\frac{1}{2(k_{2I}+\bar{k}_{2I})}\ln\frac{|\Gamma_{11}\Gamma_{22}\kappa_{21}\kappa_{12}|}{|\rho_{12}\bar{\rho}_{12}(\varrho_1\Gamma_{11}\Gamma_{22}-\varrho_2\nu_1\nu_2-\varrho_3\Gamma_{12}\Gamma_{21})|}.
\een
From the above two phase shift expressions of solitons $S_1$ and $S_2$, we find that
\ben
\Phi_2=-\frac{(k_{1I}+\bar{k}_{1I})}{(k_{2I}+\bar{k}_{2I})}\Phi_1 \label{i4}.
\een \ees
In the Type-II and its variant shape changing collisions, the phase shifts suffered by the solitons $S_1$ and $S_2$ are equal to the phase shifts suffered by the solitons $S_2$ and $S_1$ in the Type-I collision, respectively. In the Type-II collision and its variant, the phase shifts of solitons one and two are related by the same relation given in (\ref{i4}). From this relation, we infer that in both collision processes the soliton $S_2$ is phase shifted oppositely to the soliton $S_1$. We also find that the phase shifts depend not only on the amplitude parameters $\al_i^{(j)}$ and $\ba_i^{(j)}$, $i,j=1,2$, but also on the wave numbers $k_j$, $\bar{k}_j$. This is similar to the local Manakov equation and the mixed CNLS equation.
\subsection{Relative separation distances}
The changes which occur in the phases of both solitons in turn cause a change in their relative separation distance during both collision processes. The relative separation distance is nothing but the distance between the positions of the solitons after and before collision \cite{7a}. We denote these by $x_{12}^{\pm}$, where $x_{12}^{+}$ is the position of soliton $S_2$ minus the position of soliton $S_1$ after collision (at $t\rightarrow + \infty$) and $x_{12}^{-}$ is the position of soliton $S_2$ minus the position of soliton $S_1$ before collision (at $t\rightarrow - \infty$), that is
\ben
x_{12}^{+}=x_2^{+}-x_{1}^{+}, ~~x_{12}^{-}=x_2^{-}-x_{1}^{-},\nonumber
\een
where $x_1^{-}$ and $x_2^{-}$ denote the positions of $S_1$ and $S_2$ at $t\rightarrow -\infty$, respectively, whereas $x_1^{+}$ and $x_2^{+}$ are the positions of $S_1$ and $S_2$ at $t\rightarrow +\infty$, respectively. Their explicit forms can be obtained from the phase shifts of the solitons, which turn out to be
\bes
\bea
x_{12}^{-}&=&\frac{1}{2(k_{2I}+\bar{k}_{2I})}\ln\frac{|\rho_{12}\bar{\rho}_{12}(\varrho_1\Gamma_{11}\Gamma_{22}-\varrho_2\nu_1\nu_2-\varrho_3\Gamma_{12}\Gamma_{21})|}{|\Gamma_{11}\kappa_{21}\kappa_{12}\kappa_{22}|}\nonumber\\
&&-\frac{1}{2(k_{1I}+\bar{k}_{1I})}\ln\frac{|\Gamma_{11}|}{|\kappa_{11}|},\\
x_{12}^{+}&=&\frac{1}{2(k_{2I}+\bar{k}_{2I})}\ln\frac{|\Gamma_{22}|}{|\kappa_{22}|}\nonumber\\
&&-\frac{1}{2(k_{1I}+\bar{k}_{1I})}\ln\frac{|\rho_{12}\bar{\rho}_{12}(\varrho_1\Gamma_{11}\Gamma_{22}-\varrho_2\nu_1\nu_2-\varrho_3\Gamma_{12}\Gamma_{21})|}{|\Gamma_{22}\kappa_{21}\kappa_{12}\kappa_{11}|}.
\eea
The total change in relative separation distance is given by
\bea
&&\Del x_{12}=x_{12}^{+}-x_{12}^{-}\nonumber\\
&&\hspace{1.0cm}=\frac{(k_{1I}+\bar{k}_{1I}+k_{2I}+\bar{k}_{2I})}{2(k_{1I}+\bar{k}_{1I})(k_{2I}+\bar{k}_{2I})}\ln\frac{|\Gamma_{11}\Gamma_{22}\kappa_{21}\kappa_{12}|}{|\rho_{12}\bar{\rho}_{12}(\varrho_1\Gamma_{11}\Gamma_{22}-\varrho_2\nu_1\nu_2-\varrho_3\Gamma_{12}\Gamma_{21})|}\nonumber. \eea
The above expression can also be rewritten as
\bea
&&\Del x_{12}=-\bigg(1+\frac{k_{1I}+\bar{k}_{1I}}{k_{2I}+\bar{k}_{2I}}\bigg)\Phi_1. \label{rs1}
\eea \ees
We observe that the amplitude dependent relative separation distance found above turns out to be the same as in the case of the local Manakov equation and the mixed CNLS equation. Similarly, we obtain the same expression for the relative separation in all three types of shape changing collisions. It is clear from the expression (\ref{rs1}) that the relative separation distance depends non-trivially on all the complex parameters $k_j$, $\bar{k}_j$, $\al_i^{(j)}$ and $\ba_i^{(j)}$, $i,j=1,2$. Thus the amplitudes, phases and relative separation distances all change during the interaction between the two nonlocal bright solitons.
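Once the constants of Appendix A are evaluated, the phase shifts and the net change in relative separation follow directly from the formulas above; a minimal sketch (Python, with hypothetical placeholder values for $\Gamma_{ml}$, $\kappa_{ml}$, $\nu_m$, $\varrho_m$, $\rho_{12}$, $\bar{\rho}_{12}$ and the imaginary parts of the wave numbers) reads:
\begin{verbatim}
import numpy as np

G11, G22, G12, G21 = 1.2, 1.5, 0.3, 0.4     # hypothetical inputs
k12, k21 = 0.9, 1.1                          # kappa_{12}, kappa_{21}
r1, r2, r3 = 1.0, 0.2, 0.1                   # varrho_1..varrho_3
nu1, nu2 = 0.5, 0.6
rho12, rhobar12 = 0.8, 0.7
k1I, kb1I, k2I, kb2I = 0.8, 0.8, 1.0, 1.0

num = abs(rho12*rhobar12*(r1*G11*G22 - r2*nu1*nu2 - r3*G12*G21))
den = abs(G11*G22*k21*k12)
Phi1 = np.log(num/den)/(2*(k1I + kb1I))
Phi2 = -((k1I + kb1I)/(k2I + kb2I))*Phi1       # relation (i4)
dx12 = -(1 + (k1I + kb1I)/(k2I + kb2I))*Phi1   # relation (rs1)
print(Phi1, Phi2, dx12)
\end{verbatim}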
\subsection{Role of complex parameters in the collision process}
From the above results it is clear that all the complex parameters $k_j$, $\bar{k}_j$, $\al_i^{(j)}$ and $\ba_i^{(j)}$, $i,j=1,2$, play important roles in the soliton collision process. In the local Manakov case, the parameters $\al_i^{(j)}$ play a crucial role in the shape changing collision process, but the wave numbers do not \cite{6a}. Hereafter we focus only on the three types of shape changing collisions which occur in the nonlocal Manakov system.
In our investigations, we have identified three kinds of collisions. In the Type-I collision, the quasi-intensity of soliton $S_2$ is enhanced and the quasi-intensity of soliton $S_1$ is suppressed in the first mode $q_1(x,t)$. In order to obey the conservation law, the switching of quasi-intensity is reversed in the second mode $q_2(x,t)$; that is, the quasi-intensity of soliton $S_2$ is suppressed in the second mode whereas the quasi-intensity of soliton $S_1$ is enhanced there. In this case, the quasi-intensities of the solitons are either partially enhanced or partially suppressed. This is demonstrated in Figs. 2a and 2b for the parametric values $k_{1}=0.5+i0.8$, $\bar{k}_1=-0.5+i0.8$, $k_{2}=-2+i$, $\bar{k}_{2}=2+i$, $\al_{1}^{(1)}=1+i$, $\al_{2}^{(1)}=1.5+i$, $\al_{1}^{(2)}=0.5+i$, $\al_2^{(2)}=2+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{2}^{(1)}=-1.5-i$, $\ba_{1}^{(2)}=-0.5-i$ and $\ba_2^{(2)}=2-i$. The second type of shape changing collision is illustrated in Figs. 3a-3b for the parameters $k_{1}=-0.5+i0.8$, $\bar{k}_1=0.5+i0.8$, $k_{2}=2+i$, $\bar{k}_{2}=-2+i$, with all other parameters remaining the same as mentioned above. In Figs. 3a and 3b, we observe that the intensity of soliton $S_2$ is enhanced in the first mode and a similar change also occurs in the second mode. The intensity of soliton $S_1$ is suppressed in both the modes. By comparing the parameter values of the Type-I and Type-II collisions, we can easily identify that the only difference between them is the signs of the real parts of the wave numbers; all other parameters remain the same. A simple sign change in the real parts of these parameters causes a dramatic change in the collision dynamics, which in turn reveals the strong dependence of this process on the complex parameters. The third type of shape changing collision process is demonstrated in Figs. 4a and 4b for the parametric values $k_{1}=-1.5 + i0.8$, $\bar{k}_1=1 +i0.8$, $k_{2}=2+i$, $\bar{k}_{2}=-2+i$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1.5+i$, $\al_{1}^{(2)}=0.5+i$, $\al_2^{(2)}=2+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-0.5-i$, $\ba_{2}^{(1)}=-1.5-i$ and $\ba_2^{(2)}=2-i$. In these figures we again observe that the quasi-intensity of soliton $S_1$ is suppressed in both the modes. In contrast, the quasi-intensity of soliton $S_2$ is enhanced in the $q_2(x,t)$ mode and suppressed in the $q_1(x,t)$ mode. We note that all the parameter values are the same for the Type-II collision and its variant except for the values of $k_1$ and $\bar{k}_1$. In all the shape changing collision processes the quasi-intensity of the solitons in both the components gets either enhanced or suppressed. This is because of the energy exchange between the modes as well as between the solitons.
Finally, we note that in the second collision dynamics one may consider the soliton $S_2$ as the signal and the soliton $S_1$ as the pump wave (or energy reservoir). In this collision scenario the signal gets enhanced or amplified without the use of any external amplification medium and without any creation of noise. From these results we conclude that one can use a focusing type nonlocal medium simultaneously to amplify signals and to construct an optical computer equivalent to a Turing machine in a mathematical sense \cite{7d}. One need not resort separately to a mixed focusing--defocusing type nonlinear medium for amplifying the signals.
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\linewidth]{2soliton_manakto_type1_collision_resonant_q1.eps}~~~~\includegraphics[width=0.3\linewidth]{2soliton_manakto_type1_collision_resonant_q2.eps}
\caption{(a) and (b) denote the resonant pattern appearing in the Type-I collision, demonstrated for the values $k_{1}=1 + i0.3$, $\bar{k}_1=-1 +i0.3$, $k_{2}=-2+i0.2$, $\bar{k}_{2}=2+i0.2$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1.5+i$, $\al_{1}^{(2)}=0.5+i$, $\al_2^{(2)}=2+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-0.5-i$, $\ba_{2}^{(1)}=-1.5-i$ and $\ba_2^{(2)}=2-i$.}
\label{fig5}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\linewidth]{2soliton_mixed_like_collision_resonant_q1.eps}~~~~\includegraphics[width=0.3\linewidth]{2soliton_mixed_like_collision_resonant_q2.eps}
\caption{(a) and (b) represent the resonant pattern appearing in the Type-II collision, drawn for the parameter values $k_{1}=-1 + i0.3$, $\bar{k}_1=1 +i0.3$, $k_{2}=2+i0.2$, $\bar{k}_{2}=-2+i0.2$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1.5+i$, $\al_{1}^{(2)}=1+i$, $\al_2^{(2)}=2+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-1-i$, $\ba_{2}^{(1)}=1.5-i$ and $\ba_2^{(2)}=2-i$.}
\label{fig6}
\end{figure}
\section{Localized resonant patterns and bright soliton bound states}
A specific resonant behaviour has been observed during the interaction process in the long wave-short wave resonance interaction (LSRI) system \cite{r7}. The resonant behaviour occurs exactly in the place at which the phase shift occurs during the collision process. In other words, the localized resonant patterns appear in the interaction regime and can be considered as an intermediate state. The resonance behaviour was achieved by appropriately choosing the parameters. One can prolong this intermediate state by making the phase shift as large as possible. One can also observe this type of localized resonant behaviour in the present nonlocal Manakov system. The resonant behaviour appearing in the Type-I collision is demonstrated in Figs. 5(a)-5(b) for the parametric values $k_{1}=1 + i0.3$, $\bar{k}_1=-1 +i0.3$, $k_{2}=-2+i0.2$, $\bar{k}_{2}=2+i0.2$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1.5+i$, $\al_{1}^{(2)}=0.5+i$, $\al_2^{(2)}=2+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-0.5-i$, $\ba_{2}^{(1)}=-1.5-i$ and $\ba_2^{(2)}=2-i$. The analogous resonant behaviour in the Type-II shape changing collision can be visualized in Figs. 6(a)-6(b) for the parametric values $k_{1}=-1 + i0.3$, $\bar{k}_1=1 +i0.3$, $k_{2}=2+i0.2$, $\bar{k}_{2}=-2+i0.2$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1.5+i$, $\al_{1}^{(2)}=1+i$, $\al_2^{(2)}=2+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-1-i$, $\ba_{2}^{(1)}=1.5-i$ and $\ba_2^{(2)}=2-i$. The resonant behaviours shown in Figs. 5 and 6 are obtained by changing the imaginary parts of the wave numbers which we used to identify the shape changing collision process. From these figures, we observe that the localized resonant pattern arises in the phase shift regime. In addition, we point out that a change in the imaginary parts of the wave numbers leads to a switching of the Type-I collision into the Type-II collision. We also note that the resonant pattern appearing during the collision process is not the same as the one appearing in higher dimensional integrable systems \cite{r7}. In the local Manakov case one does not observe such behaviour; it occurs only as a manifestation of the nonlocal nature of the system. We point out that the same type of resonant pattern also appears in the variant of the Type-II shape changing collision.
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\linewidth]{2soliton_parallel_boundstate_cnnls_q1.eps}~~~~\includegraphics[width=0.3\linewidth]{2soliton_parallel_boundstate_cnnls_q2.eps}
\caption{(a) and (b) show the parallel propagation of the solitons occurring in a bound state for the parameter values $k_1=0.5+0.8i$, $\bar{k}_1=-0.5+0.8i$, $k_{2}=0.5+0.81i$, $\bar{k}_{2}=-0.5+0.81i$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1+i$, $\al_{1}^{(2)}=0.1+i$, $\al_2^{(2)}=3+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-0.1-i$, $\ba_{2}^{(1)}=-1-i$ and $\ba_2^{(2)}=3-i$.}
\label{fig7}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\linewidth]{2soliton_breathing_boundstate_cnnls_q1.eps}~~~~\includegraphics[width=0.3\linewidth]{2soliton_breathing_boundstate_cnnls_q2.eps}
\caption{(a) denotes a novel double hump breathing type bound state occurring in the first mode $q_1$ and (b) denotes a single hump breathing type bound state occurring in the mode $q_2$. Figures (a) and (b) are drawn for the values $k_1=1+0.8i$, $\bar{k}_1=-1+0.8i$, $k_{2}=1+2.3i$, $\bar{k}_{2}=-1+2.3i$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1+i0.3$, $\al_{1}^{(2)}=0.3+i$, $\al_2^{(2)}=3+i0.1$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-0.3-i$, $\ba_{2}^{(1)}=-1-i0.3$ and $\ba_2^{(2)}=3-i0.1$.}
\label{fig8}
\end{figure}
Multi-soliton bound states exist in both integrable and non-integrable systems. When the velocities of the solitons are equal, the solitons bind together to form bound states \cite{13}. The bound states can take different possible forms, such as parallel solitons, composite solitons and so on. A parallel soliton bound state exists if the central positions of the solitons are different, whereas composite solitons exist when the central positions of the solitons are the same \cite{13}. However, these bound state solitons are unstable against small perturbations and after some time they separate into individual solitons which propagate retaining their identities.
In the following, we illustrate the existence of bound states in the nonlocal Manakov system (\ref{1.1a}). As we have pointed out above, the soliton velocities are ruled by the parameters $k_{jR}$, $\bar{k}_{jR}$, $j=1,2$, and the central positions of the solitons are governed by $\frac{\Del_{1R}}{2(k_{1I}+\bar{k}_{1I})}$ and $\frac{\Del_{2R}}{2(k_{2I}+\bar{k}_{2I})}$, respectively. To explore the parallel soliton bound state, we fix the parametric values as $k_1=0.5+0.8i$, $\bar{k}_1=-0.5+0.8i$, $k_{2}=0.5+0.81i$, $\bar{k}_{2}=-0.5+0.81i$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1+i$, $\al_{1}^{(2)}=0.1+i$, $\al_2^{(2)}=3+i$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-0.1-i$, $\ba_{2}^{(1)}=-1-i$ and $\ba_2^{(2)}=3-i$. The outcome is displayed in Figs. 7a and 7b. This type of bound state also exhibits oscillatory behaviour, as shown in Figs. 8a and 8b for the parameter values $k_1=1+0.8i$, $\bar{k}_1=-1+0.8i$, $k_{2}=1+2.3i$, $\bar{k}_{2}=-1+2.3i$, $\al_{1}^{(1)}=1+i$, $\al_2^{(1)}=1+i0.3$, $\al_{1}^{(2)}=0.3+i$, $\al_2^{(2)}=3+i0.1$, $\ba_{1}^{(1)}=1-i$, $\ba_{1}^{(2)}=-0.3-i$, $\ba_{2}^{(1)}=-1-i0.3$ and $\ba_2^{(2)}=3-i0.1$. As evidenced from this figure, one may observe a novel double hump bound soliton state in the $q_1$ component and a single hump bound soliton state in the $q_2$ component. We point out that the parallel propagation of the bound state and the oscillation occurring in its amplitude are controlled by the imaginary parts of the wave numbers, which appear in the central positions of the solitons. We also note that the oscillations occurring in the amplitude of a bound state are usually controlled by the amplitude parameters in other local integrable systems.
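The equal-velocity condition underlying these bound states can be read off directly from the wave variables of Sec. 2.1; a minimal sketch (Python, using the Fig. 7 parameter values, with the velocity identification $v_j=-2k_{jR}$ inferred from $\xi_{jR}\sim 0$) is:
\begin{verbatim}
k1, kb1 = 0.5 + 0.8j, -0.5 + 0.8j    # Fig. 7 wave numbers
k2, kb2 = 0.5 + 0.81j, -0.5 + 0.81j

# xi_{jR} = -k_{jI}(x + 2 k_{jR} t) ~ 0      =>  x = -2 k_{jR} t
# xibar_{jR} = -kbar_{jI}(x - 2 kbar_{jR} t) =>  x =  2 kbar_{jR} t
print(-2*k1.real, 2*kb1.real, -2*k2.real, 2*kb2.real)  # all equal: bound
\end{verbatim}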
\section{Conclusion}
In this part of the work, we have brought out the nature of degenerate soliton collisions in the nonlocal Manakov system. In particular, we have identified three different types of shape changing collisions for two different parametric conditions. Interestingly, one of them does not exist in the case of the local Manakov equation. We have also explained the changes which occur in the quasi-intensity, phase shift and relative separation distance during these energy sharing collisions. We have noticed that in the Type-II and its variant shape changing collisions the difference in the energy of a soliton between the two modes is not preserved during the collision process, whereas the total energy of a soliton in the two modes is conserved in the Type-I shape changing collision. We have also demonstrated the occurrence of localized resonant patterns and bound state solutions in the CNNLS equation. Our study gives a better understanding of nonlocal soliton collisions in $\cal{PT}$-symmetric arrays of waveguide systems where the medium exhibits nonlocal nonlinearity. Next we plan to investigate the non-degenerate soliton solutions and their interaction dynamics in some detail.
{\bf \section*{Acknowledgements}}
The work of MS forms part of a research project sponsored by DST-SERB, Government of India, under the Grant No. EMR/2016/001818. The research work of ML is supported by a SERB Distinguished Fellowship and also forms part of the DAE-NBHM research project (2/48 (5)/2015/NBHM (R.P.)/R\&D-II/14127).
{\bf \section*{Appendix}}
\section*{A. Amplitude and phase forms obtained from asymptotic analysis}
The explicit expressions for the amplitudes and phases of solitons 1 and 2 before and after collision ($t\rightarrow\pm\infty$), obtained from the asymptotic analysis of the Type-I collision, are given below:
The amplitude and phase of soliton 1 ($S_1$) before collision are
\bes\bea
A_j^{1-}=\frac{\al_1^{(j)}}{\Gamma_{11}^{1/2}},~\hat{A}_j^{1-}=\frac{\ba_1^{(j)}}{\Gamma_{11}^{1/2}},~\theta_{jR}^{1-}=\ln\frac{|\Gamma_{11}|}{|\kappa_{11}|}\label{app1}.
\eea
The amplitude and phase of soliton 2 ($S_2$) before collision are
\bea
&&\hspace{-1cm}A_j^{2-}=\frac{(\kappa_{21}\bar{\varrho}_{12})^{1/2}\bigg((-1)^{j}k_1\ba_1^{(3-j)}\nu_1-\bar{k}_2\al_2^{(j)}\Gamma_{11}+\bar{k}_1\al_1^{(j)}\Gamma_{21}\bigg)}{(\Gamma_{11}\kappa_{12}\varrho_{12})^{1/2}\bigg(\varrho_{1}\Gamma_{11}\Gamma_{22}-\varrho_{2}\nu_1\nu_2-\varrho_{3}\Gamma_{21}\Gamma_{12}\bigg)^{1/2}},\\
&&\hspace{-1cm}\hat{A}_j^{2-}=\frac{(\kappa_{12}\varrho_{12})^{1/2}\bigg(-k_2\ba_2^{(j)}\Gamma_{11}+k_1\ba_1^{(j)}\Gamma_{12}+(-1)^{(3-j)}\bar{k}_1\al_1^{(3-j)}\nu_2\bigg)}{(\Gamma_{11}\kappa_{21}\bar{\varrho}_{12})^{1/2}\bigg(\varrho_{1}\Gamma_{11}\Gamma_{22}-\varrho_{2}\nu_1\nu_2-\varrho_{3}\Gamma_{21}\Gamma_{12}\bigg)^{1/2}},\\
&&\hspace{-1cm}\theta_{jR}^{2-}=\ln\frac{|\bar{\varrho}_{12}\varrho_{12}(\varrho_{1}\Gamma_{11}\Gamma_{22}-\varrho_{2}\nu_1\nu_2-\varrho_{3}\Gamma_{21}\Gamma_{12})|}{|\Gamma_{11}\kappa_{12}\kappa_{21}\kappa_{22}|}\label{app2}.
\eea
The amplitude and phase of soliton 1 ($S_1$) after collision are
\bea
&&\hspace{-1cm}A_j^{1+}=\frac{(\kappa_{12}\bar{\varrho}_{12})^{1/2}\bigg((-1)^{j}k_2\ba_2^{(3-j)}\nu_1-\bar{k}_2\al_2^{(j)}\Gamma_{12}+\bar{k}_1\al_1^{(j)}\Gamma_{22}\bigg)}{(\Gamma_{22}\kappa_{21}\varrho_{12})^{1/2}\bigg(\varrho_{1}\Gamma_{11}\Gamma_{22}-\varrho_{2}\nu_1\nu_2-\varrho_{3}\Gamma_{21}\Gamma_{12}\bigg)^{1/2}},\\
&&\hspace{-1cm}\hat{A}_j^{1+}=\frac{(\kappa_{21}\varrho_{12})^{1/2}\bigg(-k_2\ba_2^{(j)}\Gamma_{21}+k_1\ba_1^{(j)}\Gamma_{22}+(-1)^{(3-j)}\bar{k}_2\al_2^{(3-j)}\nu_2\bigg)}{(\Gamma_{22}\kappa_{12}\bar{\varrho}_{12})^{1/2}\bigg(\varrho_{1}\Gamma_{11}\Gamma_{22}-\varrho_{2}\nu_1\nu_2-\varrho_{3}\Gamma_{21}\Gamma_{12}\bigg)^{1/2}},\\
&&\hspace{-1cm}\theta_{jR}^{1+}=\ln\frac{|\bar{\varrho}_{12}\varrho_{12}(\varrho_{1}\Gamma_{11}\Gamma_{22}-\varrho_{2}\nu_1\nu_2-\varrho_{3}\Gamma_{21}\Gamma_{12})|}{|\Gamma_{22}\kappa_{11}\kappa_{21}\kappa_{12}|}\label{app3}.
\eea
The amplitude and phase of soliton 2 ($S_2$) after collision are
\bea
&&A_j^{2+}=\frac{\al_2^{(j)}}{\Gamma_{22}^{1/2}},~\hat{A}_j^{2+}=\frac{\ba_2^{(j)}}{\Gamma_{22}^{1/2}},
~\theta_{jR}^{2+}=\ln\frac{|\Gamma_{22}|}{|\kappa_{22}|}\label{app4}.
\eea
\ees
To verify the non-conservation and conservation relations (\ref{c1}) and (\ref{c2}) for the Type-II shape changing collision and its variant, one has to use the expressions for the amplitudes and phases of the solitons before collision given in Eqs. (\ref{app1})-(\ref{app2}) to calculate the quantities $A_j^{l+}$ and $\hat{A}_j^{l+}$. Similarly, to calculate the quantities $A_j^{l-}$ and $\hat{A}_j^{l-}$ for the Type-II shape changing collision and its variant, one has to use the expressions for the amplitudes and phases of the solitons after collision given in Eqs. (\ref{app3})-(\ref{app4}).
{\bf \section*{Conflicts of interest}}
The authors declare that they have no conflict of interest.
\section{Introduction}\label{intro}
The realm of strong-field physics has become a focal point of interest in the atomic, molecular, and optical physics community over the last two decades.
This was particularly supported by the rapid development of lasers producing high intensities ($10^{14}-10^{15}$~W/cm$^2$) that generate forces comparable to intra-atomic forces, and ultrashort pulse durations of the order of femtoseconds ($10^{-15}$~s) down to attoseconds ($10^{-18}$~s) \cite{gil,kra}.
The time-resolved investigation of electron dynamics in atoms and molecules has come into reach because the typical time scales involved in electronic excitations (between 50~as and 50~fs) can be accessed.
The process of tunneling ionization has been studied extensively. Following the calculation of the tunneling ionization rate
for the ground state of hydrogen in a static electric field by Landau \cite{lan}, Keldysh extended the theory to
ionization by strong electromagnetic fields \cite{kel}. Later, Ammosov, Delone, and Krainov (``ADK'') generalized
the results to slowly varying fields by introducing the quasistatic approximation and defining the tunneling ionization rate
by averaging over one optical period (``ADK theory'') \cite{adk}. A self-contained derivation of the tunneling rate
in this approximation is presented in Ref.~\onlinecite{bis}.
In the original derivation \cite{kel} Keldysh introduced the parameter
$
\gamma= \sqrt{I_p/(2 U_p)},
$
which is now known as the Keldysh parameter \cite{iva}. Here, $I_p$ is the ionization potential and $U_p$ is the ponderomotive potential,
which corresponds to the average energy of a free electron oscillating in the electric field.
According to Keldysh, $\gamma$ divides the phenomenon of strong-field ionization into two regimes: for $\gamma\ll 1$
ionization is governed by tunneling ionization \cite{lan}, while for $\gamma \gg 1$ the process is governed by perturbative multiphoton ionization \cite{mai}.
In the range of $\gamma\approx 1$ both effects compete with each other \cite{pop, yam}. In later papers the Keldysh parameter has been connected to the notion of adiabaticity of the ionization process \cite{mev, bec}. Far into the tunneling regime, the atomic response is considered to be purely adiabatic.
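For orientation, $\gamma$ is easily evaluated from the laser parameters; a minimal sketch (Python, assuming the standard atomic-unit conversions quoted in the comments, with $U_p=E_0^2/(4\omega^2)$ in atomic units) reads:
\begin{verbatim}
import math

def keldysh(I_wcm2, lambda_nm, Ip_au):
    E0 = math.sqrt(I_wcm2 / 3.51e16)  # peak field (a.u.); 1 a.u. of
                                      # intensity ~ 3.51e16 W/cm^2
    omega = 45.563 / lambda_nm        # photon energy (a.u.)
    Up = E0**2 / (4 * omega**2)       # ponderomotive potential (a.u.)
    return math.sqrt(Ip_au / (2 * Up))

# hydrogen (I_p = 0.5 a.u.) at 1e14 W/cm^2 and 800 nm: gamma close to 1
print(keldysh(1e14, 800.0, 0.5))
\end{verbatim}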
Adiabatic means in this context that the ionization rate at a given time is solely defined by the instantaneous electric field.
More generally, when a time-dependent process is adiabatic, the state of the system at any given time is always an eigenstate of the instantaneous Hamiltonian, which depends on one or more external parameters (like the electric field).
Consequently, the energy eigenstates and their corresponding eigenenergies become parametrized and lead to energy curves (or energy hyperplanes depending on the number of external parameters).
Nonadiabatic dynamics occur when transitions between adiabatic curves start to appear.
This is, particularly, the case when two adiabatic curves are energetically close to each other and the external parameters are changed relatively fast such that the system has no time to ``instantaneously'' respond to the change.
As a result, the system is not in one defined adiabatic state anymore but rather in a superposition of several adiabatic eigenstates.
In various fields of physics and chemistry the adiabatic representation has been used to study adiabatic and nonadiabatic effects.
Its application includes fields like Rydberg atoms \cite{rub, cla}, molecular dynamics \cite{sti, tul, sch}, atomic and molecular collisions \cite{pec, smi, mil}, and ultracold gases and trapped ions \cite{blo, due, ste}.
An important aspect in the adiabatic representation is the discreteness of the eigenstates, which is essential to obtain a discrete set of energy curves.
In the case of strong-field ionization, however, the instantaneous eigenstates of the Hamiltonian form a continuum.
Therefore, the identification of a nonadiabatic effect happens rather indirectly \cite{pot, zhe}: either the spectrum of the photoelectron
after the pulse or the field dependence of the ionization rate is analyzed.
Various results on nonadiabatic behavior in strong-field ionization have been presented in the literature \cite{arm, wan, yud}
and there are many different usages of the terms ``adiabatic'' and ``nonadiabatic''.
By introducing an analytic continuation in the complex plane the instantaneous Hamiltonian becomes non-Hermitian and tunneling states appear as {\em discrete} eigenstates.
These discrete states can now be used to apply the adiabatic representation to strong-field ionization dynamics.
In this paper we strictly apply the adiabatic representation to strong-field ionization and find that in the tunneling regime the ionization dynamics is defined by a {\em diabatic} rather than an adiabatic behavior. Diabatic dynamics means that the response of the system follows one specific diabatic state.
Here, the diabatic states are defined by the overlap with the field-free eigenstates.
In this formulation we find that the ionization dynamics can be divided into two regimes: with increasing frequency, we observe a transition from the {\em diabatic} to the {\em nondiabatic} regime.
In particular, we study the few-cycle limit and find a non-constant population as a function of the optical frequency which has been interpreted in the literature as a sign of a nonadiabatic process \cite{bec, zhe}.
We show for a few-cycle pulse with a Keldysh parameter $\gamma\ll 1$ that this effect rather represents a dependence on the form of the pulse and can be fully explained by a diabatic picture depending on a single diabatic state connected to the field-free ground state.
The main text is divided into three sections:
\begin{itemize}
\item Section~\ref{adiabatic} is devoted to the general theory of the equations of motion in the adiabatic basis and introduces also diabatic states.
\item Section~\ref{onephot} presents one-photon absorption as an extreme case of a nonadiabatic/nondiabatic ionization process.
\item In Sec.~\ref{strong}, the central section of this paper, we develop the concept of diabaticity in strong-field ionization.
We examine the transition from the diabatic to the nondiabatic ionization regime.
\end{itemize}
Atomic units are employed throughout unless otherwise indicated.
\section{Adiabatic eigenstates}\label{adiabatic}
Whenever a system is given time to adjust to the parameters on which it depends, the response is called adiabatic.
In the following, we derive the quantum-mechanical equations of motion in the adiabatic basis, which is given by the states that are eigensolutions
to the Hamiltonian of the system for a set of instantaneous parameters.
Let us study a system where the Hamiltonian depends on an external time-dependent parameter $ \epsilon(t)$.
The time-dependent Schr\"odinger equation has the form
\begin{equation}
i\partial_t | \Psi (t)\rangle = \hat{H}(t) | \Psi(t) \rangle = \left\{ \hat{H}_0 + \hat{U}[\epsilon(t)]\right\} | \Psi(t) \rangle. \label{tdse}
\end{equation}
$\hat{H}_0$ describes the atomic Hamiltonian, whereas $\hat{U}$ includes all external potentials and is dependent on the parameter $\epsilon(t)$.
At a given time $t$, the instantaneous eigenstates, which constitute the adiabatic basis, are defined by\footnote{The time dependence is implicit via the parameter $\epsilon(t)$.}
\begin{equation}
\left[ \hat{H}_0 + \hat{U}(t) \right] |\Psi _n(t)\rangle=E_n (t)|\Psi _n(t)\rangle. \label{adbas}
\end{equation}
To analyze adiabatic and nonadiabatic effects we expand the electronic wavefunction in terms of the adiabatic eigenstates, $ | \Psi (t)\rangle = \sum_n \alpha_n(t) |\Psi _n(t)\rangle$.
Upon inserting this expression into Eq.~(\ref{tdse}) and projecting onto the eigenstate
$|\Psi_m(t)\rangle$, the equation of motion for the coefficient $\alpha_m(t)$ reads
\begin{equation}
i \dot {\alpha} _m(t)+i \sum_n\alpha_n(t) \langle \Psi_m(t)|\partial_t|\Psi_n(t)\rangle
=\alpha_m(t) E_m(t). \label{eom}
\end{equation}
The off-diagonal matrix elements $\langle \Psi_m(t)|\partial_t|\Psi_n(t)\rangle$ introduce couplings between different adiabatic eigenstates, thus making the dynamics nonadiabatic~\cite{zen}.
In the adiabatic approximation, where these couplings are considered to be very small, Eq. (\ref{eom}) becomes
\begin{equation}
i \dot {\alpha} _m(t)+i \alpha_m(t) \langle \Psi_m(t)|\dot{\Psi}_m(t)\rangle=\alpha_m(t) E_m(t),
\end{equation}
which is solved with the initial condition $\alpha_m(0) = 1$ by
\begin{equation}
\alpha_m(t) =\exp\bigg[-i\int_0^t dt' E_m(t')\bigg]\exp[i \gamma_m(t)],\label{coeff}
\end{equation}
where $\gamma_m(t)=i\int_0^t dt'\langle \Psi_m(t')|\dot{\Psi}_m(t')\rangle$, so that the system evolves in a specific adiabatic eigenstate with a phase. If, on the other hand,
$\langle \Psi_m(t)|\dot{\Psi}_n(t)\rangle $ cannot be neglected, the whole sum in Eq.~(\ref{eom})
has to be considered, so that different adiabatic eigenstates get coupled and nonadiabatic motion emerges.
We can use $\partial_t= \frac{\partial \epsilon}{\partial t}\partial_{\epsilon}$
and express the off-diagonal coupling elements also in terms of the change in $\epsilon$:
\begin{equation}
\langle \Psi_m|\dot{\Psi}_n\rangle =\langle \Psi_m|\partial_{\epsilon}{\Psi}_n\rangle
\frac{\partial \epsilon}{\partial t}.\label{nonadcoup}
\end{equation}
Considering a two-level system with an external perturbation proportional to $\epsilon$, the Hamiltonian of the system takes the form
\begin{equation}
\hat{H} =
\underbrace{
\frac{1}{2}
\begin{pmatrix}
-1 & 0 \\
0 & 1
\end{pmatrix}+
\frac{\Delta}{2}
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}}_{\hat{H}_0}+
\underbrace{
\frac{\epsilon}{2}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}}_{\hat{U}}
,
\end{equation}
where $\Delta$ is an internal coupling parameter. In Fig.~\ref{fig0} the energy curves of the two adiabatic states $\ket{\Psi_1}$ and $\ket{\Psi_2}$ of this system are shown as a function of the external parameter $\epsilon$, assuming $\Delta=1$.
The internal coupling between the diabatic states $\ket{1}= (1,0)^T$ and $\ket{2}=(0,1)^T$ results in the effect that the two adiabatic curves do not cross.
This phenomenon is known as an ``avoided crossing''. We see that $\Delta$ is the energy splitting between $\ket{\Psi_1}$ and $\ket{\Psi_2}$ at the degeneracy point of the states $\ket{1}$ and $\ket{2}$.
If the parameter $\epsilon$ is changed sufficiently slowly, a system prepared in the adiabatic state $\ket{\Psi_i}$ at $\epsilon\ll1$ (or $\epsilon\gg 1$) will remain in the state $\ket{\Psi_i}$.
Note that in the vicinity of $\epsilon=1$ the character of the adiabatic states changes from $\ket{1}$ to $\ket{2}$ and vice versa.
If $\epsilon$ changes rapidly in the vicinity of $\epsilon=1$, the system has no time to change the character of its state; it makes a transition from one adiabatic state to the other and follows the diabatic states $\ket{1}$ and $\ket{2}$, respectively.
These jumps between adiabatic curves make the resulting dynamics nonadiabatic.
For a given value of the external parameter, we can obtain the diabatic states also by choosing the adiabatic eigenstates with the maximal overlap with the free states ($\epsilon=0$). For $\epsilon<1$, the diabatic state $\ket{1}$ has the maximal overlap with the adiabatic state $\ket{\Psi_1}$, while for $\epsilon>1$ the overlap of state $\ket{1}$ with the adiabatic state $\ket{\Psi_2}$ is maximal, and vice versa for the diabatic state $\ket{2}$. Asymptotically, the states $\ket{1}$ and $\ket{2}$ correspond to the states $\ket{\Psi_1}$ and $\ket{\Psi_2}$ before the avoided crossing, and vice versa after the crossing. Near the crossing an interpolation is performed in order to obtain a continuous and smooth state.
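To make the avoided-crossing picture concrete, the following minimal numerical sketch (the grid and the choice $\Delta=1$ are illustrative) diagonalizes the two-level Hamiltonian above on a grid of $\epsilon$ values and assigns the diabatic state $\ket{1}$ by the maximal-overlap criterion; it recovers the minimal gap $\Delta$ at $\epsilon=1$ and the switch of $\ket{1}$ from one adiabatic branch to the other.
\begin{verbatim}
import numpy as np

delta = 1.0
eps_grid = np.linspace(-1.0, 3.0, 401)
free = np.eye(2)                       # field-free (diabatic) states |1>, |2>

gaps, labels = [], []
for eps in eps_grid:
    h = 0.5 * np.array([[-1.0 + eps, delta],
                        [delta, 1.0 - eps]])
    evals, evecs = np.linalg.eigh(h)   # adiabatic energies/states (ascending)
    gaps.append(evals[1] - evals[0])
    # index of the adiabatic state with maximal overlap with |1>
    labels.append(np.argmax(np.abs(evecs.T @ free[:, 0])))

print("minimal gap:", min(gaps))       # ~ delta, located at eps = 1
switch = eps_grid[np.flatnonzero(np.diff(labels))[0] + 1]
print("diabatic state |1> changes adiabatic branch near eps =", switch)
\end{verbatim}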
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{2level.ps}
\caption{The energy curves of the two adiabatic states $|\Psi_1\rangle$ and $|\Psi_2\rangle$ are shown as functions of the parameter $\epsilon$.
Through the off-diagonal matrix element $\Delta/2$ a nonadiabatic transition is possible, whereupon the system follows the diabatic states $\ket{1}$ and $\ket{2}$, respectively.}
\label{fig0}
\end{figure}
A system's dynamics can of course also be formulated in other representations, e.g., in a diabatic basis \cite{lic}, where the diabatic states do cross (see the states $\ket{1}$ and $\ket{2}$ in Fig.~\ref{fig0}). Usually the basis is chosen such that the off-diagonal couplings in Eq.~\eqref{nonadcoup} vanish or are at least small \cite{smi, bae}. However, the diabatic basis, which is derived from the adiabatic basis by a unitary transformation, is not unique, and there are many different approaches for reaching a diabatic representation \cite{baer, thi, sad}. One practical method is local diabatization, which means that the diabatic state is constructed piecewise in a two-level model: at each avoided crossing between two adiabatic states the diabatic state is followed. To this end, the size of the overlap with the corresponding field-free state can be used as a criterion. This method turns out to be fruitful for the description of diabatic and nondiabatic strong-field ionization (see Sec.~\ref{strong}). Once a diabatic representation has been found, one can ask at which rate transitions between diabatic states occur. These transitions will be called nondiabatic.
In the following section we will make use of the fact that for weak perturbations the adiabatic eigenstates can be approximated through the field-free eigenstates. Therefore, the diabatic states exhibiting the maximal overlap with the field-free states are also the adiabatic states. In this case, the nondiabatic transitions are exactly the nonadiabatic transitions described above.
\section{One-photon absorption}\label{onephot}
First, we analyze the case of one-photon absorption within the adiabatic representation.
If the system is exposed to a weak electric field of the form
$F(t)=F_0 \cos(\omega t)$ (in the dipole approximation, see Sec.~\ref{strong}), with a frequency $\omega$, the system Hamiltonian is perturbed
by the term $F(t)\,\hat z$ \cite{coh}, where $\hat z$ is pointing in the direction of the field (which is assumed to be linearly polarized).
The Hamiltonian in Eq.~(\ref{tdse}) takes the form
\begin{equation}
\hat{H}(t) = \hat{H}_0 + F(t)\hat{z}, \label{field}
\end{equation}
where $\hat{H}_0$ is the atomic Hamiltonian and the electric field $F(t)$ is coupled
classically to the dipole operator $\hat{z}$ of the electron [the field $F(t)$ corresponds to the parameter $\epsilon$ of Sec.~\ref{adiabatic}].
In the following, we show that in the adiabatic representation the off-diagonal coupling elements in Eq.~(\ref{eom}) are crucial for introducing transitions.
Let $\{\Psi_{n}^{(0)}\}_{n=0}^\infty$ be the eigenstates of the field-free Hamiltonian, $\hat{H}_0 |\Psi_{n}^{(0)}\rangle
= \omega_n |\Psi_{n}^{(0)}\rangle$. For simplicity, we assume that the initial and final states of interest in the one-photon transition are nondegenerate.
Performing static perturbation theory to first order, the adiabatic eigenstates read~\cite{fri}
\begin{equation}
|\Psi_{n}^{(1)}\rangle
=
|\Psi_{n}^{(0)}\rangle+\sum_{k\neq n}\frac{\langle \Psi_{k}^{(0)}|F\hat{z}|\Psi_{n}^{(0)}\rangle}{\omega_n-\omega_k}|\Psi_{k}^{(0)}\rangle.
\label{expr}
\end{equation}
Inserting Eq.~(\ref{expr}) in Eq.~(\ref{nonadcoup}) with $\epsilon$ being the field $F$, we obtain the nonadiabatic coupling elements to first order in $F$:
\begin{equation}
\frac{\partial F}{\partial t}
\langle\Psi_{m}^{(0)}| \sum_{k\neq n}\frac{\langle \Psi_{k}^{(0)}|\hat{z}|\Psi_{n}^{(0)}\rangle}{\omega_n-\omega_k}|\Psi_{k}^{(0)}\rangle
=\frac{\partial F}{\partial t} \frac{\langle \Psi_{m}^{(0)}|\hat{z}|\Psi_{n}^{(0)}\rangle}{\omega_n-\omega_m}.
\label{firstorder}
\end{equation}
We are now ready to solve Eq.~\eqref{eom} including nonadiabatic coupling. We may treat the operator
$\hat{V}_F~\!=~\!\frac{\partial F}{\partial t}~\partial_F$
as a perturbing time-dependent operator and, hence, analyze the states with time-dependent perturbation theory \cite{fri}.
The first-order correction to the zeroth-order coefficient [Eq.~\eqref{coeff}] is given by
\begin{equation}
\alpha_f^{(1)}(t) = -i\int_0^{t} dt' e^{i(\omega_f-\omega_i)t'}\frac{\partial F}{\partial t'}\frac{\langle\Psi_{f}^{(0)}|\hat{z}|\Psi_{i}^{(0)}\rangle}{\omega_f-\omega_i}.
\end{equation}
Assuming $\omega_f>\omega_i$, we obtain the total transition probability per unit time
\begin{equation}
w_i=\sum_{f} \frac{|\alpha_f^{(1)}|^2}{t}=2\pi \sum_{f} \bigg|\langle \Psi_{f}^{(0)}|\frac{F_0 \hat{z}}{2}
|\Psi_{i}^{(0)}\rangle\bigg| ^2 \delta(\omega_f-\omega_i-\omega) .
\end{equation}
This equation is exactly Fermi's golden rule \cite{sak}. In the present approach it is the
nonadiabatic coupling that induces one-photon transitions between the field-free eigenstates.
Viewed in this way, the phenomenon of one-photon absorption is entirely nonadiabatic.
In the one-photon case, the states are well separated by a large energy gap and there is no avoided crossing
due to the weak field, which is only a perturbation to the field-free states.
Note that the adiabatic states coincide with the diabatic states in the weak-field limit.
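As a simple numerical consistency check (a sketch with an assumed dipole matrix element $d_{fi}$ and illustrative pulse parameters), one can evaluate the first-order amplitude generated by the coupling of Eq.~(\ref{firstorder}) by quadrature for a Gaussian pulse and compare it with the textbook first-order result $-d_{fi}\,\tilde F(\omega_f-\omega_i)$, where $\tilde F$ is the Fourier transform of the field; integration by parts shows that the two must agree.
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

d_fi, w_fi = 0.5, 0.9             # assumed dipole element, transition frequency (a.u.)
F0, w, tau = 1e-3, 0.88, 30.0     # weak Gaussian pulse, nearly resonant carrier

t = np.linspace(-8 * tau, 8 * tau, 20001)
F = F0 * np.exp(-t**2 / (2 * tau**2)) * np.cos(w * t)
dFdt = np.gradient(F, t)

# amplitude from the nonadiabatic (dF/dt) coupling of the adiabatic representation
amp = -1j * trapezoid(np.exp(1j * w_fi * t) * dFdt, t) * d_fi / w_fi
# textbook first-order amplitude: -d_fi * (Fourier transform of F at w_fi)
spec = trapezoid(np.exp(1j * w_fi * t) * F, t)
print(abs(amp), abs(d_fi * spec))  # the two numbers agree
\end{verbatim}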
\section{Strong-field ionization of atoms}\label{strong}
While in the case of one-photon ionization the photon energy necessarily exceeds the ionization potential,
we will now examine the situation where the atomic system is irradiated
by an intense electric field $F(t)$ with a low photon energy, i.e., many photons are needed to ionize the atom.
When applying a strong external field [see Eq.~(\ref{field})] the effective potential seen by an electron gets tilted (see Fig.~\ref{fig1}).
Therefore, a barrier of finite height is created through which the electron can tunnel.
(If the electric field is so strong that the electron's energy lies above the barrier,
the electron can just leave the atom without tunneling. This effect is called above-barrier ionization \cite{ebe, scr}.)
This tunneling picture of a tilted potential relies on the length form of the light-matter interaction, i.e., $F(t)\,\hat{z}$.
Furthermore, the form of the Hamiltonian [cf. Eq.~\eqref{field}] is a result of the dipole approximation, which holds in our case,
because the size of the system of interest (a few \AA) is much smaller than the wavelength of the light pulse ($\approx 1~\mu$m) \cite{lou, cra}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Tiltedpotential.eps}
\caption{The pure Coulomb potential of the helium atom (solid red line) is tilted in the presence
of the electric field (dashed green line). The dotted black line denotes the field-free ground-state energy.}
\label{fig1}
\end{figure}
In order to describe strong-field ionization dynamics, the Schr\"odinger equation of the atom exposed to the field has to be solved nonperturbatively
because perturbation theory fails at these high field strengths. As shown in Sec.~\ref{adiabatic}, in the adiabatic case the system follows a given
adiabatic state without making any transitions. However, in the presence of a static electric field, electronically bound states become tunneling states, which means that ionization proceeds via tunneling.
In the following, we will study helium as a concrete example to illustrate tunneling ionization within the framework of the adiabatic representation.
\subsection{Constructing adiabatic and diabatic states for helium}
As already discussed (see Sec.~\ref{intro}), in strong-field ionization the spectrum forms a continuum where a direct application of the adiabatic representation is inconvenient.
To overcome this problem, a rigorous analytical continuation of the Hamiltonian can be performed
by rotating the electron coordinates about an angle into the complex plane; this procedure is called complex scaling \cite{nim}.
Another way to generate discrete eigenstates is to add a complex absorbing potential (CAP) to the Hamiltonian \cite{rismey}.
It can be shown that the latter method, which is conceptually easier, is closely connected to the complex scaling approach \cite{ris}.
The key idea here is that for every tunneling state, i.e., every adiabatic atomic state that allows the electron to tunnel
through the field-induced barrier, there exists a discrete eigenstate --- a so-called Gamow vector \cite{boh} or Siegert state \cite{sie}
--- of the instantaneous Hamiltonian. A Siegert state is associated with a complex energy and lies outside the Hermitian domain of the Hamiltonian.
In fact, the associated wavefunction is exponentially divergent for large distances from the atom. Complex scaling or the use of a
CAP eliminates the divergent behavior and renders the tunneling wavefunction square integrable.
Thus, by making the Hamiltonian non-Hermitian, it becomes possible to calculate, within Hilbert space, the complex Siegert energies of tunneling states.
The imaginary part of the Siegert energy $E$ provides the tunneling rate
$\Gamma$ of each Siegert state by the relation $\Gamma = -2\ {\rm Im} (E)$ \cite{moi, san}.
In order to obtain the instantaneous eigenstates we solve Eq.~(\ref{adbas}) with the Hamiltonian in Eq.~(\ref{field}) including a CAP.
This yields the adiabatic eigenstates and corresponding eigenenergies
of the atom shown in Fig.~\ref{fig2}. A more detailed description of the methods used is given in the Appendix~\ref{app}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{adiabaticeigenstates.eps}
\caption{The real part of the energy of the first adiabatic eigenstates as a function
of a static electric field. The inset magnifies avoided crossings for small electric fields.}
\label{fig2}
\end{figure}
We observe many avoided crossings among the higher adiabatic eigenstates for field strengths
in the range below $0.01$~a.u. ($1$~a.u.$=5.14\times 10^{9}$~V/cm), while the ground state energy does not change significantly.
One might wonder whether for sufficiently slow ramping of the electric field the atom follows
the adiabatic ground state. Indeed, for field strengths up to $0.02$~a.u. the adiabatic ground-state energy seems to remain constant. But we know that the electric field can mix a whole manifold of excited states into the field-free states. When this happens, the adiabatic ground state loses the character of the field-free ground state (cf. Fig.~\ref{fig0}). Analyzing the avoided crossings involving the adiabatic ground state around the field strength of $0.02$~a.u., we find that the ramping of the field has to be so slow that it lies in the radio frequency regime. Therefore, the system does not follow the adiabatic ground state for the frequency range of light usually employed in experiments (typically around $800$~nm, corresponding to $4\times 10^{14}$~Hz).
The electronic state
follows the instantaneous eigenstate that has the maximal overlap with the field-free ground state. This is exactly the diabatic behavior described in Sec.~\ref{adiabatic}, where the electronic state jumps from one adiabatic state to the other, keeping its field-free character. Here, we employ the diabatization method already alluded to in Sec.~\ref{adiabatic}, where we construct the diabatic state $|\Psi_i^{(d)}(t)\rangle$ from the adiabatic basis $\left\{ |\Psi_n(t)\rangle\right\}$ using the criterion of maximal overlap with the field-free state $| \Psi_i^{(0)}\rangle$, i.e.,
\begin{align}
|\Psi_i^{(d)}(t)\rangle &= |\Psi_n(t)\rangle, \quad \text{where}\\
|\langle \Psi_n(t)| \Psi_i^{(0)}\rangle| &> | \langle \Psi_m(t)| \Psi_i^{(0)}\rangle| \quad \forall\, m \neq n. \nonumber
\end{align}
This can be done as long as there is one distinct adiabatic state with a prominent character of the corresponding field-free state, so that the (orthogonal) complement of adiabatic states which are mixed in is small and can be ignored. The procedure works in principle also for excited states. However, for excited states the condition of a small admixture breaks down already at low field strengths, such that this construction method works best for the field-free ground state.
The overlap of the corresponding diabatic state $|\Psi_0^{(d)}\rangle$ with the field-free ground state $|\Psi_0^{(0)}\rangle$ is always larger than $90\%$ for the field strengths considered here [see Fig.~\ref{fig3}(c)]. Figures~\ref{fig3}(a) and \ref{fig3}(b) show the real part of the energy and the tunneling rate of $|\Psi_0^{(d)}\rangle$ as a function of the electric field.
The shift of the real part of the energy is well approximated by a quadratic behavior; for low field strengths below $0.1$~a.u. the prefactor is in accordance with the literature value of the polarizability of the helium ground state \cite{kon, chen}.
As expected, the tunneling rate increases considerably for sufficiently high field strengths. For field strengths larger than $0.07$~a.u. the ionization rate is well captured by the analytic expression derived in the tunneling limit of the strong-field approximation \cite{iva}.
\begin{figure}[htbp]
\centering
\includegraphics[width= \linewidth]{diabaticGSnew.eps}
\caption{(a) The real part of the energy of the diabatic state $|\Psi_0^{(d)}\rangle$, and (b) its tunneling rate, $\Gamma=-2{\rm Im}(E)$,
are shown as a function of the electric field. (c) The overlap of $|\Psi_0^{(d)}\rangle$ with the field-free ground state.}
\label{fig3}
\end{figure}
Studying the adiabatic eigenstates and the avoided crossings reveals the suitability of the diabatic state constructed as shown above for the description of strong-field ionization. The advantage of the diabatic basis is that the system follows one single diabatic state, which gives a clear and intuitive picture for the explanation of the physics in the tunneling regime.
\subsection{Ionization dynamics}
So far, the analysis was performed for the spectrum of adiabatic eigenstates, i.e., for static electric fields. Now we introduce dynamics by
considering a Gaussian pulse of the form
\begin{equation}
F(t) = f(t)\,\cos(\omega t) = F_0\,e^{-t^2/2\tau ^2}\,\cos(\omega t),
\end{equation}
where $F_0$ is the peak strength of the electric field, $\tau$ is connected to the full width of the pulse at half maximum
by $\tau^2=\rm{FWHM}^2/(8\ln 2)$, and $\omega$ is the field frequency.
We want to calculate the ionization probability out of the diabatic state $|\Psi_0^{(d)}\rangle$ when applying this pulse. Let us assume that we have found a diabatic basis in which this particular diabatic state can be described by a coefficient $\alpha_0^{(d)}$. Then the exact wavefunction reads $\Psi(t)=\sum_{i}\alpha_i^{(d)}(t)\Psi_i^{(d)}(t)$. In analogy to the case of the adiabatic representation, equations of motion can be obtained for the coefficients in the diabatic basis where now coupling elements between the diabatic states imply nondiabatic transitions [cf. Eq.~\eqref{eom}]. If, in a ``diabatic approximation'', the nondiabatic transitions are neglected we obtain the following equation of motion for the coefficients:
\begin{equation}
i\dot{\alpha}_i^{(d)}(t)= \left[E_i^{(d)}-i\frac{\Gamma_i^{(d)}}{2}\right]\alpha_i^{(d)}(t),\label{diabeq}
\end{equation}
where $\Gamma_i^{(d)}$ is the ionization rate of the diabatic state $i$. From the ionization rate of our distinguished diabatic state its population evolution $P_0^{(d)}(t)=|\alpha_0^{(d)}(t)|^2$ during the pulse can be inferred. To this end, the equation of motion for the probability of remaining in this particular diabatic state is calculated (we omit indices for the sake of readability):
\begin{equation}
\frac{dP}{dt}=\frac{d}{dt}|\alpha(t)|^2=\alpha^*(t)\dot{\alpha}(t)+\dot{\alpha}^*(t)\alpha(t).
\end{equation}
Inserting Eq.~\eqref{diabeq} in this equation the following rate equation for the population is obtained (cf. Ref.~\onlinecite{lou}):
\begin{equation}
\dot{P}(t) = -\Gamma[F(t)] \ P(t),
\end{equation}
which can be analytically solved by separation of variables:
\begin{equation}
P(t) = \exp\left\{-\int_{-\infty}^{t} dt'\ \Gamma[F(t')]\right\}, \label{rate}
\end{equation}
with the initial condition $P(t\!=\!- \infty)\! = 1$. Note that the rate depends on the external field. Inserting the tunneling rate of the diabatic state in Eq.~(\ref{rate}) we calculate the diabatic ionization dynamics. Thereby we observe how much is ionized out of $|\Psi_0^{(d)}\rangle$. Deviations from Eq.~(\ref{rate}) in the population dynamics can be attributed to nondiabatic behavior, i.e., transitions to other diabatic states.
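A minimal sketch of this rate-equation propagation is given below. Since the actual $\Gamma[F]$ is obtained numerically from the Siegert energy of the diabatic state, we substitute a generic tunneling-like rate $\Gamma(F)=A\,e^{-B/|F|}$ with purely illustrative constants.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

A, B = 1.0, 2.0 / 3.0                       # illustrative constants (not from the paper)
def Gamma(F):                               # generic tunneling-like rate
    aF = np.maximum(np.abs(F), 1e-300)
    return A * np.exp(-B / aF)

F0, omega, tau = 0.25, 0.03, 170.0          # peak field, frequency, width (a.u.)
t = np.linspace(-6 * tau, 6 * tau, 100001)
F = F0 * np.exp(-t**2 / (2 * tau**2)) * np.cos(omega * t)

# P(t) = exp(-int_{-inf}^t Gamma[F(t')] dt') via cumulative quadrature
P = np.exp(-cumulative_trapezoid(Gamma(F), t, initial=0.0))
print("population remaining after the pulse:", P[-1])
\end{verbatim}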
The results for four selected photon energies are shown in Fig.~\ref{fig4}
for an electric field amplitude of $F_0=0.25$~a.u. The pulse duration is kept constant so that we can study the ionization regime from few- to multi-cycle pulses. The exact result refers to the numerical solution of the Schr\"odinger equation [see Eq.~\eqref{tdse}], where all dynamics are included,
while the calculation of the diabatic curve via Eq.~(\ref{rate}) involves only the diabatic state $|\Psi_0^{(d)}\rangle$.
The gray-shaded areas in the background indicate the pulse intensity.
In the frequency range shown, the evolution of the ground state population is well described by considering
only the single diabatic state. For $\omega=0.3-0.8~{\rm eV}$ [see Figs.~\ref{fig4}(a)--(c)] the difference between the numerically exact
and the diabatic calculation is insignificant, while for $\omega=1.5$~eV [see Fig.~\ref{fig4}(d)] the discrepancy between the two methods becomes more noticeable.
This is exactly the difference which gives us a measure of nondiabaticity.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{comp.eps}
\caption{Comparison of the ground-state populations calculated via numerical solution of the Schr\"odinger equation and via the rate equation \eqref{rate} for the distinguished diabatic state for four different photon energies. The pulse intensities are highlighted in the background: the pulse amplitude is $F_0=0.25$~a.u., and the pulse duration is $400$~a.u. ($\approx 10$~fs).}
\label{fig4}
\end{figure}
To clarify this further, a comparison between the two methods is shown in Fig.~\ref{fig5}
for a peak field strength of $0.2$~a.u. by depicting the populations [Fig.~\ref{fig5}(a)] and the relative difference [Fig.~\ref{fig5}(b)] between them after the end of the pulse.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{comparison.eps}
\caption{(a) Ground-state population after the end of the pulse calculated via numerical solution of the Schr\"odinger equation
and from the single diabatic ground state as a function of the photon energy, and (b)
relative difference between the two results, corresponding to the degree of nondiabaticity of the ionization.
The peak field strength is $F_0=0.2$~a.u., and the pulse duration is $400$~a.u.
The corresponding Keldysh parameter $\gamma$ is shown for different regions.}
\label{fig5}
\end{figure}
One can clearly see that for sufficiently low energies the total ionization probability is reproduced exactly by considering only the
diabatic state (region I). For higher energies around $1$~eV (region II), the difference increases significantly, indicating that nondiabatic effects start to become important.
\subsection{Nondiabaticity and the special case of few-cycle pulses}
To make contact with the established terminology, we incorporate the Keldysh parameter, which has traditionally been used as an adiabaticity parameter, into our considerations.
Following our language of the adiabatic representation, the ionization in the tunneling regime, $\gamma\ll 1$, is diabatic rather than adiabatic.
We conclude from Fig.~\ref{fig5} that in the region where $\gamma\approx 1$ the relative difference between the results calculated from the diabatic ionization rate via Eq.~\eqref{rate} and from the solution of the Schr\"odinger equation is greater than $10\%$. This is a clear sign of {\em nondiabatic} behavior. Already for $\gamma \approx 0.17$ the diabatic ionization probability starts to differ slightly from the total ionization probability. For a fixed pulse duration we can also divide the frequency range according to the number of cycles in the pulse. Starting from the highest frequencies studied here we have multi-cycle pulses, until we reach few-cycle pulses at a photon energy of $\approx0.8$~eV.
The dynamics for few-cycle pulses is commonly considered to be nonadiabatic (in our language this translates to nondiabatic) \cite{bec,zhe}.
We find that even for few-cycle pulses the tunneling is completely diabatic.
In the framework of ADK theory and other approaches \cite{pon} the ionization rate $\overline{\Gamma}(t)$
is obtained by integrating over one period of the field \cite{bis}:
\begin{equation}
\overline{\Gamma}(t) = \frac{1}{2\pi} \int_0^{2\pi} {d\varphi\ \Gamma[f(t) \ \cos \varphi]},
\label{aver}
\end{equation}
where $\Gamma[F]$ is the instantaneous ionization rate. Hence, the fact that the ADK theory of tunneling ionization and similar approaches cannot reproduce the correct (diabatic) ionization rate for few-cycle pulses is not due to coupling to higher states \cite{pot, zhe},
but rather because the pulse envelope changes dramatically within one cycle. In this limit the rate cannot be averaged over one period as was done
in Eq.~(\ref{aver}), whereas for multi-cycle pulses it can be used in combination with Eq.~(\ref{rate}) yielding
\begin{equation}
P(t) \approx \exp\left\{ -\int_{-\infty}^{t} dt'\ \overline{\Gamma}[{f}(t')]\right\}.
\end{equation}
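The following sketch (same illustrative $\Gamma(F)$ as above) contrasts the instantaneous-rate result, Eq.~(\ref{rate}), with the cycle-averaged treatment, Eq.~(\ref{aver}): for a multi-cycle pulse the two survival probabilities approach each other, while in the few-cycle limit they differ appreciably because the envelope changes within one period.
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

A, B = 1.0, 2.0 / 3.0                        # illustrative constants
def Gamma(F):
    aF = np.maximum(np.abs(F), 1e-300)
    return A * np.exp(-B / aF)

F0, tau = 0.25, 170.0
phi = np.linspace(0.0, 2.0 * np.pi, 2001)
def Gamma_avg(f):                            # Eq. (aver): average over one cycle
    return trapezoid(Gamma(f[:, None] * np.cos(phi)[None, :]),
                     phi, axis=1) / (2.0 * np.pi)

t = np.linspace(-6 * tau, 6 * tau, 200001)   # fine grid resolving the carrier
tc = np.linspace(-6 * tau, 6 * tau, 2001)    # coarse grid suffices for the envelope
env = lambda x: F0 * np.exp(-x**2 / (2 * tau**2))

P_avg = np.exp(-trapezoid(Gamma_avg(env(tc)), tc))   # frequency independent
for omega in (0.01, 0.1):                    # roughly few-cycle vs multi-cycle
    P_inst = np.exp(-trapezoid(Gamma(env(t) * np.cos(omega * t)), t))
    print(f"omega={omega}: instantaneous {P_inst:.4f}, cycle-averaged {P_avg:.4f}")
\end{verbatim}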
Analyzing region I in Fig.~\ref{fig5} further, we observe that the ionization probability is not constant
as a function of photon energy.
But the population loss in region I is well described by the ionization out of $|\Psi_0^{(d)}\rangle$.
According to our argument above, the apparent frequency dependence is rather a dependence on the
form of the pulse or, equivalently, on the relation between the optical cycles and the pulse envelope,
which appears in a pronounced way for few-cycle pulses. To avoid confusion, it could preferably be called a form dependence.
As we have seen, the ionization behavior for few-cycle pulses can be well understood from the dynamics of a single diabatic state.
\section{Conclusion}
We have studied the dynamics of tunneling ionization in atoms and have found that, within the framework of the adiabatic representation, it is diabatic rather than adiabatic.
We have identified two distinct ionization regimes depending on their diabatic behavior.
In particular we have characterized the transition from the diabatic to the nondiabatic regime.
In the low-frequency limit the total ionization probability
is reproduced by the contribution of the tunneling probability of one single diabatic state.
This means that in this regime there are no significant transitions to other diabatic states.
For few-cycle pulses, the ionization probability depends on the frequency for a fixed pulse duration.
However, this is not a nondiabatic effect; rather, it stems from the dependence on the pulse form
and the consequent fact that the rate can no longer be averaged over one period.
When nondiabatic transitions
start to happen, the difference between the diabatic state ionization probability and the total probability increases dramatically.
For frequencies in the range of the binding energy of the atom, one-photon absorption can occur, which is a completely nonadiabatic and even nondiabatic process.
Already for parameters $\gamma\approx 0.17$ the diabatic ionization probability starts
to differ noticeably from the total ionization probability, even though the perturbative multiphoton regime is not yet entered.
From the perspective of the adiabatic representation, the Keldysh parameter is found to be an approximate measure of diabaticity.
\section{Acknowledgments}
AK is grateful to Oriol Vendrell for fruitful discussions. This work has been supported by the Deutsche Forschungsgemeinschaft
under Grant No. SFB 925/A5.
\section{Introduction}
Cloud computing is a new paradigm where companies make money by providing computing service through the Internet. In cloud computing, users buy software and hardware resources from a provider and access these resources through the Internet, so they do not have to install and maintain them by themselves. The core part of cloud computing is a data center where there are a huge number of servers. The key issue for the management of data centers is to minimize the power consumption while keeping an acceptable service level for customers~\cite{Barroso07,Chen05,Greenberg08,Mazzucco12,Meisner09,phungduc14,Schwartz12}. It is reported that, under the current technology, an idle server still consumes about 60\% of its peak power consumption~\cite{Barroso07}. Thus, the only way to save power is to turn off idle servers. However, if the workload increases, OFF servers should be turned on to serve waiting customers. Servers need some setup time during which they consume energy but cannot process jobs. Therefore, customers have to wait a longer time in comparison with the case where the servers are always ON.
Although queues with setup time have been extensively investigated in the literature, most papers deal with single-server models~\cite{Takagi90,Bischof01,Choudhury98,Choudhury00} where the service time follows a general distribution. Artalejo et al.~\cite{Artalejo05} present a thorough analysis of multiserver queues with setup time, considering the case in which at most one server can be in setup mode at a time. This policy is referred to as staggered setup in~\cite{Gandhi10}. It is pointed out in~\cite{Artalejo05} that the model belongs to a QBD class for which the rate matrix is explicitly obtainable. By solving difference equations, Artalejo et al.~\cite{Artalejo05} derive an analytical solution where the stationary distribution is recursively obtained without any approximation. Recently, motivated by applications in data centers, multiserver queues with setup time have attracted further attention. In particular, Gandhi et al.~\cite{Gandhi10,Gandhi10b,gandhi11,Gandhi11b,Gandhi13} analyze multiserver queues with setup time. They consider the M/M/$c$ system with staggered setup and derive some closed form approximations for the ON-OFF policy, where the number of servers in setup mode at a time is not limited. Gandhi et al.~\cite{gandhi11} extend their analysis to the case where a free server waits for a while before shutdown. As a related model, Tian et al.~\cite{Tian99} consider the M/M/$c$ model with vacations, where after a service completion an idle server leaves for an exponentially distributed vacation.
In all the work on multiserver queue mentioned above, customers (jobs) are assumed to arrive individually according to a Poisson process. However, in cloud computing a big task might be divided into multiple subtasks to process in parallel~\cite{Dean08}.
This motivates us to consider a multiserver queueing system with state-dependent setup time under batch arrival settings.
In this paper, using a generating function approach, we derive a clear solution for all the partial generating functions of the joint stationary distribution of the number of active servers and the number of customers in the system. The generating functions are obtained by recursive formulae. Special cases of our model reduce to the models in~\cite{Artalejo05,Gandhi10,Tian99,phungduc14b}. Furthermore, we derive a recursion which allows us to calculate all the moments of the queue length. Numerical results are presented to show the effect of batch arrivals on the performance of the system. We also present a method to derive the waiting time distribution, as well as some variants which can be analyzed by adapting the methodologies presented in this paper. One of the most important theoretical contributions is a conditional decomposition property: the queue length, under the condition that all the servers are busy, can be decomposed into the sum of two independent random variables with clear physical meanings.
The rest of our paper is organized as follows. First, we present the model in Section~\ref{model:sec}. Section~\ref{analysis:sec} is devoted to the detailed analysis, where we derive the partial generating functions and the joint stationary distribution. Section~\ref{waiting_time_distribution:sec} presents the method to compute the waiting time distribution. In Section~\ref{decomposition}, we discuss the conditional decomposition property of the queue length, and Section~\ref{variant models} shows some variant models that can be analyzed by the methodology of this paper. In Section~\ref{numerical:sec}, we provide extensive numerical results to show the performance of the system. Finally, concluding remarks are presented in Section~\ref{concluding_remark:sec}.
\section{Model}\label{model:sec}
We consider an M/M/$c$ queueing system with state-dependent setup time. Customers arrive at the system in batches according to a Poisson process with rate $\lambda$. We assume that the batch size distribution is $\beta_i$ ($i \in \bbN = \{1,2,\dots \}$) and that its generating function is given by $\beta(z)$. In this system, an idle server is turned off immediately. If there are some waiting customers, OFF servers are turned on. Furthermore, a server needs some setup time to become active so as to serve a waiting customer. We assume that the setup time of an OFF server follows the exponential distribution with mean $1/\alpha_i$, provided that there are $i$ active servers. If a server finishes a job, it picks up a waiting customer, if any. If there are no waiting customers, the server in setup (if any) and the idle servers are turned off immediately. It should be noted that in this model each server is in one of three states: BUSY, OFF, or SETUP. We assume that every customer that enters the system receives service and departs, i.e., there is no abandonment. We assume that the service times of jobs follow an exponential distribution with mean $1/\mu$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.25]{state_dependent_setup.eps}
\end{center}
\caption{State transition diagram ($\beta(z) = z$).}
\label{m:fig}
\end{figure}
\section{Analysis of the model}\label{analysis:sec}
\subsection{Generating functions}\label{generating_function:sec}
We present Rouche's theorem which will be repeatedly used in this section.
\begin{thm}[Rouche's Theorem (see e.g.~\cite{Adan06})]\label{rouche:thm}
Let $D$ denote a bounded region which has a simple closed contour $C$, and let $f(z)$ and $g(z)$ be two analytic functions on $C$ and $D$. Assume that $|f(z)| < |g(z)|$ on $C$. Then $g(z)$ and $f(z)+g(z)$ have in $D$ the same number of zeros, where all zeros are counted with their multiplicity.
\end{thm}
Let $C(t)$ and $N(t)$ denote the number of busy servers and the number of jobs in the system at time $t$, respectively. Under the assumptions made in Section~\ref{model:sec}, it is easy to see that $\{X(t) = (C(t), N(t)); t \geq 0\}$ forms a Markov chain in the state space
\[
\mathcal{S} = \{ (i,j); j \in \bbZ_+, i = 0,1,\dots,\min(c,j) \},
\]
where $\bbZ_+ = \{ 0,1,\dots\}$. See Figure~\ref{m:fig} for the transitions among states for the case of single arrival, i.e., $\beta(z) = z$.
In this paper, we assume that $\rho = \lambda \beta^\prime (1) /(c\mu) < 1$ which is the necessary and sufficient condition for the stability of the Markov chain.
In what follows, we assume that the Markov chain is ergodic. Under this ergodic condition, let
\[
\pi_{i,j} = \lim_{t \to \infty} \mathbb{P}(C(t)=i, N(t) = j), \qquad (i,j) \in \mathcal{S},
\]
denote the stationary probability of state $(i,j)$.
The balance equations for states $(0,j)$ ($j \in \bbN$) read as follows.
\begin{align*}
\lambda \sum_{i=1}^{j} \beta_i \pi_{0,j-i} & = (\lambda + \alpha_0) \pi_{0,j}, \qquad j \in \bbN.
\end{align*}
Let $\Pi_0 (z) = \sum_{j=0}^\infty \pi_{0,j} z^j$. Multiplying the above equation by $z^j$ and adding over $j \in \bbN$ yields,
\[
\lambda \beta(z) \Pi_0(z) = (\lambda + \alpha_0) (\Pi_0 (z) - \pi_{0,0}),
\]
or equivalently
\begin{equation}
\label{Pi0z:eq}
\Pi_0 (z) = \frac{(\lambda + \alpha_0) \pi_{0,0} }{ \lambda + \alpha_0 - \lambda \beta(z)}.
\end{equation}
The balance equation for state $(0,0)$ is given by
\[
\lambda \pi_{0,0} = \mu \pi_{1,1}.
\]
This equation is also derived from the balance between the flows into and out of the group of states $\{ (0,j); j \in \bbZ_+ \}$.
Indeed, we have
\[
\alpha_0 (\Pi_0(1)- \pi_{0,0}) = \mu \pi_{1,1},
\]
leading to
\[
\pi_{1,1} = \frac{ \alpha_0 (\Pi_0(1)- \pi_{0,0}) }{\mu} = \frac{\lambda}{\mu} \pi_{0,0}.
\]
Now, we shift to the case where there is one active server, i.e., $i = 1$. We have
\begin{equation}
\label{pi_{1,1}:eq}
(\lambda + \mu) \pi_{1,1} = \alpha_0 \pi_{0,1} + \mu \pi_{1,2} + 2 \mu \pi_{2,2}, \quad j = 1,
\end{equation}
\begin{equation}
(\lambda + \mu + \alpha_1) \pi_{1,j} = \lambda \sum_{i=1}^{j-1} \beta_i \pi_{1,j-i} + \alpha_0 \pi_{0,j} + \mu \pi_{1,j+1}, \quad j \geq 2. \label{pi_{1,j}:eq}
\end{equation}
We define the generating function for the states with $i=1$ as follows.
\[
\Pi_1(z) = \sum_{j=0}^\infty \pi_{1,j+1} z^j.
\]
$\Pi_1(z)$ represents the generating function of the number of waiting customers while there is one active server.
Multiplying (\ref{pi_{1,1}:eq}) by $z^0$ and (\ref{pi_{1,j}:eq}) by $z^{j-1}$ and taking the sum over $j \in \bbN$ yields,
\begin{eqnarray}
\label{Pi1(z):eq}
(\lambda + \mu + \alpha_1) \Pi_1(z) - \alpha_1 \pi_{1,1} = \lambda \beta(z) \Pi_1(z) + \frac{\alpha_0}{z} (\Pi_0(z) - \pi_{0,0}) + \frac{\mu}{z} (\Pi_1 (z) - \pi_{1,1} ) + 2\mu \pi_{2,2}.
\end{eqnarray}
Arranging (\ref{Pi1(z):eq}) we obtain
\begin{equation}\label{functional_eq_Pi1z:eq}
f_1(z) \Pi_1(z) = \alpha_0 \Pi_0 (z) + \alpha_1 z \pi_{1,1} - \alpha_0 \pi_{0,0} - \mu \pi_{1,1} + 2 \mu z \pi_{2,2},
\end{equation}
where $f_1 ( z) = (\lambda + \mu + \alpha_1) z - \lambda z \beta (z) - \mu$. Because $f_1(0) = -\mu < 0$ and $f_1(1) = \alpha_1 > 0$, there exists $z_1 \in (0,1)$ such that $f_1(z_1)=0$. Furthermore, Rouche's theorem
(Theorem~\ref{rouche:thm}) shows that $z_1$ is the unique root in the unit disk. Indeed, letting $g(z) = (\lambda + \mu + \alpha_1) z $, $f(z) = \lambda z \beta(z) + \mu$, $C = \{ z \in \mathbb{C} \ | \ |z| =1 \}$
and $D = \{ z \in \mathbb{C} \ | \ |z| <1 \}$, we see that
\[
|f(z)| \leq \lambda |z| | \beta(z) | + \mu \leq \lambda + \mu < \lambda + \mu + \alpha_1 = |g(z)|, \qquad z \in C.
\]
Thus, applying Rouche's theorem, we have that $f(z)-g(z)$ and $g(z)$ have the same number of zeros in $D$. Since $g(z)$ has exactly one zero in $D$ (namely $z=0$), $f_1(z) = g(z) - f(z)$ has exactly one zero in $D$, which must be $z_1$.
Since $\Pi_1(z)$ converges in $|z| \leq 1$, letting $z = z_1$ yields,
\begin{equation}
\pi_{2,2} = \frac{ (\mu - \alpha_1 z_1) \pi_{1,1} + \alpha_0 (\pi_{0,0} - \Pi_0 (z_1)) }{2 \mu z_1}.
\end{equation}
It should be noted that for the case $\beta(z) = z$, i.e., single arrival, we have
\[
z_1 = \frac{\lambda + \mu + \alpha_1 - \sqrt{(\lambda + \mu + \alpha_1)^2 - 4 \lambda \mu} }{2 \lambda}.
\]
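Numerically, $z_1$ is easily obtained by bracketing on $(0,1)$, since $f_1(0) < 0 < f_1(1)$. A minimal sketch (illustrative parameters; any batch-size generating function may be substituted) follows, together with a check against the closed form above for $\beta(z) = z$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

lam, mu, alpha1 = 1.0, 1.0, 0.5                # illustrative parameters
beta = lambda z: z                             # single arrivals; any PGF may be used

f1 = lambda z: (lam + mu + alpha1) * z - lam * z * beta(z) - mu
z1 = brentq(f1, 1e-12, 1.0 - 1e-12)            # unique root in (0, 1)

a = lam + mu + alpha1                          # closed form for beta(z) = z
print(z1, (a - np.sqrt(a * a - 4.0 * lam * mu)) / (2.0 * lam))
\end{verbatim}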
\begin{remark}
At this point, we have expressed $\Pi_1(z)$ and $\pi_{2,2}$ in terms of $\pi_{0,0}$.
\end{remark}
Furthermore, letting $f_1(z) = (z-z_1) g_1 (z)$, we have that $g_1 (z)$ is analytic and has no zeros in the unit disk $|z| < 1$. Substituting this into (\ref{functional_eq_Pi1z:eq}) and arranging the result, we obtain
\[
\Pi_1 (z) = \frac{2 \mu \pi_{2,2} + \alpha_1 \pi_{1,1} + \alpha_0 \widehat{\pi}_0 (z) }{ g_1 (z)},
\]
where
\[
\widehat{\pi}_0 (z) = \frac{\Pi_0 (z) - \Pi_0 (z_1)}{z-z_1}.
\]
Next, we shift to the case where there are $i$ ($2 \leq i \leq c-1$) active servers.
The balance equations read as follows.
\begin{equation}
\label{pi_{i,i}:eq}
(\lambda + i\mu ) \pi_{i,i} = \alpha_{i-1} \pi_{i-1,i} + i \mu \pi_{i,i+1} + (i+1) \mu \pi_{i+1,i+1},
\end{equation}
\begin{equation}
\label{pi_{i,j}:eq}
(\lambda + i\mu + \alpha_i) \pi_{i,j} = \lambda \sum_{k=1}^{j-i} \beta_k \pi_{i,j-k} + i\mu \pi_{i,j+1} + \alpha_{i-1} \pi_{i-1,j},
\end{equation}
for $j \geq i+1$.
We define the partial generating function for the case of having $i$ active servers as follows.
\[
\Pi_i (z) = \sum_{j=i}^\infty \pi_{i,j} z^{j-i}, \qquad i = 2,3,\dots,c-1.
\]
Multiplying (\ref{pi_{i,i}:eq}) by $z^{0}$ and (\ref{pi_{i,j}:eq}) by $z^{j-i}$ and adding over $j = i,i+1,\dots$, we obtain
\begin{eqnarray*}
\lefteqn{ (\lambda + i \mu + \alpha_i) \Pi_i (z) - \alpha_i \pi_{i,i} = \lambda \beta(z) \Pi_i(z) + \frac{i\mu}{z} (\Pi_i (z) - \pi_{i,i} )} \\
& & \mbox{} + \frac{\alpha_{i-1}}{z} (\Pi_{i-1} (z) - \pi_{i-1,i-1}) + (i+1) \mu \pi_{i+1,i+1}, \qquad
\end{eqnarray*}
or equivalently
\begin{eqnarray}
\label{Pi_i (z):eq}
f_i(z) \Pi_i (z) -\alpha_i z \pi_{i,i} = (i+1) \mu z \pi_{i+1,i+1} - i\mu \pi_{i,i} + \alpha_{i-1} (\Pi_{i-1} (z) - \pi_{i-1,i-1}), \nonumber \\
\end{eqnarray}
where $f_i (z) = (\lambda + i\mu + \alpha_i) z - \lambda z \beta (z) - i\mu $. Since $f_i (0) = -i \mu < 0$ and $f_i (1) = \alpha_i > 0$, there exists $z_i \in (0,1)$ such that $f_i (z_i) = 0$. Rouche's theorem also shows that $z_i$ is the unique root inside the unit disk. For the case of single arrivals, i.e., $\beta(z) = z$, we have
\[
z_i = \frac{ \lambda + i \mu + \alpha_i - \sqrt{ (\lambda + i \mu + \alpha_i)^2 - 4 i \lambda \mu } }{2 \lambda}.
\]
Putting $z= z_i$ into (\ref{Pi_i (z):eq}), we obtain
\begin{eqnarray}
\label{pi_i+1,i+1}
\pi_{i+1,i+1} & = & \frac{ (i\mu - \alpha_i z_i) \pi_{i,i} + \alpha_{i-1} (\pi_{i-1,i-1} - \Pi_{i-1} (z_i) ) }{ (i+1) \mu z_i}, \nonumber \\
& & i = 1,2,\dots,c-1.
\end{eqnarray}
\begin{remark}
At this point, we have expressed the generating functions $\Pi_i (z)$ ($i=0,1,\dots,c-1$) and boundary probabilities $\pi_{i,i}$ ($i=0,1,\dots,c$) in terms of
$\pi_{0,0}$.
\end{remark}
Similar to the case $i=1$, we also have
\[
\Pi_i (z) = \frac{(i+1) \mu \pi_{i+1,i+1} + \alpha_i \pi_{i,i} + \alpha_{i-1} \widehat{\pi}_{i-1} (z) }{ g_i (z)},
\]
where
\[
\widehat{\pi}_{i-1} (z) = \frac{\Pi_{i-1} (z) - \Pi_{i-1} (z_i)}{z-z_i}, \qquad g_i(z) = \frac{f_i(z)}{z-z_i}.
\]
Finally, we consider the case $i=c$, i.e., all servers are active. Balance equations are given as follows.
\begin{align}
\label{pi_{c,c}:eq}
(\lambda + c\mu) \pi_{c,c} & = \alpha_{c-1} \pi_{c-1,c} + c\mu \pi_{c,c+1}, \\
\label{pi_{c,j}:eq}
(\lambda + c\mu ) \pi_{c,j} & = \alpha_{c-1} \pi_{c-1,j} + \lambda \sum_{i=1}^{j-c} \beta_i \pi_{c,j-i} + c\mu \pi_{c,j+1}, \qquad j \geq c+1.
\end{align}
We define the generating function for the case $i=c$ as follows.
\[
\Pi_c (z) = \sum_{j=c}^\infty \pi_{c,j} z^{j-c}.
\]
Multiplying (\ref{pi_{c,c}:eq}) by $z^0$ and (\ref{pi_{c,j}:eq}) by $z^{j-c}$ and summing over $j \geq c$, we obtain
\begin{eqnarray*}
(\lambda + c\mu ) \Pi_c(z) = \frac{\alpha_{c-1}}{z} ( \Pi_{c-1}(z) - \pi_{c-1,c-1} ) + \frac{c \mu}{z} (\Pi_c(z) - \pi_{c,c}) + \lambda \beta(z) \Pi_c(z),
\end{eqnarray*}
or equivalently
\begin{align*}
\Pi_c(z) & = \frac{ \alpha_{c-1} ( \Pi_{c-1} (z) - \pi_{c-1,c-1}) -c\mu \pi_{c,c} }{ f_c (z) }, \\
& = \frac{ \alpha_{c-1} ( \Pi_{c-1} (z) - \Pi_{c-1} (1)) }{ f_c (z) }
\end{align*}
where $ f_c (z) = (\lambda + c\mu) z - \lambda z \beta(z) - c\mu $ and the second equality is due to the balance between the flows into and out of the group of states
$\{ (c,j); j =c,c+1,\dots \}$. Thus, applying L'Hopital's rule and arranging the results yields
\begin{equation}\label{Pi_c(1):eq}
\Pi_c(1) = \frac{ \alpha_{c-1} \Pi_{c-1}^\prime (1) }{c\mu - \lambda \beta^\prime (1)} .
\end{equation}
\begin{remark}
It should be noted that we have expressed $\Pi_i (z)$ ($i=0,1,\dots,c$) in terms of $\pi_{0,0}$, which is uniquely determined by the following normalization condition:
\begin{equation}\label{normalization:cond}
\sum_{i=0}^c \Pi_i (1) = 1.
\end{equation}
According to (\ref{Pi_c(1):eq}), in order to calculate $\Pi_c (1)$, we need $\Pi_{c-1}^\prime (1)$ which is recursively obtained by Theorem~\ref{factorial:0c-1:thm}.
\end{remark}
\begin{remark}
Once $\pi_{i,i}$ ($i = 0,1,\dots,c$) is determined, we can calculate all the steady state probabilities $\pi_{i,j}$ by a recursive manner via the balance equations. In particular,
the calculation order is $\{ \pi_{0,j}; j \geq 0 \} \rightarrow \{ \pi_{1,j}; j \geq 1 \} \rightarrow \dots \rightarrow \{\pi_{c,j}; j \geq c\}$.
\end{remark}
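The recursive scheme above translates directly into an algorithm. The following sketch (illustrative parameters; single arrivals by default, but any batch-size generating function may be substituted; the derivative $\Pi_{c-1}^\prime(1)$ is taken by a central finite difference) computes the roots $z_i$, the boundary probabilities $\pi_{i,i}$, the partial generating functions, and finally $\pi_{0,0}$ from the normalization condition (\ref{normalization:cond}).
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

lam, mu, c = 1.0, 1.0, 5                  # illustrative parameters
alpha = [0.5] * c                         # alpha_i, i = 0, ..., c-1
beta = lambda z: z                        # batch-size PGF (here: single arrivals)
beta1 = 1.0                               # beta'(1), the mean batch size
assert lam * beta1 / (c * mu) < 1         # stability

def f(i, z):   # f_i(z) = (lam + i mu + alpha_i) z - lam z beta(z) - i mu
    return (lam + i * mu + alpha[i]) * z - lam * z * beta(z) - i * mu

# unnormalized solution, taking pi_{0,0} = 1
pii = [1.0, lam / mu]                     # pi_{0,0} and pi_{1,1} = (lam/mu) pi_{0,0}
Pi = [lambda z: (lam + alpha[0]) / (lam + alpha[0] - lam * beta(z))]

for i in range(1, c):
    zi = brentq(lambda z: f(i, z), 1e-12, 1.0 - 1e-12)   # unique root in (0,1)
    # boundary probability pi_{i+1,i+1}: evaluate the functional equation at z_i
    pii.append(((i * mu - alpha[i] * zi) * pii[i]
                + alpha[i - 1] * (pii[i - 1] - Pi[i - 1](zi)))
               / ((i + 1) * mu * zi))
    # partial generating function Pi_i(z) (z_i is a removable singularity)
    Pi.append(lambda z, i=i: ((i + 1) * mu * z * pii[i + 1]
                              + (alpha[i] * z - i * mu) * pii[i]
                              + alpha[i - 1] * (Pi[i - 1](z) - pii[i - 1]))
              / f(i, z))

h = 1e-6                                  # Pi_c(1) needs Pi_{c-1}'(1)
dPi = (Pi[c - 1](1.0 + h) - Pi[c - 1](1.0 - h)) / (2.0 * h)
Pic1 = alpha[c - 1] * dPi / (c * mu - lam * beta1)

pi00 = 1.0 / (sum(P(1.0) for P in Pi) + Pic1)   # normalization fixes pi_{0,0}
print("pi_{0,0} =", pi00, "  P(all servers busy) =", pi00 * Pic1)
\end{verbatim}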
In Section~\ref{factorial_moment:sec}, we show some simple recursive formulae for the partial factorial moments.
\subsection{Factorial moments}\label{factorial_moment:sec}
In this section, we derive simple recursive formulae for factorial moments. Because the generating function $\Pi_0 (z)$ is given in a simple form, its derivatives
at $z=1$ are also explicitly obtained in a simple form.
\begin{thm}\label{factorial:0c-1:thm}
The first partial moments of the queue length are recursively calculated as follows.
\begin{eqnarray}\label{first:moment}
\Pi_i^\prime (1) & = & \frac{\alpha_{i-1}}{\alpha_i} \Pi_{i-1}^\prime (1) + \frac{\lambda \beta^\prime (1) - \alpha_i - i\mu}{\alpha_i} \Pi_i(1) + \frac{ (i+1) \mu \pi_{i+1,i+1} + \alpha_i \pi_{i,i} }{\alpha_i}, \\
&& \quad i = 1,2,\dots,c-1. \nonumber
\end{eqnarray}
where $\Pi_0^\prime(1) = \pi_{0,0} \lambda \beta^\prime (1) (\lambda + \alpha_0)/\alpha_0^2$.
Furthermore, the $n$-th ($n \geq 2$) partial factorial moment is given by
\begin{eqnarray}\label{nth_moments:eq}
\Pi_i^{(n)} (1) &= & \frac{\alpha_{i-1}}{\alpha_i} \Pi_{i-1}^{(n)} (1) + \frac{n(\lambda \beta^\prime (1) - i\mu - \alpha_i) \Pi_{i}^{(n-1)} (1) }{\alpha_i} \nonumber \\
&& \mbox{} + \frac{ \sum_{k=2}^n {}_n C_k \left(\lambda \beta^{(k)} (1) + k \lambda \beta^{(k-1)}(1) \right) \Pi_i^{(n-k)} (1) }{\alpha_i}, \qquad i = 1,2,\dots,c-1, \nonumber
\end{eqnarray}
where the $\Pi_0^{(n)} (1)$ follow from the recursion $\Pi_0^{(n)} (1) = (\lambda/\alpha_0) \sum_{k=1}^n {}_n C_k\, \beta^{(k)} (1) \Pi_0^{(n-k)} (1)$, obtained by differentiating (\ref{Pi0z:eq}) $n$ times; for single arrivals, i.e., $\beta(z)=z$, this yields $\Pi_0^{(n)} (1) = n! \pi_{0,0} \lambda^n (\lambda + \alpha_0)/ \alpha_0^{n+1}$.
\end{thm}
\begin{proof}
Differentiating (\ref{Pi_i (z):eq}), we obtain
\begin{eqnarray*}
f_i(z) \Pi_i^\prime (z) & = & \mbox{} - \left( \lambda + i \mu + \alpha_i - \lambda \beta(z) - \lambda z \beta^\prime (z) \right) \Pi_i (z) + \alpha_{i-1} \Pi_{i-1}^\prime (z) + \alpha_i \pi_{i,i} + (i+1)\mu \pi_{i+1,i+1}.
\end{eqnarray*}
Substituting $z=1$ into the above equation and arranging the result yields (\ref{first:moment}).
Differentiating (\ref{Pi_i (z):eq}) for $n \geq 2$ times at $z = 1$ and arranging the result, we obtain (\ref{nth_moments:eq}).
\end{proof}
\begin{thm}\label{factorial:c:thm}
We have
\begin{equation}\label{Pi_c^{(n)} (1):eq}
\Pi_c^{(n)} (1) = \frac{ A_n }{ (n+1) (c\mu - \lambda \beta^\prime (1) ) }, \quad n \in \bbN,
\end{equation}
where
\begin{eqnarray*}
A_n = \alpha_{c-1} \Pi_{c-1}^{(n+1)} (1) + \sum_{k=2}^{n+1} {}_{n+1} C_k \left( \lambda k \beta^{(k-1)} (1) + \lambda \beta^{(k)} (1) \right) \Pi^{(n+1-k)}_c (1).
\end{eqnarray*}
\end{thm}
\begin{proof}
We have
\begin{equation*}
f_c(z) \Pi_c(z) = \alpha_{c-1} ( \Pi_{c-1} (z) - \pi_{c-1,c-1}) -c\mu \pi_{c,c}.
\end{equation*}
Differentiating this equation $n \geq 1$ times, we obtain
\begin{eqnarray*}
f_c(z) \Pi_c^{(n)} (z) + \sum_{k=1}^n {}_n C_k f_c^{(k)} (z) \Pi_c^{(n-k)} (z) = \alpha_{c-1} \Pi_{c-1}^{(n)} (z),
\end{eqnarray*}
where $\Pi_c^{(-1)} (z) = 0, \forall \ |z| < 1$. Arranging this equation leads to
\begin{eqnarray}\label{nthdiff_pic(n)z:eq}
\Pi_c^{(n)} (z) = \frac{ \alpha_{c-1} \Pi_{c-1}^{(n)} (z) - \sum_{k=1}^n {}_n C_k f_c^{(k)} (z) \Pi_c^{(n-k)} (z) }{ f_c(z) }.
\end{eqnarray}
We observe inductively that both the denominator and numerator in the right hand side of (\ref{nthdiff_pic(n)z:eq}) vanish at $z=1$.
Thus, applying L'Hopital's rule and arranging the result, we obtain (\ref{Pi_c^{(n)} (1):eq}).
\end{proof}
\begin{remark}
It should be noted that in order to obtain the $n$-th factorial moment $\Pi_c^{(n)} (1)$, we need to have the $(n+1)$-th factorial moment $\Pi_{c-1}^{(n+1)} (1)$.
Fortunately, $\Pi_{c-1}^{(n+1)} (1)$ is expressed in terms of $\Pi_{0}^{(n+1)} (1)$, which is obtained for any $n$ according to Theorem~\ref{factorial:0c-1:thm}.
\end{remark}
\begin{remark}
It should be noted that when $\alpha_i = \alpha$ ($i = 0,1,\dots,c-1$), our results reduce to those presented in~\cite{phungduc14b}.
\end{remark}
\section{Waiting time distribution}\label{waiting_time_distribution:sec}
This section is devoted to the waiting time distribution of an arbitrary customer. To this end, we first find the steady state probability $p_{i,n-1}$ that an arriving customer finds $i$ servers in active mode and $n-1$ ($n \geq 1$) customers standing before him. We then find the conditional waiting time $W_{i,n}$ of a tagged customer that finds $i$ active servers and $n-1$ customers standing before him. Let $\widetilde{W}_{i,n} (s)$ denote the LST of $W_{i,n}$. Let $W$ denote the waiting time of an arbitrary customer and $\widetilde{W} (s)$ denote the LST of $W$.
We then have
\[
\widetilde{W} (s) = \sum_{i=0}^c \sum_{n=i+1}^\infty p_{i,n-1} \widetilde{W}_{i,n} (s).
\]
In Artalejo et al.~\cite{Artalejo05}, explicit expression for $\widetilde{W}_{i,n} (s)$ is obtained.
In fact, $\widetilde{W}_{i,n} (s)$ is the LST of the first passage time from state $(i,n)$ to the boundary state $(i,i)$, $i = 0,1,\dots,c$.
Thus, we can obtain the waiting time distribution by inverting the LST.
\subsection{Computation of $p_{i,n}$}
Recall that $p_{i,n}$ denotes the probability that an arriving customer finds $i$ active servers and himself at the $n$-th position in order of departure from the system. We have
\[
p_{i,n} = \sum_{j=1}^{n} \pi_{i,n-j} r_j,
\]
where $r_j$ is the probability that an arriving customer finds himself at the $j$-th in the batch. According to Burke~\cite{Burke} and Cromie et al.~\cite{Cromie}, we have
\[
r_j = \frac{1}{{\rm E}[B]} \sum_{i=j}^\infty \beta_i, \qquad j = 1,2,\dots,
\]
where ${\rm E}[B] = \beta^\prime (1)$ is the mean batch size.
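A minimal sketch of this computation, assuming an illustrative (truncated) batch-size distribution, is:
\begin{verbatim}
import numpy as np

beta_pmf = np.array([0.0, 0.5, 0.3, 0.2])       # illustrative beta_j, j = 0..3
mean_B = np.dot(np.arange(len(beta_pmf)), beta_pmf)

# r_j = (1/E[B]) sum_{i >= j} beta_i, for j >= 1
r = beta_pmf[::-1].cumsum()[::-1] / mean_B
print(r[1:], r[1:].sum())                       # the r_j sum to one

def p(i, n, pi_row):
    """p_{i,n} = sum_{j=1}^{n} pi_{i,n-j} r_j; pi_row[k] holds pi_{i,k}."""
    jmax = min(n, len(r) - 1)
    return sum(pi_row[n - j] * r[j] for j in range(1, jmax + 1))
\end{verbatim}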
\subsection{Algorithm for the stationary distribution}
In this section, we present an algorithm for calculating all the joint steady state probabilities. Since $\pi_{i,i}$ ($i = 0,1,\dots,c$) have been obtained, we can calculate all the other steady state probabilities using a recursive algorithm. Indeed, $\pi_{0,n}$ is recursively obtained once $\pi_{0,0}$ is given. Given that $\pi_{0,n}$ is known for all $n$ and that $\pi_{1,1}$ is known, we can recursively obtain all the probabilities $\pi_{1,n}$ for $n \geq 1$. Similarly, we obtain all the probabilities $\pi_{i,n}$, $(i,n) \in \mathcal{S}$.
\section{Conditional Decomposition}\label{decomposition}
We have derived the following result.
\begin{eqnarray*}
\Pi_c(z) & = & \frac{ \alpha_{c-1} ( \Pi_{c-1} (z) - \pi_{c-1,c-1}) -c\mu \pi_{c,c} }{ f_c(z) }, \\
\Pi_c(1) & = & \frac{ \alpha_{c-1} \Pi_{c-1}^\prime (1) }{c\mu - \lambda \beta^\prime (1)}.
\end{eqnarray*}
Let $Q^{(c)}$ denote the conditional queue length given that all $c$ servers are busy, i.e.,
\[
\mathbb{P} (Q^{(c)} = i) = \mathbb{P} (N = i + c \ | \ C = c),
\]
where $N$ and $C$ are the number of customers in the system and that of busy servers in the steady state, respectively.
Let $P_c(z)$ denote the generating function of $Q^{(c)}$. It is easy to see that
\begin{align*}
P_c(z) & = \frac{\Pi_c(z)}{\Pi_c(1)} \\
& = \frac{ \alpha_{c-1} ( \Pi_{c-1} (z) - \pi_{c-1,c-1}) -c\mu \pi_{c,c} }{\alpha_{c-1} \Pi_{c-1}^\prime(1) (z-1)} g(z) \\
& = \frac{\Pi_{c-1} (z) - \Pi_{c-1}(1)}{\Pi_{c-1}^\prime(1)(z-1)} g(z) \\
& = \frac{\sum_{j=1}^\infty \pi_{c-1,c-1+j} (z^j - 1) }{\Pi_{c-1}^\prime(1)(z-1)} g(z) \\
& = \frac{\sum_{j=1}^\infty \pi_{c-1,c-1+j} \sum_{i=0}^{j-1} z^i }{\Pi_{c-1}^\prime(1)} g(z)\\
& = \frac{ \sum_{i=0}^\infty \left(\sum_{j = i+1}^\infty \pi_{c-1,c-1+j} \right) z^i }{\Pi_{c-1}^\prime(1)} g(z),
\end{align*}
where we have used $c\mu \pi_{c,c} = \alpha_{c-1} (\Pi_{c-1} (1) - \pi_{c-1,c-1}) $ in the second equality and
\[
g(z) = \frac{(c\mu - \lambda \beta^\prime(1))(z-1)}{ (c\mu + \lambda )z - \lambda z \beta(z) - c\mu}.
\]
It should be noted that $g(z)$ is the generating function of the number of waiting customers in the conventional M$^{X}$/M/$c$ system without setup time (denoted by $Q^{(c)}_{ON-IDLE}$) under the condition that $c$ servers are busy.
We give a clear interpretation for the generating function:
\[
\frac{ \sum_{i=0}^\infty \left(\sum_{j = i+1}^\infty \pi_{c-1,c-1+j} \right) z^i }{\Pi_{c-1}^\prime(1)}.
\]
For simplicity, we define
\begin{eqnarray*}
q_{c-1,i} = \frac{\sum_{j = i+1}^\infty \pi_{c-1,c-1+j}}{\Pi_{c-1}^\prime(1)}, \qquad i \in \bbZ_+.
\end{eqnarray*}
We have
\[
\sum_{j=i+1}^\infty \pi_{c-1,c-1 + j} = \mathbb{P} ( N - C > i \ | \ C= c-1) \mathbb{P} (C=c-1).
\]
Thus, we have
\[
q_{c-1,i} = \frac{\mathbb{P} ( N - C > i \ | \ C= c-1)}{ \mathbb{E} [N - C \ | \ C = c-1] }.
\]
It should be noted that $N-C$ is the number of waiting customers. Thus, the discrete random variable with the distribution $q_{c-1,i}$ ($i=0,1,2,\dots$) describes the number of other waiting customers that a waiting customer finds in front of him, under the condition that there are $c-1$ active servers (see Burke~\cite{Burke}). Let $Q_{Res}$ denote this random variable.
Thus our decomposition result is summarized as follows.
\[
Q^{(c)} \,{\buildrel d \over =}\ Q^{(c)}_{ON-IDLE} + Q_{Res}.
\]
\begin{remark}
Tian et al.~\cite{Tian99,Tian03b} obtain a similar result for a multiserver model with Poisson arrival and vacation, i.e., $\alpha_i = (c-i) \alpha$ and $\beta(z) = z$.
However, the random variable with the distribution $q_{c-1,i}$ here is not given a clear physical meaning in~\cite{Tian99,Tian03b}.
\end{remark}
\section{Special Cases and Variant Models} \label{variant models}
\subsection{Staggered setup model}
Under the staggered setup policy, only one server is allowed to be in the setup process at a time. Thus, this model is a special case of the model in this paper with $\alpha_i = \alpha$ ($i = 0,1,\dots,c-1$)~\cite{phungduc14c}.
Simpler results can be obtained if we restrict ourselves to the case of single arrivals, i.e., $\beta(z) = z$.
This section is devoted to the decomposition property of the queue length: we treat the single-server system in Section~\ref{single_server:sec} and the multiserver model in Section~\ref{multiserver:decompose:sec}.
\subsubsection{Single server}\label{single_server:sec}
We consider the single server case. The partial generating functions are given as follows.
\[
\Pi_0 (z) = \frac{(1- \rho) \alpha }{\lambda + \alpha - \lambda z}, \qquad \Pi_1 (z) = \frac{(1-\rho) \lambda \alpha}{ (\mu- \lambda z) (\lambda + \alpha - \lambda z) },
\]
where $\rho = \lambda/\mu$.
Let $\Pi(z)$ denote the generating function of the number of waiting customers. We have
\[
\Pi (z) = \Pi_0 (z) + \Pi_1(z) = (1-\rho) \left( 1 + \frac{\rho}{1-\rho z} \right) \frac{\alpha}{\lambda + \alpha - \lambda z}.
\]
It should be noted that
\[
(1-\rho) \left( 1 + \frac{\rho}{1-\rho z} \right)
\]
and
\[
\frac{\alpha}{\lambda + \alpha - \lambda z}
\]
represent the generating function of the number of waiting customers in the corresponding M/M/1 queue without setup time and that of the number of customers arriving during the remaining setup time, respectively.
Thus, we have
\[
L \,{\buildrel d \over =}\ L_1 + L_2,
\]
where $L$ is the queue length of the current model while $L_1$ and $L_2$ represent the queue length of the conventional M/M/1 queue and the number of customers that arrive during the remaining setup time, respectively.
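The factorization can be verified symbolically. The following SymPy sketch checks that $\Pi_0(z) + \Pi_1(z)$ equals the product of the two generating functions above.
\begin{verbatim}
import sympy as sp

z, lam, mu, alpha = sp.symbols('z lambda mu alpha', positive=True)
rho = lam / mu

Pi0 = (1 - rho) * alpha / (lam + alpha - lam*z)
Pi1 = (1 - rho) * lam * alpha / ((mu - lam*z) * (lam + alpha - lam*z))

# PGF of the M/M/1 queue length times the PGF of the number of
# customers arriving during the remaining setup time.
mm1   = (1 - rho) * (1 + rho / (1 - rho*z))
setup = alpha / (lam + alpha - lam*z)

print(sp.simplify(Pi0 + Pi1 - mm1 * setup))  # expect: 0
\end{verbatim}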
\subsubsection{Multiserver}\label{multiserver:decompose:sec}
In this section, we investigate the decomposability of the queue length. In particular we answer the question: does equation (\ref{multiple_serv:eq}) hold?
\begin{equation}\label{multiple_serv:eq}
L \,{\buildrel d \over =}\ L_1 + L_2,
\end{equation}
where $L_1$ is the queue length of the M/M/$c$ without setup time and $L_2$ is the number of customers that arrive to the queue during the remaining setup time.
The generating function for the number of waiting customers in the conventional M/M/$c$ queueing system is given by
$1-C(c \rho,c) + C(c \rho,c)(1-\rho)/(1-\rho z)$, where $\rho = \lambda/(c\mu)$ and $C(c \rho,c)$ is the Erlang C formula for the waiting probability in the conventional M/M/$c$ system without setup time. Therefore, if the decomposition result holds, the generating function of the number of waiting customers in the system with setup time, $\Pi (z)$, must be given by the following formula.
\begin{equation}
\label{mmc_decompose:eq}
\Pi (z) = \frac{\alpha}{\alpha + \lambda - \lambda z} \left( 1- C(c \rho,c) + C(c \rho,c) \frac{1- \rho}{1- \rho z} \right).
\end{equation}
In~\cite{Gandhi10} the authors state that the decomposition property holds for this model, i.e., that (\ref{mmc_decompose:eq}) is true.
Here we prove this property. Indeed, for the case where $\beta(z) = z$, after some tedious calculations we find that
\[
\Pi_i (z) = \pi_{i,i} \frac{\lambda + \alpha}{\lambda + \alpha - \lambda z}, \qquad \pi_{i,i}= \pi_{0,0} \left( \frac{\lambda}{\mu} \right)^i \frac{1}{i!},
\]
for $i = 0,1,\dots,c-1$ and
\[
\pi_{c,c}= \pi_{0,0} \left( \frac{\lambda}{\mu} \right)^c \frac{1}{c!}, \quad \Pi_c (z) = \pi_{c,c} \frac{\lambda + \alpha}{(1-\rho z)(\lambda + \alpha - \lambda z)}.
\]
It follows from $\Pi (z) = \sum_{i=0}^c \Pi_i(z) $ and $\Pi (1) = 1$ that (\ref{mmc_decompose:eq}) is true.
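A quick numerical check of (\ref{mmc_decompose:eq}) using the explicit expressions above is given below; the parameter values are arbitrary sample choices satisfying the stability condition $\lambda < c\mu$.
\begin{verbatim}
import math

lam, mu, alpha, c = 3.0, 1.0, 0.7, 5       # sample values, lam < c*mu
a, rho = lam / mu, lam / (c * mu)

S = sum(a**i / math.factorial(i) for i in range(c))
tail = a**c / math.factorial(c)
C = tail / ((1 - rho) * S + tail)          # Erlang C formula
pi00 = alpha / ((lam + alpha) * (S + tail / (1 - rho)))  # from Pi(1) = 1

def Pi_sum(z):                             # Pi(z) = sum_i Pi_i(z)
    f = (lam + alpha) / (lam + alpha - lam * z)
    return pi00 * f * (S + tail / (1 - rho * z))

def Pi_decomposed(z):                      # right-hand side of the claim
    return alpha / (alpha + lam - lam * z) \
        * (1 - C + C * (1 - rho) / (1 - rho * z))

for z in (0.0, 0.3, 0.7, 0.95):
    assert abs(Pi_sum(z) - Pi_decomposed(z)) < 1e-12
print("decomposition verified at sample points")
\end{verbatim}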
From the decomposition result for the queue length, we obtain the decomposition result for the waiting time via distributional Little's law.
In particular, we have
\begin{equation}\label{multiple_serv_waiting:eq}
W \,{\buildrel d \over =}\ W_1 + W_2,
\end{equation}
where $W$ denotes the waiting time in the current system while $W_1$ and $W_2$ are the waiting time in the corresponding M/M/$c$ system without setup time and the setup time, respectively.
\begin{remark}
From the generating function, we obtain explicit expressions for the joint stationary distribution as follows.
\[
\pi_{i,j} = \pi_{i,i} \left( \frac{\lambda}{\lambda+\alpha} \right)^{j-i}, \qquad j = i,i+1,\dots, \quad i = 0,1,\dots,c-1.
\]
Furthermore, if $\rho \neq \varphi_0 = \lambda/(\lambda + \alpha)$, we have
\[
\pi_{c,c+k} = \pi_{c,c} \left( \frac{\varphi_0^{k+1} - \rho^{k+1} }{\varphi_0 - \rho} \right), \qquad k \geq 0.
\]
If $\rho = \lambda/(\lambda + \alpha)$, we have
\[
\pi_{c,c+k} = \pi_{c,c} (k+1) \rho^k, \qquad k \geq 0.
\]
\end{remark}
\subsection{Vacation model}
A special case is the model with vacations; the model with Poisson arrivals is presented in~\cite{Tian99}. In the vacation model, a server goes on vacation upon completion of a service if there is no waiting customer. We assume that the vacation period is exponentially distributed with mean $1/\alpha$. Thus, when there are $i$ active servers and some waiting customers, vacationing servers return to service at rate $(c-i) \alpha$.
This vacation model is therefore equivalent to our setup model where the setup time is exponentially distributed with mean $1/\alpha_i$, where $\alpha_i = (c-i) \alpha$, provided that there are $i$ active servers.
See Figure~\ref{state_dependent_setup:fig} for the transitions among states for the model with state-dependent setup and individual arrivals.
\begin{figure}
\begin{center}
\includegraphics[scale=0.25]{staggered_setup.eps}
\end{center}
\caption{State transition diagram for the model with state-dependent setup and individual arrivals.}
\label{state_dependent_setup:fig}
\end{figure}
\subsection{Model with queue-length-dependent setup}
Another variant is the model where the number of servers in setup depends on the number of waiting customers \cite{Gandhi10b}. In particular, the setup rate is $\min(j-i,c-i) \alpha$ provided that there are $i$ active servers and $j$ customers in the system~\cite{phungduc14c}. See Figure~\ref{demand_dependent_setup:fig} for the transitions among states for the case of individual arrivals, i.e., $\beta(z) = z$. This model is more complex due to the inhomogeneity of the boundary states, where the number of customers in the system satisfies $j \leq c$. However, we can treat this model by a similar approach with a minor modification. In particular, we may define
the generating functions for the homogeneous part, i.e., $j \geq c$:
\[
\Pi_i (z) = \sum_{j = c}^\infty \pi_{i,j} z^{j-i}, \qquad i = 0,1,\dots,c-1,c.
\]
This results in a set of equations for the generating functions of the states $\{(i,j);\ j \geq c \}$. In addition, we have balance equations for the states $\{(i,j);\ i \leq j \leq c \}$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{demand_dependent_setup.eps}
\end{center}
\caption{State transition diagram for the model with queue-length-dependent setup and individual arrivals.}
\label{demand_dependent_setup:fig}
\end{figure}
\section{Performance Measures and Numerical Results}\label{numerical:sec}
\subsection{Power Consumption}
The cost per unit time for a server in each of the states SETUP, ON and IDLE is set as follows: $C_{setup} = 1$, $C_{run} = 1$ and $C_{idle} = 0.6$. The power consumption of our system with staggered setup is given by
\[
P_{ON-off} = C_{setup} (1 - \sum_{i=0}^{c-1} \pi_{i,i} - \Pi_c(1) ) + C_{run} c \rho,
\]
where $c \rho = \lambda/\mu$ is the mean number of running servers. We plot four curves corresponding to the cases $\alpha = 0.1,1,10$ and 100. For comparison, we also plot the curves for the conventional M/M/$c$ queue under the same setting. It should be noted that in the conventional M/M/$c$ system, an idle server is not turned off. As a result, the cost for power consumption is given by
\[
P_{ON-idle} = C_{run} c \rho + C_{idle} (c- c \rho).
\]
\subsection{Total Cost}
The mean number of waiting customers is given by
\[
{\rm E} [Q] = \sum_{i=0}^c \Pi_i^\prime(1).
\]
We consider a cost function taking into account both power consumption and performance (the mean number of waiting customers). Our aim is to investigate the characteristics of this cost function. The cost function for the ON-Off model is given by
\[
C_{ON-off} = P_{ON-off} + \frac{1}{\delta} {\rm E}[Q].
\]
On the other hand, the cost function for the ON-Idle model is given by
\[
C_{ON-idle} = P_{ON-idle} + \frac{1}{\delta} {\rm E}[Q_i],
\]
where ${\rm E}[Q_i]$ is the mean queue length of the M$^X$/M/$c$ queue without setup time, which can be obtained from the analysis in~\cite{Cromie}.
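For reference, the cost expressions above translate directly into code. The following is a minimal sketch; it assumes that the diagonal probabilities $\pi_{i,i}$, the value $\Pi_c(1)$ and the mean queue lengths have already been computed from the generating-function analysis, and the variable names are ours.
\begin{verbatim}
C_setup, C_run, C_idle = 1.0, 1.0, 0.6

def power_on_off(pi_diag, Pi_c_at_1, lam, mu):
    # pi_diag = [pi_{0,0}, ..., pi_{c-1,c-1}]; lam/mu = c*rho is the
    # mean number of running servers.
    return C_setup * (1.0 - sum(pi_diag) - Pi_c_at_1) + C_run * lam / mu

def power_on_idle(lam, mu, c):
    crho = lam / mu
    return C_run * crho + C_idle * (c - crho)

def cost_on_off(pi_diag, Pi_c_at_1, lam, mu, EQ, delta):
    return power_on_off(pi_diag, Pi_c_at_1, lam, mu) + EQ / delta

def cost_on_idle(lam, mu, c, EQ_idle, delta):
    return power_on_idle(lam, mu, c) + EQ_idle / delta
\end{verbatim}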
\begin{figure}[htbp]
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.55]{Power_fixed_batch_rho_change_alpha01_costsetup_equal_costrun.eps}
\caption{Power Consumption $\alpha = 0.1$.}
\label{Power_fixed_batch_rho_change_alpha01_costsetup_equal_costrun:fig}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.55]{Power_fixed_batch_rho_change_alpha1_costsetup_equal_costrun.eps}
\caption{Power Consumption $\alpha = 1$.}
\label{Power_fixed_batch_rho_change_alpha1_costsetup_equal_costrun:fig}
\end{center}
\end{minipage}
\end{tabular}
\end{figure}
In this section, we consider the case where $\alpha_i = \alpha$, i.e., the staggered setup policy. In all the figures, the curves for the ON-Idle policy are labeled ``ON-Idle''; all other curves refer to the ON-Off model.
\subsection{Power consumption}
In this section we investigate the power consumption as a function of the traffic intensity. Figures~\ref{Power_fixed_batch_rho_change_alpha01_costsetup_equal_costrun:fig}, \ref{Power_fixed_batch_rho_change_alpha1_costsetup_equal_costrun:fig} and
\ref{Power_fixed_batch_rho_change_alpha10_costsetup_equal_costrun:fig} show the power consumption against the traffic intensity for $\alpha = 0.1$, 1 and 10, respectively. We observe from the three figures that the ON-Off policy always outperforms the ON-Idle policy. However, from the performance point of view, the waiting time in the former is longer than in the latter. We investigate the impact of the setup time on the total cost of the system in the next section. An important observation is that, keeping the traffic intensity fixed, the power consumption decreases with the batch size. This implies that it is more efficient to design the system so that customers arrive in groups with a large batch size.
\begin{figure}[htbp]
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.55]{Power_fixed_batch_rho_change_alpha10_costsetup_equal_costrun.eps}
\caption{Power Consumption $\alpha = 10$.}
\label{Power_fixed_batch_rho_change_alpha10_costsetup_equal_costrun:fig}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.55]{power_qos_tradeoff_vs_alpha_rho03_beta01.eps}
\caption{Total Cost $\rho = 0.3, \delta = 0.1$.}
\label{power_qos_tradeoff_vs_alpha_rho03_beta01:fig}
\end{center}
\end{minipage}
\end{tabular}
\end{figure}
\subsection{Cost function}
In this section, we investigate the cost function against various parameters. Figures~\ref{power_qos_tradeoff_vs_alpha_rho03_beta01:fig}, \ref{power_qos_tradeoff_vs_alpha_rho03_beta1:fig} and \ref{power_qos_tradeoff_vs_alpha_rho03_beta10:fig} show the cost function against the setup rate $\alpha$ for $\delta = 0.1$, 1, and 10, respectively, with $\rho = 0.3$ fixed. In data centers, a server typically operates at a load of about 40\%~\cite{Schwartz12}; thus, it is interesting to investigate the cost function around this value. It should be noted that $\delta = 0.1$ corresponds to the case where the importance of the performance, i.e., the mean queue length, is 10 times larger than that of the power consumption, while $\delta = 10$ represents the opposite case where the power consumption is given priority. For comparison we also plot the cost function for the ON-Idle model. We observe from the three graphs that there exists a threshold $\alpha_{0.3}$ such that the ON-Off model is more efficient than the ON-Idle one when $\alpha > \alpha_{0.3}$.
Figures~\ref{power_qos_tradeoff_vs_rho_ap1delta01:fig}, \ref{power_qos_tradeoff_vs_rho_ap1delta1:fig} and \ref{power_qos_tradeoff_vs_rho_ap1delta10:fig}
show the cost function against the traffic intensity $\rho$ for $\delta = 0.1$, 1 and $10$, respectively. We observe from Figure~\ref{power_qos_tradeoff_vs_rho_ap1delta01:fig} that the ON-Idle model outperforms the ON-Off model. This implies that when the importance is placed on the performance ($\delta = 0.1$), it is better to keep the servers ON all the time. On the other hand, we observe from Figure~\ref{power_qos_tradeoff_vs_rho_ap1delta10:fig} that the ON-Off model is always better than the ON-Idle one for $\delta = 10$. This implies that when the importance is placed on the power consumption, it is better to adopt the ON-Off model.
\begin{figure}[htbp]
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.55]{power_qos_tradeoff_vs_alpha_rho03_beta1.eps}
\caption{Total Cost $\rho = 0.3, \delta = 1$.}
\label{power_qos_tradeoff_vs_alpha_rho03_beta1:fig}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.55]{power_qos_tradeoff_vs_alpha_rho03_beta10.eps}
\caption{Total Cost $\rho = 0.3, \delta = 10$.}
\label{power_qos_tradeoff_vs_alpha_rho03_beta10:fig}
\end{center}
\end{minipage}
\end{tabular}
\end{figure}
\begin{figure}[htbp]
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.55]{power_qos_tradeoff_vs_rho_ap1delta01.eps}
\caption{Total Cost $\alpha = 1, \delta = 0.1$.}
\label{power_qos_tradeoff_vs_rho_ap1delta01:fig}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.55]{power_qos_tradeoff_vs_rho_ap1delta1.eps}
\caption{Total Cost $\alpha = 1, \delta = 1$.}
\label{power_qos_tradeoff_vs_rho_ap1delta1:fig}
\end{center}
\end{minipage}
\end{tabular}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{power_qos_tradeoff_vs_rho_ap1delta10.eps}
\end{center}
\caption{Total Cost $\alpha = 1, \delta = 10$.}
\label{power_qos_tradeoff_vs_rho_ap1delta10:fig}
\end{figure}
\section{Concluding remarks}\label{concluding_remark:sec}
In this paper, we have considered the M${}^{\rm X}$/M/$c$ queueing system with staggered setup, where only one server can be in setup mode at a time. A server is turned off immediately after serving a job if there is no waiting customer; if there are waiting customers, OFF servers are turned on one by one. Using a generating function approach, we have obtained the partial generating functions of the joint queue lengths as well as recursive formulae for computing the factorial moments of the number of waiting jobs. Numerical experiments have provided some insights into the performance of the system. An important direction for future work is the case where a fixed number of servers are always kept ON in order to reduce the delay of customers. It is also interesting to clarify the relation between the decomposition formula in this paper and that of Fuhrmann and Cooper~\cite{Fuhrmann85}. Another possible direction is to obtain the tail asymptotics of the joint queue lengths.
\section{Introduction} \label{sec:introduction}
\subsection{Sparsity regularization in X-ray CT}
Sparsity-regularized (SR) image reconstruction has shown great promise for X-ray CT. Many works, e.g, \cite{SidkyTV:06,song2007sparseness,sidky2008image,chen2008prior,Bian:10,ritschl2011improved}, have demonstrated that accurate reconstructions
can be obtained from substantially less projection data than is normally required by standard analytical methods such as filtered back-projection and algebraic reconstruction methods.
Acquiring less data is of interest in many applications of X-ray CT to reduce scan time or exposure to ionizing radiation.
The typical SR setup for X-ray CT, and the one we employ, is that an unknown discrete image $x \in \mathbb{R}^N$ is to be reconstructed from measured discrete data $b \in \mathbb{R}^m$, connected to $x$ through a linear model, $b \approx A x$, for some measurement matrix $A \in \mathbb{R}^{m\times{}N}$. A common reconstruction problem is
\begin{equation}
\im^* = \argmin_x ~R(x) \quad \text{subject to} \quad \|A x - b\|_2 \leq \epsilon, \label{eq:ineqreconprob}
\end{equation}
where $R(x)$ is a sparsity regularizer, for example the $1$-norm, the total variation (TV) semi-norm, or a $1$-norm of wavelet coefficients or coefficients in a learned dictionary, depending on which domain sparsity is expected in, and $\epsilon$ is a regularization parameter that must be chosen to balance the level of regularization enforced with the misfit to data.
In contrast to analytical and algebraic reconstruction methods, SR can admit reconstructions in the underdetermined case $m < N$ as shown in the references given above. However, from the existing individual studies it is difficult to synthesize a coherent quantitative understanding of the undersampling potential of SR in CT.
From a practical point of view, we want to know how many CT projections to acquire in order to obtain a SR reconstruction of sufficient quality to reliably solve the relevant imaging task, for example detection, classification, segmentation, etc. This question is difficult to address meaningfully in general, because specific applications pose different challenges, for example varying levels of noise and inconsistencies in the data as well as different quality requirements on the reconstruction.
But even in an application-independent setting, systematic analysis of the undersampling potential of SR in CT remains unexplored.
We consider in the present work an idealized form of the reconstruction problem \eqref{eq:ineqreconprob} with $\epsilon = 0$ and consider only synthetic noise-free data. This simplified setup allows us to study more precise questions with fewer complicating factors involved. Specifically,
we consider the three reconstruction problems, \text{P$_1$}{}, \text{LP}{} and \text{TV}{}:
\begin{align*}
(\text{P$_1$}{})&\qquad\qquad\qquad\argmin_x ~\|x\|_{1\phantom{\text{TV}}} \; \text{subject to} \quad A x = b,\\
(\text{LP})&\qquad\qquad\qquad\argmin_x ~\|x\|_{1\phantom{\text{TV}}} \; \text{subject to} \quad A x = b, \quad x \geq 0,\\
(\text{TV})&\qquad\qquad\qquad\argmin_x ~\|x\|_{\text{TV}\phantom{1}} \; \text{subject to} \quad A x = b.
\end{align*}
The first two are standard 1-norm minimization (the latter with non-negativity constraint enforced) for reconstruction of images sparse in the image domain. The last is TV minimization for sparsity in the gradient domain. The TV semi-norm is defined as
\begin{align*}
\|x\|_\text{TV} = \sum_{j=1}^N \|D_j x\|_2,
\end{align*}
where $D_j$ is a finite-difference approximation of the gradient at pixel $j$. In this work we use forward differences and Neumann boundary conditions.
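For concreteness, the TV semi-norm with forward differences and Neumann boundary conditions can be evaluated by the following minimal NumPy sketch.
\begin{verbatim}
import numpy as np

def tv(x):
    # Isotropic TV: forward differences, Neumann boundary conditions
    # (the difference across the boundary is set to zero).
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    gx[:-1, :] = x[1:, :] - x[:-1, :]
    gy[:, :-1] = x[:, 1:] - x[:, :-1]
    return np.sum(np.sqrt(gx**2 + gy**2))
\end{verbatim}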
In the idealized setup we are interested in the central property of \emph{recoverability}: an image is said to be recoverable (from its ideal synthetic data) if it is the unique solution to the considered reconstruction problem. For example, we say that an image $\im_\text{orig}$ is recoverable by \text{P$_1$}{} from data $b = A \im_\text{orig}$ if $\im_\text{orig}$ is the unique \text{P$_1$}{} solution.
The fundamental question we are interested in is:
\begin{center}
\emph{How few samples are enough for recovery of an image of a given sparsity by SR reconstruction?}
\end{center}
In other words, we want to study recoverability as function of sparsity and sampling levels.
In the present work we will develop and apply a systematic analysis tool known as phase-diagram analysis from the field of compressed sensing (CS) for this purpose in the setting of CT.
\subsection{Compressed sensing} \label{subsec:cs}
The field of CS addresses precisely the question of how few samples one can acquire and still \emph{provably} recover the image.
In general, obviously, we need $N$ linearly independent samples of an image $x \in \mathbb{R}^N$ to recover $x$.
What CS says is that if the image $x$ is sparse then by taking the right kind of samples we can recover $x$ by SR from fewer than $N$ samples. Furthermore, the more sparse $x$ is, the fewer samples will suffice.
CS was initiated with the works of Donoho \cite{Donoho2006} and Cand\`es et al. \cite{CandesTao2005decoding,candes2006robust}.
Before the advent of CS, SR reconstruction using the $1$-norm had been used heuristically for reduced sampling in CT \cite{delaney1998globally,li2002accurate},
but the works of Donoho and Cand\`es sparked renewed interest and a new focus on guarantees of accurate reconstruction.
An important tool for CS guarantees is the restricted isometry property (RIP), which is defined as follows. A matrix $A$ is said to satisfy the RIP of order $s$ if there exists a constant $\delta_s \in (0,1)$ such that for all $s$-sparse signals $x$ it holds that
\begin{align}
(1-\delta_s)\|x\|_2^2 \leq \|A x\|_2^2 \leq (1+\delta_s)\|x\|_2^2.
\end{align}
An example of a RIP-based CS guarantee is (see e.g. \cite{candes2008introduction}):
If a matrix $A$ satisfies the RIP with $\delta_{2s} < \sqrt{2} - 1$, then an $s$-sparse image $x$ will be recovered by \text{P$_1$}{} from data $b = A x$.
The problem is then to identify matrices satisfying this, and unfortunately computing RIP-constants is in general NP-hard \cite{Tillmann2014}. An important class of matrices that admit RIP-results are the Gaussian sensing matrices, for which matrix elements are independent samples from the zero-mean, unit-variance normal distribution.
If the number of measurements $m$ satisfies
\begin{align}
m \geq C \cdot s \cdot \log (N/s), \label{eq:ripnumsamples}
\end{align}
where $C$ is a constant, then with high probability a Gaussian sensing matrix possesses the RIP, such that the image $x$ will be recovered.
In a certain sense the Gaussian sensing matrices constitute an \emph{optimal sampling strategy} \cite{CandesTao2006nearoptimal,candes2008introduction}, because no other matrix type can provide the same recovery guarantee for fewer samples than
\eqref{eq:ripnumsamples}. The importance of the Gaussian sensing matrices in CS is further established by many additional guarantees based for example on incoherence of the sensing matrix. It is not our intention to give a comprehensive review of CS theory here; such can be found in many places, for example the recent overview by Foucart and Rauhut \cite{FoucartRauhut:2013}.
The prominent role of the Gaussian sensing matrices and other random matrix constructions in CS gives the impression that random sensing is a key CS feature and it is
tacitly assumed in the imaging community that random sensing provides superior recoverability
performance to that of structured sampling.
This assumption has even led researchers
to investigate hardware implementations of random sampling for CT \cite{Brady_CStomo:14}.
However, more recently novel CS guarantees have appeared for certain \emph{non-random} matrices \cite{GilbertIndyk2010}, which may be a step toward reduced focus on random sampling, although these matrices are also quite far from CT.
It is generally well-understood \cite{elad2010sparse,FoucartRauhut:2013} that current CS theory does not cover deterministic sampling setups in real-world applications. For CT in particular Petra and Schn\"orr \cite{Petra:2009,PetraSchnoerr2014} showed that CS guarantees are extremely poor.
The main sensing problem of CT is its fundamental nature of sampling the object by line integrals. Each line integral only samples a small part of the object, thus leading to sparse, highly structured and coherent CT sampling matrices. In contrast CS sensing matrices, such as the Gaussian, are dense, have random elements and are incoherent, and hence fundamentally different.
In other words, there remains a large gap between the empirically observed effectiveness of SR in CT and the mathematical CS guarantees of accurate recovery typically involving random matrices.
\subsection{Own previous work and contribution of present work}
We have recently been interested in analyzing SR in CT from a CS perspective \cite{Joergensen_TMI:2013,Joergensen_eqconpap_v2_arxiv:2014,uniqueness_arxiv_2014}.
More specifically,
we have studied recoverability from fan-beam CT data by 1-norm and TV regularization. We introduced the use of certain phase diagrams from CS to the setting of CT for systematically studying how recoverability depends on sparsity and sampling. Our work demonstrated quantitatively that recoverability from equi-angular fan-beam CT data for certain classes of test images exhibits a phase-transition phenomenon very similar to what has been proved in CS for the Gaussian sensing matrices, as will be explained in Sec.~\ref{sec:phasediagramanalysis}.
In the present work we will further refine the phase-diagram analysis we introduced in \cite{Joergensen_eqconpap_v2_arxiv:2014,uniqueness_arxiv_2014} and demonstrate how it can be used to provide systematic, quantitative insight into the undersampling potential of SR in CT by applying it to three studies. First, in Sec.~\ref{sec:phasediagramanalysis} we give the relevant theoretical background on phase-diagram analysis and its application to CT. Following that, we address in Sec.~\ref{sec:comparegaussian}, \ref{sec:randomsampling} and \ref{sec:large-scale} the following studies:
\begin{itemize}[leftmargin=0.7in]
\item[Study A:]
How does CT-sampling compare in terms of recoverability to an optimal CS sampling strategy, i.e., using Gaussian sensing matrices?
\item[Study B:] Is recoverability improved by taking random CT measurements?
\item[Study C:] How accurately can small-scale synthetic-data phase diagrams predict sufficient sampling for realistically-sized images of real objects?
\end{itemize}
Finally in Sec.~\ref{sec:conclusion} we conclude the paper.
The purpose of Study A is to put the CT phase-transition behavior we observed in \cite{Joergensen_eqconpap_v2_arxiv:2014,uniqueness_arxiv_2014} more clearly into context of CS-theory. Quite surprisingly our results demonstrate that standard CT sampling is almost comparable with Gaussian sensing matrices in terms of recoverability. This is surprising since the Gaussian sensing matrices form an optimal CS sampling strategy, as explained previously in this section.
Study B addresses the use of random sampling in CT for potentially allowing for accurate reconstruction from fewer measurements than regular structured CT sampling. By use of phase-diagram analysis we will show that random sampling does \emph{not} lead to improved performance, but rather unchanged or in some cases even substantially reduced performance.
The purpose of Study C is to establish a connection to real-world CT image reconstruction by investigating the practical utility of phase diagrams for predicting how much CT data to acquire for reconstructing accurately a large-scale image of a given sparsity.
In all three studies we use phase-diagram analysis as the main tool. Our goal is both to arrive at the particular insights of the three studies and to demonstrate phase-diagram analysis as a useful tool for systematically gaining quantitative understanding of SR in CT.
\section{Phase-diagram analysis} \label{sec:phasediagramanalysis}
\subsection{Theoretical phase-transition results}
As explained in Sec.~\ref{sec:introduction}\ref{subsec:cs} the Gaussian sensing matrices play a central role in CS. It is also possible to give a theoretical description of their \text{P$_1$}{} and \text{LP}{} recoverability in terms of phase-diagram analysis. We present two different theoretical analyses, by Donoho and Tanner (DT) and by Amelunxen, Lotz, McCoy and Tropp (ALMT).
DT established in a series of papers \cite{DonohoTanner_nonneg:2005,DonohoTanner:2009,DonohoTanner_radically:2009,DonohoTanner_finitesize:2010} phase-transition behavior of the Gaussian sensing matrices. Their analysis is based on so-called neighborliness of random polytopes and builds on earlier work by Vershik and Sporyshev \cite{Vershik1992}. For an $s$-sparse signal $x \in \mathbb{R}^N$ and $m$ samples, the DT phase diagram displays recoverability as function of $(\rho, \delta)$ for $\rho = s/m \in [0,1]$ and $\delta = m/N \in [0,1]$. For the set of $s$-sparse signals DT consider two notions of recoverability: strong, meaning that \emph{all} $s$-sparse signals are recovered, and weak, meaning that \emph{most} $s$-sparse signals are recovered at a given sampling level. DT then showed for the Gaussian sensing matrices and \text{P$_1$}{} and \text{LP}{} that asymptotically
there exist strong/weak phase-transition curves $\rho(\delta)$ such that at a sampling level of $\delta$ with
high probability all/most
signals with $\rho < \rho(\delta)$
will be recovered. Similarly, with high probability all/most signals with $\rho > \rho(\delta)$ will fail to be recovered. The strong and weak phase-transition curves for \text{P$_1$}{} and \text{LP}{} are shown in Fig.~\ref{fig:DTALMTcurves}~(left). Each curve partitions the phase space in two regions, one of full recovery (below the curve) and one of no recovery (above). We note that the weak full-recovery regions are substantially larger than their strong counterparts and that \text{LP}{} has a larger full-recovery region than \text{P$_1$}{}. Both observations intuitively make sense.
As we will demonstrate in Sec.~\ref{sec:comparegaussian}, the asymptotic weak phase-transition curves are in excellent agreement with empirical phase diagrams for finite-sized problems.
ALMT use a completely different analysis \cite{Amelunxen_arxiv:2014} based on the so-called statistical dimension of descent cones to prove non-asymptotic phase-transition behavior for the Gaussian sensing matrices. The ALMT phase diagram shows recoverability as function of $(s/N, m/N) \in [0,1]^2$. ALMT give phase-transition curves, i.e., critical sampling values $m/N$ as function of sparsity values $s/N$, such that most images of a given sparsity are recovered from more samples than the critical level, and not recovered from fewer samples. The \text{P$_1$}{} and \text{LP}{} ALMT phase-transition curves are shown in Fig.~\ref{fig:DTALMTcurves}~(right). Contrary to the DT phase-transition curves, the full recovery regions are above the curves. We will demonstrate in Sec.~\ref{sec:comparegaussian} that the ALMT phase-transition curves are in excellent agreement with empirical phase diagrams.
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\linewidth]{DT_pd.pdf}\qquad
\includegraphics[width=0.4\linewidth]{ALMT_pd.pdf}
\caption{Theoretical phase-transition curves for Gaussian sensing matrices. Left: Donoho-Tanner (DT) asymptotic phase-transition curves for strong and weak recovery by \text{P$_1$}{} and \text{LP}{}; recovery occurs \emph{below} the curves. Right: Amelunxen-Lotz-McCoy-Tropp (ALMT) phase-transition curves for recovery by \text{P$_1$}{} and \text{LP}{}; recovery \emph{above} the curves.}\label{fig:DTALMTcurves}
\end{figure}
Regarding recovery guarantees for TV, we are only aware of the RIP-results by Needell and Ward \cite{needell2013cstv,NeedellWard:SIR:13}.
To our knowledge it is an open question whether theoretical phase-transition results can be obtained. In the present work we demonstrate empirically that such behavior can be observed both from Gaussian and fan-beam CT sensing matrices.
In addition to the Gaussian sensing matrices, phase-transition behavior has been observed \cite{DonohoTanner:2009} for several other classes of random matrices and some theoretical analysis has been given \cite{Bayati:2012}. However, it remains open to establish phase-transition behavior for matrices occurring in practical imaging applications such as CT. Our motivation for the present work is precisely to establish that at least empirically it is possible to observe phase-transition behavior in CT.
\subsection{Experimental procedure of empirical phase-diagram analysis}\label{subsec:experimental}
Even though no theoretical phase-transition results exist for CT we can construct
empirical phase diagrams by repeatedly solving the same reconstruction problem over an ensemble of problem realizations for a range of sparsity and sampling levels. In our case we found that $100$ realizations at each sparsity and sampling level were enough to demonstrate phase-transition behavior.
Each problem realization is generated in the following way: Given sparsity and sampling levels a test image $\im_\text{orig}$ is generated, a sampling matrix $A$ is set up, and ideal data $b = A\im_\text{orig}$ is computed. From the data $b$ the appropriate reconstruction problem is solved and the reconstruction is denoted $\im^*$. Recovery is declared if $\im^*$ is sufficiently close numerically to $\im_\text{orig}$; here we test whether the relative 2-norm error $\|\im^*-\im_\text{orig}\|_2 / \|\im_\text{orig}\|_2 < \epsilon$, for some choice of threshold $\epsilon$. For \text{P$_1$}{} and \text{LP}{} we found $\epsilon = 10^{-4}$ to be suitable, while for \text{TV}{} we use $\epsilon = 10^{-3}$, as the conic optimization problem is more difficult to solve accurately.
As in \cite{Joergensen_eqconpap_v2_arxiv:2014,uniqueness_arxiv_2014} we use the commercial optimization software MOSEK \cite{MOSEK} to solve reconstruction problems required to construct a phase diagram. MOSEK uses a state-of-the-art primal-dual interior-point method, which allows us to solve \text{P$_1$}{} and \text{LP}{} (recast as linear programs) and \text{TV}{} (recast as a conic program), very accurately. An accurate solution is necessary for correctly assessing numerically whether an image is recovered, since numerical inaccuracies and approximate solutions may lead to the wrong decision. While allowing for high accuracy, interior-point methods are not efficient for large-scale problems.
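For illustration, the recast of \text{P$_1$}{} as a linear program can be sketched with an open-source solver in place of MOSEK; the stand-in below uses \texttt{scipy.optimize.linprog} and is practical only for small problem sizes.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def solve_P1(A, b):
    # min ||x||_1 s.t. Ax = b, recast as an LP in (x, t) with |x| <= t.
    m, N = A.shape
    cost = np.concatenate([np.zeros(N), np.ones(N)])   # minimize sum(t)
    A_ub = np.block([[ np.eye(N), -np.eye(N)],         #  x - t <= 0
                     [-np.eye(N), -np.eye(N)]])        # -x - t <= 0
    A_eq = np.hstack([A, np.zeros((m, N))])            # A x = b
    res = linprog(cost, A_ub=A_ub, b_ub=np.zeros(2*N),
                  A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)]*N + [(0, None)]*N)
    return res.x[:N]

def recovered(x_orig, x_star, tol=1e-4):
    return np.linalg.norm(x_star - x_orig) < tol * np.linalg.norm(x_orig)
\end{verbatim}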
For the reconstruction problems in Study C we use a large-scale optimization algorithm, which will be described there.
For the Gaussian sensing matrices, each problem realization contains a new realization of the sampling matrix, while in the fan-beam CT case a single matrix (at each sampling level) is used throughout. This is because
in CT we really are interested in the performance of a fixed matrix, which is specified by the physical scanner geometry.
For the ALMT phase diagrams we use $39$ relative sparsity levels $s/N = 0.025, 0.050, \dots, 0.975$ and $26$ sampling levels, namely from $1$ to $26$ equi-angular projection views. At $26$ views, the matrix has size $3328 \times{} 3228$ and is full rank, such that any image, independent of sparsity, will be recovered. For the DT phase diagram we use the same $26$ sampling levels in combination with $32$ sparsity levels (relative to the sampling level), i.e., $\rho = s/m = 1/32, 2/32,\dots,32/32$.
With 100 realizations at each sparsity and sampling level, a total of 101,400 reconstruction problems need to be solved for a single ALMT phase diagram (at the chosen resolution), while the same number for a DT phase diagram is 83,200. Even with the small images used in this paper, our results have taken many hours of computing time on a cluster at DTU Computing Center.
\section{Study A: How does CT compare to CS?} \label{sec:comparegaussian}
As we have explained, the Gaussian sensing matrices are central to CS, since they admit strong theoretical results
and are shown to form an optimal sampling strategy. In this study we use phase-diagram analysis to compare recoverability of fan-beam CT with the Gaussian sensing matrices. We will show that despite the lack of CS guarantees for fan-beam CT, we can empirically observe almost comparable recoverability.
\subsection{Measurement matrices}
We consider two types of measurement matrices: the Gaussian sensing matrices and a system matrix corresponding to a 2D equi-angular fan-beam scanning geometry. A Gaussian sensing matrix
is generated by drawing
independent, identically distributed elements from the standard zero-mean unit-variance normal distribution.
The 2D fan-beam CT system matrix is practically the same one we used in \cite{Joergensen_eqconpap_v2_arxiv:2014, uniqueness_arxiv_2014}, where it is described in detail, and the non-zero structure and the scanning geometry are illustrated in \cite{uniqueness_arxiv_2014}. In brief, we consider a disk-shaped image of $N$ pixels in total, inscribed in an $N_\text{side}\times{} N_\text{side}$ square pixel array. Fan-beam projections are recorded at $N_\text{v}$ equi-angular views of a $360^\circ$ scanning arc, each consisting of $2N_\text{side}$ pixels on a curved detector. The total number of measurements is $m=N_\text{v}\cdot 2 N_\text{side}$, and the $m\times{}N$ system matrix is computed by the function \texttt{fanbeamtomo} from the MATLAB$^\circledR$ toolbox AIR Tools \cite{Hansen2012}. The only difference from \cite{Joergensen_eqconpap_v2_arxiv:2014, uniqueness_arxiv_2014} is that the first angle is not chosen to be on a coordinate axis but offset by $20^\circ$. This offset regularizes the matrix by avoiding identical rows arising from rays
in opposite views aligned with the coordinate axes.
\subsection{Image-domain sparsity}
\paragraph*{Signedspikes by \text{P$_1$}{}}
We consider first the unconstrained problem \text{P$_1$}{}. The standard image class considered in CS phase-diagram studies consists of images with random-valued pixels at random locations. We refer to this image class as signedspikes, see \cite{Joergensen_eqconpap_v2_arxiv:2014} for details and illustration. Specifically we generate a signedspikes image realization as follows: Given an image size (number of pixels) $N$ and sparsity (number of non-zero pixels) $s$, select uniformly at random $s$ pixels and assign values sampled from the uniform distribution on $[-1,1]$.
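A realization of the signedspikes class requires only a few lines of NumPy; the non-negative spikes class used later with \text{LP}{} differs only in the value range.
\begin{verbatim}
import numpy as np

def signedspikes(N, s, rng):
    # s-sparse image: random support, values uniform on [-1, 1].
    x = np.zeros(N)
    idx = rng.choice(N, size=s, replace=False)
    x[idx] = rng.uniform(-1.0, 1.0, size=s)
    return x

# Non-negative spikes variant: x[idx] = rng.uniform(0.0, 1.0, size=s)
\end{verbatim}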
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\linewidth]{dt_signedspikes_gaussian.pdf}\qquad
\includegraphics[width=0.4\linewidth]{dt_signedspikes_fanbeam_equi_offset20.pdf}\\[0.1cm]
\includegraphics[width=0.4\linewidth]{almt_signedspikes_gaussian_1e-04.pdf}\qquad
\includegraphics[width=0.4\linewidth]{almt_signedspikes_fanbeam_equi_offset20_1e-04.pdf}
\caption{Phase diagrams for the signedspikes image class and \text{P$_1$}{} reconstruction. DT phase diagrams (top row) and ALMT phase diagrams (bottom row). Gaussian sensing matrices (left) and fan-beam CT system matrices (right). Theoretical phase-transition curves for Gaussian sensing matrices (red), empirical phase-transition curve at $50\%$ contour line (cyan), $5\%$ and $95\%$ contour lines (yellow and magenta).\label{fig:dtalmt_signedspikes}}
\end{figure}
We generate DT and ALMT phase diagrams as described in Sec.~\ref{sec:phasediagramanalysis}\ref{subsec:experimental} for Gaussian and fan-beam CT sensing matrices, see Fig.~\ref{fig:dtalmt_signedspikes}.
At each sparsity and sampling level, the color represents the empirical success rate ranging from $0\%$ (shown black) to $100\%$ (shown white). Overlaid in cyan is the $50\%$ contour line indicating the empirical transition curve, as well as in yellow and magenta the $5\%$ and $95\%$ contour lines to quantify the transition width. Further, the theoretical phase-transition curve for the Gaussian sensing matrices is shown as a red line.
We make the following observations. First, for the Gaussian sensing matrices, both the empirical DT and ALMT phase diagrams are in perfect agreement with the theoretical DT and ALMT phase-transition curves. This was to be expected but we include it here to verify that we can indeed reproduce the expected phase-transition curves using our software implementation. Second, and much more surprising, \emph{the fan-beam CT phase diagrams are almost identical to the Gaussian case.} The single apparent difference is in the bottom left corner of the DT phase diagram, where the CT recovery region does not extend to the same level as the Gaussian case. The poor CT recovery performance here is easily explained: the two leftmost columns correspond to a single projection and two projections $180^\circ$ apart, from which it is inherently difficult to produce an accurate reconstruction. Note that this issue is not apparent from the present ALMT phase diagram. Apart from this difference, the CT
recovery performance is almost identical to the Gaussian case, in particular the transition is as sharp, as indicated by the $5\%$ and $95\%$ contour levels. On closer inspection the CT recovery region is slightly smaller than the Gaussian case, as seen by the lower cyan curve in the DT case and higher in the ALMT case.
Nevertheless, considering that the Gaussian sensing matrices form an optimal sampling strategy,
and that CT sampling matrices are highly structured, coherent and sparse, we find it extremely surprising to observe almost as good recoverability for CT.
\paragraph*{Non-negative spikes by \text{LP}{}}
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\linewidth]{dt_spikes_gaussian.pdf}\qquad
\includegraphics[width=0.4\linewidth]{dt_spikes_fanbeam_equi_offset20.pdf}\\[0.1cm]
\includegraphics[width=0.4\linewidth]{almt_spikes_gaussian_1e-04.pdf}\qquad
\includegraphics[width=0.4\linewidth]{almt_spikes_fanbeam_equi_offset20_1e-04.pdf}
\caption{Phase diagrams for the non-negative spikes image class and \text{LP}{} reconstruction. DT phase diagrams (top row) and ALMT phase diagrams (bottom row). Gaussian sensing matrices (left) and fan-beam CT system matrices (right). Theoretical phase-transition curves for Gaussian sensing matrices (red), empirical phase-transition curve at $50\%$ contour line (cyan), $5\%$ and $95\%$ contour lines (yellow and magenta)\label{fig:dtalmt_spikes}}
\end{figure}
Typically in CT a non-negativity constraint can be employed since the imaged quantity, the linear attenuation coefficient, is non-negative, and hence the reconstruction problem \text{LP}{} is appropriate.
For $\text{LP}$ we consider the natural non-negative version of the signedspikes class, which we call spikes, with the single change that
values are sampled from the uniform distribution on $[0,1]$, see \cite{Joergensen_eqconpap_v2_arxiv:2014} for illustration.
We construct again empirical DT and ALMT phase diagrams and display them in Fig.~\ref{fig:dtalmt_spikes}
together with the theoretical Gaussian-case phase-transition curves for \text{LP}{}. Also in this case, the CT phase diagrams are almost identical to the Gaussian case, in terms both of the empirical phase-transition curve and the width as indicated by the $5\%$ and $95\%$ contour lines. In fact, the similarity is even larger as the cyan $50\%$ contour in the CT case coincides with the theoretical transition curve, except at the bottom-left corner of the DT phase diagram, as before caused by having only $1$ or $2$ CT projections.
In accordance with the theoretical curves, we see that even fewer samples suffice for recovery in the non-negative case compared to before.
\paragraph*{A structured image class}
CS recovery guarantees, for example for the Gaussian sensing matrices, state that the sufficient number of samples depends on the signal only through its sparsity. That is, signals with structure in the non-zero locations should not require a different number of samples for recovery than unstructured signals such as the spikes images. Does the same hold for CT? We will demonstrate that the answer is no. Because the non-zero pixels are selected at random in the spikes classes, there is no structure, i.e., no correlation between neighboring pixels. As an example of a class of sparse images with some structure in the non-zero locations we use the 2-power class from \cite{Joergensen_eqconpap_v2_arxiv:2014}. This image class is based on a breast tissue model, but for our purpose here it suffices to say that some correlation has been introduced between neighboring pixel values.
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\linewidth]{dt_fftpower_2_0_gaussian.pdf}\qquad
\includegraphics[width=0.4\linewidth]{dt_fftpower_2_0_fanbeam_equi_offset20.pdf}
\caption{DT phase diagrams for the 2-power image class and \text{LP}{} reconstruction. Gaussian sensing matrices (left) and fan-beam CT system matrices (right). Theoretical phase-transition curves for Gaussian sensing matrices (red), empirical phase-transition curve at $50\%$ contour line (cyan), $5\%$ and $95\%$ contour lines (yellow and magenta).\label{fig:dt_fftpower_2_0}}
\end{figure}
Images from the 2-power class are non-negative, so we use \text{LP}{} for reconstruction, create DT phase diagrams, see Fig.~\ref{fig:dt_fftpower_2_0}, and compare with the spikes-class DT phase diagrams in Fig.~\ref{fig:dtalmt_spikes}, omitting ALMT phase diagrams for brevity. As expected, our results verify that image structure does not matter for the Gaussian sensing matrices, as the DT phase diagram is identical to the spikes case. But, for the fan-beam CT case the phase diagram has changed drastically, most notably the transition is now much smoother as indicated by the $5\%$ and $95\%$ contour lines. Also the empirical phase-transition curve ($50\%$ contour line) has moved away from the theoretical curve. We note that at low sampling (left part) the transition is lower while at high sampling, it is higher, so recoverability can be both better and worse, depending on sampling level.
The $95\%$ contour line delimits a region of almost-full recovery, and
this region is not much different from the spikes case.
The 2-power result for CT is in stark contrast to the Gaussian sensing matrix behavior in Fig.~\ref{fig:dtalmt_spikes}.
We conclude that even though the spikes results suggest close resemblance of CT with the optimal CS case of Gaussian sensing matrices, the 2-power result makes it clear that CT is more complex.
\subsection{Gradient-domain sparsity}
Sparsity in the image domain is interesting due to well-developed theory, in particular for Gaussian sensing matrices. For CT it is more common to expect sparsity in the gradient domain, which has motivated the successful use of TV-regularization.
However, to the best of our knowledge, no phase-transition behavior has been proved, not even for the Gaussian case. Here, we demonstrate empirically that for both Gaussian and CT sensing matrices similar sharp phase transitions can be observed.
For generating images sparse in the gradient domain we use the alternating-projection image class for (isotropic) TV from \cite{uniqueness_arxiv_2014}, which we refer to here as altprojisotv. An image is generated by an iterative procedure of taking alternating projections onto the range of the gradient operator and thresholding the number of non-zeros in the image gradient to the desired sparsity level; see \cite{uniqueness_arxiv_2014} for details and illustration.
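The following sketch illustrates the alternating-projection construction; the initialization, stopping rule and other implementation details may differ from the exact procedure in \cite{uniqueness_arxiv_2014}.
\begin{verbatim}
import numpy as np
import scipy.sparse as sps
from scipy.sparse.linalg import lsqr

def grad_op(n):
    # Stacked forward-difference operator, Neumann BC, n-by-n image.
    e = np.ones(n)
    D1 = sps.diags([-e, e[:-1]], [0, 1], shape=(n, n)).tolil()
    D1[-1, :] = 0                     # zero difference at the boundary
    D1 = D1.tocsr()
    I = sps.eye(n, format='csr')
    return sps.vstack([sps.kron(D1, I), sps.kron(I, D1)]).tocsr()

def altprojisotv(n, s, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    D = grad_op(n)
    x = rng.standard_normal(n * n)
    for _ in range(iters):
        g = (D @ x).reshape(2, -1)            # vertical/horizontal parts
        mag = np.sqrt((g**2).sum(axis=0))     # per-pixel gradient magnitude
        mask = np.zeros(n * n, dtype=bool)
        mask[np.argsort(mag)[-s:]] = True     # keep s largest magnitudes
        g *= mask                             # isotropic thresholding
        x = lsqr(D, g.ravel())[0]             # least-squares preimage
    return x.reshape(n, n)
\end{verbatim}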
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\linewidth]{dt_altprojisotv_gaussian_1e-03.pdf}\qquad
\includegraphics[width=0.4\linewidth]{dt_altprojisotv_fanbeam_equi_offset20_1e-03.pdf}\\[0.1cm]
\includegraphics[width=0.4\linewidth]{almt_altprojisotv_gaussian_1e-03.pdf}\qquad
\includegraphics[width=0.4\linewidth]{almt_altprojisotv_fanbeam_equi_offset20_1e-03.pdf}
\caption{Phase diagrams for the altprojisotv image class and \text{TV}{} reconstruction. DT phase diagrams (top row) and ALMT phase diagrams (bottom row). Gaussian sensing matrices (left) and fan-beam CT system matrices (right). Theoretical phase-transition curves for \text{P$_1$}{} and \text{LP}{} reconstruction for Gaussian sensing matrices (red), empirical phase-transition curve at $50\%$ contour line (cyan), $5\%$ and $95\%$ contour lines (yellow and magenta).
\label{fig:dtalmt_altprojisotv}}
\end{figure}
Once again, we construct DT and ALMT phase diagrams, see Fig.~\ref{fig:dtalmt_altprojisotv}; this time with sparsity values referring to gradient-domain sparsity. We observe also in this case a sharp phase transition both in the DT and ALMT phase diagrams. In the lack of a theoretical reference curve for TV we compare with the \text{P$_1$}{} and \text{LP}{} curves and find that transition takes place between the two curves.
An irregularity is observed in the bottom-left corner of both DT phase diagrams. The explanation is that
the altprojisotv procedure has difficulty in generating images which are extremely sparse in the gradient domain.
In spite of the irregularity, we find that our empirical TV results convincingly demonstrate that a sharp phase transition takes place also in the TV case, dividing the phase space into regimes of full and no recovery, and again that CT recoverability is similar to the Gaussian case.
\subsection{Conclusion on Study A}
We used phase-diagram analysis to compare fan-beam CT recoverability with optimal CS sampling using the Gaussian sensing matrices. For unstructured signed images with \text{P$_1$}{} and non-negative images with \text{LP}{} we found almost identical phase-transition behavior in terms of critical sampling level and width of the transition. We thereby demonstrated that, empirically, fan-beam CT performs close to optimally in the average case. While recoverability by the Gaussian sensing matrices was unaffected by the introduction of structure in the non-zero pixels, fan-beam CT recoverability drastically changed to a much smoother transition. Interestingly, except for the lowest-sampling range, the recovery region actually became larger, meaning that many images at a given sparsity level are recovered from fewer samples than the Gaussian sensing matrices' critical sampling level. In spite of the close resemblance on the unstructured images, this example demonstrates that fan-beam CT is fundamentally different from the Gaussian sensing matrices.
Also in case of \text{TV}{} recoverability we found almost identical behavior of fan-beam CT and the Gaussian sensing matrices. In particular in both cases we saw a sharp phase transition, thus suggesting that the phase-transition phenomenon generalizes to \text{TV}{}. To our knowledge no theoretical explanation of this observation has been given in the literature.
\section{Study B: Is random sampling beneficial in CT?} \label{sec:randomsampling}
As mentioned in the introduction, random sampling is an optimal strategy and plays an important role in many recovery guarantees. Sampling in CT is normally done in a very structured manner, and a natural question is therefore whether the introduction of some form of randomness could lead to recovery guarantees for CT or to improved recoverability compared to regular sampling.
In this study we use phase-diagram analysis to investigate whether CT sampling strategies involving randomness can improve the recoverability of sparse images, i.e., enable accurate reconstruction of images of a given sparsity from fewer measurements than regular equi-angular fan-beam CT.
\subsection{Measurement matrices}
Many forms of randomness can be conceived of in CT sampling. In this work we consider two straightforward ones. First, a fan-beam geometry, denoted fanbeam\_rand, in which the source angular positions are no longer equi-distant but sampled uniformly from $[0,360^\circ]$.
Second, we consider a setup we denote random\_rays, consisting of independent random rays through the image. Each ray is specified by two parameters: the angle of the ray with a fixed coordinate axis and the intersection of the ray with the orthogonal diameter of the disk-shaped image. The angle and intersection are sampled from the uniform distributions on $[0,180^\circ]$ and $[-N_\text{side}/2, N_\text{side}/2]$, respectively, where $N_\text{side}$ is the diameter length and the image is assumed to be centered at the origin.
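Sampling the parameters of the two random geometries is straightforward; a minimal sketch with assumed example sizes is given below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_side, N_v = 64, 16                  # assumed example sizes

# fanbeam_rand: source angles uniform on [0, 360) degrees
src_angles = rng.uniform(0.0, 360.0, size=N_v)

# random_rays: each ray = (angle, signed offset from the image center)
m = N_v * 2 * N_side                  # match the fan-beam measurement count
angles  = rng.uniform(0.0, 180.0, size=m)
offsets = rng.uniform(-N_side / 2.0, N_side / 2.0, size=m)
\end{verbatim}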
\subsection{Image-domain sparsity}
We create DT phase diagrams as in the previous section for the signedspikes class reconstructed by \text{P$_1$}{} and the spikes class reconstructed by \text{LP}{}, see Fig.~\ref{fig:random_image_domain}; ALMT phase diagrams are omitted for brevity. As the purpose of this study is to compare not with the Gaussian sensing matrices but with equi-angular fan-beam CT sampling, we do not show the theoretical phase-transition curves as in the previous section; instead, the dashed red line shows the empirical phase-transition curve for the equi-angular fan-beam CT geometry, which was shown in cyan in Fig.~\ref{fig:dtalmt_signedspikes} and Fig.~\ref{fig:dtalmt_spikes}.
Compared to the equi-angular fan-beam case, we observe essentially no difference for the fanbeam\_rand case: the empirical phase-transition curves follow the dashed red line closely in both the signedspikes/\text{P$_1$}{} and spikes/\text{LP}{} phase diagrams. The random\_rays setup has very similar phase diagrams, but in the signedspikes case the transition curve is slightly lower than in the equi-angular fan-beam case. In other words, on this set of image-domain sparsity test cases, randomness does not lead to improved recoverability, but rather to comparable or slightly reduced recoverability.
\begin{figure}[tb]
\centering
\newcommand{0.65\linewidth}{0.4\linewidth}
\includegraphics[width=0.65\linewidth]{dt_signedspikes_fanbeam_rand.pdf}\qquad
\includegraphics[width=0.65\linewidth]{dt_signedspikes_random_rays.pdf}\\[0.1cm]
\includegraphics[width=0.65\linewidth]{dt_spikes_fanbeam_rand.pdf}\qquad
\includegraphics[width=0.65\linewidth]{dt_spikes_random_rays.pdf}\\
\caption{DT phase diagrams. Signedspikes image class and \text{P$_1$}{} reconstruction (top row) and spikes image class and \text{LP}{} reconstruction (bottom row). Fan-beam with random source positions (left) and random rays geometry (right). Empirical phase-transition curve for equi-angular fan-beam CT (dashed red), empirical phase-transition curve at $50\%$ contour line (cyan), $5\%$ and $95\%$ contour lines (yellow and magenta). \label{fig:random_image_domain}}
\end{figure}
\subsection{Gradient-domain sparsity}
For TV, we create phase diagrams for the altprojisotv class with both of the random-sampling CT setups, see Fig.~\ref{fig:random_gradient_domain}, and compare with the equi-angular fan-beam results in Fig.~\ref{fig:dtalmt_altprojisotv} indicated again by dashed red line. In both TV cases we observe \emph{worse} recoverability than equi-angular fan-beam.
The fanbeam\_rand setup has a slightly lower empirical phase-transition curve and the transition is wider than for equi-angular fan-beam, as indicated by the larger distance between the $5\%$ and $95\%$ contour lines. This means that on average slightly more projections are needed to recover the same image and further that the critical sampling level sufficient for recovery is less well-defined than for the equi-angular fan-beam case where the phase-transition is sharper.
For random\_rays the transition curve is substantially lower, meaning that on average more projections are needed for recovery of a same-sparsity image compared to equi-angular fan-beam. The largest difference is seen in the left half of the phase diagram, i.e. at fewer samples. One possible explanation of the reduced recoverability here is that with relatively few and independent rays, the probability that some pixels are not intersected by any ray is relatively large. Thus there is no information about such a pixel in the data, so the reconstructed value is solely determined by the regularizer. In contrast, in a fan-beam setup with dense projection-view sampling as in our case, all pixels will be intersected by at least one ray from each projection view.
\begin{figure}[tb]
\centering
\newcommand{0.65\linewidth}{0.4\linewidth}
\includegraphics[width=0.65\linewidth]{dt_altprojisotv_fanbeam_rand.pdf}\qquad
\includegraphics[width=0.65\linewidth]{dt_altprojisotv_random_rays.pdf}
\caption{DT phase diagrams for the altprojisotv image class and \text{TV}{} reconstruction. Fan-beam with random source positions (left) and random rays geometry (right). Empirical phase-transition curve for equi-angular fan-beam CT (dashed red), empirical phase-transition curve at $50\%$ contour line (cyan), $5\%$ and $95\%$ contour lines (yellow and magenta). \label{fig:random_gradient_domain}}
\end{figure}
\subsection{Conclusion on Study B}
By use of phase-diagram analysis we have compared two random-sampling strategies for CT with the more standard equi-angular fan-beam CT. The analysis revealed, in contrast to what might have been anticipated from the key role of randomness in CS, that random sampling does not improve recoverability in CT. On the contrary, in some cases random sampling even leads to worse recoverability, most notably for the random\_rays setup.
\section{Study C: Linking to realistic CT systems}
\label{sec:large-scale}
In this section, we begin the task of linking the small-scale recovery results to
realistic CT systems.
What we are interested in is whether phase diagrams can be used to predict critical sampling levels as function of sparsity in a realistic CT system.
The studies presented should not be regarded as complete,
and many issues for future research will be highlighted. Broadly speaking, the two
main areas of concern are test phantom and optimization algorithm.
A good test phantom presents a challenge. The small-scale phase-diagram results use phantom
ensembles generated from a probabilistic model. While the results provide
a sense of group recovery, a realization from any of the considered object
models does not look like an actual object that would be CT-scanned.
Which optimization algorithm to use is also an important question. For the small-scale studies MOSEK is a convenient choice because a highly accurate solution can be computed reliably and reasonably fast. This means that whether or not an image is recoverable can easily be verified numerically. Optimization algorithms for large-scale CT systems, at present, cannot
involve operations more expensive than matrix-vector products. This rules out software packages such as MOSEK in favor of first-order methods, which are inherently less accurate, in particular for large-scale problems where it is often necessary in practice to truncate the iteration early. As we will show, having less accurate solutions makes it more difficult to decide whether an image is recoverable.
As large-scale studies are necessarily sparse, we cannot
provide comprehensive empirical evidence of sufficient sampling but only a preliminary indication
of how well phase-diagram analysis can predict sufficient sampling for SR for realistic CT systems. As we will show, even this is a complex task for example due to complicated image structure and algorithmic issues, and we will point out several future directions to pursue.
Sec.~\ref{sec:ls-phantom} presents two phantoms generated for
the present study to have different levels of realism with respect to an actual CT scanned object.
Sec.~\ref{sec:ls-alg} presents the first-order optimization algorithm we use for the large-scale
recovery studies, while Sec.~\ref{sec:ls-issues} illustrates some of the algorithmic and numerical challenges we face.
Sec.~\ref{sec:ls-results} shows recovery results for
the two phantoms as a function of the number of CT projections and compares them with critical sampling levels predicted from small-scale phase diagrams.
\subsection{Walnut test phantoms}
\label{sec:ls-phantom}
\begin{figure}[tb]
\centering
\newcommand{\figwidth}{0.65\linewidth}
\includegraphics[width=\figwidth]{scannedWalnut_bboxfixed.pdf}\\
\includegraphics[width=\figwidth]{structureWalnut_bboxfixed.pdf}\\
\includegraphics[width=\figwidth]{textureWalnut_bboxfixed.pdf}
\caption{(Top row) tomographic slice of a walnut, (middle row) structure phantom derived
from the walnut slice image, and (bottom row) texture phantom also derived from this image. The left column shows the whole image in the gray scale window of
[0,0.5] cm$^{-1}$, except for the original walnut image where it is [-0.1,0.5] cm$^{-1}$.
The middle column shows a blown-up region of interest in the narrower gray scale window
[0.3,0.4] cm$^{-1}$ in order to see the texture on the walnut meat. The right column illustrates the gradient-magnitude image in the gray scale window [0,0.01] cm$^{-1}$,
except for the original walnut image where it is [0,0.05] cm$^{-1}$.
\label{fig:walnutScan}}
\end{figure}
In the present large-scale study, there are two links that need to be established to
relate the small-scale phase-diagram analysis to realistic CT: the system size
needs to be extrapolated up, in this case to $N_\text{side}=1024$; and the results from
the various probabilistic phantom models need to extend to realistic structure as seen
in actual CT-scanned objects. We address both by designing two large-scale test phantoms, with increasing levels of realism, derived from an actual CT scan of a walnut. The idea of scanning a walnut comes from \cite{SiltanenTV:2014}.
In choosing a test phantom for image recovery studies, we aim for an image
with gradient-domain sparsity to illustrate the effectiveness of TV in reducing the necessary
number of samples for accurate image recovery. Yet, the phantom should also have features
somewhat representative of what would be encountered in CT applications.
Typical computer phantoms for CT image
reconstruction testing, composed of simple geometric shapes of uniform gray levels,
are unrealistically sparse in the gradient domain. Such phantoms would be helpful for
extrapolating the small-scale phase-diagram analysis, but have little bearing on actual CT applications.
The basis of the test phantoms we generate is a cone-beam CT scan data set of a walnut. The data consists of 1600 equiangular $1024^2$-pixel projections acquired on a Zeiss Xradia 410 Versa micro-CT scanner operated at a 40 kV source voltage, 5 s exposure per projection, and $10.51$ cm source-to-center and $4.51$ cm center-to-detector distances. The central slice is reconstructed onto a $1024^2$-pixel image (pixel size $46.0773 \cdot 10^{-6}$ m) from the corresponding rows of data using 500 iterations of a SIRT-type algorithm.
The first and simplest phantom, the \emph{structure} phantom, is derived from this image
by equalizing the image gray value histogram to 7 discrete gray levels including the background value of 0.
The second and more complex phantom, the \emph{texture} phantom, is derived from the walnut image by performing TV-denoising on the original walnut image after thresholding small background pixel values to zero.
The two versions of the walnut phantoms including blow-ups and gradient-domain images are shown in Fig.~\ref{fig:walnutScan} and gradient-domain sparsity values are given in Table~\ref{tab:walnutdata}.
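
The exact equalization procedure for the structure phantom is not spelled out here; as an illustration of the idea, a quantile-based reduction to a fixed number of gray levels (our sketch, not the authors' exact processing) could look as follows:
\begin{verbatim}
import numpy as np

def quantize_gray_levels(img, levels=7):
    """Reduce a nonnegative image to `levels` discrete gray values,
    keeping the background value 0 as its own level."""
    out = np.zeros_like(img, dtype=float)
    fg = img[img > 0]
    edges = np.quantile(fg, np.linspace(0.0, 1.0, levels))
    bins = np.digitize(img, edges[1:-1])   # bin index 0..levels-2
    for b in range(levels - 1):
        mask = (img > 0) & (bins == b)
        if mask.any():
            out[mask] = img[mask].mean()   # one gray value per bin
    return out
\end{verbatim}
This yields $\text{levels}-1$ foreground gray values plus the background value of 0, in the same spirit as the structure phantom.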
The studies are idealized in that there is no data
inconsistency; in actual CT the projection data $b$ will in general not be in the range
of the projection operator $A$, and there is in this case no solution to the linear
system $Ax=b$.
\subsection{Large-scale first-order optimization algorithm}
\label{sec:ls-alg}
\begin{algorithm}[tb]
\hrulefill
\begin{algorithmic}[1]
\State INPUT: data $b$
\State INPUT: tuning parameter $\lambda$
\State $\nu \gets \| A \|_2 / \| S \|_2$
\State $L \gets \| (A, \nu S) \|_2$
\State $ \tau \gets 1/L; \; \sigma \gets 1/L; \; \theta \gets 1; \; k \gets 0$
\State initialize $x_0$, $y_0$, and $z_0$ to zero vectors
\State $\bar{x}_0 \gets x_0$
\Repeat
\State $y_{k+1} \gets y_k+\sigma( A \bar{x}_k -b)$ \label{dualDataUpdate}
\State $z_k^\prime \gets z_k + \sigma \nu S \bar{x}_k$
\State $z_{k+1} \gets z_k^\prime
( (\lambda / \nu) / \max(\lambda /\nu, | z_k^\prime |))$ \label{dualGradUpdate}
\State $x_{k+1} \gets x_k - \tau (A^T y_{k+1} +
\nu S^T z_{k+1}) $ \label{primalupdate}
\State $\bar{x}_{k+1} \gets x_{k+1} + \theta(x_{k+1} - x_k)$
\State $k \gets k+1$
\Until{$k \ge K$}
\State OUTPUT: $x_K$
\end{algorithmic}
\hrulefill
\caption{Pseudo-code for $K$ steps of the CP algorithm instance
for solving Eq. (\ref{p1tv}). When $S=I$ and $S=D$ this algorithm applies to
\text{P$_1$}{} and \text{TV}{}, respectively. The variables $y_k$ and $z_k$ are dual to the sinogram
and
image, respectively. For gradient-domain sparsity (TV) $z_k$ has
the dimension of the image gradient, and for image-domain sparsity (\text{P$_1$}{})
$z_k$ has the dimension of the image itself.}
\label{alg:p1tv}
\end{algorithm}
We consider large-scale solvers for problems \text{P$_1$}{} and \text{TV}{}. There has been much
recent research on first-order algorithms \cite{beck2009fast,chambolle2011first},
motivated by exactly the type
of problem we face here. We require a solver that can handle the non-smoothness
of \text{P$_1$}{} and \text{TV}{}, and which can be applied to large-scale systems such as CT,
where the images can contain $10^6$ pixels in 2D or $10^9$ voxels in 3D, with data sets of similar size.
The CT system specifically presents another challenge in that the system
matrix representing standard X-ray projection has poor conditioning \cite{jakob:2013}.
An additional difficulty in solving \text{P$_1$}{} and \text{TV}{}, compared to the form \eqref{eq:ineqreconprob}, is in satisfying the equality constraint;
achieving this constraint to numerical precision with present computational and algorithmic
technology is, as far as we know, not possible. We present here our adaptation
of the Chambolle-Pock (CP) primal-dual algorithm, which we have found to be effective
for the CT system \cite{sidky:CP:2012,sidky2013first,SidkyJTEHM:2014}.
The algorithm used is essentially the same as the one developed in Ref. \cite{SidkyJTEHM:2014}.
The CP algorithm instance is designed to solve the following optimization problem
\begin{equation}
\label{p1tv}
\argmin_x ~ \frac{\lambda}{\nu} \sum_j \| \nu S_j x \|_2 \quad \text{subject to} \quad Ax = b,
\end{equation}
where Eq. (\ref{p1tv}) becomes \text{P$_1$}{} and \text{TV}{} when the sparsifying operator is $S_j=I_j$ and
$S_j=D_j$, respectively; $I_j$ is an image where the $j$th pixel is one and all other pixels are zero; $\nu$ is a constant which balances the operator norms
\begin{equation}
\notag
\nu = \| A \|_2 / \| S \|_2,
\end{equation}
where $S$ is the matrix formed by stacking the $S_j$ for all $j$;
and the parameter $\lambda$, which does not affect the solution of Eq. (\ref{p1tv}),
is used to improve numerical convergence. The parameter $\lambda$ is tuned empirically.
The corresponding algorithm for solving Eq. (\ref{p1tv})
is shown in pseudo-code form in Alg.~\ref{alg:p1tv}.
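
To make the update steps concrete, the following is a minimal NumPy sketch of Alg.~\ref{alg:p1tv} for small dense matrices; the function name \texttt{cp\_sparse\_recon} and the dense-matrix assumption are ours, and in the large-scale setting $A$ and $S$ are applied matrix-free. Note that line~\ref{dualGradUpdate} is written element-wise, which matches \text{P$_1$}{}; for \text{TV}{} the magnitude in the projection should be taken pixel-wise over the gradient components.
\begin{verbatim}
import numpy as np

def cp_sparse_recon(A, S, b, lam, K):
    """Chambolle-Pock instance of Alg. 1 (element-wise prox,
    i.e., the P1 case S = I)."""
    nu = np.linalg.norm(A, 2) / np.linalg.norm(S, 2)
    L = np.linalg.norm(np.vstack([A, nu * S]), 2)
    tau = sigma = 1.0 / L
    theta = 1.0
    x = np.zeros(A.shape[1])
    xbar = x.copy()
    y = np.zeros(A.shape[0])      # dual variable for the sinogram
    z = np.zeros(S.shape[0])      # dual variable for the image/gradient
    for _ in range(K):
        y = y + sigma * (A @ xbar - b)                 # dual data update
        zp = z + sigma * nu * (S @ xbar)
        z = zp * (lam / nu) / np.maximum(lam / nu, np.abs(zp))
        x_new = x - tau * (A.T @ y + nu * (S.T @ z))   # primal update
        xbar = x_new + theta * (x_new - x)
        x = x_new
    return x
\end{verbatim}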
Considering that the phantom-recovery studies we want to use Alg.~\ref{alg:p1tv} for involve multiple runs over different system
matrices $A$ corresponding to CT sampling with different numbers of projections, we
found it most practical to obtain results for fixed iteration number $K$ and tuning
parameter $\lambda$. The computational time for performing the expensive operations
$Ax$ and $A^T x$ makes consideration of a prescribed stopping criterion difficult.
For the $N_\text{side}=1024$ system of interest these time-limiting operations take
approximately one second for our GPU-accelerated projection codes.
A fixed stopping criterion entails variable numbers of iterations, and we
have observed that for Alg. \ref{alg:p1tv} the number of iterations can vary from
1,000 to over 100,000 iterations for a convergence criterion of interest. In terms of
computational time, this range translates to approximately 20 minutes to well over a day.
As a result, a study may not be completed in a reasonable amount of time;
thus, we fix $K$ and $\lambda$ for our phantom-recovery study.
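
For the matrix-free, large-scale setting, the norm $L = \| (A, \nu S) \|_2$ in Alg.~\ref{alg:p1tv} cannot be computed directly; one standard option (our illustration, with hypothetical callables \texttt{apply\_M} and \texttt{apply\_Mt}) is a few power iterations on $M^T M$:
\begin{verbatim}
import numpy as np

def op_norm_est(apply_M, apply_Mt, n, iters=30, seed=0):
    """Estimate ||M||_2 via power iteration on M^T M, using only
    matrix-free applications of M and its transpose."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = apply_Mt(apply_M(x))      # one step of M^T M
        x /= np.linalg.norm(x)
    # Rayleigh-quotient-style estimate of the top singular value
    return np.sqrt(np.linalg.norm(apply_Mt(apply_M(x))))
\end{verbatim}
For $M = (A, \nu S)$, one application is $x \mapsto (Ax, \nu Sx)$ and the transpose application is $(y, z) \mapsto A^T y + \nu S^T z$.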
Because
large-scale first-order optimization algorithms are seeing many new developments at present, it is
likely that there either exists or will be a better alternative to Alg.~\ref{alg:p1tv}.
In fact, we invite the interested reader to find such an alternative, which
can have an important impact on CT imaging!
For example, as will be seen shortly, Alg.~\ref{alg:p1tv}
has limited success for phantom recovery studies for \text{P$_1$}{} on systems of realistic size.
\subsection{Algorithm issues} \label{sec:ls-issues}
We demonstrate first some of the challenges in carrying out large-scale recovery studies by applying Alg.~\ref{alg:p1tv} to a medium-scale problem
using a $N_\text{side}=128$ version of the structure walnut phantom.
The phantom has a gradient-domain sparsity of 1826 and a pixel sparsity of 2664 out of a total of 11620 pixels.
We perform a recovery study by examining the root-mean-square error (RMSE) of the reconstruction as a function of the number of projections.
We discuss in detail specific issues of the sampling recovery study
for the purpose of understanding the large-scale results.
\paragraph*{The tuning parameter $\lambda$ and convergence}
The tuning parameter $\lambda$ does not affect the solution of \text{P$_1$}{} or \text{TV}{}, but it can
have a large impact on convergence. To illustrate this we show results of single runs for
$N_\text{side} = 128$ and $N_\text{v}=21$ for both \text{P$_1$}{} and \text{TV}{} in Fig.~\ref{fig:lambdaconv}.
The value $N_\text{v}=21$ is chosen because it is the smallest number of views for which accurate recovery
is obtained for both \text{P$_1$}{} and \text{TV}{}.
Note that we are showing results for $K=100,000$ iterations for \text{P$_1$}{}, while only $K=10,000$ for \text{TV}{}.
\begin{figure}[tb]
\centering
\newcommand{\figwidth}{0.49\linewidth}
\includegraphics[width=\figwidth]{imageRMSE_l1_lambda_21views.pdf}
\includegraphics[width=\figwidth]{imageRMSE_tv_lambda_21views.pdf}
\caption{Image RMSE curves resulting from Alg. \ref{alg:p1tv} run with different values of $\lambda$ for $N_\text{v}=21$ and data generated from the $N_\text{side}=128$ version of the structure walnut phantom. Results for \text{P$_1$}{} and \text{TV}{} are shown on the left and right, respectively. \label{fig:lambdaconv}}
\end{figure}
\begin{figure}[tb]
\centering
\newcommand{\figwidth}{0.49\linewidth}
\includegraphics[width=\figwidth]{recovery_l1_lambda_new.pdf}
\includegraphics[width=\figwidth]{recovery_tv_lambda_new.pdf}
\caption{Image and data RMSE plots for the $N_\text{side}=128$ version of the structure walnut phantom using Alg. \ref{alg:p1tv} with different values of $\lambda$. The results for \text{P$_1$}{} (left) are obtained for $K=10^5$ iterations except for the indicated curve for $K=10^4$. The results for \text{TV}{} (right) are obtained for $K=10^4$ iterations.\label{fig:lambda}}
\end{figure}
It is clear that convergence rates change significantly with $\lambda$, and consequently
recovery curves will be affected by $\lambda$. While $\lambda$ is specific to Alg.~\ref{alg:p1tv}, optimization algorithms generally entail parameters with large effect on convergence rate.
The impact on recovery curves is seen in Fig.~\ref{fig:lambda}, where we
compare recovery curves obtained at different $\lambda$ for \text{P$_1$}{} ($K=100{,}000$ iterations) and \text{TV}{} ($K=10{,}000$ iterations). While the recovery curves are overall similar, some differences appear, in particular near the jump in error for \text{P$_1$}{}. This can complicate the accurate estimation of the jump location. Overall, in this case, the lowest
image RMSE is obtained for $\lambda=5\times10^{-4}$. For the large-scale system $N_\text{side}=1024$ we have found the value of $\lambda=1\times10^{-4}$ to be useful
for \text{P$_1$}{} and \text{TV}{}, and for different values of $N_\text{v}$ and $N_\text{side}$. One could envision
a strategy where Alg.~\ref{alg:p1tv} is run with a small set of $\lambda$ values and the lowest
image RMSE at iteration $K$ is taken for the recovery plot. In the large-scale results presented shortly,
we found this to be unnecessary, and $\lambda$ is simply fixed at $1 \times 10^{-4}$.
\paragraph*{Recovery plots and difficulty with \text{P$_1$}{}}
The phantom recovery plots for \text{P$_1$}{} and \text{TV}{} in Fig.~\ref{fig:lambda} both show the distinct jump in RMSE at a certain number of projections, at which the image is recovered. We recognize this from the small-scale $N_\text{side}=64$ studies in \cite{Joergensen_eqconpap_v2_arxiv:2014}.
The price of using fixed $K$, however, is that convergence results across projection numbers
are not uniform as the data discrepancy varies with view number.
Furthermore, the recovery curve can be severely affected by poor convergence. If, instead of $K=100{,}000$, we take only $K=10{,}000$ as in the \text{TV}{} case, the remaining recovery curve in Fig.~\ref{fig:lambda} is obtained. The previously abrupt change in error is considerably smoothed and shifted to a different number of views.
The issue of convergence, here, is ubiquitous in iterative image reconstruction
for CT and it can be traced to the use of matched projection, $A$, and back-projection,
$A^T$, where it is well-known in the CT community that matched projector/back-projector
pairs can lead to Moir\'e artifacts that decay extremely slowly \cite{de2004distance}. As a result,
many iterative algorithms in CT employ a different back-projection matrix $B \ne A^T$
\cite{zeng2000unmatched}. For our purpose we must use the matched pair in order to solve a well-defined optimization problem.
For the larger system, sufficient iteration for \text{P$_1$}{} lies out of reach with Alg.~\ref{alg:p1tv} and we
focus only on phantom recovery for TV.
\subsection{Large-scale recovery results}
\label{sec:ls-results}
\paragraph*{Predicting sufficient sampling from phase diagrams}
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\linewidth]{predict_almt_altprojisotv_fanbeam_equi_offset20_1e-03.pdf}\qquad
\includegraphics[width=0.405\linewidth]{predict_dt_altprojisotv_fanbeam_equi_offset20_1e-03.pdf}
\caption{Prediction of critical sampling for TV and walnut phantoms by ALMT (left) and DT (right) phase diagrams. \label{fig:prediction}}
\end{figure}
We will use the phase diagrams from Study A to predict critical sampling levels for large-scale \text{TV}{} reconstruction. We found in \cite{Joergensen_eqconpap_v2_arxiv:2014} that the ALMT phase diagram of a given image class remains unchanged
at image resolutions $N_\text{side} =32$, $64$ and $128$, i.e., is independent of resolution. We assume this holds also for the DT phase diagram and we use the DT and ALMT phase diagrams from Fig.~\ref{fig:dtalmt_altprojisotv} (which are for $N_\text{side}=64$) to predict critical sampling levels for the two walnut images at $N_\text{side} = 1024$.
We illustrate in Fig.~\ref{fig:prediction} how to determine critical sampling levels given a sparsity level.
The number of pixels inside the disk is $N=823592$ and the gradient sparsity levels of the structure and texture walnut images are given in Table~\ref{tab:walnutdata}. In the ALMT phase diagram we can trace a vertical line at each $s/N$ value and find the intersection (indicated by circles) with the empirical phase-transition curve, which gives the predicted critical $m/N$ value. Multiplying by $N$ and dividing by the number of rays in a single projection, i.e., $2048$, gives the critical number of projections; see Table~\ref{tab:walnutdata}.
To do the same in the DT phase diagram we combine $\delta = m/N$ and $\rho = s/m$ into $\rho = s/(\delta N)$, i.e., a fixed sparsity $s$ traces out a hyperbola over $\delta \in (0,1)$. For the hyperbola of each walnut image we find the intersection point $(m/N, s/m)$ with the empirical phase-transition curve. Up to the accuracy of reading off the figure, both coordinates lead to identical critical values of $m$, from which we find the critical number of projections for each walnut image; see Table~\ref{tab:walnutdata}.
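
This read-off procedure can be automated; as a sketch for the DT case, assuming a tabulated empirical curve \texttt{rho\_curve} sampled on \texttt{delta\_grid} (hypothetical inputs, since here the values are read off the figure):
\begin{verbatim}
import numpy as np

def critical_views_dt(delta_grid, rho_curve, s, N, rays_per_view=2048):
    """Intersect the fixed-sparsity hyperbola rho = s/(delta*N) with an
    empirical DT transition curve rho*(delta), then convert the critical
    sample count m to a number of projections."""
    gap = rho_curve - s / (delta_grid * N)
    idx = int(np.argmax(gap >= 0))  # first grid point on/above hyperbola
    m_crit = delta_grid[idx] * N    # critical number of samples m
    return m_crit / rays_per_view   # critical number of projections
\end{verbatim}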
We note that the larger number of gradient non-zeroes in the texture walnut image leads to prediction of a higher critical sampling level. Similar plots for image-domain sparsity could be constructed based on Fig.~\ref{fig:dtalmt_signedspikes} and Fig.~\ref{fig:dtalmt_spikes} and the fixed-sparsity curves would then reflect that the walnut images have more non-zeroes in the pixel domain than in the gradient domain, yielding higher predicted critical sampling levels for \text{P$_1$}{}/\text{LP}{} than for \text{TV}{}.
\paragraph*{Recovery of the large-scale walnut phantoms}
\begin{figure}[tb]
\centering
\newcommand{\figwidth}{0.49\linewidth}
\includegraphics[width=\figwidth]{structureWalnutRecovery_new.pdf}
\includegraphics[width=\figwidth]{textureWalnutRecovery_new.pdf}
\hspace*{\figwidth}
\caption{
Image and data RMSE plots for the $N_\text{side}=1024$ structure (left) and texture (right) walnut phantom using Alg. \ref{alg:p1tv} with $\lambda=10^{-4}$. The results are obtained at $K=10^4$ iterations.
\label{fig:walnuterrorcurves}}
\end{figure}
\begin{table}[htb]
\centering
\begin{tabular}{l|c|c|c|c}
Walnut image & Gradient sparsity & Recovered at & DT prediction & ALMT prediction \\
\hline
Structure & $\phantom{1}45,074$ & $68$ & $\phantom{1}69.3$ & $\phantom{1}71.7$ \\
Texture & $186,306$ & ? & $188.7$ & $185.8$
\end{tabular}
\caption{Walnut test images with gradient-domain sparsity levels, number of projections at which recovery is observed, and DT and ALMT phase-diagram predictions of critical sampling levels. A reference point of full sampling is $N_\text{v} \geq 403$ projections, where the system matrix has more rows than columns.\label{tab:walnutdata}}
\end{table}
We employ Alg.~\ref{alg:p1tv} to solve TV on the large-scale $N_\text{side}=1024$ CT system for the structure and texture walnut phantoms.
The resulting recovery plots are shown in
Fig.~\ref{fig:walnuterrorcurves}.
For the structure walnut we observe an abrupt change in image
RMSE with $N_\text{v}= 68$ yielding accurate recovery, as decided by the first point where there is essentially no further decrease in RMSE. The predicted critical sampling levels from DT and ALMT phase-diagram
analysis are only slightly higher at $N_\text{v} = 69.3$ and $N_\text{v} = 71.7$, respectively, cf. Table~\ref{tab:walnutdata}. This result is rather remarkable
in that the extrapolation is extended quite far from the size of the original phase-diagram
analysis. Also, the structure phantom is clearly different from any expected
realization of any of the studied probabilistic phantom models.
The recovery curve for the texture phantom, on the other hand, does not exhibit an abrupt change in reconstruction error, but rather a gradual improvement all the way up to roughly 200 projections.
We therefore cannot point to a specific critical sampling level.
\paragraph*{Reconstructed images for the structure and texture phantoms}
\begin{figure}[htb]
\centering
\newcommand{\figwidth}{0.65\linewidth}
\includegraphics[width=\figwidth]{structureWalnutRecon_bboxfixed.pdf}\\
\includegraphics[width=\figwidth]{structureReconDiff_bboxfixed.pdf}\\
\includegraphics[width=\figwidth]{textureWalnutRecon_bboxfixed.pdf}\\
\includegraphics[width=\figwidth]{textureReconDiff_narrow_bboxfixed.pdf}
\caption{First row: reconstructed images from data generated by the structure
walnut with 40 (left), 60 (middle), and 68 (right) projection views (gray scale window
[0.3,0.4] cm$^{-1}$). Second row: same as first row except the structure walnut image is subtracted from the reconstructed images (gray scale window [-0.01,0.01] cm$^{-1}$).
Third row: reconstructed images from data generated by the texture
walnut with 80 (left), 120 (middle), and 160 (right) projection views (gray scale window
[0.3,0.4] cm$^{-1}$). Fourth row: same as third row except the texture walnut image is subtracted from the reconstructed images (gray scale window [-0.001,0.001] cm$^{-1}$).
\label{fig:recons}}
\end{figure}
It is illuminating to inspect some of the reconstructed images in Fig. \ref{fig:recons}, which
correspond to the plots in Fig. \ref{fig:walnuterrorcurves}. The second and third reconstructions
for the structure phantom straddle the sharp transition in the corresponding image RMSE curve,
and it can be seen clearly in the difference image that the result for $N_v=60$ is not recovered,
while that of $N_v=68$ is much closer to the test phantom. We point out, however, that the
difference images are displayed in a narrow 4\% gray scale window and visually the $N_v=60$ image
appears the same as the structure phantom. That the discrepancies between reconstruction and phantom
are so small emphasizes the challenge for the large-scale optimization algorithms; for actual applications
where images are presented for visual inspection, such an
accurate solution to Eq. (\ref{eq:ineqreconprob}) would not be necessary. The results
for the texture phantom are also quite interesting in that we see the reconstructed image is visually
accurate for as few views as $N_v=80$. That there is no sharp recovery transition for the texture phantom
is likely due to the fact that the object variations occur on two scales: the jumps of the structure
borders, and the splotches of the walnut meat texture. It also cannot be ruled out that a sharper recovery transition would occur if the accuracy of the computed solutions were improved even further.
\subsection{Conclusion on Study C}
In this study we have taken first steps toward phase-diagram analysis for predicting critical sampling levels for realistic CT systems. Both test-phantom design and accurate large-scale optimization are more difficult than for small-scale studies, and we have demonstrated how phantom appearance as well as parameters and convergence of the algorithm can affect recovery studies. For the simpler, piecewise-constant structure walnut phantom we found the critical sampling level to be predicted very well by phase-diagram analysis. The situation for the texture walnut phantom was more complex, which motivates further and more extensive large-scale studies, including of the influence of texture on recovery and possibly a different definition of image recovery itself.
\section{Conclusion} \label{sec:conclusion}
We have presented a systematic framework of phase-diagram analysis from CS for analyzing the undersampling potential of SR in X-ray CT.
In three quite different studies we have demonstrated the potential of phase-diagram analysis: we saw that, under certain conditions, X-ray CT performs comparably in terms of recoverability to an optimal CS sampling strategy based on Gaussian sensing matrices; that random sampling in X-ray CT does not improve recoverability and in some cases performs worse than a regular fan-beam sampling setup; and that, at least in a simple case, the critical sampling level for a large-scale X-ray CT system can be predicted.
An interesting future direction is to address the question: can the observed phase-transition behavior in X-ray CT be theoretically explained, in particular the high degree of similarity with the Gaussian sensing matrix case?
\section*{Acknowledgment}
The authors are grateful to Martin Lotz for providing code to compute the ALMT phase transitions, to Jared Tanner for providing tabulated values of the DT phase transitions on his website, to Carsten Gundlach for assistance in acquiring the walnut micro-CT dataset, and to Rick Chartrand for inspiring discussions of compressed sensing and tomography.
The work of JSJ was supported by Advanced Grant 291405 `High-Definition Tomography' from the European Research Council.
This work was supported in part by NIH
Grant Nos. R01-CA158446, R01-CA120540, and R01-EB000225.
The contents of this article are solely the responsibility of
the authors and do not necessarily represent the official
views of the National Institutes of Health.
\bibliographystyle{vancouver}
\section{Acknowledgement}\label{sec:Acknowledgement}
The work described in this paper was supported by the Research Grants Council
of the Hong Kong Special Administrative Region, China (No. CUHK 14206921 of the
General Research Fund) and Australian Research Council (ARC) Discovery Projects (DP200102940, DP220103044).
\section{Background and Motivating Example}
\subsection{Background}
\label{sec: bg_alert_and_tickets}
\paragraph{\textbf{Alert}}
Alerts are fired by monitors that continuously detect anomalies in cloud systems, which automatically notify on-call engineers for investigation~\cite{yang2021aid}\cite{yang2022characterizing}\cite{chen2020towards}.
An alert has many attributes as presented in Fig.~\ref{fig: bg_cases} (top), including \textit{alert ID}, \textit{title}, \textit{creation time}, \textit{region}, \textit{owning service}, \textit{owning component}, \textit{severity}, \textit{monitor ID}, etc.
The \textit{title} is generated by following a template pre-defined by engineers.
The \textit{severity} indicates how serious the issue is, which has three levels, i.e., low, medium and high.
A service (\textit{owning service}) consists of many components (\textit{owning component}), where each component has its own functionality or feature.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{Figures/bg_case.pdf}
\vspace{-2ex}
\caption{An example of an alert and its resultant ticket.}
\label{fig: bg_cases}
\end{figure}
\paragraph{\textbf{Support Ticket}}
As presented in Fig.~\ref{fig: bg_cases} (bottom), a support ticket usually contains attributes such as \textit{ticket ID}, \textit{creation time}, \textit{summary}, \textit{region}, \textit{product name}, and \textit{category}. The \textit{summary} is free text written by customers in natural language.
The \textit{region} is where the customer's product is deployed.
The \textit{category} is a coarse-to-fine text description initially selected by the customer, which facilitates triaging a ticket to a proper support engineer.
In addition, a ticket may also include a long detailed description (hidden in the figure).
Modern cloud platforms adopt alert and support-ticket schemes similar to those described above.
For example, CloudWatch of AWS~\cite{aws_cloudwatch}, Alerting of GCP~\cite{gcp_alerting} and Azure Monitor~\cite{azure_monitor} share similar alerting mechanisms, and their alerts carry similar attributes.
Besides, their ticket management systems require similar attributes from customers as in Fig.~\ref{fig: bg_cases}, i.e., AWS Support~\cite{aws_ticket}, Google Support Hub~\cite{gcp_ticket}, and Azure Support~\cite{gcp_ticket}.
In this work, we only leverage the \textit{common} features that all these popular cloud platforms own to ensure generalizability.
\subsection{Alert-Alert Relation}\label{sec: alert_alert_relation}
The alert-alert relation denotes that two alerts could be correlated if they are caused by the same incident.
The relation originates from the hierarchical structures of modern cloud systems that consist of inter-dependent components or services~\cite{chen2022online}.
When an incident happens, multiple components or services could be impacted due to failure propagation~\cite{wang2021fast}\cite{chen2020towards}, which will fire alerts within a short period associated with the same incident.
During the diagnosis of an incident, in \cloud, on-call engineers will manually mark these alerts and assess the severity of the incident according to the number of customers impacted.
According to the diagnosis history in \cloud from 2020/01/01 to 2022/06/01, as shown in Fig.~\ref{fig: cross-service-alerts}, we found that incidents with higher severity tend to affect more services.
In particular, 70\% of high-severity incidents affect more than one service.
We also studied the resultant alerts of historical incidents and calculated the max alert duration of each incident (i.e., the time interval between the earliest and the latest alerts triggered by the incident).
As shown in Fig.~\ref{fig: incident_duration}, we found that the max alert duration of 93\% of incidents is within four hours.
This serves as evidence to automatically identify the correlated alerts within an incident~(in Section~\ref{sec: incident_profiling}).
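
As a minimal illustration of how such a time window can pre-filter candidate alert pairs (this is only a sketch of the idea; the actual linking in Section~\ref{sec: incident_profiling} uses learned alert-alert relations):
\begin{verbatim}
from datetime import timedelta

def candidate_pairs(alerts, window_hours=4):
    """Candidate alert pairs whose creation times lie within the
    empirical four-hour window; `alerts` is a list of dicts with
    'alert_id' and 'creation_time' (datetime) keys."""
    alerts = sorted(alerts, key=lambda a: a["creation_time"])
    window = timedelta(hours=window_hours)
    pairs, start = [], 0
    for i, a in enumerate(alerts):
        # advance the window's left edge past alerts older than 4 hours
        while a["creation_time"] - alerts[start]["creation_time"] > window:
            start += 1
        pairs.extend((alerts[j]["alert_id"], a["alert_id"])
                     for j in range(start, i))
    return pairs
\end{verbatim}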
\begin{figure}[t]
\centering
\mbox{
\subfigure[The number of services impacted by incidents \label{fig: cross-service-alerts}]{\includegraphics[width=0.465\columnwidth]{Figures/num_service_affcted.pdf}}\quad
\subfigure[Distribution of max alert duration of incidents
\label{fig: incident_duration}]
{\includegraphics[width=0.465\columnwidth]{Figures/incident_duration.pdf}}
}
\vspace{-3ex}
\caption{Statistics of alert and incident data in \cloud.}
\label{fig: alert_incident_stats}
\end{figure}
\begin{figure}[t]
\centering
\begin{minipage}{.465\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/time_interval.pdf}
\vspace{-4ex}
\caption{Time interval between alerts and resultant tickets}
\label{fig: time_interval}
\end{minipage}\quad%
\begin{minipage}{.465\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/ticket_num_change.pdf}
\vspace{-4ex}
\caption{Ticket number trend during an incident}
\label{fig: ticket_num_change}
\end{minipage}
\end{figure}
\subsection{Ticket-Alert Relation}\label{sec: alert_ticket_relation}
The ticket-alert relation denotes that a ticket can correlate with a responsible alert inside the cloud systems.
When a particular type of issue happens inside the cloud system (alerts are also fired), the customer could experience particular problems.
Fig.~\ref{fig: bg_cases} presents an example. If the API PUT (for container allocation) in the Kubernetes service is degraded, the customer can experience an error when deploying a container.
In \cloud, if a ticket is related to a cloud-side issue, the support engineers are required to annotate the responsible alert ID after diagnosis.
Based on the annotated alert-ticket pairs collected from 2020/01/01 to 2022/06/01, we study the time interval between alert generation and ticket submission.
Fig.~\ref{fig: time_interval} shows the results, where a negative time interval indicates that an alert is fired before the ticket is submitted. We found around 92\% of tickets have responsible alerts fired before customers submit the tickets. This allows us to correlate responsible alerts for most tickets in runtime~(in Section~\ref{sec: correlation}).
For clarification, we summarize these important terminologies (i.e., alert, ticket, incident, alert-alert relation and ticket-alert relation) in Table~\ref{tab: term_definition} for reference.
\begin{table}[t]
\centering
\caption{Terminology Definition}
\label{tab: term_definition}
\begin{tabular}{c|c}
\hline
\textbf{Terminology} & \textbf{Definition} \\ \hline
Alert\centering & \makecell*[l]{An alert is triggered when abnormal behavior of\\ a component is detected.}\\ \hline
Ticket\centering & \makecell*[l]{A request raised by a customer to ask the cloud vendor\\ for help.} \\ \hline
Incident\centering & \makecell*[l]{Unexpected interruptions affecting services' availability\\ or performance, which usually trigger a series of alerts.} \\ \hline
\makecell*[c]{Alert-Alert\\Relation} & \makecell*[l]{Two alerts are correlated if they are caused by the\\ same incident. (Section~\ref{sec: alert_alert_relation})}\\\hline
\makecell*[c]{Ticket-Alert\\Relation} & \makecell*[l]{A ticket is correlated with an alert if the former is\\ caused by the latter. (Section~\ref{sec: alert_ticket_relation})}\\
\hline
\end{tabular}
\end{table}
\subsection{A Motivating Example}\label{sec: case_analysis}
We present a real-world incident in July 2021 in \cloud and its resultant tickets as a motivating example.
The impact of the incident started at 05:08 AM (UTC).
It was caused by the availability loss of the DiskRP (disk resource provider) service that provides a control plane service for managed disks. Since its gateway queue was full, a large proportion of incoming requests were rejected.
As a consequence, services relying on DiskRP experienced interruptions. On-call engineers' diagnosis confirmed that three services were impacted, i.e., virtual machine (VM), Databricks, and Kubernetes (K8S).
Customers using these services were affected, which led to overwhelming tickets.
As shown in Fig.~\ref{fig: ticket_num_change}, the ticket numbers of the services simultaneously increased right after the impact started, which implies the three services could be impacted by the same incident concurrently.
In particular, the CSS team received around \textit{four} times as many tickets as usual within a short period and assigned \textit{twice} as many support engineers to handle these tickets.
We list some samples of alerts and tickets related to this incident in Table~\ref{tab: motivating_example}.
These tickets ($t_1 \sim t_8$) carry dissimilar semantics due to different use scenarios and services for different customers.
Therefore, it is hard to know that these tickets are actually caused by the same incident, rendering the difficulty for support engineers to group them and handle the burst of tickets efficiently.
\definecolor{aurometalsaurus}{rgb}{0.43, 0.5, 0.5}
\begin{table*}
\centering
\caption{Alerts caused by the same incident and the resultant tickets (some features are omitted due to space limitation.)}
\vspace{-2ex}
\label{tab: motivating_example}
\def1.2{1.2}
\begin{tabular}{ccl|cl}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Service}}} &
\multicolumn{2}{c|}{\textbf{Tickets}} & \multicolumn{2}{c}{\textbf{Alerts}} \\ \cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Category } & \multicolumn{1}{c|}{Summary} & \multicolumn{1}{c|}{Component} & \multicolumn{1}{c}{Title} \\ \hline
\multicolumn{1}{c|}{\multirow{2}{*}{VM}} & \multicolumn{1}{c|}{VM/Scale Update} & $t_1$: Virtual machine scale sets \textcolor{black}{resize} issue. & \multicolumn{1}{c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Resource\\ Provider\end{tabular}}} & \multirow{2}{*}{$a_1:$ \textcolor{black}{VMStart} Failures exceed 300 times.} \\ \cline{2-3}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{VM/VM Start} & $t_2$: \textcolor{black}{Server} did not \textcolor{black}{start} on time. & \multicolumn{1}{c|}{} & \\ \hline
\multicolumn{1}{c|}{\multirow{2}{*}{Databricks}} & \multicolumn{1}{c|}{Databricks/\textcolor{black}{Job Issue}} & $t_3$: Unable to \textcolor{black}{open cluster} of Databricks. & \multicolumn{1}{c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Control \\ Plane\end{tabular}}} & \multirow{2}{*}{$a_2:$ Databricks \textcolor{black}{cluster creation} fails.} \\ \cline{2-3}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Databricks/Cluster Launch} & $t_4$: Unable to \textcolor{black}{provision clusters}. & \multicolumn{1}{c|}{} & \\ \hline
\multicolumn{1}{c|}{\multirow{2}{*}{K8S}} & \multicolumn{1}{c|}{K8S/Cluster Update} & $t_5$: Unable to \textcolor{black}{autoscale}. & \multicolumn{1}{c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Resource \\ Scheduler\end{tabular}}} & \multirow{1}{*}{$a_3:$ \textcolor{black}{The PUT operation} success rate \textless{}80\%.} \\ \cline{2-3}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{K8S/Cluster Update} & $t_6$: Cannot \textcolor{black}{upgrade} node pool, stuck. & \multicolumn{1}{c|}{} & \textcolor{aurometalsaurus}{$a_4:$ CPU utilization exceeds 90\%.} \\ \hline
\end{tabular}
\vspace{-4ex}
\end{table*}
We propose to aggregate these tickets by simultaneously leveraging the aforementioned alert-alert relations and ticket-alert relations.
We take Table~\ref{tab: motivating_example} as an example to elaborate our intuition.
First, we need to know what alerts are triggered by an incident, i.e., profiling the incident.
In this example, we link the alerts $a_1-a_2-a_3$ via capturing the alert-alert relations (i.e., they are caused by the same incident).
Second, we need to know what tickets are caused by these alerts, namely, linking $a_1-(t_1,t_2)$, $a_2-(t_3,t_4)$, and $a_3-(t_5,t_6)$.
Finally, because the alerts $a_1 \sim a_3$ are linked as an incident and $t_1 \sim t_6$ are further linked to these alerts, we can aggregate $t_1 \sim t_6$ as the same cluster even though they possess dissimilar semantics.
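
The following toy sketch (our illustration of this intuition, not \nm's actual implementation) makes the two-step aggregation concrete:
\begin{verbatim}
from collections import defaultdict

def aggregate(alert_links, ticket_links):
    """Group tickets by incident: union alerts linked as one incident,
    then attach each ticket to its responsible alert's incident."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in alert_links:            # alert-alert relations
        union(a, b)
    clusters = defaultdict(list)
    for ticket, alert in ticket_links:  # ticket-alert relations
        clusters[find(alert)].append(ticket)
    return dict(clusters)

# aggregate([("a1", "a2"), ("a2", "a3")],
#           [("t1", "a1"), ("t2", "a1"), ("t3", "a2"),
#            ("t4", "a2"), ("t5", "a3"), ("t6", "a3")])
# -> one cluster containing t1..t6
\end{verbatim}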
\noindent\textbf{Challenges.} To achieve this, \nm should address the following two challenges originating from the large scale and complicated architecture of cloud systems~\cite{li2021fighting}\cite{wang2021fast}\cite{chen2022online}.
\textit{Challenge 1: Massive and noisy alerts.}
Cloud systems could contain thousands of interdependent services.
These services are closely monitored from various aspects to capture any unexpected behaviors.
For example, there could be hundreds, even thousands of high-severity alerts reported in \cloud per day.
Some alerts are \textbf{regular alerts} that are reported frequently (due to sensitive monitoring rules) and periodically (due to periodical monitoring). These regular alerts are generally not related to a particular cloud incident and only report usual system runtime status such as CPU/memory usage rate (e.g., $a_4$ in Table~\ref{tab: motivating_example}).
In contrast, \textbf{indicative alerts} are caused by an actual problem of cloud systems. For example, the alerts $a_1$ $\sim$ $a_3$ in Table~\ref{tab: motivating_example} are indicative alerts.
It is challenging to identify the indicative alerts and correctly link them among massive and noisy alerts.
\textit{Challenge 2: High feature cardinality.} High feature cardinality refers to a situation where a feature has a large number of unique values.
For example, the feature \textit{category} of a ticket has more than 3,000 options, and the features \textit{component} and \textit{monitor ID} of alerts have more than 2,000 and 10,000 options, respectively.
Using traditional one-hot encoding~\cite{li2021deeplv} methods to process these features would lead to a high-dimensional feature space, resulting in the curse of dimensionality~\cite{wiki:Curse_of_dimensionality}.
Additionally, linking alerts to tickets requires the consideration of various combinations of features between them.
However, due to the high feature cardinality, the number of possible combinations grows exponentially, making it difficult to identify the most effective combinations that accurately reflect the correlation between alerts and tickets. This constitutes a significant challenge in our work.
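
For instance, instead of a sparse 10,000-dimensional one-hot vector for the \textit{monitor ID} feature, a learned embedding maps each ID to a small dense vector. The following PyTorch snippet is purely illustrative; the embedding dimension is a hypothetical choice:
\begin{verbatim}
import torch
import torch.nn as nn

# A learned embedding replaces a 10,000-dim one-hot encoding of the
# monitor-ID feature with a 32-dim dense vector (sizes illustrative).
monitor_emb = nn.Embedding(num_embeddings=10_000, embedding_dim=32)
monitor_ids = torch.tensor([17, 4212, 9981])   # a batch of three alerts
dense = monitor_emb(monitor_ids)               # shape: (3, 32)
\end{verbatim}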
\section{Industrial Experience}\label{sec: case_study}
In this section, we share our industrial experience by presenting a success case and a failure case from the real-world deployment of \nm in \cloud.
\definecolor{crimsonglory}{rgb}{0.75, 0.0, 0.2}
\definecolor{darkblue}{rgb}{0.0, 0.0, 0.55}
\subsection{A success case}
In September 2021, a datacenter maintenance activity resulted in the accidental shutdown of a water tower pump, which is a critical component of cooling systems. To prevent overheating and potential damage to users' data, the maintenance personnel had to shut down the downstream storage hardware. This caused a storage service disruption, leading to cascading impacts on several dependent services such as the SQL DB and Workflow App, and triggering alerts.
The CSS team received a substantial number of tickets describing a wide range of issues in response to these events. To assist with the situation, \nm continuously collected and analyzed the generated alerts and tickets. The partial output of \nm's analysis is presented in Fig.~\ref{fig: success_case}.
\nm successfully linked the storage alert with corresponding alerts from SQL DB and Workflow App, as demonstrated by the \textcolor{crimsonglory}{\textbf{red}} arrows in Fig.~\ref{fig: success_case}. Additionally, the tickets caused by these events were linked to their respective root cause events, as depicted by the \textcolor{darkblue}{\textbf{blue}} arrows. This allowed the tickets to be aggregated, despite their semantic differences, and the results were pushed to the support engineers.
With the information provided by \nm, support engineers were able to initiate batch communications with potentially impacted customers and avoid duplicative manual inspections. Throughout the resolution process, the customers were continuously informed of the mitigation progress of the incident.
\subsection{A failure case}
\nm could sometimes fail when it cannot find responsible alerts in the cloud systems for a ticket. In August 2021, the CSS team received multiple tickets complaining of 503 (service unavailable) errors when the customers were using Web Services. Though the tickets were suspected to be caused by an internal issue due to their similar symptoms, \nm did not correlate them with any alert. Only around five hours after the first ticket had been received, a related alert was fired and correlated by \nm.
According to the after-the-fact analysis of on-call engineers, the root cause of this incident turned out to be bad configurations of a Canary (gray) release for a few tenants.
The developers did not configure a specific monitor for each of the tenants but monitored all tenants as a whole.
As a result, the monitor was not sensitive enough and only triggered when most of the tenants' requests failed.
Nevertheless, \nm continuously runs and could still correlate the alert with the resultant tickets after the alert was finally fired. In this way, \nm can potentially discover such under-monitoring cases and guide the configuration of monitors to improve system reliability~\cite{li2022intelligent}.
Fortunately, such cases (ticket submissions before alerts) are rare in \cloud, given its comprehensive monitoring, according to our study (Section~\ref{sec: alert_ticket_relation}).
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{Figures/success_case.pdf}
\caption{A success case of \nm in \cloud}
\label{fig: success_case}
\end{figure}
\section{Conclusion}
This paper tackles the problem of aggregating duplicate customer support tickets for cloud systems.
Previous solutions that mainly rely on customer-side information (i.e., textual similarity between tickets) are sub-optimal for tickets of large-scale cloud systems.
The main cause is the complexity of cloud systems that consist of many inter-dependent services, where the customers may experience distinct issues even though they are affected by the same incident.
To overcome this limitation,
we propose \nm to leverage alerts of cloud systems to facilitate ticket aggregation.
Specifically, we propose graph-based incident profiling (GIP) to model alert-alert relations and attentive interaction network (AIN) to model alert-ticket relations, respectively.
In this way, we can aggregate the tickets that are linked to the same incident (linked alerts) even though they carry dissimilar semantics.
We evaluate \nm based on three datasets collected from the real-world production environment in a large-scale cloud vendor, \cloud.
\nm achieves the F1 score of 0.871$\sim$0.935 and outperforms state-of-the-art methods by 12.4\%$\sim$31.2\% across the three datasets.
For future work, we will deploy \nm to more services in \cloud and conduct rigorous user studies among the support engineers to understand its usefulness in accelerating the handling of support tickets. In addition, we plan to extend \nm with the ability to conduct root cause analysis based on the correlated alerts.
\section{Data Availability}
The ticket data used in this work is collected from a real-world cloud vendor, which is highly confidential and contains a lot of personally identifiable information (PII).
To protect customers' privacy, we decide not to release the original dataset. However, to facilitate the community to benefit from our work, we release the source code of \nm together with some \textit{synthetic data samples} on Github (\url{https://github.com/OpsPAI/iPACK.git}).
\section{Discussion}
\subsection{Empirical Study}
\label{sec: empirical_study}
To understand the impact of cloud incidents on support tickets and alerts in a real production environment, we conduct the following empirical study based on two years of real-world data (01/01/2020 to 01/01/2022) in \cloud.
Due to confidentiality concerns, we apply normalization to the data.
Particularly, we aim to answer the following research questions (RQ).
\begin{itemize}[leftmargin=*, topsep=0pt]
\item \textbf{RQ1}: How many services can be affected by an incident?
\item \textbf{RQ2}: What correlations do tickets and alerts exhibit, if any?
\end{itemize}
\subsubsection{\textbf{RQ1} Cross-Service Impact of Incidents}
In the workflow of incident processing, engineers analyze and mark which services are affected for each incident.
Based on this annotated data, we calculate the CDF (cumulative distribution function) of the number of services impacted by an incident. We plot the figure by varying the incident severity.
As shown in Fig.~\ref{fig: cross-service-alerts}, more severe incidents tend to affect more services. Nearly 70\% of high-severity incidents affect at least two services.
In contrast, around 70\% of medium-level incidents and 80\% of low-level incidents only affect one service, but they can still affect more than one service.
Although customers of a specific service may experience similar problems when they are affected by an incident, they could write different free texts to describe the symptoms.
The issue is compounded by the fact that incidents with cross-service impact are common. It is more challenging to identify duplicate tickets, because customers from different services tend to perceive distinct failure symptoms, leading to tickets with dissimilar semantics.
\begin{tcolorbox}
[colback=black!10,
colframe=black!10,
width=1\linewidth,
arc=0mm, auto outer arc,
box align=center,
left = 0pt,
right = 0pt,
top=0pt,
bottom=0pt,
]
\textbf{Finding 1.}
Cloud incidents can produce cross-service impacts.
Customers using the impacted services can submit quite different tickets, which makes it challenging to identify duplicate tickets.
\end{tcolorbox}
\section{Experiments}
We answer the following research questions (RQs) to evaluate the performance of \nm:
\begin{itemize}[leftmargin=*, topsep=0pt]
\item \textbf{RQ1}: How effective is \nm in aggregating duplicate tickets caused by the same incident?
\item \textbf{RQ2}: How effective is AIN in correlating a ticket to the responsible event?
\item \textbf{RQ3}: How does graph-based incident profiling (GIP) impact the effectiveness of \nm?
\end{itemize}
\subsection{Experimental Setting}
\subsubsection{Dataset}\label{sec: dataset}
We collect the datasets from the production environment of \cloud from 2020/01/01 to 2022/06/01.
To evaluate the generality of \nm, we collect three datasets from different physically isolated regions (i.e., $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$), which cover 81 services serving different numbers of customers.
Each dataset is collected from tens of services and includes hundreds of incidents and hundreds of thousands of alerts.
For each incident, the datasets contain tens of to hundreds of resulting tickets.
Note that we hide the specific figures of the dataset statistics due to the confidentiality policy of \cloud.
We use the data before 2022/01/01 to compute PMI values~(Section~\ref{sec: incident_profiling}) and train AIN~(Section~\ref{sec: correlation}). The data after the date is used for evaluation.
\subsubsection{Comparative solutions}\label{sec: baselines}
Recent studies have been working on user feedback analysis such as duplicate bug report detection~\cite{wang2008approach}\cite{nguyen2012duplicate}\cite{zhou2012learning}\cite{budhiraja2018lwe}\cite{chaparro2019reformulating} and emerging issue detection~\cite{gao2018online}\cite{zheng2019ifeedback}\cite{gao2019emerging}.
We select the following state-of-the-art approaches as our comparative solutions:
\textbf{Categorization.} We aggregate tickets by referring to their feature \textit{category} (Section~\ref{sec: bg_alert_and_tickets}), i.e., if two tickets share the same category, then they are aggregated into the same cluster.
\textbf{iFeedback.} iFeedback is proposed and adopted by WeChat in their production environment~\cite{zheng2019ifeedback}, which targets aggregating similar user feedback by identifying frequent word combinations (and groups of combinations).
For example, if the word combination of ``pay'' and ``fail'' bursts, an issue may happen to the payment feature of the product.
\textbf{LWE.} LWE~\cite{budhiraja2018lwe} is a method integrating Latent Dirichlet Allocation (LDA) and word embeddings to leverage the advantages of both techniques.
LWE first utilizes LDA to represent all tickets and roughly identify candidates of duplicated tickets.
Then, the candidates are represented using word embeddings to conduct more fine-grained clustering.
\textbf{BERT.} BERT~\cite{DBLP:conf/naacl/DevlinCLT19} is a popular pretraining model in natural language processing and has shown its power in capturing the semantics of user feedback in recent studies~\cite{haering2021automatically}\cite{liu2020automated}\cite{wu2021identifying}.
Because these studies do not directly aggregate user feedback, in this work, we adopt BERT to first represent the tickets as dense vectors, based on which we use agglomerative hierarchical clustering~\cite{wiki:hierarchical_clustering} to aggregate tickets.
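
A sketch of this baseline follows (our re-implementation outline; the mean-pooling choice and the clustering threshold are assumptions, not necessarily the exact configuration used in the experiments):
\begin{verbatim}
import torch
from sklearn.cluster import AgglomerativeClustering
from transformers import AutoModel, AutoTokenizer

def bert_cluster(summaries, distance_threshold=0.5):
    """Embed ticket summaries with mean-pooled BERT, then apply
    agglomerative clustering on cosine distances."""
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    with torch.no_grad():
        enc = tok(summaries, padding=True, truncation=True,
                  return_tensors="pt")
        out = model(**enc).last_hidden_state        # (n, L, 768)
        mask = enc["attention_mask"].unsqueeze(-1)  # (n, L, 1)
        emb = (out * mask).sum(1) / mask.sum(1)     # mean pooling
        emb = torch.nn.functional.normalize(emb, dim=1).numpy()
    clu = AgglomerativeClustering(n_clusters=None, metric="cosine",
                                  linkage="average",
                                  distance_threshold=distance_threshold)
    return clu.fit_predict(emb)
\end{verbatim}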
\begin{table*}[t]
\linespread{1.1}
\caption{Effectiveness
of aggregating duplicate tickets caused by the same cloud incident.}
\label{tab: clustering_accuracy}
\small
\centering
\begin{tabular}{c|ccc|ccc|ccc}
\toprule
\multirow{2}{*}{Methods} &
\multicolumn{3}{c|}{Dataset $\mathcal{A}$} & \multicolumn{3}{c|}{Dataset $\mathcal{B}$} & \multicolumn{3}{c}{Dataset $\mathcal{C}$} \\
& Precision & Recall & \textbf{F1 score} & Precision & Recall & \textbf{F1 score} & Precision & Recall & \textbf{F1 score}\\
\midrule
\midrule
Categorization & 0.930 & 0.205 & 0.336 & 0.943 & 0.373 & 0.535 & 0.925 & 0.207 & 0.338 \\
iFeedback & 0.901 & 0.590 & \underline{0.713} & 0.876 & 0.473 & 0.614 & 0.886 & 0.626 & 0.733 \\
LWE & 0.862 & 0.453 & 0.594 & 0.824 & 0.515 & 0.634 & 0.861 & 0.672 & \underline{0.755}\\
BERT & 0.884 & 0.587 & 0.705 & 0.854 & 0.710 & \underline{0.775} & 0.843 & 0.629 & 0.720 \\
LinkCM & 0.931 & 0.507 & 0.657 & 0.892 & 0.538 & 0.671 & 0.901 & 0.628 & 0.740 \\
\midrule
LinkCM w/ GIP & 0.900 & 0.685 & 0.778 & 0.886 & 0.756 & 0.816 & 0.899 & 0.809 & 0.852 \\
\textbf{\nm} & 0.912 & 0.960 & \textbf{0.935} & 0.882 & 0.861 & \textbf{0.871} & 0.899 & 0.888 & \textbf{0.894} \\
\bottomrule
\end{tabular}
\end{table*}
\textbf{LinkCM.} LinkCM~\cite{gu2020efficient} is proposed to facilitate the triage of a customer-reported alert by matching it with an alert of cloud systems.
LinkCM learns the correlation purely from the titles of the report and the alert via a decomposable attention mechanism and transfer learning. In our scenario, if two tickets are correlated to the same event by LinkCM, they are grouped together.
LinkCM can also link a ticket to an event as AIN does, so we combine GIP with LinkCM (i.e., \textbf{LinkCM w/ GIP}) as a strong baseline for comparison.
\subsubsection{Implementation Details}\label{sec: implementation}
We have implemented \nm with approximately 3000 lines of Python code and packaged it as a serverless function~\cite{yang2021aid} for ease of use in \cloud. The iPACK system is deployed on a CentOS Linux server with 60GB of RAM and an Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz. The AIN component of \nm is trained and tested with the GPU acceleration of an NVIDIA GeForce GTX TITAN X. We have set the default hyper-parameters of the AIN as $k$=$128$ and $r$=$256$, and the model is trained until its training loss stops decreasing for ten continuous epochs, using an early stopping approach.
As for the comparative solutions, as they are not open-sourced, we have followed the implementation in their respective papers and leveraged well-established libraries to ensure accuracy. For example, we have used AllenNLP~\cite{allennlp} for LinkCM, scikit-learn~\cite{scikit-learn} and gensim~\cite{gensim} for LWE, and HuggingFace~\cite{huggingface} for BERT.
\subsection{Evaluation Metrics}
\textbf{Metrics for evaluating ticket aggregation (RQ1 and RQ3).}
Given a sequence of tickets, our approach assigns a unique cluster ID, denoted as ``incident-\{number\}'', to tickets that are caused by the same incident. Tickets that are not related to a cloud-side issue are marked with the cluster ID ``non-incident''.
To evaluate the accuracy of our ticket aggregation, we use the widely accepted Rand Index~\cite{rand1971objective, achtert2012evaluation, gates2017impact} for pair-wise comparison in clustering.
We conduct pair-wise comparisons between the ground-truth cluster ID and the predicted cluster ID for all tickets.
The results are used to calculate the following metrics:
True Positives (TP), which are pairs of duplicate tickets correctly predicted to have the same cluster label;
True Negatives (TN), which are pairs of non-duplicate tickets correctly predicted to have different cluster labels;
False Positives (FP), which are pairs of non-duplicate tickets wrongly predicted to have the same cluster label; and
False Negatives (FN), which are pairs of duplicate tickets wrongly predicted to have different cluster labels.
Based on the results, we use the following metrics to evaluate the aggregation results: $precision=\frac{TP}{TP + FP}$, $recall=\frac{TP}{TP + FN}$, and $F1~score= 2 \cdot \frac{precision~\cdot~recall}{precision~+~recall}$.
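
The pairwise computation can be written compactly as follows (a minimal sketch, treating tickets with the same cluster ID as duplicates; the handling of the ``non-incident'' label may differ in the actual evaluation):
\begin{verbatim}
from itertools import combinations

def pairwise_prf(true_ids, pred_ids):
    """Pairwise precision/recall/F1 over all ticket pairs
    (Rand-index-style comparison of two clusterings)."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(true_ids)), 2):
        same_true = true_ids[i] == true_ids[j]
        same_pred = pred_ids[i] == pred_ids[j]
        if same_pred and same_true:
            tp += 1          # duplicate pair, same predicted cluster
        elif same_pred:
            fp += 1          # non-duplicate pair, same predicted cluster
        elif same_true:
            fn += 1          # duplicate pair, split across clusters
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
\end{verbatim}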
\textbf{Metrics for evaluating ticket-event correlation (RQ2).}
The correlation of tickets with an event, referred to as AIN in Section~\ref{sec: correlation}, is a crucial component of \nm. This component generates a ranked list of potential responsible events for a given ticket based on the probability scores (as determined by AIN's output) in descending order.
To assess the accuracy of this step, we use the metric Acc@K (accuracy@K). For each ticket, if the actual ground-truth event appears within the top-K positions of the list, we consider the ticket to be a ``hit''. The Acc@K metric is calculated as the ratio of the number of hit tickets to the total number of tickets, represented as $Acc@K=\frac{\#~of~hit~tickets}{\#~of~all~tickets}$.
In our evaluation, we consider three values of K (i.e., 1, 2, and 3) and also compute the average of these three metrics to provide a comprehensive assessment.
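A sketch of the Acc@K computation (the dictionary-based inputs are illustrative):
\begin{verbatim}
def acc_at_k(ranked, truth, k):
    # ranked: ticket ID -> event IDs sorted by AIN probability (descending)
    # truth:  ticket ID -> ground-truth responsible event ID
    hits = sum(truth[t] in ranked[t][:k] for t in ranked)
    return hits / len(ranked)

# Average of Acc@1, Acc@2 and Acc@3 as reported in our evaluation:
# avg = sum(acc_at_k(ranked, truth, k) for k in (1, 2, 3)) / 3
\end{verbatim}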
\subsection{Experimental Details}
\subsubsection{\textbf{RQ1} The Effectiveness of \nm}
In this RQ, we aim to evaluate how accurately \nm can aggregate the duplicate tickets by comparing it with all comparative solutions (Section~\ref{sec: baselines}).
The evaluation is conducted using datasets $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$, and the results are reported in terms of precision, recall, and F1 score. Precision reflects the degree of correctness in the clustering results, while recall represents the completeness of the results. The F1 score balances precision and recall and provides a comprehensive measure of the overall performance of the approach. The results are presented in Table~\ref{tab: clustering_accuracy}. The highest F1 score is emphasized in \textbf{bold}, and the second-best score is \underline{underlined}.
We can make the following observations:
(1) \nm achieves the best F1 score across all three datasets, i.e., 0.935, 0.871, and 0.894, outperforming the second-best methods by 31.2\%, 12.4\% and 18.4\% in dataset $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$, respectively.
(2) Categorization can achieve the highest precision (0.930$\sim$0.943) although its recall is considerably low (0.205$\sim$0.373).
The reason is that the ticket feature \textit{category} is defined in a fine-grained manner by support engineers in \cloud.
Therefore, it tends to aggressively split the complete set of duplicate tickets into many small groups, leading to a low recall score.
However, tickets in each such small group share highly similar semantics, as evidenced by the high precision.
(3) iFeedback, LWE, and BERT show lower precision but higher recall than Categorization.
The reason is that these methods can capture more coarse-grained semantic similarity between tickets.
Consequently, they can generate larger clusters (higher recall) but introduce additional noise (lower precision).
(4) LinkCM achieves the highest precision among all baseline methods except Categorization. Moreover, after combining with GIP, LinkCM w/ GIP increases its recall because more tickets are aggregated together through event-event linking.
However, it still under-performs \nm in terms of the overall F1 score because LinkCM cannot correlate a ticket to an event as accurately as \nm does (as shown in RQ2).
For instance, LinkCM may associate a cluster of similar tickets with the wrong event. Therefore, even though related events are linked together, similar tickets are separated into different clusters, resulting in high precision but low recall.
{
\begin{tcolorbox}[breakable,width=\linewidth-2pt,boxrule=0pt,top=1pt, bottom=0pt, left=1pt,right=1pt, colback=gray!20,colframe=gray!20]
\textbf{Answer to RQ1.}
\nm achieves the best F1 score among all state-of-the-art baselines across three datasets collected from different regions.
\nm slightly sacrifices precision compared with the Categorization method but achieves the highest F1 score 0.871$\sim$0.935, outperforming state-of-the-art methods by 12.4\%$\sim$31.2\%.
\end{tcolorbox}
}
\subsubsection{\textbf{RQ2} The Effectiveness of ticket-event correlation}\label{sec: exp_corr_acc}
In this RQ, the focus is on evaluating the accuracy of the ticket-event correlation step of \nm, i.e., the proposed attentive interaction network (AIN).
The performance of AIN is compared with LinkCM~\cite{gu2020efficient} and four popular machine learning algorithms: LR (logistic regression), SVM (support vector machine), RF (random forest), and LightGBM (light gradient boosting machine). Additionally, the contribution of the attentive feature interaction component to AIN is studied.
To ensure a fair comparison, categorical features are represented as one-hot vectors, which are then concatenated with the representation of textual features extracted using BERT. This allows for a consistent input feature representation for all models compared. A variant of AIN is also developed by removing its attentive feature interaction component (referred to as "AIN w/o atten." in Table~\ref{tab: corr_results}). This variant instead concatenates all feature embeddings into a single feature vector as the input for the prediction layer, as illustrated in Fig.~\ref{fig: AIN_framework}.
For clarity, this experiment is conducted using all pairs of ticket-event data from datasets $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$.
We compare AIN with the baselines and its variant in terms of Acc@1, Acc@2, Acc@3 and the average of these metrics.
We can make the following observations in the results shown in Table~\ref{tab: corr_results}:
(1) The proposed AIN model outperforms all baseline models in terms of all four evaluation metrics. Notably, AIN achieves the highest Acc@1 score of 0.817, indicating its superior ability in accurately linking tickets to events and facilitating more effective ticket aggregation.
(2) The introduction of the attentive feature interaction component results in significant improvements in AIN's performance, with a 21.4\% increase in Acc@1 and a 17.8\% increase in the average accuracy. This demonstrates that the component plays a crucial role in identifying effective feature combinations for accurate ticket-event linking.
(3) Interestingly, AIN w/o atten. underperforms LinkCM and achieves similar performance to LightGBM.
The reason is that AIN w/o atten. adopts a simple concatenation of feature embeddings, which fails to capture effective feature combinations.
(4) LinkCM can outperform other baseline methods since its decomposable attention mechanism is able to capture the semantic matching between tickets and events.
On the other hand, the relatively low Acc@1 scores of LR, SVM, RF and LightGBM may be due to the sparsity and high dimensionality of the input features. However, RF and LightGBM exhibit improved accuracy over LR and SVM, as they alleviate these problems through feature selection.
\begin{table}[tbp]
\caption{Effectiveness of correlating a ticket to an event.}
\label{tab: corr_results}
\small
\begin{tabular}{c|cccc}
\toprule
Models & Acc@1 & Acc@2 & Acc@3 & Average \\
\midrule
\midrule
LR & 0.519 & 0.657 & 0.733 & 0.636 \\
SVM & 0.332 & 0.409 & 0.493 & 0.411 \\
RF & 0.563 & 0.684 & 0.761 & 0.669 \\
LightGBM & 0.658 & 0.723 & 0.832 & 0.712 \\
LinkCM & 0.743 & 0.769 & 0.882 & 0.798 \\
\midrule
AIN w/o atten. & 0.673 & 0.762 & 0.824 & 0.753 \\
AIN & \textbf{0.817} & \textbf{0.907} & \textbf{0.936} & \textbf{0.887} \\
$\Delta$(\%) & +21.4\% & +19.0\% & +13.6\% & +17.8\% \\
\bottomrule
\end{tabular}
\end{table}
{\begin{tcolorbox}[breakable,width=\linewidth-2pt,boxrule=0pt,top=1pt, bottom=0pt, left=1pt,right=1pt, colback=gray!20,colframe=gray!20]
\textbf{Answer to RQ2.} AIN outperforms all other baseline methods by a large margin in correlating a ticket to the event that causes it.
The proposed attentive feature interaction is key to this performance, improving the average accuracy of AIN by 17.8\%.\end{tcolorbox}
}
\subsubsection{\textbf{RQ3} The impact of graph-based incident profiling (GIP) of \nm}\label{sec: GIP_ablation_study}
We propose GIP to filter out regular (noisy) events and link correlated indicative events to profile an incident; the linked events then bridge tickets that are semantically different.
We evaluate its impact on \nm using the union of all three datasets as in RQ2.
We conduct the evaluation from the following two aspects:
(1) \textit{The ratio of events reduced.}
GIP builds a fully-connected event graph (linking every pair of events with a positive PMI value) and then prunes this graph via Algorithm~\ref{algo: graph_construction}.
We measure the effectiveness of GIP with the ratio of nodes and edges that are pruned (reduced).
Fig.~\ref{fig: GIP_ablation} (left) presents the ratio of nodes and edges in the event graph with and without GIP (we normalize the ratio for better presentation).
We can observe that only 2\% of nodes and 0.2\% of edges remain after using GIP, which shows GIP can reduce the large volume of events effectively.
(2) \textit{The impact on the overall performance in aggregating duplicate tickets.}
Though GIP can reduce the number of events, we aim to further evaluate whether it can accurately remove the regular events and link the correlated events as expected.
To achieve this, we compare the ticket aggregation performance of \nm with or without GIP.
After removing GIP, we regard those tickets linked to the same event by AIN as belonging to the same cluster.
The results are shown in Fig.~\ref{fig: GIP_ablation} (right).
We can observe that after applying GIP, precision drops slightly but recall improves substantially.
As a result, the overall F1 score is improved by 18.9\%, from 0.743 to 0.884.
This indicates that only a small portion of events are not correctly linked;
however,
more duplicate tickets are accurately aggregated via event-event linking.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{Figures/GIP_ablation.pdf}
\caption{The Effectiveness of Graph-based Profiling (GIP)}
\label{fig: GIP_ablation}
\end{figure}
{
\begin{tcolorbox}[breakable,width=\linewidth-2pt,boxrule=0pt,top=1pt, bottom=0pt, left=1pt,right=1pt, colback=gray!20,colframe=gray!20]
\textbf{Answer to RQ3.} GIP can greatly boost the overall performance of \nm. On the one hand, GIP retains only 2\% of nodes and 0.2\% of edges in the pruned event graph. On the other hand, GIP accurately retains and links the indicative events and improves the F1 score from 0.743 to 0.884.
\end{tcolorbox}
}
\section{Introduction}
In the era of Cloud Computing, cloud platforms such as Amazon AWS, Microsoft Azure, and Google Cloud Platform serve millions of users worldwide.
When customers encounter a technical problem with the platform, they often resort to cloud providers for help by submitting a \textit{support ticket} (ticket for short), which consists of a textual issue description and some basic attributes (e.g., date and product name).
From the cloud provider's perspective, once a ticket is received, it is essential to provide timely assistance to customers to avoid user dissatisfaction and financial loss~\cite{aws_response}\cite{azure_response}.
In practice, \textit{incidents} (i.e., unexpected service interruptions) are inevitable for large-scale cloud platforms~\cite{liu2019bugs}\cite{cotroneo2019bad}.
Though much effort has been devoted to ensuring the reliability of cloud systems~\cite{li2021fighting}\cite{zhao2020understanding}\cite{Muhammad2021fault}, customers could still be impacted by incidents.
For a large-scale cloud platform serving millions of customers, incidents could trigger a large number of tickets, many of which could be duplicates, as the tickets are reported in a distributed and uncoordinated manner.
To reduce the burden of support engineers, it is essential to \emph{precisely and comprehensively aggregate the tickets, i.e., clustering the duplicate tickets caused by the same incident}.
By doing this, the support team can resolve the tickets more efficiently.
To aggregate the tickets caused by the same incident, a common practice
is to check if multiple tickets with similar symptom descriptions are reported within a short period.
The intuition behind this is that customers using the same functionalities or services tend to
encounter similar problems if they are caused by the same incident (e.g., service unavailability).
Most existing studies on duplicate issue report detection measure the semantic similarity between two reports based on their textual descriptions, using natural language processing techniques such as word frequency~\cite{sun2010discriminative}\cite{sun2011towards}\cite{zheng2019ifeedback}, word embedding~\cite{yang2016combining}\cite{budhiraja2018dwen}, topic modeling~\cite{budhiraja2018lwe}, and pretrained models~\cite{haering2021automatically}.
Such semantic similarity-based approaches work well for traditional software systems
(e.g., NetBeans~\cite{lazar2014generating}, Eclipse and Firefox~\cite{lamkanfi2013eclipse}).
However, they are sub-optimal for aggregating duplicate tickets in cloud systems due to the large-scale and heterogeneous architecture of cloud systems~\cite{cotroneo2019bad}\cite{yang2021aid}\cite{wang2021fast}.
The main reason is that customers of cloud systems could encounter distinct issues (with distinct symptoms) caused by the same incident.
On the one hand, customers using the same service may experience different issues due to various usage scenarios.
For example, when the control plane of the virtual machine (VM) service is problematic, the customers could complain about various problems related to VM creation, upgrade or deletion, depending on their particular scenarios.
On the other hand, multiple services can be impacted by the same incident due to the notorious failure propagation problem~\cite{li2021fighting}\cite{wang2021fast}\cite{chen2021graph} in cloud systems.
For example, when an infrastructure-level service (e.g., a storage service) is interrupted, other services depending on it (e.g., VM and Web application) can be impacted too.
As a result, customers using different services may observe different symptoms and submit tickets with dissimilar descriptions.
Consequently, it is insufficient to tackle this problem by solely utilizing textual descriptions of tickets.
To address existing studies' limitations, we propose introducing cloud-side runtime information, i.e., \textit{alerts}, to facilitate ticket aggregation in cloud systems.
Modern cloud systems widely adopt monitors to continuously detect anomalies (unexpected behaviors) of cloud systems~\cite{aws_cloudwatch}\cite{azure_monitor}\cite{gcp_alerting}.
Once an anomaly is detected, an alert describing the anomaly will be fired to notify on-call engineers for inspection promptly.
The services (and their internal components) are interdependent in cloud systems~\cite{yang2021aid}\cite{wang2021fast}; therefore, when an incident impacts multiple components or services, multiple alerts will be triggered within a short period~\cite{zhao2020understanding}\cite{chen2020towards}, that is, these alerts are correlated with each other (i.e., \textbf{alert-alert relation}).
According to our study in \cloud, the correlated alerts caused by most (93\%) incidents are fired within four hours.
On the other hand, a particular issue of a component (e.g., problematic API for VM allocation) in cloud systems can reflect a particular customer-side issue (e.g., cannot create a VM).
So, it is possible to find a \textit{responsible alert} within the component that captures the issue resulting in the ticket (i.e., \textbf{ticket-alert relation}).
In \cloud, we find that for 92\% of customer tickets, the alert system has already fired responsible alerts that cause these tickets before the tickets are submitted.
Motivated by these two kinds of relations, we propose to formulate the ticket aggregation problem in cloud systems as a two-stage linking problem, i.e., alert-alert linking and ticket-alert linking.
Intuitively, if the same incident triggers multiple inter-linked alerts and these alerts are further linked to different tickets, then we consider these tickets should be aggregated (i.e., caused by the same incident).
In doing this, it is possible to aggregate semantically different tickets via alert-alert links.
However, designing such a framework mainly faces two challenges originating from the large scale and complexity of cloud systems:
First, alerts are massive and noisy.
The main reason is that cloud systems consist of a large number of interdependent services.
Each service adopts comprehensive monitors to capture any abnormal patterns to ensure its reliability~\cite{chen2022online}. These monitors could be sensitive.
As a result, various alerts are continuously fired every second~\cite{li2021fighting}, so it is challenging to correctly identify and link alerts that are relevant to the ongoing incident.
Second, features of both alerts and tickets have high cardinality, i.e., each feature takes a very large number of unique values.
When considering linking alerts and tickets, the number of feature combinations grows exponentially due to the high cardinality.
Consequently, it is hard to identify effective feature combinations between them and conduct correct correlation.
In this paper, we propose \nm to address these challenges. Specifically, \nm mainly consists of three steps, i.e., \textit{alert parsing}, \textit{incident profiling} and \textit{ticket-event correlation}.
The first two steps address the first challenge, and the third step addresses the second challenge.
In the \textit{alert parsing} step, we preprocess (parse) alerts as more coarse-grained \textit{events} to reduce redundant alerts.
Next, in the \textit{incident profiling} step, we propose GIP (graph-based incident profiling) to automatically filter noisy events and link events caused by the same incident.
As a result, each incident is represented as an event graph by considering alert-alert relations.
Afterward, in the \textit{ticket-event correlation} step, we propose AIN (attentive interaction network) to correlate a ticket to a responsible event by considering ticket-alert relations.
Finally, we aggregate the tickets that are linked to the events within the same event graph (i.e., incident), and the results are provided to the CSS (customer support service) team to accelerate ticket processing.
This work makes the following major contributions:
\begin{itemize}[leftmargin=*, topsep=0pt]
\item We are the first to propose to introduce cloud runtime information (i.e., alerts) to aggregate duplicate tickets.
We propose \nm to leverage the alert-alert relations and ticket-alert relations to achieve this goal.
\item We evaluate \nm on three datasets collected from the production environment of \cloud.
The evaluation results show that \nm outperforms state-of-the-art methods by 12.4\%$\sim$31.2\%, which confirms the effectiveness of \nm.
We also share our industrial experience of applying \nm in a large-scale cloud platform, \cloud.
\end{itemize}
\section{Methodology}
\subsection{Overview of \nm}
The goal of \nm is to aggregate duplicate tickets that are caused by the same cloud incident among all tickets.
Due to the large scale and heterogeneous architecture~\cite{cotroneo2019bad}\cite{yang2021aid}\cite{wang2021fast} of cloud systems, it is
insufficient to solely consider the textual similarity of tickets to achieve this goal.
To address this problem, we introduce cloud run-time information (i.e., alerts) and formulate it as a two-stage linking problem. Intuitively, \nm first finds links between alerts by leveraging alert-alert relations.
These inter-linked alerts constitute a graph to represent an incident.
Then \nm identifies the tickets that are caused by these alerts according to ticket-alert relations.
The tickets linked to the alerts within the same graph (i.e., incident) are aggregated.
Thus, we can aggregate the tickets with dissimilar semantics via the bridge of alert-alert links.
As shown in Fig.~\ref{fig: overall_framework}, \nm consists of three steps: \textit{alert parsing}, \textit{incident profiling} and \textit{ticket-event correlation}. In the \textit{alert parsing} step, we parse alerts as more coarse-grained \textit{events} to reduce redundant alerts.
Next, in the \textit{incident profiling step}, we propose a graph-based incident profiling (GIP) method to remove the regular events (i.e., parsed regular alerts) and link correlated indicative events.
Then, in the \textit{ticket-event correlation}, we propose an attentive interaction network (AIN) to correlate a ticket to an event.
Finally, if two tickets are correlated to the events within the same event graph (i.e., the same incident), we aggregate the tickets as the same cluster.
The results of the ticket aggregation are presented to the CSS (Customer Support Services) team to streamline ticket processing and improve efficiency. This allows support engineers to send out batch notifications to potentially affected customers and provide quick guidance for service recovery. Additionally, the results can aid on-call engineers in conducting impact assessments, including identifying affected services and determining the extent of customer impact caused by the incident (e.g., the number of affected customers).
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Figures/overall_framework.pdf}
\caption{The overall framework of \nm.}
\label{fig: overall_framework}
\end{figure}
\subsection{Alert Parsing}\label{sec: alert_parsing}
The title of an alert is generated following an engineer-specified template.
Monitors may be triggered multiple times during an incident, causing massive numbers of redundant alerts.
To reduce the volume of alerts and avoid redundancy, we parse each alert to its corresponding template and aggregate the alerts sharing the same template as an \textbf{event}.
Take $a_1$ in Table~\ref{tab: motivating_example} as an example: multiple similar alerts can fire concurrently, such as ``VMStart Failures exceed 100/150/200/250 times'', which are aggregated as ``VMStart Failures exceed~$\ast$~times''.
We formulate this problem as the well-studied log parsing problem~\cite{zhu2019tools} following~\cite{wang2021fast}. We propose to customize a widely-adopted log parsing algorithm, Drain~\cite{he2017drain}, to parse the alerts into templates (events).
Drain works by extracting the common parts of alert titles from each group of alerts, where the group is determined by calculating the overlap of words.
To enhance Drain in our scenario, we observe that if two alerts are reported by different monitors or belong to different components, the two alerts must have distinct templates.
Therefore, we first divide all alerts into different partitions according to both \textit{monitor~ID} and \textit{owning component}. We then apply Drain in each partition to extract the templates.
In this way, we can reduce the noise in each partition and also accelerate the processing via parallel computing.
Finally, each alert is parsed as an event, which introduces two features, i.e., \textit{event template} and \textit{event~ID} (a hash value of its template).
Within a fixed time window (Section~\ref{sec: incident_profiling}), for events sharing the same template, we retain the latest event and discard the rest to reduce the event volume.
The following steps are applied to events instead of raw alerts.
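A minimal sketch of the partitioned parsing, assuming the open-source \texttt{drain3} implementation of Drain and a dictionary-based alert representation (both are assumptions; \cloud's internal pipeline may differ):
\begin{verbatim}
from collections import defaultdict
from drain3 import TemplateMiner  # assumed open-source Drain implementation

# One Drain instance per (monitor ID, owning component) partition, so that
# alerts from different monitors or components never share a template.
miners = defaultdict(TemplateMiner)

def parse_alert(alert):
    # alert: hypothetical dict with "monitor_id", "component" and "title"
    key = (alert["monitor_id"], alert["component"])
    result = miners[key].add_log_message(alert["title"])
    template = result["template_mined"]  # e.g. "VMStart Failures exceed <*> times"
    return {"event_id": hash(template), "template": template, **alert}
\end{verbatim}
Since the partitions are independent, each one can be parsed in parallel in practice.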
\subsection{Incident Profiling}\label{sec: incident_profiling}
The goal of this step is to represent an incident via \textit{linking the correlated events that are caused by the same incident}.
In doing this, the linked events can then be used to bridge semantically different tickets in the next step (Section~\ref{sec: correlation}).
To learn relations between events, some existing solutions leverage manual annotations~\cite{chen2022online}\cite{chen2020identifying}, which are not practical because such labels are hard to obtain and usually insufficient in real-world practice.
While there are unsupervised solutions~\cite{zhao2020understanding}\cite{chen2021graph}, they require prior knowledge (e.g., the precise topology of cloud services) to estimate alert relations. However, such prior knowledge is usually inaccurate and requires extensive efforts to collect, update and validate~\cite{yang2021aid}\cite{chen2021graph}\cite{chen2022online}.
We propose an unsupervised approach, i.e., \underline{G}raph-based \underline{I}ncident \underline{P}rofiling (\pf), which does not rely on prior knowledge.
The input is a series of events within a time window, and the output is one or multiple graphs of the events.
Each graph profiles an incident containing indicative events related to the incident.
GIP has a \textit{static event relation learning} step and a \textit{dynamic event graph construction} step.
Intuitively, if two events are correlated, they tend to have been triggered frequently within short periods of each other in the history~\cite{chen2021graph}\cite{chen2022online}.
We model such frequent patterns in the first \textit{static event relation learning} step.
Then, in the \textit{dynamic event graph construction} step, we dynamically link the events exhibiting the learned frequent patterns and remove regular events at runtime.
\subsubsection{Static Event Relation Learning}\label{sec: static_event_relation_learning}
In this step, we assign a static score to each event pair weighing how likely they co-occur in history.
To this end, we first collect a series of historical events in chronological order.
Then we apply a \textit{four-hour-long} sliding time window on these events with a step size of one hour. We adopt \textit{four} hours as the window length because it can cover most alerts within an incident according to our study in Section~\ref{sec: alert_alert_relation}. The one-hour step size allows us to introduce enough new events for learning the static event relations while avoiding separating co-occurring events into two different windows.
Each window $w_i$ contains multiple events, i.e., $w_i = [e_1, e_2, e_3, ... ]$.
If two events appear in the same window, we count it as a co-occurrence.
Based on these windows, we compute the point-wise mutual information (PMI) score~\cite{wiki:Pointwise_mutual_information} for each event pair, which is a popular metric to measure co-occurrence associations~\cite{qin2021relation}\cite{yao2019graph}.
Formally, the PMI value for the event pair $(e_i, e_j)$ is:
\begin{align}
PMI(e_i, e_j) = \log\frac{p(e_i,e_j)}{p(e_i) p(e_j)},
\end{align}
where $p(e_i,e_j) = \frac{C(e_i, e_j)}{M}$, $p(e_i) = \frac{C(e_i)}{M}$. $C(e_i, e_j)$ denotes the number of windows that contain both $e_i$ and $e_j$, and $C(e_i)$ is the number of windows that contain $e_i$. $M$ is the total number of windows.
A higher PMI value indicates two events are more likely to co-occur in history, and a positive PMI value indicates they are more likely to co-occur than appear individually.
We use $d(e_i, e_j)$ to denote the pre-computed PMI value for the event pair $(e_i, e_j)$.
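The static relation learning can be sketched as follows, where each window is the list of event IDs produced by the four-hour sliding window (variable names are illustrative):
\begin{verbatim}
import math
from collections import Counter
from itertools import combinations

def learn_pmi(windows):
    # windows: list of event-ID lists, one per sliding time window
    M = len(windows)
    c_event, c_pair = Counter(), Counter()
    for w in windows:
        events = set(w)  # a co-occurrence is counted once per window
        c_event.update(events)
        c_pair.update(frozenset(p) for p in combinations(sorted(events), 2))
    d = {}
    for pair, c_ij in c_pair.items():
        e_i, e_j = tuple(pair)
        d[pair] = math.log((c_ij / M) /
                           ((c_event[e_i] / M) * (c_event[e_j] / M)))
    return d  # d[frozenset((e_i, e_j))] = PMI(e_i, e_j)
\end{verbatim}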
\begin{algorithm}[t]
\small
\caption{Dynamic Event Graph Construction}
\label{algo: graph_construction}
\SetAlgoLined
\KwIn{Pre-computed PMI values in $d$, a window of the $l$ latest events $w_j=[e_1, e_2, ..., e_l]$, hyper-parameter $\mu\in[0,1]$}
\KwOut{$g_o=\{g_1, g_2, ...\}$}
\kwInit{g $\leftarrow$ Empty undirected graph;
r $\leftarrow$ Empty list
}
\For{$i\leftarrow 1$ \KwTo $l$}{
\For{$j\leftarrow i$ \KwTo $l$}{
\If{
$d(e_i, e_j) > 0$
}{
g.AddWeightedEdge(($e_i$, $e_j$),
weight=$d$($e_i$, $e_j$))
}
}
}
\For{each node $e_i \in$ g}{
$\mathcal{W}$ = GetWeightsOfOutEdges($e_i$)
AscendingSort($\mathcal{W}$)
$\gamma$ = SearchKneePoint($\mathcal{W}$) // Kneedle algorithm
\If{$\gamma< \mu$} {
g.RemoveNode($e_i$)
}
}
$g_o \leftarrow$ GetSubGraphs($g$)
\end{algorithm}
\subsubsection{Dynamic Event Graph Construction}
We then dynamically construct event graphs in the runtime by utilizing the learned static PMI values.
The input to this step is the events collected within the latest four-hour-long time window.
The output is one or more event graphs, each of which contains correlated events caused by the incident.
Intuitively, we aim to link the events with high PMI values because they are possibly caused by the same ongoing incident in the runtime, considering they frequently co-occur in history.
However, regular (noisy) events tend to co-occur with various types of events because they frequently appear regardless of whether there is an incident.
In contrast, indicative events frequently co-occur with only a small portion of events.
Based on the difference between regular events and indicative events, we propose a novel algorithm to prune the regular events automatically, and the remaining indicative events are correlated.
The pseudocode of the algorithm is shown in Algorithm~\ref{algo: graph_construction}.
First, we link every pair of events with a positive PMI value, constituting a single initial event graph $g$ with the PMI values as edge weights (lines $1\sim 7$).
Then, for each node, we calculate a knee point (i.e., $\gamma$ in Algorithm~\ref{algo: graph_construction}) based on the PMI values of all its out edges, i.e., $\mathcal{W}$ (lines $9\sim 11$).
Specifically, we adopt the Kneedle algorithm~\cite{satopaa2011finding} to calculate $\gamma$.
A small $\gamma$ for a node denotes that most PMI values of its linked neighbors are large, namely, the node frequently co-occurs with many neighbors (events). This implies that the node is more likely to be a regular event.
Therefore, we remove the node if its $\gamma$ is less than a threshold $\mu$ (lines $12\sim 14$).
As revealed by previous studies~\cite{zhao2020understanding}\cite{chen2020incidental}, regular events make up a large portion of all events.
Therefore, we empirically set $\mu=0.8$ to remove most events aggressively, which turns out to be effective in our scenario (Section~\ref{sec: GIP_ablation_study}).
Finally, we extract subgraphs (i.e., connected components~\cite{wiki:Component_(graph_theory)}) from the pruned graph $g$ (line $16$).
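Algorithm~\ref{algo: graph_construction} can be sketched with off-the-shelf libraries as follows. The use of \texttt{networkx} and the \texttt{kneed} Kneedle implementation, as well as our reading of $\gamma$ as the knee's relative position in the ascending weight list, are assumptions for illustration:
\begin{verbatim}
import networkx as nx
from kneed import KneeLocator  # Kneedle algorithm

def build_event_graphs(events, d, mu=0.8):
    g = nx.Graph()
    for i, e_i in enumerate(events):      # lines 1-7: link positive-PMI pairs
        for e_j in events[i + 1:]:
            w = d.get(frozenset((e_i, e_j)), 0.0)
            if w > 0:
                g.add_edge(e_i, e_j, weight=w)
    for node in list(g.nodes):            # lines 9-14: prune regular events
        weights = sorted(w for _, _, w in g.edges(node, data="weight"))
        if len(weights) < 3:
            continue  # too few edges for knee detection (our assumption)
        x = [k / (len(weights) - 1) for k in range(len(weights))]
        gamma = KneeLocator(x, weights, curve="convex",
                            direction="increasing").knee
        # small gamma: most neighbours already have large PMI values,
        # i.e. the node co-occurs with many events -> likely regular
        if gamma is not None and gamma < mu:
            g.remove_node(node)
    return [g.subgraph(c).copy() for c in nx.connected_components(g)]
\end{verbatim}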
\subsection{Ticket-Event Correlation}\label{sec: correlation}
After profiling incidents as several event graphs (i.e., event-event linking), we correlate each ticket to the event that captures the internal cloud issue resulting in the ticket (i.e., event-ticket linking).
If two tickets are correlated to inter-linked events (i.e., they are caused by the same incident), we can then aggregate them as the same cluster.
We mainly address the challenge caused by the high cardinality of features of tickets and events (Section~\ref{sec: case_analysis}).
Inspired by factorization machine~\cite{rendle2010factorization} in the field of recommendation systems,
we propose an attentive interaction network (AIN), which decomposes feature combinations as Hadamard products of low-dimension feature embeddings.
In this way, we bypass directly encoding the exponentially-growing feature combinations with high-dimension feature vectors.
The input of AIN is a ticket-event pair and the output is a probability representing how likely the input pair is correlated.
Fig.~\ref{fig: AIN_framework} shows the overall framework of AIN composed of three layers, i.e., \textit{embedding layer}, \textit{attentive interaction layer}, and \textit{prediction layer}, which are elaborated as follows.
\textbf{Embedding Layer.} The embedding layer represents all features ($f_i$ for a ticket feature and $\hat{f}_i$ for an event feature) as trainable vectors (i.e., embeddings) denoted as $\mathbf{v_i}\in\mathbb{R}^k$, where $k$ is a user-defined hyper-parameter.
For \textit{summary} of tickets and \textit{event template} of events (denoted as $f_1$ and $\hat{f}_1$ in Fig.~\ref{fig: AIN_framework}), we resort to the power of pretrained model BERT (Bidirectional Encoder Representations from Transformers)~\cite{DBLP:conf/naacl/DevlinCLT19} to embed their semantics as vectors.
We exclude the detailed ticket description since it potentially introduces noise, and the summary already provides the essential information~\cite{yang2016combining}\cite{haering2021automatically}\cite{wang2008approach}.
The remaining features are initialized as random vectors.
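A sketch of the textual embedding, assuming the HuggingFace \texttt{transformers} API and a linear projection of BERT's [CLS] vector down to the embedding size $k$ (the projection is our assumption; the paper only states that BERT provides the semantic embedding):
\begin{verbatim}
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
proj = torch.nn.Linear(768, 128)  # BERT hidden size -> k

def embed_text(text):
    with torch.no_grad():
        out = bert(**tok(text, return_tensors="pt", truncation=True))
    cls = out.last_hidden_state[:, 0, :]  # [CLS] vector, shape (1, 768)
    return proj(cls).squeeze(0)           # shape (k,)
\end{verbatim}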
\textbf{Attentive Interaction Layer.}
After each feature is associated with an embedding vector, the attentive interaction layer models the feature combination of two features as the Hadamard product (i.e., element-wise product denoted as $\odot$) of their corresponding embedding vectors.
For $ \mathbf{u}=\mathbf{x}\odot \mathbf{y}$ we have $u_i=x_iy_i$.
The attentive interaction layer models combinations of features across a ticket and an event, formally,
\begin{align}
\mathbf{z} = \sum_i^n\sum_j^m a_{ij} (\mathbf{v}_i\odot \mathbf{v}_j), \label{equ: afm}
\end{align}
where $n$ and $m$ are the numbers of ticket and event features, respectively.
To identify the effective feature combinations for different ticket-event pairs, AIN computes an importance score $a_{ij}$ for each combination result $\mathbf{v}_i\odot \mathbf{v}_j$ in Equation~(\ref{equ: afm}).
Afterwards, these feature combinations are summarized as a single representation $\mathbf{z}\in \mathbb{R}^k$ by computing their weighted average.
The importance weight $a_{ij}$ is calculated as follows:
\begin{align}
\hat{a}_{ij} &= \mathbf{h}^T\phi(\mathbf{W}(\mathbf{v}_i\odot\mathbf{v}_j) + \mathbf{b}), \label{equ: afm_weight}\\
a_{ij} &= \frac{e^{\hat{a}_{ij}}}{\sum_i^n\sum_j^me^{\hat{a}_{ij}}}, \label{equ: afm_softmax}
\end{align}
where $\phi(x)=\max(0,x)$ is the ReLU activation function, and $\mathbf{h}\in\mathbb{R}^{r}$, $\mathbf{W}\in\mathbb{R}^{r\times k}$, and $\mathbf{b}\in\mathbb{R}^{r}$ are trainable parameters; $r$ is a hyper-parameter that denotes the size of the hidden layer.
Equation~(\ref{equ: afm_weight}) denotes a fully-connected (FC) neural network that takes the combination of two features as input and outputs its (unnormalized) importance weight $\hat{a}_{ij}$.
Equation~(\ref{equ: afm_softmax}) normalizes the importance weights to $[0,1]$.
The importance weights control how much each feature combination contributes to the prediction.
For example, in Equation~(\ref{equ: afm}), for $a_{ij}$ close to~1, its corresponding feature combination will dominate the summarized vector $\mathbf{z}$.
This means that the prediction mostly depends on the feature combination of $\mathbf{v}_i$ and $\mathbf{v}_j$. In addition, since the weights are automatically learned by the FC network in Equation~(\ref{equ: afm_weight}), AIN is effectively forced to select effective feature combinations when learning from the data.
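Equations~(\ref{equ: afm})--(\ref{equ: afm_softmax}) can be realised compactly in PyTorch; the sketch below is a minimal illustration under the default sizes $k$=128 and $r$=256 (the tensor layout is our assumption):
\begin{verbatim}
import torch

class AttentiveInteraction(torch.nn.Module):
    def __init__(self, k=128, r=256):
        super().__init__()
        # attention network h^T phi(W x + b): FC layer, ReLU, bias-free output
        self.att = torch.nn.Sequential(
            torch.nn.Linear(k, r), torch.nn.ReLU(),
            torch.nn.Linear(r, 1, bias=False))

    def forward(self, ticket_emb, event_emb):
        # ticket_emb: (n, k), event_emb: (m, k)
        inter = ticket_emb.unsqueeze(1) * event_emb.unsqueeze(0)  # Hadamard, (n, m, k)
        scores = self.att(inter).squeeze(-1)                      # unnormalized weights
        a = torch.softmax(scores.flatten(), 0).view_as(scores)    # softmax over pairs
        return (a.unsqueeze(-1) * inter).sum(dim=(0, 1))          # weighted sum z, (k,)
\end{verbatim}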
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Figures/ain.pdf}
\caption{The overall framework of AIN.}
\label{fig: AIN_framework}
\end{figure}
\textbf{Prediction Layer.}
We formulate ticket-event correlation as a binary classification problem.
Particularly, to calculate the correlation probability $p$, an FC neural network is applied on $\mathbf{z}$, i.e., $p = \sigma(\mathbf{w}_o^T\mathbf{z} + b_o)$,
where $\mathbf{w}_o\in\mathbb{R}^{k}$ and $b_o \in \mathbb{R}$ are trainable parameters, and $\sigma(x)={\frac {1}{1+e^{-x}}}$ is the Sigmoid function producing a probability within the range of $[0,1]$.
To update all trainable parameters, we utilize the popular Adam optimizer~\cite{diederik2015adam} to minimize the following binary cross-entropy loss $\mathcal{L}_{BCE}$ over training data of $N$ ticket-event pairs:
\begin{equation}
\mathcal{L}_{BCE} =-\sum_{i}^{N}\big(y_i \log{(p_i)} + (1-y_i)\log(1-p_i)\big),
\label{equ:bce}
\end{equation}
where $y_i=1$ for positive (i.e., correlated) ticket-event pairs and $y_i=0$ for negative (i.e., unrelated) pairs.
The positive samples are collected by extracting the responsible alert ID of a ticket from its resolution text written by support engineers (Section~\ref{sec: alert_ticket_relation}). Such data is thus gradually accumulated during support engineers' daily work routine, incurring no additional manual labeling effort.
We then randomly sample the same number of negative pairs.
The features used are \textit{event template}, \textit{event ID}, \textit{severity}, \textit{monitor ID}, \textit{owning service}, \textit{owning component} for events, and \textit{product name}, \textit{category}, \textit{summary} for tickets.
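Putting the layers together, a training sketch (reusing the \texttt{AttentiveInteraction} module above; the commented batch loop is illustrative):
\begin{verbatim}
class AIN(torch.nn.Module):
    def __init__(self, k=128, r=256):
        super().__init__()
        self.interact = AttentiveInteraction(k, r)
        self.out = torch.nn.Linear(k, 1)  # prediction layer (w_o, b_o)

    def forward(self, ticket_emb, event_emb):
        z = self.interact(ticket_emb, event_emb)
        return torch.sigmoid(self.out(z))  # correlation probability p

model = AIN()
opt = torch.optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()  # binary cross-entropy loss
# for ticket_emb, event_emb, y in training_pairs:  # hypothetical loader
#     loss = loss_fn(model(ticket_emb, event_emb), y)
#     opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}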
\subsection{Deployment}
\nm consists of offline parts (pre-computed) and online parts (serving continuously) for deployment in cloud systems.
The offline parts include alert parsing, static event relation learning and AIN training.
The online parts conduct alert parsing, dynamic event graph construction, and ticket-event correlation utilizing the trained AIN.
The details are as follows.
\subsubsection{Offline Parts}
Intuitively, the offline parts leverage the historical data to prepare intermediate data (e.g., PMI values) or models (e.g., AIN) for online use. Specifically, \nm parses all collected alerts into events (Section~\ref{sec: alert_parsing}). Then, static event relation learning is conducted (Section~\ref{sec: static_event_relation_learning}), which computes PMI values for all event pairs. The PMI values are then stored in a Redis database for reference.
After that, AIN is trained using historical ticket-event pairs.
\cloud continuously collects the alert and ticket data; in order to capture the latest system updates (e.g., new alerts), the offline parts are executed periodically (e.g., once every month).
\subsubsection{Online Parts}
In the online deployment, \nm is periodically executed (e.g., every five minutes) and pushes its latest analysis results to the CSS team. Support engineers can also manually trigger \nm when needed (e.g., a large volume of tickets are received).
Considering cloud services and customers are physically isolated in different regions, \nm is applied separately in different regions.
Once executed, \nm collects and analyzes the alerts and tickets within the latest four-hour-long time window. Filtering by region and time greatly reduces the volume of ticket-event pairs to consider.
The tickets and alerts in the same time window and region constitute a \emph{chunk}.
In each chunk, after parsing alerts as events, GIP is applied to link events as event graphs (i.e., incidents).
Then, we apply AIN to link each ticket to one of the events.
For each ticket, AIN recommends a list of events ranked by the associated correlation probabilities.
Note that we exclude the tickets whose largest probability in the ranked list is smaller than a confidence threshold $\theta=0.8$, because they are more likely caused by a customer-side issue (e.g., incorrect configurations).
Next, tickets that are correlated to the events within the same event graph are aggregated as a cluster.
Based on the aggregation results, on the one hand, on-call engineers can conduct impact assessment (i.e., how many customers are impacted) for an incident; on the other hand, the CSS team can avoid duplicate manual inspection and communicate with customers in batches (e.g., providing the latest mitigation progress of the internal incident).
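The online logic of one chunk can be summarised as follows (a sketch combining the components above; \texttt{ain\_rank} is a hypothetical helper returning AIN's (event, probability) list for a ticket, and \texttt{d\_pmi} is the pre-computed PMI table):
\begin{verbatim}
from collections import defaultdict

def aggregate_chunk(tickets, alerts, d_pmi, theta=0.8):
    events = [parse_alert(a) for a in alerts]
    graphs = build_event_graphs([e["event_id"] for e in events], d_pmi)
    clusters = defaultdict(list)
    for t in tickets:
        best_event, p = ain_rank(t, events)[0]  # top-ranked event
        if p < theta:
            continue  # likely a customer-side issue, not aggregated
        for idx, g in enumerate(graphs):
            if best_event in g:  # tickets sharing a graph share an incident
                clusters["incident-%d" % idx].append(t)
                break
    return clusters
\end{verbatim}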
\section{Related Work}
\subsection{Incident Analysis}
Researchers have devoted sustained efforts to empirical studies~\cite{gunawi2016does, huang2017gray, liu2019bugs, cotroneo2019bad, chen2020towards} of cloud incidents in the last few years. Gunawi et al.~\cite{gunawi2016does} discussed why outages still take place in cloud environments by analysing headline news and public postmortem reports of 32 popular Internet services. Huang et al.~\cite{huang2017gray} discussed their experiences with gray failure in production cloud-scale systems and demonstrated its broad scope and consequences. Chen et al.~\cite{chen2020towards} presented a comprehensive study on how alerts and incidents are managed in large-scale public clouds.
Cloud alerts are notorious for their great volume.
In general, there are two threads of studies proposed towards resolving the challenge. The major thread aims to correlate alerts that are caused by the same incident~\cite{wang2021fast}\cite{gu2020efficient}\cite{zhao2020understanding}.
Given the large number of alerts, Chen et al.~\cite{chen2020incidental} empirically found that only a small portion of alerts matter and proposed prioritizing alerts based on historical data.
Chen et al.~\cite{chen2020identifying}\cite{chen2019outage} proposed to predict the link between two alerts by combining alert textual information and the topology information among alerts (i.e., the topology of components that generate these alerts).
These studies either require experts' manual annotations~\cite{chen2022online}\cite{chen2020identifying} or precise system topology~\cite{zhao2020understanding}\cite{chen2021graph}. Differently, we propose GIP, which does not require such labels or prior knowledge to identify alert-alert relations. We further leverage the alert-alert relations to aggregate tickets for efficient processing and management.
\subsection{Issue Report Analysis}
Issue reports, including app reviews, user feedback, bug reports, test reports, GitHub issues, support tickets, etc., are crucial for service providers to gain a better understanding of their customers' experiences.
A large body of research has been devoted to the analysis of issue reports, covering topics such as duplicate bug report detection~\cite{wang2008approach}\cite{nguyen2012duplicate}\cite{zhou2012learning}\cite{budhiraja2018lwe}, emerging issue detection~\cite{gao2018online}\cite{zheng2019ifeedback}\cite{gao2019emerging}, bug reproduction~\cite{zhao2019recdroid}\cite{cao2014symcrash}, bug report summarization~\cite{rastkar2014automatic}\cite{li2018unsupervised} and empirical studies~\cite{kucuk2021characterizing}\cite{zou2018practitioners}\cite{ma2017developers}.
Most existing studies focus on natural language text information such as titles and descriptions. In addition, some recent attempts~\cite{he2020duplicate}\cite{liu2020clustering}\cite{cooper2021takes} proposed to jointly consider multi-modality features, e.g., text and images (such as app screenshots), which has become a growing trend in this research direction.
Unlike these studies, which focus purely on customer-side issue report information, in this work we also consider ongoing alerts and incidents in the complex cloud system. We aim to bridge the cloud alerts with cloud users' tickets to facilitate efficient ticket processing.
\section{Threats to Validity}
\textbf{External Validity.} The primary external threat lies in the object of our study. The data was collected from \cloud, as there is no publicly available dataset containing customer tickets and a large number of alerts. However, \cloud is a world-leading cloud provider with a vast scale. The data covers a broad range of services from various regions (Section~\ref{sec: dataset}). Hence, the evaluation in \cloud should be representative and convincing. Furthermore, \nm leverages the common features provided by the most popular cloud vendors (Section~\ref{sec: bg_alert_and_tickets}), making it capable of generalizing to similar cloud systems, potentially benefiting cloud customers globally.
\textbf{Internal Validity.} Implementation and parameter setting are the main internal threats to validity. For implementation, the baseline approaches are not open-sourced, so we re-implemented them by following the original papers closely. To reduce the implementation threat, we leveraged several mature libraries for implementing the core algorithms (Section~\ref{sec: baselines}). Both the proposed and baseline methods underwent peer code review. For parameter setting, we tuned all methods through grid-search and chose the best results.
\section{Introduction}
\label{sect:intro}
The forthcoming Extremely Large Telescopes (ELTs) \citep{eelt,tmt,gmt} all rely on adaptive optics (AO) systems
\citep{adaptiveoptics} to provide atmospheric turbulence compensation
allowing scientific goals requiring high resolution imaging and
spectroscopy to be met. The design of these AO systems requires
extensive modelling and simulation to enable performance estimates to
be made, and to explore relevant parameter spaces. Current modelling
tools fall into two broad categories: analytical models, and
Monte-Carlo simulations. Monte-Carlo simulations, while generally
computationally expensive for ELT-scale models, have the ability to
deliver high fidelity performance estimates, and include non-linear
effects, and as much system detail as is necessary (at the expense of
computational requirements).

Here, we use the Durham AO simulation platform (DASP) \citep{basden5,basden11} to model expected AO
performance for a multi-conjugate adaptive optics (MCAO) instrument on the 39~m European Extremely Large Telescope (E-ELT). DASP is a
Monte-Carlo end-to-end simulation tool that includes models of the
atmosphere, telescope, wavefront sensors, deformable mirrors, AO
real-time control system, and performance characterisation via
generation of science point spread function (PSF) images. We investigate AO performance as
a function of the number of laser guide stars (LGSs) \citep{laserguidestar}, the number
of deformable mirrors (DMs), DM actuator pitch and conjugate height, and explore
performance across the telescope field of view. We also investigate
the degree of elongation of LGSs, which is determined by sodium layer
depth in the mesosphere, and the impact of wavefront sensor (WFS) pixel scale on AO
performance under different signal-to-noise regimes. Our findings can
be used to aid instrument design decisions and to estimate expected
AO performance for future instruments, and are complementary to
results from other modelling tools for ELT MCAO instrumentation, for
example \citet{2014SPIE.9148E..6FA,miskaltao}.

In \S2 we describe our input models and the explored
parameter space and in \S3 we present the performance estimates
obtained. We conclude in \S4.
\section{Modelling of an E-ELT MCAO system}
We use DASP to investigate different configurations for an MCAO
system on the E-ELT. In the study presented here, we usually consider only
the use of LGSs, in order to simplify our results. We assume that
the tip-tilt signal from the LGSs is valid, so that natural guide star (NGS) measurements
are not necessary. However, we also present results when using low
order NGSs for tip-tilt correction (and ignore the tip-tilt signal from
the LGSs when doing so). A previous study has investigated many different
NGS asterisms for a multi-object AO (MOAO) instrument on the E-ELT \citep{basden17},
and so we do not seek to perform such a study here. We note that the
assumption of a valid LGS tip-tilt signal can be both pessimistic and
optimistic, depending on NGS asterism and LGS asterism diameter,
since the LGS locations are typically close to the edge of the field
of view, while the NGS locations can be spread over the field.

Our key AO performance metric is H-band Strehl ratio (1650~nm), as
this is an easily understandable measurement and of particular
relevance for imaging cameras, which are typically used behind MCAO
systems.

The E-ELT design includes a DM that is part of the telescope
structure (the fourth mirror in the optical train). We therefore use
this as the ground layer conjugated DM in our modelling, though it is
likely that this DM will actually be conjugated a few hundred meters
away from ground level, an effect that we investigate.
\subsection{Simulation model details}
The E-ELT design has four sodium laser launch locations, equally
spaced around the telescope just beyond the edge of the telescope
aperture, i.e.\ the lasers are side-launched, rather than centre
launched, so that fratricide effects are irrelevant
\citep{2013aoel.confE..58O}. At each launch location, up to two
lasers can be launched, with independent pointing possible. In the
model that we use here, the launch locations are placed 22~m from the
centre of the telescope aperture.
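For illustration, this launch and beacon geometry can be written
down in a few lines; the sketch below is our own (the function names
and arguments are not from the simulation package):
\begin{verbatim}
import numpy as np

def launch_positions(radius_m=22.0, n_launch=4):
    # Laser launch positions (metres), equally spaced around
    # the edge of the telescope aperture.
    theta = 2 * np.pi * np.arange(n_launch) / n_launch
    return radius_m * np.cos(theta), radius_m * np.sin(theta)

def lgs_asterism(diam_arcmin=2.0, n_lgs=6):
    # On-sky LGS directions (arcseconds), evenly spaced on a
    # circle of the given diameter.
    r = diam_arcmin * 60.0 / 2.0
    theta = 2 * np.pi * np.arange(n_lgs) / n_lgs
    return r * np.cos(theta), r * np.sin(theta)
\end{verbatim}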
We model the atmosphere using a standard European Southern
Observatory (ESO) 35 layer turbulence profile \citep{35layer} with
an outer scale of 25~m and a Fried parameter of 13.5~cm at zenith.
AO performance under variations of this atmospheric profile has
been studied previously \citep{basden17}, so we do not consider this
further here: it is important to realise that the performances
derived here are relevant for one atmospheric model only, and that
the AO performance will differ under other atmospheric conditions.
Our simulations are performed at 30$^\circ$ from zenith.
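For reference, the Fried parameter at this zenith angle follows from
the standard Kolmogorov scaling $r_0(z)=r_0\cos^{3/5}(z)$; a minimal
worked check (our own illustration, not part of the simulation
code):
\begin{verbatim}
import numpy as np

def r0_at_zenith_angle(r0_zenith_cm, zenith_deg):
    # Standard scaling: r0(z) = r0 * cos(z)**(3/5).
    return r0_zenith_cm * np.cos(np.radians(zenith_deg)) ** 0.6

print(r0_at_zenith_angle(13.5, 30.0))  # ~12.4 cm at 30 degrees
\end{verbatim}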
We assume $74\times74$ sub-apertures for each LGS, and a telescope
diameter of 38.55~m, with a central obscuration diameter of 11~m. We
model a direction-dependent telescope pupil function, since for the
E-ELT, the observed central obscuration changes across the field of
view. Our model includes the hexagonal pattern of the segmented
mirrors, and the telescope support structures (spiders), with a
representative pupil function shown in Fig.~\ref{fig:pupil}.
\begin{figure}
\includegraphics[width=\linewidth]{pupil.eps}
\caption{A representative E-ELT pupil function, for a line of sight
1.5~arcminutes off-axis. The hexagonal pattern of the segmented
primary mirror is evident, and a slight vignetting by M4 is seen
around the central obscuration (with the effect becoming more
pronounced further off-axis). A slight defocusing can also be seen,
due to the different conjugate heights of different mirrors in the
optical train.}
\label{fig:pupil}
\end{figure}
Our LGS asterism is arranged regularly on a circle, the diameter of
which we investigate. The LGS spots are elongated to model a sodium
layer at 90~km with a full-width at half-maximum (FWHM) between
5--20~km, and we include the cone effect (or focal anisoplanatism,
due to the finite LGS distance) in our simulations. The LGS point
spread functions (PSFs) are atmospherically broadened to
1~arcsecond, such that the minimum spot size has a 1~arcsecond FWHM
along the LGS axis. The LGS photon return is generally assumed to
be in the high-light regime (we use $10^6$ detected photons per
sub-aperture per frame) unless otherwise stated. However, we also
investigate a more realistic photon flux: the flux from the ESO
Wendelstein LGS unit returns between 5--21 million photons per second
per m$^2$ \citep{caliaPrivate}, depending on location on the sky.
With a 500~Hz frame rate and 0.5~m sub-apertures, we can therefore
expect between 2500--10000 photons per sub-aperture per frame, with
additional reductions due to throughput losses and detector quantum
efficiency. We include photon shot noise in our simulations.
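The quoted signal levels follow directly from the return flux; a
back-of-envelope sketch using the numbers above (neglecting
throughput and detector quantum efficiency, as noted):
\begin{verbatim}
def photons_per_subap(flux_ph_s_m2, frame_rate_hz=500.0,
                      pitch_m=0.5):
    # Detected photons per sub-aperture per frame from the
    # return flux in photons per second per square metre.
    return flux_ph_s_m2 * pitch_m ** 2 / frame_rate_hz

print(photons_per_subap(5e6))   # 2500 photons/subap/frame
print(photons_per_subap(21e6))  # ~10500, i.e. about 10000
\end{verbatim}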
We model detector readout noise at 0, 0.1 and 1 electrons rms per
pixel, corresponding to typical levels from a noiseless detector
(the default case), an electron multiplying CCD (EMCCD), and a
scientific CMOS (sCMOS) detector respectively. For simplicity, we
assume that all pixels have the same rms readout noise. This is not
the case for sCMOS technology, meaning that our results will be
slightly optimistic. However, this assumption has been explored
elsewhere \citep{basden19}.
We include a wavefront slope linearisation algorithm using a look-up
table to reduce the effect of non-linearity in the wavefront sensor
(WFS) measurements. The LGS wavelength is 589~nm. We measure science
PSF performance at 1.65~$\mu$m.
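The look-up table linearisation mentioned above can be sketched as a
mapping from measured to true spot displacement; the calibration
values below are invented for the example, and this is not the exact
algorithm used:
\begin{verbatim}
import numpy as np

# Hypothetical calibration: measured centroid (pixels) against
# the known displacement that produced it, e.g. from scanning
# a spot across a sub-aperture.
measured_px = np.array([-2.0, -1.2, -0.5, 0.0, 0.5, 1.2, 2.0])
true_px     = np.array([-2.5, -1.4, -0.55, 0.0, 0.55, 1.4, 2.5])

def linearise(slope_px):
    # Interpolate in the look-up table to correct the measured
    # slope for centroid gain non-linearity.
    return np.interp(slope_px, measured_px, true_px)
\end{verbatim}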
We perform tomographic wavefront reconstruction at the conjugate
heights of the MCAO DMs using a minimum mean square error (MMSE)
algorithm \citep{map} with a Laplacian regularisation to approximate
the wavefront phase covariance. We assume a WFS frame rate of
500~Hz, and ensure that the science PSFs are well averaged.
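In outline, such a reconstructor solves a regularised least-squares
problem; a minimal dense sketch of the general form (our own
illustration, assuming uniform noise variance, rather than the
actual implementation of \citet{map}):
\begin{verbatim}
import numpy as np

def mmse_reconstructor(G, L, noise_var=1.0, prior=1.0):
    # G: (nslopes, nacts) interaction matrix.
    # L: (nacts, nacts) discrete Laplacian, approximating the
    #    inverse wavefront phase covariance.
    # Returns R such that actuator commands = R @ slopes.
    A = G.T @ G / noise_var + prior * (L.T @ L)
    return np.linalg.solve(A, G.T) / noise_var
\end{verbatim}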
The DMs are modelled using a cubic spline interpolation function,
which uses given actuator heights and positions to compute a surface
map of the DM. The ground layer conjugate DM (M4) has $75\times75$
actuators, while the pitch of the higher layer conjugate DMs is
explored (with the number of required actuators depending on
conjugate height, pitch and field of view). We do not consider DM
imperfections, as this has been studied previously \citep{basden15}.
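A minimal sketch of such a spline-based DM surface model, here using
scipy on a square actuator grid (the sampling choices are
illustrative only):
\begin{verbatim}
import numpy as np
from scipy.interpolate import RectBivariateSpline

def dm_surface(act_heights, pupil_npix):
    # Interpolate an (n x n) grid of actuator heights on to a
    # pupil-sampled surface map with bicubic splines.
    n = act_heights.shape[0]
    c = np.linspace(0.0, 1.0, n)
    spline = RectBivariateSpline(c, c, act_heights, kx=3, ky=3)
    p = np.linspace(0.0, 1.0, pupil_npix)
    return spline(p, p)

surface = dm_surface(np.zeros((75, 75)), 512)
\end{verbatim}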
Unless otherwise stated, we use the following default parameters in
the results presented here: 6 LGSs evenly spaced around a
2~arcminute diameter circle, at 90~km with a sodium layer FWHM of
5~km, 3 DMs conjugated to 0~km, 4~km and 12.7~km, with a 1~m
actuator pitch (when propagated to the telescope pupil) for the
higher DMs, and a 0.52~m actuator pitch for the ground layer DM
(equal to the sub-aperture pitch). We note that the assumption of a
5~km FWHM sodium layer is optimistic; however, it was chosen to
alleviate spot truncation \citep[which is an issue studied
elsewhere, e.g.\ ][]{2011aoel.confE..67V} when using our default LGS
pixel scale of 0.23~arcsec/pixel (chosen to reduce the computational
complexity of our simulations). A 10~km width is more typical, while
a 20~km width is considered pessimistic. We also note that the DM
conjugate heights are chosen to match tentative designs for the
first E-ELT MCAO system, and that they are similar to those of the
GeMS system on the Gemini South telescope
\citep{2012SPIE.8447E..0IRshort}. We use $16\times16$ pixels per
sub-aperture. Results are presented on-axis, except where stated
otherwise.
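For convenience, these defaults are restated compactly below (a
plain summary, not a configuration file of the simulation package):
\begin{verbatim}
DEFAULTS = {
    "n_lgs": 6,                      # evenly spaced on a circle
    "lgs_asterism_diam_arcmin": 2.0,
    "sodium_altitude_km": 90.0,
    "sodium_fwhm_km": 5.0,
    "dm_conjugates_km": (0.0, 4.0, 12.7),
    "dm_pitch_m": (0.52, 1.0, 1.0),  # ground DM, then higher DMs
    "lgs_pixel_scale_arcsec": 0.23,
    "subap_pixels": 16,              # 16x16 per sub-aperture
    "frame_rate_hz": 500.0,
    "zenith_angle_deg": 30.0,
}
\end{verbatim}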
Typically, we run our simulations for 5000 Monte-Carlo iterations,
representing 10~s of telescope time, which is sufficient to obtain a
well averaged PSF. The uncertainties in our results due to
Monte-Carlo randomness are at the 1\% level, which we have verified
using a suite of separate Monte-Carlo runs.
\section{Predicted performance and uncertainties for E-ELT MCAO}
A key cost driver for AO instruments on the E-ELT is the number of
LGSs required. Fig.~\ref{fig:nlgs} shows predicted AO performance
at the centre of the field of view as a function of LGS asterism
diameter, using different numbers of LGSs. It can be seen that, as
expected, performance increases with the number of guide stars.
However, the gain in performance moving from 6 to 8 LGSs (typically
10--20\%) is not as significant as moving from 4 to 6 (a 30--70\%
gain), suggesting that using 6 LGSs would present a good trade-off
between cost and performance.
\begin{figure}
\includegraphics[width=\linewidth]{plotnlgs.eps}
\caption{A figure showing on-axis MCAO performance as a function of
  LGS asterism diameter. The different curves are for different
  numbers of LGSs, as given in the legend.}
\label{fig:nlgs}
\end{figure}
Field uniformity is not greatly affected by the number of LGSs, as
shown in Fig.~\ref{fig:nlgsfield}: the predicted AO performance
remains reasonably constant across the central 2 arcminutes,
independent of the number of LGSs used (though with a uniformly
lower performance when fewer LGSs are used). For comparison, we note
that we obtain an H-band Strehl ratio of about 50\% when modelling a
single conjugate AO (SCAO) system under identical conditions, with
$74\times74$ sub-apertures, high light level assumed, and an
integrator control law, in agreement with other studies
\citep{2011aoel.confP..23C}.
\begin{figure*}
(a)\includegraphics[width=0.45\linewidth]{plotstrehlMap60_4lgs.eps}
(b)\includegraphics[width=0.45\linewidth]{plotstrehlMap100_4lgs.eps}\\
(c)\includegraphics[width=0.45\linewidth]{plotstrehlMap60_1_LGS.eps}
(d)\includegraphics[width=0.45\linewidth]{plotstrehlMap100_1_LGS.eps}\\
(e)\includegraphics[width=0.45\linewidth]{plotstrehlMap60_8lgs.eps}
(f)\includegraphics[width=0.45\linewidth]{plotstrehlMap100_8lgs.eps}
\caption{A figure showing MCAO Strehl ratio across a 3.33 arcminute
  field of view, for: (top row) 4 LGSs, (middle row) 6 LGSs, (bottom
  row) 8 LGSs, and for (left column) LGSs on a 2~arcminute diameter
  ring, (right column) LGSs on a 3.33~arcminute diameter ring. The
  LGS positions are shown by orange triangles, and the science PSF
  sampling locations by blue diamonds.}
\label{fig:nlgsfield}
\end{figure*}
\subsection{Dependence on DM conjugation}
Fig.~\ref{fig:dmheight} shows MCAO performance as a function of DM
conjugation height, in the case of a 2 DM system with 6 LGSs, with
the lower DM conjugated to ground level. For comparison, the
performance with 3 DMs conjugated at 0~km, 4~km and 12.7~km can be
taken from Fig.~\ref{fig:nlgs}. We can see that the best performance
(for the particular $C_n^2$ profile used, shown in
Fig.~\ref{fig:cn2}) is obtained with the upper DM conjugated at
12~km, and that performance with 3 DMs is significantly better than
with 2. Comparison with the $C_n^2$ profile (Fig.~\ref{fig:cn2})
makes it evident that it is important to place the DMs according to
where significant turbulence strength lies.
\begin{figure}
\includegraphics[width=\linewidth]{plotdmheight.eps}
\caption{A figure showing on-axis Strehl ratio as a function of upper DM
conjugate height for a 2 DM MCAO system, with 6 LGSs.}
\label{fig:dmheight}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{plotcn2.eps}
\caption{The $C_n^2$ profile used in these simulations.}
\label{fig:cn2}
\end{figure}
\subsection{Dependence on DM actuator pitch}
Fig.~\ref{fig:dmpitch} shows MCAO performance as a function of
above-ground DM actuator pitch, for both the 2 DM case (with the
upper DM conjugated at 12~km), and the 3 DM case. The pitch of the
ground layer DM is maintained at 52~cm. It is evident that using a
1~m actuator pitch for above-ground layer DMs will lead to a small
performance degradation, compared to a smaller DM pitch. Using a
larger DM actuator pitch results in predicted AO performance falling
quickly. A 0.75~m pitch delivers almost identical performance to a
0.5~m pitch. Therefore, when performing a cost-benefit analysis, it
would seem that the reduction in performance when using a 1~m
actuator pitch is acceptable given the cost reduction, but that
further increases in pitch would yield a significant performance
reduction.
\begin{figure}
\includegraphics[width=\linewidth]{plotpitch.eps}
\caption{A figure showing on-axis Strehl ratio as a function of above-ground
layer DM actuator pitch, for 2 and 3 DMs, and for LGS asterism
diameters of 2 and 3.33 arcminutes, as given in the legend.}
\label{fig:dmpitch}
\end{figure}
\subsection{Conjugation height of ground layer DM}
The adaptive M4 mirror for the E-ELT is not optically conjugated to
the ground layer, but rather a few hundred metres above it.
Fig.~\ref{fig:groundheight} shows on-axis AO system performance as a
function of ground-layer DM conjugate height, and it is evident that
although there may be some variation in performance, this is small
over the range of likely conjugate heights, and so can be ignored.
We do not take into account the differential conjugate height across
the DM that results from the E-ELT design (i.e.\ since the DM is
tilted, one side has a lower optical conjugate than the other).
Instead, we cover the range of conjugate heights.
\begin{figure}
\includegraphics[width=\linewidth]{plotgroundheight.eps}
\caption{A figure showing on-axis AO performance as a function of lowest DM
conjugation height, for a 6~LGS (2~arcminute diameter spacing), 3~DM MCAO system.}
\label{fig:groundheight}
\end{figure}
\subsection{Performance variation with LGS pixel scale}
The default LGS pixel scale used throughout these simulations
equates to 0.23~arcsec/pixel, i.e.\ a WFS field of view of
3.73~arcsec. This relatively small field of view is used to reduce
computational requirements. As a result, our default sodium layer
depth is also relatively narrow, at 5~km FWHM, to avoid significant
spot truncation. Therefore, we investigate AO performance as a
function of both pixel scale and sodium layer depth, as shown in
Fig.~\ref{fig:sodiumdepth}. Since AO performance is highly dependent
on sub-aperture noise, which is affected by signal level, detector
characteristics, pixel scale and sodium layer depth, we also
investigate different noise levels here.
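The WFS field of view quoted above follows trivially from the pixel
scale and pixel count; for orientation:
\begin{verbatim}
def subap_fov_arcsec(pixel_scale_arcsec, n_pixels=16):
    # Sub-aperture field of view for a given pixel scale.
    return pixel_scale_arcsec * n_pixels

print(subap_fov_arcsec(0.233))  # ~3.73 arcsec (default case)
print(subap_fov_arcsec(0.7))    # ~11.2 arcsec
\end{verbatim}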
It can be seen that in all cases, there is an optimum pixel scale
for a given sodium FWHM, signal level and readout noise level. We
investigate detected photon fluxes of 100, 1000, 2500, 10000 and
10$^6$ photons per sub-aperture per frame, and consider readout
noise levels of 0, 0.1 and 1 photo-electrons rms (representing
noiseless, EMCCD and sCMOS technologies). We note that a likely
signal level is between 1000--10000 photons per sub-aperture per
frame, once throughput losses have been taken into account.
It can be seen that for the lowest signal levels with the highest
readout noise and the largest LGS spots (as seen on the detector,
i.e.\ large sodium layer depth and small pixel scale), AO correction
is very poor, or fails. In these cases, it would probably be
possible to fine tune the wavefront reconstruction algorithms, and
use an optimal sub-aperture processing algorithm to improve
performance. However, we do not consider this here, as such
optimisation is beyond the scope of this paper.
At the likely signal levels of between 1000--10000 photons per
sub-aperture per frame, and a realistic sodium layer depth FWHM of
10~km, these results suggest that a pixel scale of
0.6--0.7~arcseconds per pixel is reasonable, being robust to changes
in the sodium depth (i.e.\ if the sodium layer depth changes,
performance will not be significantly affected). If the sodium layer
has a greater extent, then a slightly larger pixel scale (say
0.8~arcseconds per pixel) may be favourable for low signal-to-noise
cases.
High signal-to-noise cases (high light level, low readout noise) are
seen to favour smaller pixel scales (around 0.4~arcseconds per
pixel), due to the increased WFS sensitivity to spot motion
(detectable phase gradient resolution). It can also be seen that at
these light levels, provided the pixel scale is large enough,
increasing the sodium layer depth does not significantly impact AO
performance, i.e.\ the performance curve has a broad peak. However,
at very small pixel scales, significant truncation of LGS spot
images occurs, resulting in reduced performance. With large sodium
layer depths, a larger field of view can lead to increased
performance, due to reduced spot truncation, and hence higher
sensitivity.
We note for the default case (5~km depth, high light level, no
noise), that the variation in performance with pixel scale is small
(about 10\% in Strehl). Therefore, the results presented in the rest
of this paper using the default parameters are unlikely to be much
different when larger pixel scales are used.
\begin{figure*}
(a)\includegraphics[width=0.45\linewidth]{plotpxlScaleNoiseFHWM5000.eps}
(b)\includegraphics[width=0.45\linewidth]{plotpxlScaleNoiseFHWM10000.eps}
(c)\includegraphics[width=0.45\linewidth]{plotpxlScaleNoiseFHWM15000.eps}
(d)\includegraphics[width=0.45\linewidth]{plotpxlScaleNoiseFHWM20000.eps}
\caption{A figure showing on-axis AO performance (H-band Strehl) as a
function of wavefront sensor pixel scale, for (a) a 5~km sodium
layer FWHM, (b) a 10~km sodium layer FWHM, (c) a 15~km sodium layer
FWHM and (d) a 20~km sodium layer FWHM. Different signal levels and
readout noise levels are shown, as given by the legend in (a), using
$16\times16$ pixel sub-apertures. In summary, from dark to light
represents increasing photon flux (sig, in photons per sub-aperture
per frame), solid lines have no readout noise, dashed lines have
0.1e- noise, and dotted lines have 1e- readout noise.}
\label{fig:sodiumdepth}
\end{figure*}
Fig.~\ref{fig:elongation} shows the degree of laser guide star (LGS)\renewcommand{\lgs}{LGS\xspace}\renewcommand{\lgs}{LGS\xspace}\renewcommand{laser guide stars (LGSs)\renewcommand{\lgs}{LGS\xspace}\renewcommand{\lgs}{LGS\xspace}\renewcommand{\lgss}{LGSs\xspace}\xspace}{LGSs\xspace}\xspace elongation and
truncation for different pixel scales and sodium layer depths for
sub-apertures far from the laser launch locations (40~m away). When
computing wavefront slopes for these sub-apertures, we use a
conventional centre of gravity algorithm; we do not explicitly take
into account the different slope noise characteristics parallel and
perpendicular to the elongation direction, nor do we explicitly deal
with the bias introduced by spot truncation, i.e.\ our wavefront
reconstruction is slightly pessimistic, and could be improved upon in
a separate study.
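For concreteness, a minimal sketch of the centre-of-gravity slope
estimate assumed above is given below (Python; the function and
variable names are ours, and a real implementation would typically add
thresholding, weighting, or a correlation-based alternative):
\begin{verbatim}
import numpy as np

def cog_slopes(subap_img, pixel_scale):
    """Plain centre-of-gravity (CoG) slope estimate for one
    sub-aperture image.  pixel_scale is in arcseconds per pixel; the
    returned slopes are the spot-centroid offsets from the
    sub-aperture centre, in arcseconds.  No thresholding or weighting
    is applied, so this shares the truncation bias discussed above."""
    ny, nx = subap_img.shape
    total = subap_img.sum()
    y, x = np.mgrid[0:ny, 0:nx]
    cx = (x * subap_img).sum() / total      # centroid, in pixels
    cy = (y * subap_img).sum() / total
    sx = (cx - (nx - 1) / 2.0) * pixel_scale
    sy = (cy - (ny - 1) / 2.0) * pixel_scale
    return sx, sy
\end{verbatim}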
\begin{figure}
\includegraphics[width=\linewidth]{plotelongation.eps}
\caption{A figure showing a single sub-aperture as simulated here,
40~m from the laser launch location, for different sodium layer
depths and pixel scales as given in the figure.}
\label{fig:elongation}
\end{figure}
The LGS spots are truncated at the edge of the sub-apertures, and we
assume that a field-stop is in place to prevent leakage between
sub-apertures.
\subsection{LGS sub-aperture size}
Larger sub-apertures require detectors with more pixels, more
real-time control system (RTCS) computation power, and more expensive
detectors, and generally result in more costly AO systems. We
therefore explore AO
performance as a function of sub-aperture size (in terms of pixel
count), in Fig.~\ref{fig:subapsize}. It is evident here that larger
sub-apertures are favourable, primarily to avoid spot truncation.
With increased pixel scale (compressing the LGS point spread function (PSF) into fewer
pixels), the number of pixels required (sub-aperture pixel size) can
be reduced with little performance loss. However, with an increased
sodium layer depth (more elongated spots), resulting in increased spot
truncation for smaller sub-apertures, the pixel count cannot be
reduced without a more significant effect on performance, and larger
pixel count sub-apertures are favoured.
\begin{figure}
\includegraphics[width=\linewidth]{plotsubapsize.eps}
\caption{A figure showing on-axis AO performance as a function of sub-aperture
size, for different pixel scales and sodium layer depths (as given
in the legend).}
\label{fig:subapsize}
\end{figure}
There is evidently a trade-off to be made, between sub-aperture size
and system cost. We suggest that a minimum of $10\times10$ pixels per
sub-aperture would be appropriate for a pixel scale of 0.7~arcseconds
per pixel, with a total field of view of 7~arcseconds or more, though
this is highly dependent on the actual sodium layer profile. We note
that this is a minimum requirement, and that greater performance would
be achieved using a larger number of pixels, particularly when the
sodium layer depth is more extensive, provided that readout noise does not
dominate.
\subsection{Operation with NGS}
Fig.~\ref{fig:ngstt} shows predicted AO performance as a function of
distance from the on-axis direction, for a system using 3 natural
guide stars (NGSs) for tip-tilt correction, and 6 LGSs for higher
order correction. This can be compared directly with
Fig.~\ref{fig:nlgsfield}(c), which uses only LGSs (which are there
used, unphysically, for tip-tilt correction). Performance is very
similar in both cases (though not identical), which confirms that the
simplification made when using only LGSs is able to provide a
reasonably reliable performance estimate. As stated previously, we
note that actual performance will depend somewhat on the NGS asterism
shape and the availability of suitably bright targets, though this is
beyond the scope of the study presented here. In the case presented
here, the 3 NGSs were equally spaced around a 2~arcminute diameter
circle, as were the 6 LGSs.
\begin{figure}
\includegraphics[width=\linewidth]{plotstrehlMap60_1_LGS+NGSTT.eps}
\caption{A figure showing Strehl across a 3.33 arcminute field of view
for tip-tilt correction performed using 3 NGS, and higher order
correction using 6 LGS. This can be compared directly with
Fig.~\ref{fig:nlgsfield}(c) which shows LGS correction only (i.e.\ tip-tilt
correction is performed using the LGS). The LGS positions are shown
by orange triangles, the NGS positions by red circles, and the science PSF sampling
positions by blue diamonds.}
\label{fig:ngstt}
\end{figure}
\subsection{Comparisons with other simulation results}
A direct comparison with previous simulation results is difficult,
due to differences in atmospheric models, science wavelength, numbers
of sub-apertures, telescope size, and other modelling uncertainties
and differences. However, verification of performance trends is
possible. We find that our estimated performance for basic models is
broadly similar to other Extremely Large Telescope (ELT) Monte-Carlo
models \citep{miskaltao,2011aoel.confE..63T,2010aoel.confE2013F}, and
slightly pessimistic when compared with analytical models of AO
performance \citep{2008JOSAA..26..219N}, as expected. The
consideration of deformable mirror (DM) conjugation is largely
independent of telescope diameter, and our results are similar to
those given by \citet{2003A&A...404.1165F,2000SPIE.4007.1032F}.
Similarly, a study of LGS pixel count was carried out by
\citet{2011aoel.confE..67V}. A combined study of pixel scale, sodium
layer depth and LGS signal level is new here.
\section{Conclusions}
We have performed AO performance modelling for a multi-conjugate
adaptive optics (MCAO) system on the European Extremely Large
Telescope (E-ELT), using a Monte-Carlo, end-to-end AO simulation code,
and have investigated the number of LGSs, the DM configuration, the
LGS pixel scale, and the sodium layer depth. We find that using 6
LGSs seems to be a good compromise between performance and cost. The
use of 3 DMs, rather than 2, provides a significant performance
advantage, though it is possible to reduce the actuator pitch of these
DMs to below that of the WFSs without significant performance loss,
hence reducing system cost. We find that the ideal pixel scale and
WFS field of view depend on the sodium layer profile, and suggest that
a field of view should be chosen that is sufficient to encompass all
likely sodium layer profile depths. A pixel scale of at least
0.7~arcseconds per pixel is necessary, and at least $10\times10$
pixels per sub-aperture should be used, for the simplified Gaussian
profiles used here. We note that this is likely to lead to spot
truncation, and there is a trade-off between truncation and
sensitivity. We also find that, as expected, larger sub-apertures (in
terms of pixel count) offer better performance, as this reduces
clipping of the elongated LGS spots.
\section*{Acknowledgements}
This work is funded by the UK Science and Technology Facilities
Council, grant ST/K003569/1, and a consolidated grant ST/L00075X/1.
Helpful comments from Tim Morris are acknowledged.
\bibliographystyle{mn2e}
\section{Introduction}
Quark-hadron duality, in its most general form, is the notion
that certain rates for processes involving hadrons can be computed
simply as the underlying partonic rates\cite{Bloom}. Duality allows
us to compute many quantities which would otherwise be hopelessly
difficult. One common application of duality is to the nonleptonic
weak decays of heavy hadrons. The lore is that, for large enough
heavy quark mass, duality holds in the computation of the hadronic
width.
Several discrepancies between theory and experiment that have
recently received attention rely on quark-hadron duality. Among them
are the significant difference between lifetimes of beauty baryons and
mesons\cite{Ric}, the overestimates of the $B$-meson semileptonic
branching fraction\cite{bigi2} and the average number of charm quarks
per $B$ decay\cite{bagan}. Because the limit of experimental
knowledge about nonleptonic $B$ decays is rapidly expanding, such
issues are of great topical interest.
But when is duality valid? In many cases duality follows from
the Operator Product Expansion (OPE). This is the case, for example,
for the rate of $e^+e^-\rightarrow\hbox{hadrons}$ and for the
semileptonic decay rates of heavy hadrons. However, duality is
applied in many other cases, such as in hadronic widths of heavy
hadrons, for which there is no OPE.
Reference~\cite{bigi1} proposes an OPE-like expansion in
inverse powers of the heavy quark mass~$M$, which not only
incorporates quark-hadron duality as the lowest term in the expansion,
but also organizes the corrections by inverse powers of $M$. A main
result of that work is the claim that corrections first appear at
order $1/M^2$. The question above can be reformulated as, ``Is an
OPE-like expansion like that of Ref.~\cite{bigi1} valid?''
To investigate the validity of duality it is convenient to
work with a soluble model of strong interactions formulated as a
full-fledged field theory, so that one may test duality both in cases
with and without an OPE. The 't~Hooft model\cite{tH}, large-$N_c$ QCD
in $1+1$ dimensions, is a good laboratory for this purpose. It
contains an infinite spectrum of mesons composed of confined quarks,
realizes asymptotic freedom trivially, and inherits all the
phenomenological consequences of large-$N_c$ QCD\cite{tHN} common to
our universe, such as dominance of scattering amplitudes with the
minimum number of meson states, OZI suppression, the absence of
exotics, and others. For processes with an OPE, duality in the 't
Hooft model has been checked explicitly\cite{CCG,Ein}. However,
little is known about duality for non-OPE processes. The reason is
that, in precisely those cases for which an OPE is lacking, there is
no simple analytical method of verifying duality, and one must resort
to arguments based on numerical solutions.
In this paper we compute the hadronic weak decay width
$\Gamma(M)$ of heavy ``$B$'' mesons in the 't~Hooft model as a
function of the heavy ``$b$'' quark mass~$M$; the meaning of ``heavy''
is made precise in Sec.~\ref{review}. We compare this to the partonic
(perturbative) decay width of the heavy quark, $\Gamma_{\rm part}(M)$,
which we compute analytically. For large $M$ we find that both
$\Gamma_{\rm part}(M)$ and $\Gamma(M)$ are essentially linear in
$M$. The difference between the two appears to be asymptotically
constant and small, indicating a small $1/M$ correction to the naive
duality limit. As $M$ increases, new hadronic decay channels become
accessible, and at each of these thresholds there is a singular peak
in $\Gamma(M)$. Averaging $\Gamma(M)$ over a region in $M$ that
includes many resonances removes these peaks but does not change the
leading dependence on $M$. {\it Our conclusion is that duality holds
to leading order in $M$, but unlike the OPE-like expansion of
Ref.~\cite{bigi1}, appears to have $1/M$ corrections.}
In Ref.~\cite{roma} it is argued that there is strong experimental
evidence for the failure of duality. What is meant there is that the
pattern of corrections in powers of $1/M$ of Ref.~\cite{bigi1} is not
supported by experiment. This agrees with our result, which indicates
a violation of duality at first order in the $1/M$ expansion of
Ref.~\cite{bigi1}.
While we cannot carry over our quantitative results to the
physical world of non-planar QCD in $3+1$ dimensions, we believe that
there is nothing intrinsic to 1+1 dimensions that would make duality
work differently than in 3+1. The operator analysis that leads to the
$1/M$ expansion proceeds in 1+1 much as in 3+1.
The paper is organized as follows. In Sec.~\ref{review}, we
briefly review the 't~Hooft model and a standard method for its
numerical solution. Section~\ref{1+1} compares features of 1+1
dimensions, such as the nature of phase space and spin, to those in
3+1 dimensions. In Sec.~\ref{inc}, we present the algebraic results
of the inclusive parton-level calculation of widths. In
Sec.~\ref{exc}, we present the results of the exclusive calculation in
the 't~Hooft model. Section~\ref{res} gives our numerical results and
a discussion of their implications, and Sec.~\ref{conc} concludes.
There are other nonperturbative questions of phenomenological
interest in the area of hadronic $B$ weak decays for which there is an
established lore. One can test any of these hypotheses in the
't~Hooft model. Of particular interest is the notion that
contributions to decay amplitudes from different underlying quark
diagram topologies contribute with distinct weights, which can very
much suppress the amplitude from a given topology. For example, the
``annihilation'' diagrams, in which the valence quark-antiquark pair
annihilate through a weak current, are supposedly suppressed relative
to the ``spectator'' diagrams by a factor of $f_B/M_B$, where $f_B$ is
the $B$ decay constant and $M_B$ its mass. We will address this
question in a separate publication\cite{GLII}.
\section{The 't~Hooft Model} \label{review}
The success of 't~Hooft's method of solving a strongly-coupled
theory rests on two assumptions that considerably simplify the
problem. First, one works in the limit of large $N_c$, in which it is
readily seen\cite{tHN} that diagrams including either internal
fermion-antifermion loops or the crossing of gluon lines at points
other than their vertices are suppressed by combinatorial powers of
$N_c$ compared to those that do not. These simple topological
consequences of the theory lead directly to the predictive power of
large $N_c$. Second, in 1+1 dimensions one may use the gauge freedom
of QCD to choose a linear gauge in which some chosen component of the
gluon field vanishes, and only the orthogonal component survives.
Since the gluon self-coupling term in the field strength appears as a
commutator of field components, it vanishes in the gauge we have
selected. Consequently, in combination with large $N_c$, gluon lines are not
permitted to cross each other, even at vertices. Moreover, ghosts are
absent in linear gauges. It follows that the only diagrams that must
be summed are ``rainbow'' diagrams for the quark mass and wave
function renormalization, and ``ladder'' diagrams for quark-antiquark
interactions\cite{tH}.
In 1+1 dimensions confinement is realized trivially, since the
lowest-order inter-quark potential obtained by taking the Fourier
transform of the gluon propagator (which gives rise to the $1/r$
Coulomb interaction in four dimensions) grows linearly with the
inter-quark separation. Although lowest-order color confinement is an
automatic consequence in two dimensions, it is a highly nontrivial
fact that the phenomenon persists in the all-orders Green function
solutions of the 't~Hooft model.
To be specific, the Lagrangian of QCD, as in four dimensions,
is
\begin{equation}
{\cal L} = -\frac 1 4 {\rm Tr} \, F_{\mu \nu} F^{\mu \nu} + \sum_a
\bar \psi_a \left( \gamma^\mu (i \partial_\mu - g A_\mu) -m_a \right)
\psi_a ,
\end{equation}
where $A_\mu$ is the $SU(N_c)$ gauge field with field strength $F_{\mu
\nu}$ defined in the usual way, and $\psi_a$ is a Dirac fermion of
bare mass $m_a$ and flavor $a$. The bare coupling $g$ not only has
dimensions of mass in two dimensions, but also scales as
$1/\sqrt{N_c}$ in the large-$N_c$ limit.
The renormalization of the fermion propagator is exceptionally
simple. The only modification is a shift of the bare fermion mass by
\begin{equation} \label{mren}
m_a^2 \to m_{a,R}^2 \equiv m_a^2 - g^2 N_c / 2 \pi .
\end{equation}
Consequently, it makes good sense to describe masses in units of $g
\sqrt{N_c / 2 \pi}$, which is finite in the $N_c \to \infty$ limit.
The dividing line of $m_a^2 = 1$ ($m_{a,R}^2 = 0$) in these units acts
as a boundary between heavy and light quarks, as is numerically
verified in Refs.~\cite{JM,GM1,GM2}; for example, in \cite{GM1} it was
seen that the meson decay constant approaches the standard asymptotic
behavior $f_B \propto 1/\sqrt{M}$ for $M \geq 5$ or so. It follows
that $g\sqrt{N_c / 2 \pi}$ in 1+1 assumes a role analogous to
$\Lambda_{\rm QCD}$ in 3+1.
Quantization of the theory is most convenient in axial
light-cone gauge ($A_- = 0$), where light cone coordinates are defined
by
\begin{equation}
x^{\pm} \equiv x_{\mp} \equiv \frac{(x^0 \pm x^1)}{\sqrt{2}} ,
\end{equation}
and analogously for other vectors. The chief advantage of this choice
is that only one component of the Dirac algebra ($\gamma_-$) survives,
thus effectively eliminating the need to perform Dirac traces.
Upon solving for the Green function of a fermion-antifermion
pair with bare masses $M$ and $m$ in this model, one obtains the
bound-state eigenvalue equation
\begin{equation} \label{tHe}
\mu_n^2 \phi_n^{M\overline{m}} (x) = \left( \frac{M_R^2}{x} +
\frac{m_R^2}{1-x} \right) \phi_n^{M\overline{m}}(x) - \int^1_0 dy \,
\phi_n^{M\overline{m}} (y) \, \Pr \frac{1}{(y-x)^2},
\end{equation}
which is known as the 't~Hooft equation\cite{tH}. Here the $n$th
eigenstate $\phi_n^{M\overline{m}}$ is the meson wave function, the
$n$th eigenvalue $\mu_n^2$ is its squared mass, and $x$ is the
fraction of the meson momentum's minus component (which acts, in the
light-cone quantization, as a canonical spatial momentum component)
carried by quark $M$. We will always label the ground state (the
lowest mass meson) by $n=0$. The principal value prescription serves
to regulate the integrand singularity, which originates in the
infrared divergence of the gluon propagator. This equation has a
discrete spectrum of eigenvalues that increase approximately linearly
for large $n$, and the wave functions vanish at the boundaries $x=0$
and 1, with the asymptotic behavior $\phi_n^{M\overline{m}} (x) \to
x^{\beta_M}$ as $x
\to 0$, where
\begin{equation} \label{asymp}
M^2_R + \pi \beta_M \cot \pi \beta_M = 0 ,
\end{equation}
and similarly as $x \to 1$, exchanging $m$ for $M$. $\beta_M$ is a
monotonic function of $M^2$ (or $M_R^2$), increasing from zero to one
as $M^2 = 0 \to \infty$.
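Since Eq.~(\ref{asymp}) is transcendental, $\beta_M$ must be extracted
numerically; a bracketed root search on $(0,1)$ suffices. The
following Python sketch (our own illustration, assuming SciPy is
available) does this, and includes the exact check $\beta = 1/2$ at
$M_R^2 = 0$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def beta(M2R):
    """Endpoint exponent beta_M solving M_R^2 + pi b cot(pi b) = 0,
    with b in (0,1) and M2R in units of g^2 Nc / 2 pi."""
    f = lambda b: M2R + np.pi * b / np.tan(np.pi * b)
    return brentq(f, 1e-12, 1.0 - 1e-12)

print(beta(0.0))     # m_R^2 = 0 (bare mass 1): beta = 1/2 exactly
print(beta(24.0))    # heavy quark, bare mass M = 5: beta close to 1
\end{verbatim}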
Also useful in this description is the full meson-quark vertex
$\Phi_n^{M\overline{m}}$, which is given by
\begin{equation} \label{bphi}
\Phi_n^{M\overline{m}} (z) = \int_0^1 dy \, \phi_n^{M\overline{m}} (y)
\, \Pr \frac{1}{(y-z)^2} ,
\end{equation}
for all complex values of $z$. Indeed, except for $z \in$ [0,1], the
principal value prescription is unnecessary.
The decay constant $f_n$ for meson $n$ may be computed in this
framework. It is given by
\begin{equation} \label{dec}
f_n = c_n / \sqrt{\pi} ,
\end{equation}
where
\begin{equation} \label{cn}
c_n \equiv \int_0^1 dx \, \phi_n(x) .
\end{equation}
Strictly speaking, the r.h.s.\ of (\ref{dec}) is also multiplied by a
factor $\sqrt{N_c}$, but we may absorb this factor into the
normalization of other factors by which it is multiplied in the full
amplitudes; what is important is that the final physical amplitude has
the correct $N_c$ dependence at leading order. As each new quantity
is calculated in this paper, we will point out the leading dependence
on $N_c$, but as a rule we suppress the explicit factors for ease of
notation. The 't~Hooft eigenfunctions $\phi_n$, for example, are
$O(N_c^0)$ solutions of Eq.~(\ref{tHe}), and so $c_n$ is also
$O(N_c^0)$. On the other hand, light meson decay constants have the
well-known behavior $f_n \propto \sqrt{N_c} \,$, and the full result
Eq.~(\ref{dec}), including the $\sqrt{N_c} \, $, may be verified by
direct calculation.
The 't~Hooft model wave functions $\phi_n$ and $\Phi_n$ are
calculated by means of a standard numerical method called the Multhopp
technique\cite{JM,Mul}, in which the integral equation is converted to
an equivalent infinite-dimensional eigenvalue system, which in turn
may be truncated after a desired number of modes to give approximate
wave function solutions. Since the relevant formul{\ae} for
unequal-mass mesons do not appear elsewhere, we present a summary in
Appendix A.
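For orientation, the low-lying spectrum can also be obtained without
the Multhopp machinery by brute-force discretization, regulating the
principal value by subtraction via the finite-part identity
$\Pr \int_0^1 dy \, \phi(y)/(y-x)^2 = \int_0^1 dy \,
[\phi(y)-\phi(x)]/(y-x)^2 - \phi(x)\,(1/x + 1/(1-x))$. The following
Python sketch (our own illustration; the gridding choices are ours,
and its endpoint accuracy is inferior to that of the Multhopp
technique, since the wave functions vanish like $x^{\beta_M}$ there)
diagonalizes the resulting symmetric matrix directly:
\begin{verbatim}
import numpy as np

def thooft_spectrum(M2R, m2R, N=400, nstates=5):
    """Brute-force 't Hooft equation solver; masses in units of
    g*sqrt(Nc/2pi), so the renormalised squared masses M2R = M^2 - 1
    and m2R = m^2 - 1 enter directly."""
    h = 1.0 / N
    x = (np.arange(N) + 0.5) * h              # midpoints of (0,1)
    K = np.zeros((N, N))
    off = ~np.eye(N, dtype=bool)
    dx = x[:, None] - x[None, :]
    K[off] = -h / dx[off]**2                  # kernel acting on phi(y)
    np.fill_diagonal(K, -K.sum(axis=1))       # subtraction term
    H = K + np.diag(M2R / x + m2R / (1 - x) + 1 / x + 1 / (1 - x))
    mu2, v = np.linalg.eigh(H)                # ascending: n = 0 first
    phi = v / np.sqrt(h)                      # so that int phi^2 dx = 1
    c = h * phi.sum(axis=0)                   # c_n = int phi_n dx; note
    # each eigenvector's overall sign is arbitrary
    return mu2[:nstates], c[:nstates], x, phi[:, :nstates]

# heavy-light meson with the parameters used later: M = 5.00, m = 0.56
mu2, c, x, phi = thooft_spectrum(5.0**2 - 1.0, 0.56**2 - 1.0)
\end{verbatim}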
Whereas the eigenfunctions $\phi_n^{M\overline{m}}$ describe
the complete set of homogeneous solutions for the two-point Green
function, the solution for $1 \to 2$ meson decays (the leading decay
channels in large $N_c$) requires also three-point Green functions.
Remarkably, the requisite expressions may be written entirely in terms
of triple overlap integrals of the functions $\phi$ and
$\Phi$\cite{Ein,GM2}, without bare quark model contact-type
interactions. In physical terms, this means that the three vertices
of the diagram for the three-point Green function are resonance
dominated, without contact contributions. Nevertheless, for the
diagrams computed, it will prove to be computationally convenient to
describe part of the full amplitude in terms of these contact terms.
We exhibit these explicit expressions in Sec.~\ref{exc}, but for the
moment it is only important to note that such expressions indeed
exist.
\section{Peculiarities of 1+1 Dimensions} \label{1+1}
Despite one's hopes that exact calculations in the 't~Hooft
model may lend insight into real 3+1 strong interaction physics, we
emphasize that the two-dimensional universe possesses some unique
properties that must be remembered when comparing to the universe of
four dimensions. Therefore, the 't~Hooft model may in no way be
construed as any sort of limiting case of real QCD, and any direct
comparisons are necessarily qualitative. In other words, we espouse
the opinion that only certain conclusions based upon our numerical
studies of 't~Hooft model solutions, not the numerical results
themselves, possess any validity in 3+1 dimensions.
The most obvious signal that 1+1 and 3+1 physics are vastly
different is that the former does not possess the notion of angular
momentum, except in the residual form of parity\footnote{Also, spinors
retain the property of chirality, since a $\gamma_5$ matrix still
exists in 1+1, signaling two inequivalent representations of the
Lorentz group.}. This is clear since finite rotations do not exist
when there is only one spatial direction, and only the improper
``rotation'' taking $x^1 \to -x^1$, namely parity, remains. It
follows that 't~Hooft model eigenstates $\phi_n(x)$ do not possess
spin, but only intrinsic parity $(-1)^{n+1}$\cite{CCG}. All of the
interesting phenomenology provided by approximate spin symmetries in
our world ({\it e.g.}, the smallness of hyperfine splittings,
relations between different helicity amplitudes, {\it etc.}) is
therefore meaningless in two dimensions.
The lack of transverse directions has important consequences
for couplings in 1+1 dimensions. As mentioned in the previous
section, gauge couplings have dimensions of mass, and so such theories
are super-renormalizable. Moreover, ``vector'' gauge bosons exist in
1+1 only through their longitudinal modes. There are also different
Lorentz invariants in 1+1, since the Levi-Civita tensor
$\epsilon^{\mu\nu}$ has only two indices. The effects of these
constraints are implicit in all the results to follow.
The amount of Lorentz-invariant phase space is of course
expected to vary between different spacetime dimensions $D$, since the
measure of the phase space integrals is the $D$-dimensional volume
element. However, the difference between 1+1 and 3+1 is particularly
dramatic. To be specific, in $D$ spacetime dimensions, the
differential width for a $1 \to 2$ decay in terms of the solid angle
of either final-state particle is given by
\begin{equation} \label{two}
d\Gamma = \frac{|{\bf p}|^{D-3}}{(2\pi)^{D-2} \, 8M^2} |{\cal
M}|^2 d\Omega,
\end{equation}
where $|{\bf p}|$ is the spatial momentum of either final-state
particle in the rest frame of the initial particle of mass $M$, and
${\cal M}$ is the invariant amplitude of the process. Note
particularly the behavior of the phase space factor $|{\bf p}|^{D-3}$
as the $|{\bf p}| = 0$ threshold is approached: For $D=4$, the
differential width vanishes with the decreasing amount of phase space
available, but for $D=2$, the differential width actually becomes
singular (barring an accidental zero in the amplitude ${\cal M}$). It
follows that two-particle decay modes near threshold are enhanced in
1+1, in stark contrast to 3+1.
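As a concrete illustration (ours, not part of the original
discussion), the rest-frame momentum and the phase-space prefactor of
Eq.~(\ref{two}) can be evaluated directly, making the opposite
threshold behaviours explicit:
\begin{verbatim}
import numpy as np

def p_rest(M, m1, m2):
    """|p| of either final-state particle for a 1 -> 2 decay in the
    parent rest frame (valid in any spacetime dimension D)."""
    return np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)

def prefactor(D, M, m1, m2):
    """Phase-space factor |p|^(D-3) / ((2 pi)^(D-2) 8 M^2); the width
    also needs |amplitude|^2 and the 'solid angle' (4 pi in D=4, the
    two discrete directions in D=2)."""
    return p_rest(M, m1, m2)**(D - 3) / ((2 * np.pi)**(D - 2) * 8 * M**2)

for M in (2.1, 2.01, 2.001):     # approach threshold M -> m1 + m2 = 2
    print(M, prefactor(4, M, 1.0, 1.0), prefactor(2, M, 1.0, 1.0))
# the D=4 column vanishes, while the D=2 column diverges
\end{verbatim}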
\section{The Partonic Calculation} \label{inc}
Because of the small number of integrations necessary to
compute phase space in 1+1 dimensions, it is possible to perform the
partonic integrals analytically in all cases of interest to us. In
the $1 \to 3$ parton decay, one starts with $3 \times 2 = 6$
final-state momentum components, of which 2 are fixed by
energy-momentum conservation and 3 are fixed by the on-shell
conditions of the final-state partons; this leaves only one nontrivial
integration, which can be done explicitly.\footnote{Strictly speaking,
there is also a degree of freedom from the ``solid angle'' of one of
the final-state particles. However, in 1+1, this is a discrete degree
of freedom. Integration of a differential width over this quantity
gives an additional factor of $1+1=2$ for Lorentz scalars and
$1+(-1)=0$ for pseudoscalars.}
Here we consider the case of an initial quark of mass $M$
decaying into three distinguishable equal-mass quarks of mass $m \leq
M/3$. The final nontrivial integral involves a small number of square
root factors arising from the on-shell mass-energy constraints, since
both energies and momenta appear in both the phase space and the
invariant amplitude expressed in a given frame. Such expressions in
our case integrate to the standard three kinds of complete elliptic
integrals of Legendre, usually denoted by $K$, $E$, and $\Pi$. We
begin by presenting the functional form for the phase space with
constant invariant amplitude:
\begin{equation}
\Phi_3 (M;m,m,m) = \frac{1}{4\pi^3 M^2} (1 + \epsilon)^{-1/2} \left( 1
- \epsilon/3 \right)^{-3/2} K (u) ,
\end{equation}
where
\begin{equation} \label{eps}
\epsilon \equiv \frac{3m}{M} \in [0,1] ,
\end{equation}
and
\begin{equation} \label{u}
u \equiv \sqrt{\frac{(1-\epsilon)(1+\epsilon/3)^3}
{(1+\epsilon)(1-\epsilon/3)^3}} .
\end{equation}
Note that, unlike the two-body phase space given by Eq.~(\ref{two}),
this expression does not diverge for finite $m$. However, it does
possess a singularity as $m \to 0$ ($\epsilon \to 0$),
since then
\begin{equation}
\Phi_3 = \frac{3}{8\pi^3 M^2} \ln \left( \frac{M}{m} \right) \left[ 1
+ O \left( \frac{m^2}{M^2} \right) \right] .
\end{equation}
The opposite limiting case $\epsilon \to 1$, in which the three
partons are produced at rest, is equally peculiar:
\begin{equation}
\Phi_3 = \frac{3 \sqrt{3}}{32 \pi^2 M^2} \left[ 1 + O(1-\epsilon)
\right] ,
\end{equation}
which means that phase space does not vanish in this limit.
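Both limits are easy to verify numerically. The sketch below (ours;
it assumes SciPy, and assumes the $K(u)$ above follows Legendre's
modulus convention, so that SciPy's \texttt{ellipk}, which takes the
parameter $m = u^2$, receives \texttt{u**2}) reproduces them:
\begin{verbatim}
import numpy as np
from scipy.special import ellipk     # complete K(m), parameter m = u^2

def phi3(M, m):
    """Three-body phase space Phi_3(M; m, m, m) as given above."""
    eps = 3.0 * m / M
    u2 = (1 - eps) * (1 + eps / 3)**3 / ((1 + eps) * (1 - eps / 3)**3)
    return ellipk(u2) / (4 * np.pi**3 * M**2
                         * np.sqrt(1 + eps) * (1 - eps / 3)**1.5)

M = 10.0
for m in (1e-3, 1e-4, 1e-5):         # eps -> 0: ratio -> 1, slowly
    # (corrections are O(1/ln(M/m)))
    print(phi3(M, m) * 8 * np.pi**3 * M**2 / (3 * np.log(M / m)))
print(phi3(M, (M / 3) * (1 - 1e-9))  # eps -> 1: ratio -> 1
      * 32 * np.pi**2 * M**2 / (3 * np.sqrt(3)))
\end{verbatim}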
We now present the expressions for the inclusive partonic
decay width. For definiteness, we attempt to describe the couplings
in terms as similar to Standard Model (SM) notation as possible. Our
labeling of partons is exhibited in Fig.~1. The decay of the heavy
quark 1 to the lighter quark 3 is assumed to couple to a vector-like
weak current with vertex factor $(-ig_2/\sqrt{2}) V_{31}
\gamma^\mu \left( c_V - c_A \gamma_5 \right)$, carried by a gauge
boson ``$W$'' of mass $M_W$; in the SM, $c_V = c_A = 1/2$. The
coupling at the other end of the weak current, creating quark 5 and
antiquark 4, is assumed to be the same except for the ``CKM element''
$V^*_{45}$. $G_F$ is defined, as in the SM, by $\sqrt{2} g_2^2 /
8M_W^2$; note that $G_F$ is dimensionless in 1+1. Finally, the
abbreviations $\epsilon$ and $u$ are carried over from
Eqs.~(\ref{eps}) and (\ref{u}).
\INSERTFIG{fig5.eps}{1}{Parton decay diagram for the inclusive decay.
Of interest are the parton labels, as used in the text.}
The weak decay amplitude and width in this case are effects of
orders $\sqrt{N_c}$ and $N_c$, respectively. This counting may be
established in the parton diagram by observing that the pair ($5\bar
4$) in Fig.~1 can occur with each of the $N_c$ colors, but ``sewing
up'' the partons into color-singlet mesons ($5\bar 4$), ($1\bar 2$),
and ($3\bar 2$) costs a factor of $1/\sqrt{N_c}$ each. Finally, each
color may occur in the loop created by 1, 3, and 2, for one more
factor of $N_c$. It follows that the weak decay width\footnote{The
difference from the strong width, which is $O(1/N_c)$, is that the
$q\bar q W$ vertices are unsuppressed in large $N_c$, while the $q\bar
q$-gluon vertex is $O(1/\sqrt{N_c})$.} calculated from the parton
diagram is $O(N_c)$. In Sec.~\ref{exc} we show that the hadronic
calculation of the width also produces a leading factor of $N_c$.
The width is presented in two special cases. In the first, we
take $M_W \gg M$, the usual four-fermion coupling assumption. This
corresponds to using only the $g_{\mu\nu}$ term in the numerator of
the $W$ propagator\footnote{We use unitary gauge in order to avoid the
necessity of including additional charged Goldstone fields.}, $-i
\left( g_{\mu\nu} - q_\mu q_\nu /M_W^2 \right) /(q^2+i\varepsilon)$.
We then find
\begin{eqnarray} \label{partw1}
\Gamma & = & \frac{4G_F^2 M}{\pi} |V_{31}V_{45}^*|^2 (c_V^2 - c_A^2)^2
\left( 1 - \epsilon/3 \right)^{3/2} ( 1 + \epsilon)^{1/2} \nonumber \\
& & \times \left[ E(u) - 16 \left( \epsilon / 3 \right)^3 \left( 1 -
\epsilon/3 \right)^{-3} ( 1 + \epsilon)^{-1} K(u) \right] .
\end{eqnarray}
The limiting cases of this expression are given by
\begin{equation} \label{lim1}
\Gamma \to \frac{4G_F^2 M}{\pi} |V_{31}V_{45}^*|^2 (c_V^2 - c_A^2)^2
\left[ 1 - \frac{\epsilon^2}{3} + O(\epsilon^3 \ln \epsilon) \right] ,
\end{equation}
as $\epsilon \to 0$, and
\begin{equation}
\Gamma \to \frac{16G_F^2 M}{3\sqrt{3}} |V_{31}V_{45}^*|^2
(c_V^2 - c_A^2)^2 (1-\epsilon) \left[ 1 + \frac{3}{4} (1-\epsilon) +
O((1-\epsilon)^2) \right],
\end{equation}
as $\epsilon \to 1$.
We see that the width is finite as $\epsilon \to 0$ and vanishes as
$\epsilon \to 1$. The former limit shows that the width for the
partonic decay of the heavy quark is given approximately by $M$ times
a constant, dimensionless coefficient.
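As an illustrative cross-check (again ours), Eq.~(\ref{partw1}) and
its limit (\ref{lim1}) may be compared numerically, with the same
modulus convention for the elliptic integrals as above:
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipe   # parameter m = u^2

def gamma_part(M, m):
    """Eq. (partw1) with the overall factor
    4 G_F^2 |V31 V45*|^2 (cV^2 - cA^2)^2 / pi set to one."""
    eps = 3.0 * m / M
    u2 = (1 - eps) * (1 + eps / 3)**3 / ((1 + eps) * (1 - eps / 3)**3)
    brack = (ellipe(u2) - 16 * (eps / 3)**3 * ellipk(u2)
             / ((1 - eps / 3)**3 * (1 + eps)))
    return M * (1 - eps / 3)**1.5 * np.sqrt(1 + eps) * brack

M = 10.0
for m in (0.3, 0.1, 0.03):   # eps -> 0: ratio to Eq. (lim1) tends to 1
    eps = 3 * m / M
    print(gamma_part(M, m) / (M * (1 - eps**2 / 3)))
\end{verbatim}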
It is easy to understand the prefactor $(c_V^2 - c_A^2)^2$,
which means that the width in the $M_W \gg M$ limit vanishes for $V
\pm A$ currents. The decay vertex in this limit is of the form
$g^{\mu\nu}J_\mu j_\nu$ for some quark currents $J$ and $j$. Note
that the only non-vanishing components of the metric are $g^{+-}$ and
$g^{-+}$, so the vertex involves only $J_-j_+$ and $J_+j_-$. Now, $V
\pm A$ currents correspond to the quarks being all right(left)-handed.
The currents $J_{\pm}$ and $j_{\pm}$ in this chiral basis are just
bilinears of $\gamma_{\pm}$, since $\gamma_\mu \gamma_5 \psi_{R,L} =
\pm \gamma_\mu \psi_{R,L}$. However, in $1+1$ dimensions $\gamma_-
\psi_L=\gamma_+ \psi_R=0$, and so all currents of one chirality
vanish, leading to the vanishing of the decay vertex.
If we impose $c_V^2 = c_A^2 = 1/4$ from the beginning of the
calculation, then we find that only pieces obtained from contraction
with the $q_\mu q_\nu$ terms in the $W$ propagator survive, giving
rise to the width
\begin{eqnarray} \label{partw2}
\Gamma & = & \frac{G_F^2 M^5}{\pi M_W^4} |V_{31} V^*_{45}|^2
\left( \epsilon / 3 \right)^2 \left( 1 - \epsilon /3 \right)^{-3/2}
(1 + \epsilon)^{-1/2} \nonumber \\
& & \times \Biggl\{ -8 \left( \epsilon / 3 \right)^2 \left[ \left( 1+
\epsilon /3 \right)^3 + (\epsilon/3) \left( 1 - \epsilon /3 \right)^2
\right] K (u) \Biggr. \nonumber \\
& & + \left( 1 + \epsilon^2 / 9 \right)
\left( 1 - \epsilon / 3
\right)^3 ( 1 + \epsilon ) E(u) \nonumber \\ & & \Biggl. + 48 (
\epsilon / 3)^3 \left( 1 + \epsilon^2 / 27 \right) \Pi (v,u)
\Biggr\} ,
\end{eqnarray}
where
\begin{equation}
v \equiv
\frac{(1+\epsilon/3)(1-\epsilon)}{(1-\epsilon/3)(1+\epsilon)}.
\end{equation}
The asymptotic expansions in this case are
\begin{equation}
\Gamma \to \frac{G_F^2 M^5}{\pi M_W^4} \left(
\frac{\epsilon}{3} \right)^2 \left[ 1 - \frac{2\epsilon^2}{9} +
O(\epsilon^3) \right] ,
\end{equation}
for $\epsilon \to 0$, and
\begin{equation}
\Gamma \to \frac{8 G_F^2 M^5}{243 \sqrt{3} M_W^4} \left[ 1 +
\frac{1}{2} (1-\epsilon) + O((1-\epsilon)^2) \right],
\end{equation}
for $\epsilon \to 1$. Here we see that the width vanishes as
$\epsilon \to 0$ but is finite as $\epsilon \to 1$. Specifically, in
the former limit the width is approximately a dimensionless constant
times $M^3 m^2/M_W^4$.
\section{The Hadronic Calculation} \label{exc}
Matrix elements for exclusive $1 \to 2$ meson decays are most
conveniently written in terms of transition form factors. We identify
the $\bar B$ meson in 1+1 as the ground state of the $M \overline{m}$
tower of resonances to which it belongs, and subsequently label it by
{\bf 0}. Consider the ``tree'' (T) diagram of
Fig.~2, for which $\bar B = (1 \bar 2)$, where 1 is the heavy ``$b$''
quark; the matrix element is parameterized by
\begin{equation} \label{mat}
\langle {\bf m} (p') \left| \bar q \gamma^\mu Q \right| {\bf 0} (p)
\rangle = \left\{ \begin{array}{ll}
(p+p')^\mu f_+ (q^2) +(p-p')^\mu f_- (q^2) & \hbox{for $m$ even,} \\
\epsilon_{\mu\nu} (p+p')^\nu f_+ (q^2) +
\epsilon_{\mu\nu} (p-p')^\nu f_- (q^2) & \hbox{for $m$ odd,}
\end{array} \right.
\end{equation}
where $q^2 \equiv (p-p')^2$, and $Q$ and $q$ indicate the fields of
quarks with masses $M$ and $m$. The light quark field $q$ here refers
to the daughter of the heavy quark (3 in Fig.~2), not the spectator
quark (2 in Fig.~2), although both are taken to have mass $m$. The
label {\bf m} indicates the eigenvalue index of the final-state decay
product meson ($2\bar 3$) not coupled to the flavor-changing current.
In the remainder of this section, $m$ exclusively means this value and
not the value of the light quark mass.
\INSERTFIG{fig6a.eps}{2}{Diagram for ``tree'' (T) meson exclusive
decay. Numbers indicate quark labels used in the text (except {\bf
0}, which refers to the ground-state ``$\bar B$'' meson), while
letters indicate the eigenvalue index of meson resonances. One can
also consider contact-type diagrams, in which the point labeled by\/
{\bf n} is not coupled to a resonance.}
Next, we reserve the label {\bf n} in the T diagram for the
meson resonances or contact terms ($1 \bar 3$) coupled to the
flavor-changing current. {\bf n} carries the momentum transfer $q^2$,
which is the kinematic variable of interest in this system; however,
it proves more convenient to use the equivalent Lorentz-invariant
quantity $\omega \equiv p_-/q_-$, which indicates the fraction of
light-cone coordinate ``spatial'' component of the current $q_-$
carried by meson {\bf 0}. In the method of calculating the matrix
element (\ref{mat}) developed in \cite{JM}, one considers not ${\bf 0}
\to {\bf mn}$ directly, but rather the crossed process ${\bf n} \to
{\bf 0} \overline{\bf m}$ above its threshold ($q^2 \geq (\mu_0 +
\mu_m)^2$); in that case, one finds $\omega \in [0,1]$:
\begin{equation} \label{om1}
\omega (q^2) = \frac{1}{2} \left[ 1 + \left( \frac{\mu_0^2 -
\mu_m^2}{q^2} \right) - \sqrt{1- 2
\left( \frac{\mu_0^2 + \mu_m^2}{q^2} \right) + \left( \frac{\mu_0^2
- \mu_m^2}{q^2} \right)^2} \, \right].
\end{equation}
Here and below we use the same symbol $\mu$ for the masses of
heavy-light and light-light mesons, since from the index one can
immediately tell which one is appropriate ({\it e.g.}, $\mu_0$ is
heavy-light). Since $\omega$ is obtained by solving the quadratic
equation $q^2 \omega^2 + (\mu_m^2 - \mu_0^2 -q^2) \omega + \mu_0^2 =
0$, it should be pointed out that the branch choice used for $\omega$
does not affect the final numerical results for form factors or
amplitudes; the two branches simply correspond to the two possible
directions of the mesons {\bf 0} and $\overline {\bf m}$ in the rest
frame of {\bf n}. However, the branch chosen above turns out to
greatly facilitate the numerical computations. For some values of
$q^2$ below this crossed-process threshold, $\omega$ is complex, and
the following expressions for the form factors must be computed in a
different way, as discussed below.
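As a quick consistency check (purely illustrative, with arbitrary
trial masses), this branch of Eq.~(\ref{om1}) can be verified against
the quadratic it solves:
\begin{verbatim}
import numpy as np

def omega(q2, mu0sq, mumsq):
    """Branch of Eq. (om1); real for q2 >= (mu0 + mum)^2."""
    a = (mu0sq - mumsq) / q2
    return 0.5 * (1 + a - np.sqrt(1 - 2 * (mu0sq + mumsq) / q2 + a**2))

q2, mu0sq, mumsq = 40.0, 9.0, 4.0     # trial values above threshold
w = omega(q2, mu0sq, mumsq)
print(q2 * w**2 + (mumsq - mu0sq - q2) * w + mu0sq)  # ~ 0 to rounding
\end{verbatim}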
With the aforementioned identifications, we may express the
form factors entirely in terms of resonance quantities, as promised in
Sec.~\ref{review}. The notation and form factor expressions we
present here appear in Ref.~\cite{GM2}, while the characteristic
integral expression contained within was first obtained in~\cite{Ein}.
The form factors are explicitly
\begin{equation} \label{fp1}
f_+ (q^2) = \sum_n \frac{{\cal A}_n (q^2)}{1-q^2/\mu_n^2},
\end{equation}
and
\begin{equation} \label{fm1}
f_- (q^2) = \frac{1}{q^2} \left[ \sum_n \frac{{\cal
B}_n (q^2)}{1-q^2/\mu_n^2} - \sum_n \frac{{\cal A}_n
(q^2)}{1-q^2/\mu_n^2} \left( \mu_0^2 - \mu_m^2 \right) \right] ,
\end{equation}
where the pole residue functions ${\cal A}_n$ and ${\cal B}_n$ are
given by
\begin{equation} \label{an1}
{\cal A}_n (q^2) = \frac{c_n \left[ 1 + (-1)^{n+m} \right]}
{(q^2 \omega - \mu_0^2/\omega)} F_{n0m} (\omega) ,
\end{equation}
and
\begin{equation} \label{bn1}
{\cal B}_n (q^2) = c_n \left[ 1 + (-1)^{n+m+1} \right] F_{n0m}
(\omega) ,
\end{equation}
and the triple overlap integral $F_{n0m}$ is defined by
\begin{eqnarray} \label{fn0m}
\lefteqn{F_{n0m} (\omega) \equiv} & & \nonumber \\ & &
\left[ \frac{1}{1-\omega} \int_0^\omega dv \,
\phi_n^{1\bar 3} (v) \phi_0^{1\bar 2} \left(\frac v \omega \right)
\Phi_m^{2 \bar 3} \left( \frac{v-\omega}{1-\omega} \right) - \frac 1
\omega \int^1_\omega dv \, \phi_n^{1\bar 3} (v) \Phi_0^{1 \bar 2}
\left( \frac v \omega \right) \phi_m^{2\bar 3} \left(
\frac{v-\omega}{1-\omega} \right) \right] .
\end{eqnarray}
We now compute the invariant matrix element for a current
coupling of the form $(-ig_2/\sqrt{2}) V_{31} \gamma^\mu \left( c_V -
c_A \gamma_5 \right)$, exactly as for the inclusive decay. Such a
calculation is possible for an arbitrary combination of $V$ and $A$
currents, even though we presented the matrix element only for a
current of the bilinear $V^\mu \equiv \bar q \gamma^\mu Q$, because
the two currents are related by $\gamma_\mu \gamma_5 =
\epsilon_{\mu\nu} \gamma^\nu$. The invariant matrix element is simply
the product of a linear combination of the form factors determined by
the current coupling, multiplied by the propagator of the
flavor-changing current (the ``$W$'') and finally by a factor
representing the meson formed from the flavor-changing current. The
last step amounts, via LSZ reduction of the two-point Green function,
to the insertion of a factor of the meson decay constant. In the T
diagram, we assign this meson the label {\bf k} and quark structure
($5 \bar 4$). Using Eq.~(\ref{dec}) to write the decay constant
$f_{k}$ in terms of $c_{k}$, we have at last
\begin{eqnarray} \label{mt1}
{\cal M}_T & = & c_{k} \sqrt{\frac{2}{\pi}} \frac{G_F M_W^2}
{(M_W^2-q^2)} V_{31} V^*_{45} \nonumber \\
& & \cdot \sum_n \Biggl\{ 2\left[ (c_V^2-c_A^2) \left( (-1)^{k} +
(-1)^n \right) \right] \Biggr. \nonumber \\
& & \Biggl. \hspace{2em} + \frac{q^2}{M_W^2} \left[ (c_V + c_A)
(-1)^{k} - (c_V - c_A) \right] \left[ (c_V + c_A) (-1)^n - (c_V -
c_A) \right] \Biggr\} \nonumber \\
& & \cdot \frac{c_n \mu_n^2}{q^2 - \mu_n^2} F_{n0m} (\omega) ,
\end{eqnarray}
where the on-shell process has $q^2 = \mu_{k}^2$. The pseudoscalar
parity of the ground state $\bar B$ has been taken into account in
this expression. We remind the reader that in this expression $
F_{n0m}$ is given by (\ref{fn0m}) only for $n$ such that $\mu_n^2 >
(\mu_0 + \mu_m)^2$; other methods must be employed for smaller
$\mu_n^2$, as described below.
The conversion of the decay constant $f_{k}$ to $c_{k}$ in
fact gives the only surviving factor of $\sqrt{N_c}$ in the amplitude,
which means that the weak decay width is proportional to $N_c \,$, in
agreement with the partonic result of Sec.~\ref{inc}. This may be
seen with reference to Fig.~2 by the usual large $N_c$ counting
arguments: The coupling of three mesons ({\bf 0}, {\bf m}, and {\bf n}
in this case) appears with the factor $1/\sqrt{N_c}$, while meson {\bf
n} is destroyed by the $W$ current, thus providing a decay constant at
$O(\sqrt{N_c})$. This part of the diagram alone, which is none other
than the form factors $f_{\pm} (q^2)$, is thus $O(N_c^0)$. The
creation of the meson {\bf k} from the weak current gives the
remaining factor of $\sqrt{N_c}$. Finally, the width is given by
Eq.~(\ref{two}).
The question remains what to do with the contributions from
current-coupled resonances below the thresholds for the T diagram.
The expressions listed above are inadequate because they assume the
reality of $\omega$ in the computation of contour integrals with
denominators of the form ($\omega + c \pm i\varepsilon$), where the
$\varepsilon$ arises from the Feynman prescription in the fermion
propagators, and $c$ represents other purely real numbers arising from
the loop calculation. Such integrals are naturally trivial when
$\omega$ is real and simply lead to step functions. However, when
$\omega$ is complex the results are rather more cumbersome (although
still tractable in principle). One may resort instead to other
methods\cite{JM} in order to obtain amplitudes from the
below-threshold resonances. The approach relies on sum
rules\cite{Ein} that are satisfied by the amplitudes, and are
described in Appendix B\@. The upshot is that the sum rules may be
used to describe the below-threshold amplitudes in terms of
combinations of values from the above-threshold amplitudes, a process
to which we refer as ``backsolving''.
However, backsolving has drawbacks from the practical point of
view. As the number of resonances below threshold increases, the
number of the above-threshold pole residues and corresponding
accuracies with which these are computed must increase dramatically to
maintain the accuracy of the below-threshold residues thus calculated.
For transitions of the $\bar B$ meson to light $\pi$'s, it is
known\cite{JM,GM2} that the below-threshold pole residues are very
large compared to their above-threshold fellows, and tend to alternate
in sign. Clearly, a small uncertainty in the above-threshold
calculation magnifies to a large uncertainty in the below-threshold
residues, and the alternating sign suggests delicate cancellations
among the computed residues, which makes the situation even worse.
There is a much more efficient method of calculation if we are
willing to abandon the requirement that all vertices in the
calculation of Fig.~2 are resonant couplings, and allow for quark
model-type contact terms. The calculation is based upon the
observation that $\omega$ as defined in Eq.~(\ref{om1}) is real not
only for decays in the crossed kinematic region $q^2 \geq (\mu_0 +
\mu_m)^2$, but also for decays in the direct decay kinematic region $0
\leq q^2 \leq (\mu_0 - \mu_m)^2$, where $\omega\geq 1$. It therefore
makes sense to redefine $\omega \equiv q_-/p_-$ rather than $p_-/q_-$,
so that $\omega \in [0,1]$ in this range. One then finds
\begin{equation} \label{om2}
\omega (q^2) = \frac{1}{2} \left[ 1 + \left( \frac{q^2 -
\mu_m^2}{\mu_0^2} \right) -\sqrt{1- 2
\left( \frac{q^2 + \mu_m^2}{\mu_0^2} \right) + \left( \frac{q^2
- \mu_m^2}{\mu_0^2} \right)^2} \, \right].
\end{equation}
It is convenient to define the triple overlap integral
\begin{eqnarray} \label{f0nm}
\lefteqn{F_{0nm} (\omega) \equiv} & & \nonumber \\ & &
\left[ \frac{1}{1-\omega} \int_0^\omega dv \,
\phi_0^{1\bar 2} (v) \phi_n^{1\bar 3} \left(\frac v \omega \right)
\Phi_m^{3\bar 2} \left( \frac{v-\omega}{1-\omega} \right) - \frac 1
\omega \int^1_\omega dv \, \phi_0^{1\bar 2} (v) \Phi_n^{1 \bar 3}
\left( \frac v \omega \right) \phi_m^{3\bar 2} \left(
\frac{v-\omega}{1-\omega} \right) \right] ,
\end{eqnarray}
as well as the contact terms
\begin{eqnarray} \label{ct}
{\cal C}_1 & \equiv & -\frac{1}{\omega} \int_\omega^1 dv \,
\phi_0^{1\bar 2} (v) \phi_m^{3\bar 2} \left( \frac{v-\omega}{1-\omega}
\right) , \nonumber \\
{\cal C}_2 & \equiv & -\omega \int_\omega^1 dv \, \phi_0^{1\bar 2} (v)
\phi_m^{3\bar 2} \left( \frac{v-\omega}{1-\omega} \right)
\frac{1}{v (v- \omega)} , \nonumber \\
{\cal C}_3 & \equiv & -\frac{1}{1-\omega} \int_0^\omega dv \,
\phi_0^{1\bar 2} (v) \Phi_m^{3\bar 2} \left( \frac{v-\omega}{1-\omega}
\right) .
\end{eqnarray}
Note that the triple overlap is somewhat different from that in
Eq.~(\ref{fn0m}), both in the arguments of each wave function and the
definitions ((\ref{om1}) and (\ref{om2})) of $\omega$ for each case.
Furthermore, there is some flexibility in how one expresses results
containing these contact terms, since one can use the completeness of
the 't~Hooft model eigenfunctions on $x \in [0,1]$ to show $\sum_n c_n
\phi_n (x) = 1$, and from this prove identities such as
\begin{equation}
\sum_n c_n F_{0nm} (\omega) = {\cal C}_2 - {\cal C}_3 .
\end{equation}
After a lengthy but straightforward calculation, one finds
\begin{eqnarray}
f_+ (q^2) & = & \sum_n \frac{{\cal A}_n (q^2)}{1-q^2/\mu_n^2} +
\frac{1}{(q^2/\omega - \mu_0^2 \omega)} \left\{ q^2 {\cal C}_1 -
\left[ 1 + (-1)^m m_1 m_3 \right] {\cal C}_2 + {\cal C}_3 \right\} ,
\end{eqnarray}
and
\begin{eqnarray}
f_- (q^2) & = & \frac{1}{q^2} \Biggl\{ \sum_n \frac{{\cal B}_n
(q^2)}{1-q^2/\mu_n^2} - \sum_n \frac{{\cal A}_n (q^2)}{1-q^2/\mu_n^2}
\left( \mu_0^2 - \mu_m^2 \right) \Biggr. \nonumber \\ & &
+ \Biggl. (1-r) \left( q^2 {\cal C}_1 - {\cal C}_3 \right) - \left[
\left[ 1 + (-1)^{m+1} \right] - \left[ 1 + (-1)^m \right] m_1 m_3 r
\right] {\cal C}_2 \Biggr\} ,
\end{eqnarray}
where
\begin{equation}
r \equiv \frac{\mu_0^2 - \mu_m^2}{q^2/\omega - \mu_0^2 \omega} ,
\end{equation}
and the pole residue functions are given by (compare (\ref{fp1}) and
(\ref{fm1}))
\begin{equation}
{\cal A}_n (q^2) = \frac{c_n \left[ 1 + (-1)^{n+m} \right]}{\left(
q^2/\omega - \mu_0^2 \omega \right)} F_{0nm} (\omega) ,
\end{equation}
and
\begin{equation}
{\cal B}_n (q^2) = c_n \left[ 1 + (-1)^{n+m+1} \right] F_{0nm}
(\omega) .
\end{equation}
Finally, the matrix element for the decay {\bf 0} $\to$ {\bf mk},
which unlike Eq.~(\ref{mt1}) holds for all such decays allowed by
kinematics, is given by
\begin{eqnarray} \label{mt2}
{\cal M}_T & = & c_{k} \sqrt{\frac{2}{\pi}} \frac{G_F M_W^2}{(M_W^2 -
q^2)} V_{31} V^*_{45} \nonumber \\ & &
\cdot \left[ 2 (c_V^2 - c_A^2) \left\{ \sum_n \frac{\left[
(-1)^{k} q^2 + (-1)^n \mu_n^2 \right] c_n}{q^2 - \mu_n^2} F_{0nm}
(\omega) \nonumber + (-1)^{k+1} q^2 {\cal C}_1 + m_1 m_3 {\cal C}_2
\right\} \right. \nonumber \\ & &
\hspace{1em} +\frac{q^2}{M_W^2} \left[ (c_V + c_A) (-1)^{k} - (c_V -
c_A) \right] \nonumber \\ & &
\hspace{2em} \cdot \Biggl\{ \sum_n \frac{c_n}{q^2 - \mu_n^2} \left[
(c_V + c_A) (-1)^n \mu_n^2 - (c_V - c_A) q^2 \right] F_{0nm} (\omega)
\Biggr. \nonumber \\ & &
\hspace{3em} \left. \Biggl. + (c_V - c_A) q^2 {\cal C}_1 + (c_V + c_A)
m_1 m_3 {\cal C}_2 \Biggr\} \right] ,
\end{eqnarray}
where as before, $q^2 = \mu_{k}^2$.
\section{Results and Discussion} \label{res}
We computed the weak decay width of a free heavy quark for masses in
the range $M = 2.28$--$15.00$ in units of $g \sqrt{N_c /2\pi}$, using
Eq.~(\ref{partw1}), the $M_W \to \infty$ case. Likewise, we computed
the hadronic width using the same range of heavy quark mass and a
fixed light quark mass $m = 0.56$. The expressions used were
Eqs.~(\ref{mt2}) and (\ref{two}), with definitions (\ref{cn}),
(\ref{om2}), (\ref{f0nm}), (\ref{ct}), and with sums over all channels
${\bf 0} \to {\bf mk}$ satisfying the on-shell condition $\mu_m +
\mu_{k} \leq \mu_0$. Both widths are taken to have the same overall
multiplicative factor $2 G_F^2 |V_{31} V_{45}^*|^2 (c_V^2 - c_A^2)^2
/\pi$.
It is equally possible, in principle, to use (\ref{mt1})
instead of (\ref{mt2}) and backsolve for pole residues ${\cal A}_n$
and ${\cal B}_n$ defined in (\ref{an1})--(\ref{bn1}), or equivalently
the overlap integrals $F_{n0m}$, using the expressions in Appendix B
whenever $\mu_n < \mu_0 + \mu_m$, and obtain the hadronic width in
this way. However, as discussed in Sec.~\ref{exc}, this approach
rapidly leads to uncontrollably large numerical uncertainties.
Nevertheless, we were able to show in some simple cases with only a
few backsolved residues that both methods produce the same numerical
result within a few percent.
It is no more difficult to consider cases other than $M_W \to
\infty$. For example, if one imposes the $V{-}A$ condition
$c_V = c_A = 1/2$, then Eq.~(\ref{mt2}) is just as valid, but now one
uses the partonic width (\ref{partw2}).
Of course, the partonic width is just a single easily
evaluated function of the quark masses. The hadronic width, on the
other hand, requires first the solution of the 't~Hooft equation,
which is accomplished by means of the Multhopp technique described in
Appendix A, repeated for as many resonances as desired. Next, the
matrix elements are obtained by taking sums of overlap integrals over
these wave functions, as in Eqs.~(\ref{f0nm}) and (\ref{ct}). We
compute the first 500 eigenvectors but include only the first 50 in
our sums over resonances. The results change very little when more
resonances are included. Finally, the amplitude for a given exclusive
process is squared and multiplied by phase space to give the hadronic
width.
Clearly, such a procedure uses a significant amount of
computing time, and therefore it is not practical to compute the
hadronic width at points exceptionally finely spaced in $M$. In
practice, we computed each above-threshold amplitude at $M=2.28$ and at
each integer mass from $M=3.00$ to $15.00$. The
significance of the lower bound is that, with the given light quark
mass $m = 0.56$, this ``$b$ quark'' mass gives a ground-state ``$\bar
B$'' meson just above the threshold for producing two ground-state
``$\pi$'' mesons, {\it i.e.}, the smallest value of heavy quark mass
unstable under hadronic weak decay.
We then make the empirical observation that the amplitudes for
exclusive processes ${\cal M}_T ({\bf 0} \to {\bf mk})$ are smooth
functions of $M$. We thus obtain the value of the amplitude at all
intermediate points $M$ by fitting to a fixed power law behavior over
each interval, either by interpolating for values between adjacent
pairs of points where the amplitudes were computed directly, or by
extrapolating from the nearest two points if we are probing values of
$M$ above the process threshold but below the first explicitly
computed point. In fact, we find that the exclusive amplitudes do not
vanish at threshold and are usually\footnote{In the few exceptions to
this rule, the amplitude dips slightly for values of $M$ just above
threshold, but thenceforth assumes monotonically increasing behavior.}
monotonically increasing functions of $M$ (for example, see Fig.~3),
although the rate of this increase is dependent upon the particular
exclusive mode under consideration. The phase space is a known
function of the computed meson masses, and thus the width can be
reliably computed at any value of $M$ in the desired range.
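The interpolation step amounts to the following sketch, in which
\texttt{(M1, A1)} and \texttt{(M2, A2)} are the two nearest directly
computed mass/amplitude points (a hypothetical helper, not the actual
routine; amplitude magnitudes are assumed positive):
\begin{verbatim}
import math

def powerlaw_amp(M, M1, A1, M2, A2):
    """Interpolate or extrapolate the amplitude as A(M) = a * M**b,
    with a and b fixed by the two nearest computed points."""
    b = math.log(A2 / A1) / math.log(M2 / M1)
    a = A1 / M1**b
    return a * M**b
\end{verbatim}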
\epsfxsize 6.0 truein
\INSERTFIG{a11.eps}{3}{Weak decay amplitude ${\cal M}_T$ for the
exclusive decay to the lowest mode ${\bf 0} \to ({\bf m} = 0), ({\bf
k} = 0)$, as a function of heavy quark mass $M$, with light quark mass
$m = 0.56$. The overall factor $2\sqrt{2/\pi} \, G_F V_{31} V_{45}^*
(c_V^2 - c_A^2)$ in the amplitude is suppressed for
convenience.\hfill}
Since the phase space in 1+1 is singular at threshold
(Eq.~(\ref{two})), one would expect a plot of width $\Gamma$ {\it vs.\
}$M$ to be very ill-behaved, with dramatic singularities increasing in
density as $M$ increases. One would expect it to be essential to use
some sort of smearing in $M$ to properly test duality between this
hadronic description of the $\Gamma$ and the smooth partonic result.
In fact, this does not appear to be the case. We refer to Fig.~4,
which is our central result. It is obtained by interpolating each
exclusive decay amplitude, as described above, at intervals of $\Delta
M = 0.01$. The remarkable result is that, after passing the first
couple of thresholds, $\Gamma$ appears to be a nearly smooth function
in $M$, barely sensitive to the phase space singularities as each new
threshold is passed. This result suggests that the effect of
individual higher resonances is quite minimal, as one might expect in
3+1 dimensions. In the 1+1 case, however, the result is all the more
surprising, since now phase space near threshold provides a large
enhancement rather than a suppression.
\epsfxsize 6.0 truein
\INSERTFIG{full_width.eps}{4}{The full decay width for the sum of
exclusive modes in the decay ${\bf 0} \to {\bf mk}$ as a function of
heavy quark mass $M$, with light quark mass $m = 0.56$. The overall
factor $8 G_F^2 |V_{31} V_{45}^*|^2 (c_V^2 - c_A^2)^2 /\pi$ in the
width is suppressed for convenience. The dashed line is the
tree-level parton result, Eq.~(\ref{partw1}).}
It is interesting to watch the width develop as more and more
resonances are added. In Figs.~5$a$--5$d$ we include exclusive
channels with the lowest 1, 3, 5, and 11 thresholds, respectively. We
now see explicitly that the full width over the range in $M$ we
consider is essentially produced by the first 11 channels, indicating
the decreasing influence of individual higher resonances. The small
wave in $\Gamma$ above all the included thresholds is an artifact due
to the interpolation routine between values of $M$ at which the
amplitudes are explicitly computed; its small size indicates the
smoothness of the amplitudes in $M$ and the reliability of the
interpolation.\\
\epsfxsize 6.0 truein
\INSERTFIG{first1.eps}{5$a$}{The full decay width as a function of
heavy quark mass $M$, with light quark mass $m = 0.56$, including only
the exclusive mode with the lowest threshold value (corresponding to
${\bf m} = {\bf k} = 0$). The scale is the same as in Fig.~4.}
\epsfxsize 6.0 truein
\INSERTFIG{first3.eps}{5$b$}{Same as Fig.~5$a$, except now including
the exclusive modes corresponding to the {\rm three} lowest threshold
values.}
\epsfxsize 6.0 truein
\INSERTFIG{first5.eps}{5$c$}{Same as Fig.~5$a$, except now including
the exclusive modes corresponding to the {\rm five} lowest threshold
values.}
\epsfxsize 6.0 truein
\INSERTFIG{first11.eps}{5$d$}{Same as Fig.~5$a$, except now including
the exclusive modes corresponding to the {\rm eleven} lowest threshold
values. Observe that this figure is almost indistinguishable from the
full result, Fig.~4.}
Another remarkable feature of Fig.~4 is the near-perfect
linearity of $\Gamma$ for values $M > 7.0$. Suppressing the
proportionality constant between $\Gamma$ and $M$, Fig.~4 appears to
obey the asymptotic form $\Gamma \approx 0.514 M -0.141$. This is
surprisingly close to what is predicted asymptotically for the partonic
rate: Suppressing the same proportionality constant in
Eq.~(\ref{lim1}), one predicts $\Gamma_{\rm part} =\frac12 M(1+O(1/M^2))$.
One may ask whether the strength of the peaks in Fig.~4 is
large enough that the mass-smeared partonic and hadronic widths
nevertheless disagree. That is, local duality appears remarkably well
satisfied, but perhaps global duality actually fails by concealing a
large portion of $\Gamma(M)$ in the very narrow threshold peaks. In
this scenario, the apparent asymptotic smoothness of Fig.~4 fools us,
for the density of threshold singularities increases with $M$ so
rapidly as to push the curve of hadronic $\Gamma (M)$ out of agreement
with $\Gamma_{\rm part} (M)$ for sufficiently large $M$. We
now argue that this possibility does not appear to be realized, at
least numerically. Let us smear in $M$ over a region of
size~$\Delta$, $1\ll\Delta\ll M$.\footnote{In practice, we use a
normalized Gaussian window function with mean $M$ and width $\Delta$,
although the result should be independent of the
particular form used.} Owing to the approximate linearity of squared
meson masses in the excitation number, there are $\sim M^3\Delta$
thresholds in this region, and the contribution to the smeared rate
from their phase space near the threshold scales as $M^{-5/2}$ for
each.\footnote{This is verified from (\ref{two}) and the observation
that, for $M \gg 1$, $\mu_0 \propto M$.} We observe empirically that
the magnitudes of amplitudes first appearing at a given threshold mass
$M_{\rm thr}$ tend to evolve approximately no faster than $M_{\rm
thr}^{-0.6}$. It follows that the contribution to the smeared $\Gamma
(M)$ from the region of width $\Delta$ scales approximately as
$M^{-0.7}$. For the border region, where one of the mesons $m,k$ is
highly excited and the other is near the ground state, the phase space
is seen to scale as $M^{-2}$, but the number of such states is only
$\sim M\Delta$, so again the area under these peaks contributes little
to $\Gamma(M)$. Finally, the phase space far above a given threshold
scales as $M^{-3}$, which means that, given the density of states for
various eigenvalue indices, amplitudes cannot on the average grow with
$M$ above their respective thresholds faster than $M^{1/2}$ for $k \gg
1$ and $m \gg 1$, $M^{3/2}$ for one of $m,k = O(1)$ and the other $\gg
1$, or $M^2$ for $m$ and $k = O(1)$, or else the linear behavior of
$\Gamma(M)$ will be violated. In fact, the amplitudes we have
computed all obey these constraints. We see that the linear behavior
observed requires a delicate balance of numbers and mass dependences
of amplitudes versus excitation numbers, and we hope to obtain
analytic arguments for this remarkable behavior in the future.
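In sketch form, the smearing used in this argument is the following
(with \texttt{Ms}, \texttt{Gammas} hypothetical arrays holding the
finely sampled width of Fig.~4):
\begin{verbatim}
import numpy as np

def smear(M, Ms, Gammas, Delta):
    """Normalized Gaussian-window average of Gamma over the mass
    grid Ms, centered at M with width Delta."""
    w = np.exp(-0.5 * ((Ms - M) / Delta)**2)
    return np.sum(w * Gammas) / np.sum(w)
\end{verbatim}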
What are we to conclude from this result? The 't Hooft model
is exactly soluble, so it must be the case that the fully-dressed
parton diagrams give results agreeing with the hadronic calculation;
indeed, this is how the hadronic problem was solved in the first
place. The partonic width computed in (\ref{partw1}) represents only
the Born term in an expansion in strong coupling $g$, so the addition
of gluon loops is apparently necessary to bring the two results into
agreement. The small discrepancy between the curves may have this
origin, or it may simply be a limitation of the numerical accuracy of
the calculation. However, it is interesting to note that the two
curves appear to differ asymptotically by a constant, which for plots
linear in $M$ is a $1/M$ correction. Therefore, we suggest that this
effect is genuine and not a numerical artifact. In Fig.~6 we
superimpose on the hadronic width of Fig.~4 the curve $\Gamma_{\rm
part}(M)\cdot (1+0.15/M)$, and see that the fit is outstanding. From this
result, we learn that local duality for this system is violated badly
only for the first few resonances and very close to thresholds of
higher resonances, and that $1/M$ effects appear to be only at the few
percent level for $M>7$. It would be very interesting to see
explicitly what happens to the partonic inclusive width at the one- or
two-loop level.
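The coefficient of the $1/M$ correction quoted above can be obtained
from a simple least-squares fit over the asymptotic region, as in this
sketch (\texttt{Ms}, \texttt{G\_hadr}, \texttt{G\_part} are hypothetical
arrays of mass points and the two widths):
\begin{verbatim}
import numpy as np

def fit_one_over_M(Ms, G_hadr, G_part, Mmin=7.0):
    """Fit G_hadr = G_part * (1 + c/M) for M > Mmin; linear least
    squares in the variable x = 1/M gives c."""
    sel = Ms > Mmin
    x = 1.0 / Ms[sel]
    y = G_hadr[sel] / G_part[sel] - 1.0
    return np.sum(x * y) / np.sum(x * x)
\end{verbatim}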
\epsfxsize 6.0 truein
\INSERTFIG{width1M.eps}{6}{The full decay width of Fig.~4 compared to
the tree-level parton result of Eq.~(\ref{partw1}) corrected by a
$1/M$ effect: $\Gamma_{\rm part}(M)\cdot(1+0.15/M)$.}
One natural idea of how to improve the Born result is to
replace the bare quark masses with the renormalized values. This
would not sum all gluon corrections, but it would include an important
subclass of them. Unfortunately, due to the result (\ref{mren}),
masses below 1.0 (such as that of our light antiquark) have {\em
imaginary\/} renormalized values, and then our whole interpretation of
phase space, essential for the calculation of the width, becomes
ambiguous.
\section{Conclusions} \label{conc}
We have calculated the nonleptonic decay width of a
heavy-light meson in the context of the 't~Hooft model as a function
of the bare heavy quark mass, both for the Born term of the free
partonic decay (which we called the ``partonic width'') and the full
sum of allowed hadronic decays (the ``hadronic width''). We found
that these two quantities approximately agree at leading order in $M$,
with the hadronic width being slightly larger. Both quantities are
observed to grow linearly and smoothly for large $M$, despite the
effects of numerous phase space threshold singularities in the
hadronic case. The slight discrepancy between hadronic and partonic
widths is well-fit by a $1/M$ correction, $\Gamma_{\rm
hadr}(M)\approx\Gamma_{\rm part}(M)\cdot(1+0.15/M)$.
Assuming that the small discrepancy between the partonic and
hadronic results is genuine (rather than a numerical artifact) leads
one to conclude that nonleptonic heavy-light meson decays in 1+1
dimensions cannot be described in terms of an OPE that lacks $1/M$
corrections, and it naturally leads one to believe that the same
conclusion is true in 3+1. Since the lowest order of the OPE is
simply the naive free quark picture, this result also has obvious
implications for the application of quark models in such decays.
Another incisive test of quark-hadron duality in 1+1 is whether
annihilation diagrams, in which the valence quarks in the decaying
meson annihilate through a weak current, are suppressed compared to
spectator tree diagrams (Fig.~2); these studies are well
underway~\cite{GLII}, and results will be forthcoming shortly.
A number of unanswered questions not addressed by this work
include the effects of loop corrections to the Born amplitude free
quark decay, the dependence of decay widths on the light quark mass,
the effects of including finite meson strong decay widths (which are
$O(1/N_c)$), the effects of identical final state quarks or mesons,
multiparticle final states (also suppressed by powers of $N_c$), and
so on. While ``two-dimensional phenomenology'' cannot be used as a
quantitative substitute for the standard four-dimensional variety, it
clearly indicates the limitations of the standard lore.
{\it Note Added}. An interesting recent work by
Blok~\cite{blok} suggests that global quark-hadron duality at high
energies in the 't~Hooft model with massless quarks may be achieved by
including smearing through the $1/N_c$-suppressed widths of
resonances. Our calculation, on the other hand, does not include
finite-width effects but nevertheless achieves an effectively smeared
result, even at relatively low $M$, which supports the claim of
duality at leading order.
\vskip1.2cm
{\it Acknowledgments}
\hfil\break
This work is supported by the Department of Energy under contract
DOE-FG03-97ER40506.
\section{Introduction}
Understanding planetary systems formation and evolution has become one
of the biggest challenges of astronomy, since the imaging of a debris
disk around $\beta$~Pictoris in the 1980s (\cite{smith84}) and the
discovery of the first exoplanet around the solar-like star 51~Pegasi
during the 1990s (\cite{mayor95}). While about 20 debris disks --
disks containing dust which is not primordial but produced by
collisions among larger rocky bodies -- have been resolved at optical
wavelengths today, $\beta$~Pic, an A5V star at a distance of $19.3\pm0.2$~pc
(\cite{crifo97}), remains the best studied young ($12^{+8}_{-4}$~Myr;
\cite{zuckerman01}) system, with an impressive amount of indirect
signs pointing toward the presence of planets.
The disk shows a relative inner void of matter inside
$50$~astronomical units ({\sc au}). Lecavelier des Etangs et al. (1995)
presented intriguing light variations possibly due to disk
inhomogeneities produced by a Jupiter-sized planet at $>6$~{\sc au}. Several
asymmetries have been identified in the disk at optical
(\cite{kalas95}, \cite{heap00}) and infrared (\cite{telesco05})
wavelengths, as well as a warp at $\sim 50$~{\sc au}\ (\cite{mouillet97};
\cite{heap00}). The structure is well reproduced by the deformation
induced on colliding planetesimals by a giant planet on a slightly
inclined orbit within 50~{\sc au}\ from the star (\cite{krist96},
\cite{mouillet97}, \cite{gorka00}, \cite{augereau01} and
\cite{thebault01}). Silicate dust is observed as circumstellar rings
at 6, 16, and 30~{\sc au}\ from the star (\cite{okamoto04}), which could be
explained by the presence of a 2--5-Jovian-mass (M$_{\rm Jup}$) planet at
$\sim 10$~{\sc au}\ from the star (\cite{freistetter07}). Evaporating star
grazing comets have been evidenced (see, e.g., \cite{lagrange00} for a
review) and dynamical simulations showed that the gravitational
perturbation of at least one giant planet at $\sim 10$~{\sc au}\ can
account for the observed rate of evaporating bodies
(\cite{beust00}). However, no planets have been detected so far,
either through direct imaging or through radial velocity studies, due
to the instrumental limitations of both techniques
(\cite{galland06}). In particular, the high spatial resolution imaging
detection capabilities were so far limited to distances typically $\ga
15$--$20$~{\sc au}.

We used the NAOS-CONICA instrument (NaCo), installed
on the Very Large Telescope UT4 (Yepun) set in Paranal (Chile), to
benefit from both the high image quality provided by the Nasmyth
Adaptive Optics System (NAOS; \cite{rousset03}) at infrared
wavelengths and the good dynamics offered by the Near-Infrared Imager
and Spectrograph (CONICA; \cite{lenzen03}) detector, in order to study
the immediate circumstellar environment of $\beta$~Pic.
\section{Observations and data reduction procedures}
\subsection{Observations}
\label{sec:obs}
$L'$-band images of $\beta$~Pic\ ($V=3.8$, $L'=3.5$) were obtained between
2003 November 10 and 2003 November 17 with NaCo. The visible wavefront
sensor was used with the $14\times14$ lenslet array, together with the
visible dichroic. We used the CONICA L27 camera, which provides a
pixel scale of $\sim 27$~mas. Saturated images of $\beta$~Pic\ were recorded,
with detector integration times (DITs) of $0.175$~s and number of
detector integrations (NDIT) of $100$ or $200$. Every two
exposures\footnote{An exposure is completed after $\rm DIT \times
NDIT$.}, spatial offsets were applied in order to allow sky and
instrumental background removal. Non-saturated images were also
recorded to get images of the stellar point spread function (PSF) as
well as a photometric calibration. In that case, we added the Long
Neutral Density filter (transmission $\sim 0.018$) to the CONICA
optical path and recorded images with a DIT of $0.4$~s.
In addition, the binary IDS~22141S3712 (separation $\rho = 6\,630 \pm
10$~mas, position angle $\rm PA = 302.06 \pm 0.07\degr$;
\cite{vandessel93}) was observed on November~11 as an astrometric
calibrator. A mean plate scale of $27.105 \pm 0.041$~mas per pixel and a true
North orientation of $-0.10 \pm 0.07\degr$ were derived and used to
calibrate all $\beta$~Pic\ images. Saturated and non saturated exposures were
taken on the reference star \object{HR~2435} (A0II, $V=4.4$). The
purpose is to correct for the star halo (the wings of the PSF) present
in the saturated exposures. To optimize the removal of any fixed
speckle, $\beta$~Pic\ and HR~2435 were observed, as much as possible, at close
parallactic angles. Finally, twilight flat fields were also recorded
in $L'$ band.
The observing conditions varied from exceptional (coherent
energy\footnote{Since it is not possible to measure the Strehl ratio
on our saturated data, we can only use the information provided by
NAOS to assess the image quality. The coherent energies are those
measured by the system in $K$ band.} $EC > 70\%$, coherent time
$\tau_0 > 20$~ms) to reasonable ($EC \sim 50\%$, $\tau_0$ of a few
ms), and sometimes poor ($EC \sim 20$--$35\%$, $\tau_0 \sim
1$--$2$~ms), over the run; the data quality varies accordingly. The
best data set on $\beta$~Pic\ was obtained on November~10. In the following,
we will describe the data reduction and analysis of three
sets of data: (A) the very best set obtained on November 10; (B) a set
of data with slightly poorer image quality obtained on the same night,
with a shorter total exposure time, and (C) a set of data with poorer
image quality obtained on November 13. However, Set~C is
representative of the best data obtained in the nights following
November 10. The instrumental configurations and the status of image
quality for these three data sets can be found in Table~\ref{stats}.
\begin{table*}
\caption{Observing log and companion position and flux relative to $\beta$~Pic\ for the three data sets (A, B, and C).}
\label{stats}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{l l l l l l l l l l l l l}
\hline \hline
\multicolumn{10}{c}{} & \multicolumn{3}{c}{\underline{\hspace{1.5cm}COMPANION\hspace{1.5cm}}} \\
Set & Star & Date & DIT & NDIT & $t_\mathrm{exp} $ & $\pi$\footnote{Range of
parallactic angles at the start/end of the observation.} & $\sec z$ & $\langle EC \rangle$
\footnote{The average coherent energy as estimated on-line by AO.} & $\langle\tau_0\rangle$\footnote{The average coherence time as estimated on-line by AO.} & Separation & PA & $\Delta L'$ \\
 & & & (s) & & (s) & ($\degr$) & & (\%) & (ms) & (mas) & ($\degr$) & (mag) \\
\hline
A & $\beta$~Pic & 2003-11-10 & 0.175 & 200 & 665 & $-18$/$-9$ & 1.125/1.119 & 62.71 & 11.47 & $411\pm8$ & $31.8\pm1.3$ & $7.7\pm0.3$\\
& HR~2435 & 2003-11-10 & 0.175 & 200 & 630 & $-20$/$-12$ & 1.148/1.140 & 63.06 & 10.65 & & & \\
\hline
B & $\beta$~Pic & 2003-11-10 & 0.175 & 100 & 420 & $-27$/$-19$ & 1.138/1.127 & 57.94 & 8.08 & $411\pm8$ & $31.5\pm1.3$ & $7.9\pm0.4$\\
& HR~2435 & 2003-11-10 & 0.175 & 100 & 420 & $-28$/$-20$ & 1.161/1.149 & 60.23 & 9.70 & & & \\
\hline
C & $\beta$~Pic & 2003-11-13 & 0.175 & 200 & 385 & $-24$/$-19$ & 1.161/1.149 & 46.94 & 4.50 & $401\pm8$ & $32.1\pm1.4$ & $7.6\pm0.4$ \\
& HR~2435 & 2003-11-13 & 0.175 & 200 & 350 & $-22$/$-17$ & 1.150/1.144 & 44.99 & 2.72 & & & \\
\hline
\end{tabular}
\begin{list}{}{}
\item[$^3$] Range of parallactic angles at the start/end of the observation.
\item[$^4$] The average coherent energy as estimated on-line by AO.
\item[$^5$] The average coherence time as estimated on-line by AO.
\end{list}
\end{table*}
\subsection{Data processing}
The first step was the cosmetic correction (bad-pixels, flat-fielding,
background subtraction) and recentering of individual offset positions
of $\beta$~Pic\ and HR\,2435 observations. A first method was to directly
apply standard routines from the \texttt{eclipse} library
(\cite{devillard97}), using a classical cross-correlation algorithm.
Special care was taken to estimate the background at each given
position by averaging the images obtained at the previous and subsequent
offset positions in the observing sequence. Alternatively, a second
method was used with an improved software dedicated to adaptive optics
(AO) image processing (see \cite{gratadour05}). Following a different
approach for the background subtraction, images at individual offset
positions were recentered using a maximum likelihood algorithm at the
level of a tenth of a pixel or better. The same overall process was
used for both the object and the reference. Consistent results are
found in terms of recentering precision and background subtraction (at
less than the background noise of 0.9~ADU).
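As a minimal sketch of the cross-correlation recentering (integer-pixel
version only; the actual processing reaches a tenth of a pixel or
better), one may write:
\begin{verbatim}
import numpy as np

def register(img, ref):
    """Shift img to align with ref, using the peak of the FFT-based
    cross-correlation (whole pixels only)."""
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > img.shape[0] // 2: dy -= img.shape[0]  # wrap to [-N/2, N/2)
    if dx > img.shape[1] // 2: dx -= img.shape[1]
    return np.roll(img, (-dy, -dx), axis=(0, 1))
\end{verbatim}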
\begin{figure}[t]
\centering
\includegraphics[width=\hsize]{fig1.ps}
\caption{$\beta$~Pic\ and HR\,2435 recentered and saturated $L'$ images (top
left and top right, respectively) in data set A. Below are the
divided (bottom left) and subtracted (bottom right) images. North is
up and East is to the left. A candidate companion is clearly
detected at a PA of $\simeq 32\degr$, i.e., along the NE side of the
disk, at a separation of about 0\farcs41 from the star.}
\label{sous_ndit200_nov10}
\end{figure}
As a second step, three parallel approaches were followed to study
the close environment of $\beta$~Pic:
\begin{itemize}
\item The first approach consists of removing the PSF wings from the
saturated images of $\beta$~Pic\ by a simple minimization of the residuals
(see the sketch after this list). To do so, we first divided the $\beta$~Pic\
images by those of HR~2435 obtained under similar conditions (same DIT,
NDIT, and pupil position). We then computed the scaling factor to be
applied to the reference in order to scale its flux to that of $\beta$~Pic,
and subtracted the scaled reference images from the $\beta$~Pic\ ones. The
recentering and scaling processes are repeated to minimize the
residuals, with precisions of a fraction of a pixel and 5\%,
respectively. Tests were also performed using rotated images of $\beta$~Pic\
for the PSF subtraction; however, the fact that the PSF halo is not
centro-symmetric and that the static aberrations do not overlap worsens
the subtraction process.
\item The second approach follows the same subtraction sequence but
applies the maximum likelihood algorithm of \cite{gratadour05}
(see Fig.~\ref{sous_ndit200_nov10}). The algorithm is used here to
register the reference star images with those of $\beta$~Pic\ and therefore
confirms the previous estimate. Similar precisions are achieved.
\item The last approach uses the MISTRAL deconvolution algorithm
(\cite{mugnier04}), based on a maximum a posteriori scheme.
\texttt{MISTRAL}, however, relies on a strict convolution relation
between the image and the reference, which does not hold for our
saturated data. A first step is therefore to perform an a posteriori
correction of the saturated parts of the image and reference. This is
done using a simulated Airy pattern: the top of the Airy pattern
replaces the saturated pixels, and the flux level is adjusted using the
first Airy rings. Such a correction is possible because of the very
good Strehl ratio of the images. While this a posteriori correction
does not significantly affect the restoration of the object structures,
it can obviously degrade the relative photometry. Hence, the
deconvolution process is an alternative approach (compared to reference
subtraction and division) that is less sensitive to the image versus
reference centering. It provides the best measurements of the relative
position, with a precision of 0.3 pixels, but remains uncertain for the
relative photometry due to the use of saturated images.
\end{itemize}
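For reference, the scaling step of the first approach can be sketched
as follows; \texttt{img} and \texttt{ref} are recentered saturated
frames of $\beta$~Pic\ and HR~2435, and \texttt{mask} is a hypothetical
boolean array selecting the halo region used for the fit (an
illustration only, not the pipeline itself):
\begin{verbatim}
import numpy as np

def subtract_psf(img, ref, mask):
    """Least-squares flux scaling of the reference halo to the target,
    followed by subtraction: s minimizes |img - s*ref|^2 over mask."""
    s = np.sum(img[mask] * ref[mask]) / np.sum(ref[mask]**2)
    return img - s * ref
\end{verbatim}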
\section{A candidate companion in the $\beta$~Pic\ disk?}
\subsection{Results from the highest-quality data (set A)}
All three independent approaches confirm the detection of the companion
at the same location. The resulting images, using the
maximum likelihood algorithm for recentering, are reported in
Figure~\ref{sous_ndit200_nov10}. The point-like signal of the companion
candidate (hereafter, the CC) is clearly visible in the divided and
subtracted images; the maximum of the signal is about $190$~ADU. Different techniques
(variable aperture photometry, 2D-gaussian fitting, PSF-fitting) were
used to extract the CC flux, giving consistent results. As the
reference recentering and rescaling actually dominate the flux
measurement precision, flux uncertainties were derived considering
respective variations of 0.3 pixel and 5\% in the subtraction
process. We obtain a contrast of $\Delta L' = 7.7\pm0.3$ between the
CC and $\beta$~Pic. Using deconvolution, we derive a separation of
$411\pm8$~mas and a PA of $31.8 \pm 1.3\degr$ relative to the primary,
i.e., along the NE side of the disk.
\begin{figure}
\centering
\includegraphics[width=\hsize]{fig2.ps}
\caption{Top left: simulated planets at PA of $150\degr$, $210\degr$
and $330\degr$. Top right: composite image of $\beta$~Pic\ plus the fake
planets. Bottom left: division of the composite image by the
saturated image of HR~2435. Bottom right: scaled subtraction of the
composite image by the saturated image of HR~2435. Note that even a
slight (0.3 pixel) relative offset between $\beta$~Pic\ and HR~2435 impacts
the resulting shape of the fake planets as much as that of the candidate
one. In particular, triangular shapes can be observed, due to the
proximity of the nearby inner Airy ring.}
\label{test_planet_set1}
\end{figure}
The use of different methods excludes artefacts created during the
reduction process. In particular, the result of the deconvolution
rules out any effect that could be introduced by imperfect estimation
of the offset between $\beta$~Pic\ and HR~2435 saturated images due, for
instance, to a possible contribution of the disk. We nevertheless checked
that the disk signal is very faint and not significantly
asymmetric. To rule out detector effects, we looked for possible
remanence and electronic ghosts that could occur because our images
are saturated. Inspection of individual images excludes any contamination
by these two effects, which rapidly disappear after a few
frames. Artefacts due to the very good but still imperfect AO
correction are nonetheless possible. However, aberrations due to a
modulation of the deformable mirror would generally lead to the
presence of either symmetrical or anti-symmetrical patterns. Static
aberrations should be present equivalently around $\beta$~Pic\ and HR~2435, as
both stars were observed with similar pupil configurations. We have
then tested the possible impact of an imperfect removal of static
speckles due to the variation of the parallactic angle during the
observations of $\beta$~Pic\ and HR~2435 (up to $8\degr$). We processed
individual pairs of data of $\beta$~Pic\ and HR~2435 taken with parallactic
angles equal within $\pm 0.4\degr$ and added up the individual
subtracted images. The CC is still present and appears slightly
sharper (but still compatible with the instrumental resolution). In
addition, since the same signal is also observed a few nights apart
(see below), we conclude that quasi-static aberrations are unlikely.
To further assess the reality of the detection and test the CC
photometry, we added three `fake planets' at similar separations but
different PA ($150\degr$, $210\degr$, and $330\degr$) to the
recentered and stacked image of $\beta$~Pic. The fake planet images were
generated by scaling and shifting an unsaturated image of $\beta$~Pic\ taken
during the same night. To match the level of the observed signal, the
magnitudes of the fake planets are scaled to the measured flux ratio
on set A. We then subtracted the scaled image of HR~2435 from
the composite image. The result is shown in
Figure~\ref{test_planet_set1}, from which it is clear that the fake
planets produce features similar to the observed signal, supporting
our contrast estimate. We therefore derive an apparent magnitude of
$L'=11.2\pm0.3$ for the CC.
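The fake-planet injection can be sketched as below, assuming
\texttt{psf} is the recentered unsaturated exposure normalized to the
stellar flux (\texttt{scipy.ndimage.shift} performs the sub-pixel
displacement; the position-angle convention assumed is North up, East
left):
\begin{verbatim}
import numpy as np
from scipy.ndimage import shift

def add_fake_planet(img, psf, flux_ratio, sep_pix, pa_deg):
    """Inject a scaled, shifted copy of the stellar PSF at the given
    separation (pixels) and position angle (degrees East of North)."""
    pa = np.deg2rad(pa_deg)
    dy, dx = sep_pix * np.cos(pa), -sep_pix * np.sin(pa)
    return img + flux_ratio * shift(psf, (dy, dx), order=3)
\end{verbatim}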
\subsection{Results from lower quality data (sets B and C)}
All other sets of data were processed as before. It quickly appeared
that the detection becomes more marginal when the exposure time and/or
the image quality decrease. For the best remaining sets in terms of
atmospheric conditions and relative parallactic angle between $\beta$~Pic\ and
HR~2435 (described in Section~\ref{sec:obs} and Table~\ref{stats}),
the results are shown in Figure~\ref{sous_allsets_plussimus}. The same
point-like signal is seen in both cases, with however a lower
signal-to-noise ratio. This is due to the lower exposure times and the
(slightly) poorer image qualities. This is illustrated in
Figure~\ref{sous_allsets_plussimus}, where fake planets with magnitudes
equal to those of the fake planets used for set A were also added to
data sets B and C. Like the CC signal, the fake planet signals are
much less detectable under these conditions; hence the results
from our three data sets are consistent.
\begin{figure}
\centering
\includegraphics[width=\hsize]{fig3.ps}
\caption{Top: Scaled subtraction of the composite image by the
saturated image of HR~2435 for sets A, B and C. Bottom: Simulated
planets at PA of $150\degr$, $210\degr$ and $330\degr$. Note that even
a slight (0.3 pixel) relative offset between $\beta$~Pic\ and HR~2435 impacts
the resulting shape of the fake planets as much as that of the candidate
one. In particular, triangular shapes can be observed, due to the
proximity of the nearby inner Airy ring.}
\label{sous_allsets_plussimus}
\end{figure}
\section{Bound companion or background object?}
With these data alone, it is not excluded that the CC could be a
foreground or background object. To test this hypothesis definitively,
second-epoch measurements are needed. Using the known $\beta$~Pic\
proper and parallactic motions, we have considered the position of the
CC assuming it would be a stationary contaminant. It would be closer
than $0\farcs2$ to $\beta$~Pic\ in 2008 and hence not detectable with current
techniques. In past years, $\beta$~Pic\ has been monitored by numerous
programs of the \emph{Hubble Space Telescope}, including observations
using the Advanced Camera for Surveys (Golimowski et al.\ 2006 and
Kalas et al.\ 2005), the Near-Infrared Camera and Multi-Object
Spectrometer (Brown et al.\ 1999) and the Wide Field Planetary Cameras
(Kalas et al.\ 2000). The lack of spatial resolution or the size of
the coronagraphic mask prevented these observations from providing any
reliable constraints on point sources close to the star.
To our knowledge, the highest angular resolution and dynamical data
available are the coronagraphic data taken in 1997 September with STIS
(\cite{heap00}). These authors were able to probe the close stellar
environment down to 0\farcs75 from $\beta$~Pic. They did not report any CC in
their images at the expected location of 0\farcs9 North and 0\farcs25
East to $\beta$~Pic. Based on STIS detection limits and the brightness
derived by Heap et al. (2000), all objects brighter than $V = 17$ would
have been detected. Red giants or supergiants can be ruled out, as
their implied distances would be unrealistically large (typically $\ga 10\,000$~pc). If
we consider a foreground or background stellar field contaminant, it
would therefore have a very red ($V-L\geq5.8$) color, i.e., a spectral type
later than mid-M. Based on a number density of L dwarfs of
$1.9\times10^{-3}$~pc$^{-3}$ given by Burgasser (2001) and Cruz et
al. (2003), the probability of finding a foreground or background
field L dwarf in a region of 500~mas radius around $\beta$~Pic\ and with
$L'=11.2$ is about~$10^{-10}$. Without any assumption on the
contaminant spectral type, one can use galactic population model
outputs to estimate the probability of finding any ($L'\leq12$) galactic
source at 500~mas from $\beta$~Pic. We find a low probability of $6
\times10^{-5}$. Lastly, we cannot strictly rule out contamination by
an extragalactic source, such as a high-redshift quasar. In
conclusion, a contamination appears very unlikely. In addition, the
fact that the candidate companion falls into the disk strongly favors
that it is bound to the star.
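The L-dwarf probability follows from a simple cone-volume estimate, as
in the sketch below; the absolute magnitude $M_{L'}\approx11$ assumed
for a field L dwarf is a hypothetical round value, and the result
reproduces the quoted probability to within an order of magnitude:
\begin{verbatim}
import math

def n_expected(density_pc3=1.9e-3, radius_mas=500.0,
               app_mag=11.2, abs_mag=11.0):
    """Expected number of field L dwarfs brighter than app_mag in a
    cone of angular radius radius_mas: N = density * (Omega/3) * d^3."""
    d_max = 10.0**((app_mag - abs_mag) / 5.0 + 1.0)     # pc
    theta = math.radians(radius_mas / 1000.0 / 3600.0)  # rad
    return density_pc3 * math.pi * theta**2 / 3.0 * d_max**3
\end{verbatim}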
\section{Implications on the understanding of the $\beta$~Pic\ system}
With an $L'$ magnitude of $11.2\pm0.3$, and assuming a
distance of $19.3\pm0.2$~pc and an age of $12^{+8}_{-4}$~Myr
(\cite{zuckerman01}), the mass of the CC is estimated to be
$9^{+3}_{-2}$~M$_{\rm Jup}$\ using COND models (\cite{baraffe03}) and
$8^{+4}_{-2}$~M$_{\rm Jup}$\ using DUSTY models (\cite{chabrier00}). The
planet should still be in the phase of cooling: Dusty and COND models
predict effective temperatures of 1\,400 and 1\,600~K,
respectively. The validity of these models in the case of young
planets formed by core accretion has recently been questioned
(\cite{marley08}) on the basis that the accretion shock might impact
the planet initial internal entropy and its subsequent early thermal
evolution. These authors claim that young giant planets could be
significantly cooler and fainter than predicted so far.
However, the treatment of the accretion shock is still a matter of
debate, and the impact of the initial disk mass also remains to be
studied. The latter may be important here, as $\beta$~Pic\ is
significantly more massive than the Sun. In the present case, the
\cite{marley08} model predicts a luminosity ten times fainter than what
we observe for an $8$~M$_{\rm Jup}$\ planet at the age of $\beta$~Pic.
However, we can already note that a companion at a true
separation of $\sim 8$~{\sc au}\ could not be significantly more massive
than 10--20~M$_{\rm Jup}$\ since otherwise it would have been detected through
radial velocity measurements, as shown by a detailed analysis that
will be presented in a forthcoming paper (Desort et al., in
preparation).
If the observed projected separation happens to be the physical one or
close to the physical one, then the companion explains three very
intriguing and so far unique characteristics of the $\beta$~Pic\ system: the
warp in the inner disk, the inner belts and the falling evaporating
bodies (\cite{freistetter07}). Mouillet et al.\ (1997) and Heap et
al.\ (2000) showed that the warp of the disk constrains the (mass
$M_p$, semi-major axis $a$) domain of the planet as follows:
\begin{equation}
\log (M_p / M_\star) + 2 \log a + \log t \approx 6.7,
\end{equation}
where $M_\star = 1.8$~M$_{\sun}$\ (\cite{crifo97}) and $t$ is the age of
the star. Given an age $t = 12$~Myr, masses $M_p = 6$ and $13$~M$_{\rm Jup}$\
give $a = 9.7$ and $7.6$~{\sc au}, respectively, nicely bracketing the
measured projected separation. This further strengthens the likelihood
that the observed CC \emph{is} the planet causing the warp.
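As a numerical check, one can verify that the quoted (mass, semi-major
axis) pairs satisfy the warp relation to within roughly $0.15$~dex of
the approximate constant (masses in solar units, $a$ in {\sc au}, $t$
in years):
\begin{verbatim}
import math

M_star, t, M_jup = 1.8, 12e6, 9.54e-4     # M_sun, yr, M_sun
for M_p, a in [(6 * M_jup, 9.7), (13 * M_jup, 7.6)]:
    lhs = math.log10(M_p/M_star) + 2*math.log10(a) + math.log10(t)
    print(round(lhs, 2))                  # 6.56 and 6.68, vs ~6.7
\end{verbatim}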
At the time of the submission of this letter, this companion was to
our knowledge the first one possibly detected around an A-type
main-sequence star. Recent results published since then by Marois et
al. (2008) and Kalas et al. (2008) also show the presence of planets at
distances of 24 to 118~{\sc au}\ from two A-type stars. This companion
could nevertheless be the first extrasolar planet ever imaged so close
to its parent star: 8~{\sc au}. In particular, it would be located well
inside the orbits of the outer planets of the Solar System. Its
closeness and location inside the $\beta$~Pic\ disk suggest a formation
process by core accretion or disk instabilities rather than the
binary-like formation mechanisms proposed for the companions to 2M1207
and AB\,Pic (Chauvin et al. 2005a, b). Further direct imaging
observations should allow us to constrain the planet's orbit and hence
measure its dynamical mass. This would in turn allow tests of the
models of planet formation, which are badly needed as the evolutionary
models are not calibrated with real data in these ranges of masses and
ages. Finally, we note that unfortunately, if the projected separation
is the physical one, the planet's orbital period is about 16~years, and
the planet should at the present time be much closer to the star and
hence undetectable with NaCo.
\begin{acknowledgements}
We acknowledge financial support from the French Programme National de
Plan\'etologie (PNP, INSU), as well as from the French Agence
Nationale pour la Recherche (ANR; project NT05-4\_44463). These results have made use of the
SIMBAD database, operated at CDS, Strasbourg, France. We also thank
the ESO staff for its help during the NaCo observations. We thank
J.-C.~Augereau for useful discussions on the $\beta$~Pic disk.
AML thanks also P.~Rubini for his
help on the lay out of the paper. Finally, we thank the referee for
his rapid and precious comments on the paper.
\end{acknowledgements}
\section{Introduction}
\label{section:intro}
The production of heavy quarkonium at hadron colliders provides particular challenges and opportunities for insight into the theory of Quantum Chromodynamics (QCD), as its mechanisms
of production operate at the boundary of the perturbative and non-perturbative regimes. Despite being among the most studied of the bound-quark systems, there is still no clear understanding of the mechanisms in the production of quarkonium states like the $J/\psi$ that can consistently explain both the production
cross-section and spin-alignment measurements in $e^+e^-$, heavy-ion and hadron-hadron collisions
(see review articles\,\cite{reviews} and references therein).
Data obtained by the Large Hadron Collider (LHC) collaborations can help to test existing theoretical models of both quarkonium production and $b$-production in a new energy regime, at higher transverse momenta and in wider rapidity ranges than have previously been studied.
Furthermore, quarkonium production in proton-proton collisions plays a key role as a reference point to understand heavy ion collisions and to understand the interplay between production and suppression mechanisms in such collisions\,\cite{jpsiHI}.
This paper presents a measurement of the inclusive $J/\psi$ production cross-section and the production fraction $f_B$ of non-prompt $J/\psi$ (produced via the decay of a $B$-hadron)
to inclusively-produced $J/\psi$ (hereafter referred to as the {\em non-prompt fraction}):
\begin{equation}
\label{eqn:fraction}
f_B \equiv \frac{\sigma(pp\to B+X\to J/\psi X^{\prime})}{\sigma(pp\xrightarrow[]{\textrm{Inclusive}} J/\psi X^{\prime\prime})}
\end{equation}
in the decay channel $J/\psi\to\mu^+\mu^-$ as a function of both $J/\psi$
transverse momentum and rapidity in $pp$ collisions at the LHC at a centre-of-mass energy of 7~TeV and with an integrated luminosity of up to 2.3~pb$^{-1}$.
The fraction has the advantage that acceptances and many efficiencies are the same for the numerator and denominator, and so systematic effects are reduced.
The results of these analyses are compared to those made by the CMS Collaboration~\cite{CMS} with 314~nb${}^{-1}$ of integrated luminosity and those
from the CDF Collaboration\,\cite{CDF} where appropriate.
From these measurements, the prompt $J/\psi$ production cross-section ($\sigma(pp \to J/\psi X^{\prime})$, produced directly from the proton-proton collisions
or from decays of heavier charmonium states like the $\chi_c$ or $\psi(2S)$), and the non-prompt ($\sigma(pp\to B+X\to J/\psi X^{\prime})$) $J/\psi$ production cross-section, are extracted.
These results are compared to corresponding predictions made by the Colour Evaporation Model~\cite{CEM_RHIC}, Fixed-Order Next-to-Leading Log (FONLL)~\cite{Cacciari}
and Colour Singlet NNLO$^\star$ calculations~\cite{NNLO_upsilon,onia_prod}.
Further details of the results of measurements presented here may be found in reference~\cite{HEPDATA}.
\hyphenation{spectrometer spectro-meter}
\section{The ATLAS Detector and Data Processing}
\label{section:Atlasdata}
In this section, the collection and processing of the data used in the paper are outlined. This involves a description of the most
relevant subsystems of the ATLAS detector\,\cite{ATLAS}: the trigger system,
the muon system and the inner tracking detector.
Also specified are the triggers used and the offline data processing, in particular the selection of candidate muons.
\subsection{The ATLAS detector}
\label{section:Atlasdet}
The ATLAS detector covers almost the full solid angle around the collision point with layers of
tracking detectors, calorimeters and muon chambers. For the measurements presented in this paper, the trigger system, the inner detector tracking devices (ID) and the muon spectrometer (MS) are of particular importance.
The ID covers the pseudorapidity range $|\eta |<$ 2.5. It consists of a silicon pixel detector, a silicon strip detector (SCT) and a transition radiation tracker (TRT). These detectors are located at a radial distance from the beam axis between 50.5\,mm
and 1066\,mm and are immersed in a 2 T solenoidal magnetic field. The ID barrel consists of 3 pixel layers, 4 layers of double-sided silicon strip modules and 73 layers of TRT straws. The ID end-cap has $2\times 3$ pixel layers, $2\times 9$ layers of silicon strips and $2\times 160$ layers of TRT straws.
The MS is located inside a toroidal magnetic field which provides 2.5 Tm of bending power in the barrel and 5 Tm in the end-caps. It consists of four detectors using different technologies and
is divided into a barrel region ($|\eta|<1.05$) and two end-cap regions ($1.05<|\eta|<2.7$).
Precise muon measurements are made using monitored drift tube chambers (MDT) in both the barrel and end-cap sections and using Cathode Strip Chambers (CSC) in the end-caps; fast triggers are obtained from resistive
plate chambers (RPC) in the barrel and thin gap chambers (TGC) in the end-caps. The chambers are arranged in three layers, so high-\pt\ particles leave at least three measurement points with a lever arm of several metres.
\subsection{Trigger}
\label{section:AtlasTrig}
The ATLAS detector has a three-level trigger system: level 1 (L1), level 2 (L2) and the event filter (EF). For the measurements presented here, the trigger relies on the Minimum Bias Trigger Scintillators (MBTS) and the muon trigger chambers.
The MBTS are mounted in front of each liquid argon endcap calorimeter cryostat at $z=\pm3.56$ m and are segmented into eight sectors in azimuth and two rings in pseudorapidity ($2.09<|\eta|< 2.82$ and $2.82<|\eta|<3.84$). The MBTS trigger is configured to require two hits above threshold from either side of the detector. A dedicated muon trigger at the EF level is required to confirm the candidate events chosen for these measurements. This is initiated by the MBTS L1 trigger and searches for the presence of at least one track in the entire MS. This trigger is referred to as the EF minimum bias trigger; it has an adjustable threshold on the reconstructed muon $p_T$ above which events are accepted and can be prescaled to accept a pre-determined fraction of events meeting
the trigger condition.
The L1 muon trigger is based on RPCs for the barrel and TGCs for the end-caps \cite{ATLAS}. It seeks hit coincidences within different RPC or TGC detector layers inside programmed geometrical windows which define the muon candidate $p_T$, then selects candidates above six programmable thresholds and provides a rough estimate of their positions \cite{confmuontrigger}. For the earlier data used in this analysis, the muon trigger corresponds to the lowest \pt\ threshold trigger which requires a simple two-layer time coincidence within a region of $~0.1\times 0$.1 in $\eta$-$\phi$.
No further geometrical constraint is applied.
As the instantaneous luminosity of the collider increases, the trigger requirement switches from the EF minimum bias trigger to the L1 muon trigger. Later data periods make use of triggers seeded by this L1 trigger but with additional $p_T$ cuts applied at the EF stage (these are referred to henceforth as the EF muon triggers).
\subsection{Muon identification and reconstruction}
\label{section:MuonRecon}
Muon identification and reconstruction extends to $|\eta|<2.7$, covering a \pt\ range from 1 GeV up to more than 1 TeV.
``Standalone MS tracks'' are constructed entirely based on the signal hits collected in the MS. The track parameters are obtained from the MS track and are extrapolated to the interaction point,
taking into account multiple scattering and energy loss in the traversed material. In this analysis, two categories of reconstructed muons are then defined:
\begin{itemize}
\item {\bf Muons from combined reconstruction:}
the {\em combined} muon reconstruction relies on a statistical combination of both a standalone MS track and an ID track.
Due to ID
coverage, the combined reconstruction covers $|\eta|<2.5$.
\item {\bf Muons from ID track tagging:}
a {\em tagged} muon is formed by MS track segments which are not formed into a complete MS track, but which are matched to ID tracks extrapolated to the MS.
Such a reconstructed muon adopts the measured parameters of the associated ID track.
In this paper, the muon tagging is limited to $|\eta|<2$, in order to ensure high quality tracking
and a reduction of fake muon candidates.
\end{itemize}
\noindent
The muon track helix parameters are taken from the ID measurement alone, since the MS does not add much to the precision in the lower momentum range relevant for the $J/\psi$ measurements presented here.
\section{Data and Monte Carlo Samples}
\label{section:samples}
Proton-proton collision data, at a centre-of-mass energy of 7 TeV,
are included in this analysis if taken during stable beam periods and when the MS, ID and magnet systems were collecting data of a sufficiently high quality to be suitable for physics analysis.
Monte Carlo samples are used for determining acceptance corrections,
as part of the trigger efficiency studies and in systematic cross-checks.
They are generated using \textsc{Pythia 6}\, \cite{Pythia6} and tuned
using the ATLAS MC09 tune\,\cite{MC09} which uses the MRST LO$^\star$ parton distribution functions \cite{MRSTLO}.
The passage of the generated particles through the detector is simulated with \textsc{Geant4}\ \cite{Geant} and the data are fully reconstructed
with the same software that is used to process the data from the detector.
For the signal $J/\psi$ Monte Carlo (used to derive the kinematic
acceptance corrections), the \textsc{Pythia} implementation of prompt
$J/\psi$ production sub-processes in the
NRQCD Colour Octet Mechanism framework \cite{JpsiTune} is used.
Prompt $J/\psi$ production includes {\em direct} production from the
hard interaction, as well as charmonium feed-down from excited states.
These {\em prompt} production modes are distinct from
{\em non-prompt} production that is characterised by the
production of $J/\psi$ via the decay of a $B$-hadron.
All samples are generated with polar and azimuthal isotropy
in the decay of the $J/\psi$ (the default in \textsc{Pythia})
and are reweighted at the particle level according to their respective angular dependencies in order to describe a number of
different spin-alignment scenarios
(see Section~\ref{sec:acceptance}).
The $J/\psi$ spin-alignment is not measured in this analysis, so the reweighted MC samples are used
to provide an uncertainty band on the measurement of the production cross-section, determined by the
maximum variation in acceptance across the full allowed range of $J/\psi$ spin alignment.
\subsection{Event and candidate selection}
\label{section:eventsel}
The analyses presented in this paper make use of the triggers described in Section~\ref{section:AtlasTrig}.
For the inclusive cross-section, in a given data taking period an event is retained or discarded based on the decision of a single specific trigger, without reference to any other triggers. For data from the initial running with lower instantaneous luminosity, the
L1 muon trigger is used. During later periods, with higher instantaneous luminosity, a more selective EF muon
trigger with a 4~GeV \pt\ threshold is required, and eventually, this is increased to a 6~GeV \pt\
threshold. The sample collected by these triggers and passing the data quality selections corresponds to an integrated luminosity of $2.2~\textrm{pb}^{-1}$.
For the measurement of the $B\to J/\psi$ non-prompt fraction (see Equation~\ref{eqn:fraction}), two additional triggers
are employed, and rather than using a single trigger to veto or accept events, several triggers are used simultaneously such that any one of them having fired results in the event being included. From the initial period, events triggering either the L1 muon trigger or the EF minimum bias trigger are used (whereas only the L1 muon trigger is used for the cross section). For intermediate instantaneous luminosities the L1 muon trigger is used alone since the EF minimum bias trigger is highly prescaled at this stage. For the highest instantaneous luminosities, events are accepted which pass any of the EF muon triggers with \pt\ thresholds of 4, 6 or 10~GeV. During the runs with the highest instantaneous luminosities, the triggers with $4$ and $6$ GeV are prescaled; however, the $10~\textrm{GeV}$ threshold trigger is not. The inclusion of this unprescaled trigger along with the addition of the EF minimum bias trigger
for the $B\to J/\psi$ non-prompt fraction measurement results in a slightly higher integrated luminosity of $2.3~\textrm{pb}^{-1}$.
To veto cosmic rays, events passing the trigger selection are required to have at least three tracks associated with the same reconstructed primary vertex. The three tracks must each have at least one hit in the pixel system and at least six hits in the SCT.
Each remaining event is required to contain at least one pair of reconstructed muons. Only muons associated with ID tracks that have at least one hit in the pixels and six in the SCT are accepted.
Di-muon pairs with opposite charges are considered to be $J/\psi$ candidates if at least one combined muon is present in the pair.
At least one reconstructed muon candidate is required to match a muon trigger (that is, at least one muon from the $J/\psi$ candidate should have fired the trigger). For the early data, when the trigger is essentially based on the L1 muon trigger, at least one of the offline muons is required to match the trigger muon candidate to within
$\Delta R=\sqrt{\Delta\eta^2+\Delta\phi^2}<0.4$ at the MS plane; for the later data taking, where the EF muon trigger is used,
the offline and trigger muons are required to match within $\Delta R<0.005$.
The two ID tracks from each pair of muons passing these selections are fitted to a common vertex~\cite{VKalVrt}. No constraints are applied in the fit and a very loose vertex quality requirement, which retains over $99\%$ of the candidates, is used.
For the $B\to J/\psi$ non-prompt fraction analysis, where
lifetime information
is an important element of the fit,
additional requirements are made on the $J/\psi\to\mu^+\mu^-$ candidates. The probability of the
fit to the $J/\psi$ vertex is required to be greater than $0.005$.
For this measurement $J/\psi$ candidates are rejected if the two muon candidate tracks were used to build different primary vertices in the offline reconstruction (so that there is an ambiguity as to which primary vertex to use in the lifetime calculation). This rejects fewer than $0.2\%$ of the $J/\psi$ candidates. This selection is not applied for the cross-section analysis.
\section{Inclusive $J/\psi\to\mu^+\mu^-$ Differential Production Cross-Section}
\label{section:mass}
The measurement of the inclusive differential cross-section is determined as
\begin{equation}
\frac{d^2\sigma(J/\psi)}{dp_Tdy} Br(J/\psi \to \mu^+\mu^-) = \frac{N^{J/\psi}_{corr}}{{\cal{L}}\cdot\Delta p_T \Delta y}
\end{equation}
where $N^{J/\psi}_{corr}$ is the $J/\psi$ yield in a given $p_T-y$ bin after continuum background subtraction and correction for detector efficiency, bin migration and acceptance effects,
${\cal L}$ is the integrated luminosity of the data sample and $\Delta p_T$ and $\Delta y$ are the $p_T$ and rapidity bin widths.
The probability $P$ that a $J/\psi\to\mu\mu$ decay is reconstructed depends on the kinematics of the decay, as well as the muon reconstruction and trigger efficiencies.
In order to recover the true number $N^{J/\psi}_{corr}$ of such decays produced in the collisions,
a weight $w$ is applied to each observed $J/\psi$ candidate, defined as the inverse of
that probability and calculated as follows:
\begin{eqnarray}
\label{weights}
P &=& w^{-1} = {\cal A} \cdot {\cal M} \cdot {\cal E}^2_{\mathrm{trk}}
\cdot {\cal E}^{+}_{\mu}(p_T^{+},\eta^{+})
\cdot {\cal E}^{-}_{\mu}(p_T^{-},\eta^{-})
\cdot {\cal E}_{\mathrm{trig}}
\end{eqnarray}
where ${\cal{A}}$ is the kinematic acceptance, ${\cal M}$ is a correction factor for bin migrations due to finite detector resolution, ${\cal {E}}_{\mathrm{trk}}$
is the ID tracking efficiency and ${\cal{E}}_{\mu}$ is the
single-muon offline reconstruction efficiency. Here $p_T^{\pm}$ and $\eta^{\pm}$ are the transverse momenta and pseudorapidities
of the positive and negative muons from the $J/\psi$ decay. The trigger
efficiency ${\cal{E}}_{\mathrm{trig}}$ for a given $J/\psi$ candidate is
calculated from single-muon trigger efficiencies
${\cal{E}}^{\pm}_{\mathrm{trig}}(p_T^{\pm},\eta^{\pm})$ as follows:
\begin{equation}
{\cal{E}}_{\mathrm{trig}} = 1 -
\left(1-{\cal{E}}^+_{\mathrm{trig}}(p_T^+,\eta^+)\right)\cdot
\left(1-{\cal{E}}^-_{\mathrm{trig}}(p_T^-,\eta^-)\right).
\end{equation}
The resultant weighted invariant mass peak is then fitted (see Section~\ref{sec:MLfit}) to extract $N^{J/\psi}_{corr}$.
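As an illustration of the weighting logic, the two equations above can be transcribed directly. The sketch below assumes the per-muon efficiencies have already been looked up from the relevant maps; all names are illustrative:
\begin{verbatim}
def trigger_efficiency(eff_trig_plus, eff_trig_minus):
    # Probability that at least one of the two muons fires the trigger.
    return 1.0 - (1.0 - eff_trig_plus) * (1.0 - eff_trig_minus)

def candidate_weight(acc, migr, eff_trk, eff_mu_plus, eff_mu_minus, eff_trig):
    # w = P^-1 with P = A * M * E_trk^2 * E_mu^+ * E_mu^- * E_trig
    prob = acc * migr * eff_trk**2 * eff_mu_plus * eff_mu_minus * eff_trig
    return 1.0 / prob
\end{verbatim}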
\subsection{Acceptance}
\label{sec:acceptance}
The kinematic acceptance ${\cal {A}}(p_T,y)$ is the probability that the muons
from a $J/\psi$ with transverse momentum $p_T$ and rapidity $y$
fall into the fiducial volume of the detector. This is calculated using
generator-level Monte Carlo, applying cuts on the momenta and
pseudorapidities of the muons to emulate the detector geometry. Global
cuts of
$|\vec{p}_+|, |\vec{p}_-| > 3$\;GeV for $|\eta_+|,|\eta_-|<2.5$ are
supplemented by finer $p_T$ thresholds in slices of $\eta$, so that regions of the
detector in which the offline and trigger efficiencies are low enough to be
compatible with zero within their uncertainties (approximately 10\%)
are excluded from the analysis.
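In its simplest form (the global cuts only, with the finer $\eta$-dependent thresholds omitted), the generator-level acceptance calculation amounts to counting decays whose muons pass the fiducial cuts. The sketch below is illustrative, with an assumed input format:
\begin{verbatim}
def toy_acceptance(decays):
    """Fraction of generator-level J/psi -> mu mu decays with both muons
    inside the global fiducial cuts |p| > 3 GeV and |eta| < 2.5.
    `decays` iterates over (p_plus, eta_plus, p_minus, eta_minus),
    where p_* are momentum magnitudes in GeV (assumed convention)."""
    passed = total = 0
    for p_p, eta_p, p_m, eta_m in decays:
        total += 1
        if min(p_p, p_m) > 3.0 and max(abs(eta_p), abs(eta_m)) < 2.5:
            passed += 1
    return passed / total if total else 0.0
\end{verbatim}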
The acceptance also depends on the spin-alignment of the $J/\psi$, which
is not known for LHC conditions. The general angular distribution
for the decay $J/\psi\to\mu\mu$ in the $J/\psi$ decay frame is given by:
\begin{equation}
\label{eqn:spinalign}
\frac{d^2N}{d\cos\theta^{\star} d\phi^{\star}}\propto 1+
\lambda_{\theta}\cos^2\theta^\star+
\lambda_{\phi}\sin^2\theta^\star\cos2\phi^\star+
\lambda_{\theta\phi}\sin2\theta^\star\cos\phi^\star
\end{equation}
where $\theta^\star$ is the angle between the
direction of the positive muon
momentum in the $J/\psi$ decay frame and the $J/\psi$ line of flight, while
$\phi^\star$ is defined as the angle between the $J/\psi$ production and decay planes in the lab frame (see Figure \ref{fig:coordinates},
reference~\cite{faccioli} and references therein).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{coordinates_star.eps}
\end{center}
\caption{Definitions of the $J/\psi$ spin-alignment angles, in the $J/\psi$
decay frame.
$\theta^\star$ is the angle between the direction of
the positive muon in that frame and the direction of
$J/\psi$ in the laboratory frame, which is directed along the
$z^\star$-axis. $\phi^\star$ is the angle between the $J/\psi$ production
($x^\star-z^\star$) plane and its decay plane formed by the direction of the
$J/\psi$ and the lepton $\ell^+$ (from \cite{faccioli}).
}
\label{fig:coordinates}
\end{figure}
A large number of possible combinations of the coefficients $\lambda_{\theta}, \lambda_{\phi}, \lambda_{\theta\phi}$
have been studied, including some with $\lambda_{\theta\phi}\neq 0$. Five extreme cases have been identified that lead to the biggest variation of acceptance
within the kinematics of the ATLAS detector and define an envelope in which the results may vary under all possible polarisation assumptions:
\begin{enumerate}
\item
Isotropic distribution, independent of $\theta^\star$ and $\phi^\star$, with
$\lambda_{\theta}=\lambda_{\phi}=\lambda_{\theta\phi}=0$, labelled as ``FLAT''.
This is used as the main (central) hypothesis.
\item
Full longitudinal alignment with
$\lambda_{\theta}=-1, \lambda_{\phi}=\lambda_{\theta\phi}=0$, labelled as ``LONG''.
\item
Transverse alignment with
$\lambda_{\theta}=+1, \lambda_{\phi}=\lambda_{\theta\phi}=0$, labelled as $\textrm{T}_{+0}$.
\item
Transverse alignment with
$\lambda_{\theta}=+1, \lambda_{\phi}=+1, \lambda_{\theta\phi}=0$, labelled as $\textrm{T}_{++}$.
\item
Transverse alignment with
$\lambda_{\theta}=+1, \lambda_{\phi}=-1, \lambda_{\theta\phi}=0$, labelled as $\textrm{T}_{+-}$.
\end{enumerate}
Two-dimensional acceptance maps are produced in bins of $p_T$ and $y$ of the $J/\psi$,
for each of these five scenarios, and are illustrated in Figure~\ref{fig:acc_2d}. The maps are obtained by reweighting the flat distribution
at the generator level using Equation~\ref{eqn:spinalign}. The central
value for the cross-section measurement is obtained using the flat
distribution, and the measurement is repeated using
the other scenarios to provide an envelope of maximum variation, which
is stated as a separate uncertainty.
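The reweighting itself is straightforward: each generated decay receives the angular weight of Equation~\ref{eqn:spinalign}, up to normalisation. A sketch with illustrative names:
\begin{verbatim}
import math

def spin_alignment_weight(cos_theta, phi, lam_t, lam_p, lam_tp):
    # Angular weight, up to normalisation; the inputs are the decay
    # angles theta*, phi* of the spin-alignment equation above.
    sin2_theta = 1.0 - cos_theta**2                       # sin^2(theta*)
    sin_2theta = 2.0 * cos_theta * math.sqrt(sin2_theta)  # sin(2 theta*)
    return (1.0 + lam_t * cos_theta**2
                + lam_p * sin2_theta * math.cos(2.0 * phi)
                + lam_tp * sin_2theta * math.cos(phi))

# The five scenarios considered in the text:
SCENARIOS = {"FLAT": (0, 0, 0), "LONG": (-1, 0, 0),
             "T+0": (+1, 0, 0), "T++": (+1, +1, 0), "T+-": (+1, -1, 0)}
\end{verbatim}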
\begin{figure}[htbp]
\begin{center}
\subfigure[$\lambda_\theta=\lambda_\phi=\lambda_{\theta\phi}=0$]{
\label{fig:acc_2d_FLAT}
\includegraphics[width=0.3\textwidth]{acceptance2D_FLAT_L1MU0.eps}
}
\subfigure[$\lambda_\theta=+1, \lambda_\phi=\lambda_{\theta\phi}=0$]{
\label{fig:acc_2d_TRP0}
\includegraphics[width=0.3\textwidth]{acceptance2D_TRP0_L1MU0.eps}
}
\subfigure[$\lambda_\theta=-1, \lambda_\phi=\lambda_{\theta\phi}=0$]{
\label{fig:acc_2d_LONG}
\includegraphics[width=0.3\textwidth]{acceptance2D_LONG_L1MU0.eps}
}
\subfigure[$\lambda_\theta=+1, \lambda_\phi=+1, \lambda_{\theta\phi}=0$]{
\label{fig:acc_2d_TRPP}
\includegraphics[width=0.3\textwidth]{acceptance2D_TRPP_L1MU0.eps}
}
\subfigure[$\lambda_\theta=+1, \lambda_\phi=-1, \lambda_{\theta\phi}=0$]{
\label{fig:acc_2d_TRPM}
\includegraphics[width=0.3\textwidth]{acceptance2D_TRPM_L1MU0.eps}
}
\caption{Kinematic acceptance maps as a function of $J/\psi$ transverse momentum and rapidity
for specific spin-alignment scenarios considered,
which are representative of the extrema of the variation of the
measured
cross-section due to spin-alignment configurations.
Differences in acceptance behaviour, particularly at low $p_T$, occur between scenarios and
can significantly influence the cross-section measurement in a
given bin.
\label{fig:acc_2d}
}
\end{center}
\end{figure}
\subsection{Bin migration corrections}
\label{sec:binmigration}
\noindent
The measured efficiency- and acceptance-corrected $J/\psi$ $p_T$ distribution is parameterised in each rapidity slice by a smooth analytic function smeared
with a Gaussian distribution, with resolution derived from the data.
This function is integrated numerically over each analysis bin, both with and without smearing applied, and the ratio of the two integrals is assigned
as the correction factor. The effects of this correction are minimal at low $p_T$ and at low rapidities (around 0.1\%) but increase at higher $p_T$ and at higher
rapidities (reflecting the decreasing momentum resolution) to a maximum of approximately 3\%.
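Under the stated assumptions (an analytic spectrum and a Gaussian resolution taken from data), the correction factor is the ratio of two numerical integrals. A sketch, with illustrative grid choices:
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def migration_correction(spectrum, lo, hi, sigma):
    """Ratio of the unsmeared to the smeared integral of an analytic pT
    spectrum over one bin [lo, hi] (GeV).  `spectrum` is a vectorised
    callable (the fitted parameterisation); `sigma` is the pT resolution."""
    q = np.linspace(0.1, 100.0, 4000)    # true-pT integration grid
    p = np.linspace(lo, hi, 400)         # measured-pT points in the bin
    kernel = norm.pdf(p[:, None], loc=q[None, :], scale=sigma)
    smeared = trapezoid(spectrum(q)[None, :] * kernel, q, axis=1)
    return trapezoid(spectrum(p), p) / trapezoid(smeared, p)
\end{verbatim}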
\subsection{Muon trigger and reconstruction efficiency}
\label{sec:trigeff}
\noindent
The offline single muon reconstruction efficiencies are obtained from data using a tag and probe method~\cite{tagandprobe},
where muons are paired with ID tracks (``probes") of opposite charge. The pairs are divided into two categories: those in
which the probe is reconstructed as a muon (``matched'') and those in which it is not (``unmatched''). Both sets of pairs
are binned according to the $p_T$ and $\eta$ of the probe. In each of these bins, the muon reconstruction efficiency is obtained
as the ratio of the number of $J/\psi$ candidates in the peak of the matched distribution to the total number of candidates in the
two mass distributions. The efficiency is extracted as a parameter of a simultaneous fit to both distributions. The dependence of
the offline reconstruction efficiency on the muon charge is well described by MC within the acceptance. This procedure is
repeated separately for combined and tagged muons.
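In the analysis the efficiency is a parameter of the simultaneous fit; a simplified counting analogue, assuming the matched and unmatched $J/\psi$ yields have already been extracted in a given $(p_T,\eta)$ probe bin, is:
\begin{verbatim}
def tnp_efficiency(n_matched, n_unmatched):
    # Simplified counting version of the tag-and-probe extraction,
    # with a binomial uncertainty estimate.
    total = n_matched + n_unmatched
    eff = n_matched / total
    err = (eff * (1.0 - eff) / total) ** 0.5
    return eff, err
\end{verbatim}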
At higher $p_T$ (for muons with $p_T$ above 6~GeV), the efficiency determination is supported by additional tag and probe
$Z\to\mu^+\mu^-$ data\,\cite{Zmumu_tagandprobe} for improved precision in the efficiency plateau region.
A hybrid Monte Carlo and data-derived (tag and probe) scheme is used
to provide trigger efficiencies for the analysis with finer binning
than would be possible with the available data statistics. This is
necessary to avoid significant biases that would otherwise appear in
the analysis with coarsely binned efficiencies across rapidly changing
efficiency regions. Due to significant charge dependence at low $p_T$ and high pseudorapidity,
separate trigger efficiency maps are produced for positive and negative muons. Fully simulated
samples of prompt $pp\rightarrow J/\psi\left(\mu^+\mu^-\right)X$ decays
are used to populate the $J/\psi$ $p_T-y$ plane, using a fine binning. For
each bin, the probability of a muon activating the trigger is
determined.
The derived efficiencies are then reweighted to match the data efficiencies in
the reconstructed bins in cases where discrepancies exist between the data and Monte Carlo,
and uncertainties from data are assigned.
Muon reconstruction efficiencies have been determined relative to reconstructed ID tracks.
Inner Detector tracks associated to muons and having the selection cuts used in this analysis have a reconstruction efficiency ${\cal {E}}_{\mathrm{trk}}$
of $99.5\%\pm 0.5\%$ per track (with no significant pseudorapidity or $p_T$ dependence observed within the phase space probed with this analysis),
which is applied as an additional correction to the $J/\psi$ candidate yields.
\subsection{Fit of $J/\psi$ candidate mass distributions}
\label{sec:MLfit}
The distribution of reconstructed $J/\psi$ candidates over the candidate
$p_T - y$ plane is shown in Figure~\ref{fig:yieldmap}.
The majority of $J/\psi$ candidates are reconstructed in intermediate-$p_T$, high-$y$ areas,
as at lower $p_T$ values the acceptance of the detector is limited.
\begin{figure}[htb]
\begin{center}
\hfill\includegraphics[width=0.55\textwidth]{jpsi_matched_pt_rap_binned_rap.eps}\\
\hfill\includegraphics[height=0.33\textwidth, width=0.42\textwidth, angle=90]{jpsi_matched_pt_rap_binned_pt.eps}
\includegraphics[width=0.55\textwidth]{jpsi_matched_pt_rap_binned.eps}\\
\end{center}
\caption{Distribution of reconstructed $J/\psi$ candidates (in the invariant mass interval $2.7<m_{J/\psi} < 3.5$ GeV) as a function of $J/\psi$ $p_T$ and rapidity.}
\label{fig:yieldmap}
\end{figure}
The inclusive $J/\psi$ production cross-section is determined in four slices
of $J/\psi$ rapidity: $|y|<0.75$, $0.75<|y|<1.5$, $1.5<|y|<2.0$ and $2.0<|y|<2.4$.
In Figure~\ref{fig:Jpsimass}, the invariant mass distributions for all
oppositely charged muon pairs passing the selection for
the differential cross-section measurement are shown,
before acceptance and efficiency corrections, for the four
rapidity slices. Table~\ref{tab:Jpsimass} presents the results of the
combined signal and background fits.
In these fits the $J/\psi$ and $\psi$(2S) peaks are represented
by Gaussians, while the background is described by a
quadratic polynomial.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{c_1_0.eps}
\includegraphics[width=0.45\textwidth]{c_1_1.eps}
\includegraphics[width=0.45\textwidth]{c_1_2.eps}
\includegraphics[width=0.45\textwidth]{c_1_3.eps}
\caption{Invariant mass distributions of reconstructed $J/\psi\to\mu^+\mu^-$
candidates used in the cross-section analysis, corresponding to an integrated luminosity of $2.2$ pb$^{-1}$.
The points are data, and the uncertainties indicated are statistical only. The solid lines are the result of
the fit described in the text.}
\label{fig:Jpsimass}
\end{center}
\end{figure}
\begin{table}[!htb]
\caption{Fitted mass, resolution and yields of $J/\psi$ candidates reconstructed in four $J/\psi$ rapidity bins. All uncertainties quoted are statistical only. The shift in mass away from the world average in the highest rapidity bin reflects the few-per-mille uncertainty in the tracking $p_T$ scale at the extreme ends of the detector.}
\label{tab:Jpsimass}
\begin{center}
\begin{tabular}{r|cccc}
\hline\hline
& \multicolumn{4}{|c}{$J/\psi$ rapidity range} \\
& $|y|<0.75$ & $0.75<|y|<1.5$ & $1.5<|y|<2.0$ & $2.0<|y|<2.4$ \\
\hline
Signal yield & $6710\pm 90$ & $10710\pm 120$ & $9630\pm 130$ & $4130\pm 90$ \\
Fitted mass (GeV) & $3.096\pm 0.001$ & $3.097\pm 0.001$ & $3.097\pm 0.001$ & $3.109\pm 0.002$ \\
Fitted resolution (MeV) & $46\pm 1$ & $64\pm 1$ & $84\pm 1$ & $111\pm 2$ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
The invariant mass distribution of $J/\psi\to\mu^+\mu^-$ candidates
in each $p_T-y$ bin is fitted using a binned minimum-$\chi^2$ method. The
$J/\psi$ and $\psi$(2S) signals are described by single Gaussians, while the background is
treated as a straight line.
For the differential cross-section measurement,
the correction weight $w$ defined in Equation~\ref{weights} is applied to each
candidate, and a new binned minimum-$\chi^2$ fit is performed
in each bin. The yields of $J/\psi$ determined from these fits,
divided by the integrated luminosity, give the inclusive production
cross-section for a given bin. Representative invariant mass
distributions are shown in Figure~\ref{fig:weightedMass3mt}.
The $\chi^2$ probability distribution of the weighted fits across all bins
is found to be consistent with the statistical expectation.
\begin{figure}[!htb]
\begin{center}
\subfigure{
\label{fig:jpsiMassBins_Weight_r0_p00mt}
\includegraphics[width=0.45\textwidth]{jpsiMassBins_Weight_r0_p0.eps}
}
\subfigure{
\label{fig:jpsiMassBins_Weight_r3_p02mt}
\includegraphics[width=0.45\textwidth]{jpsiMassBins_Weight_r3_p2.eps}
}
\subfigure{
\label{fig:jpsiMassBins_Weight_r0_p13mt}
\includegraphics[width=0.45\textwidth]{jpsiMassBins_Weight_r0_p13.eps}
}
\subfigure{
\label{fig:jpsiMassBins_Weight_r3_p16mt}
\includegraphics[width=0.45\textwidth]{jpsiMassBins_Weight_r3_p16.eps}
}
\caption{Acceptance- and efficiency-corrected invariant di-muon mass distributions scaled by integrated luminosity for selected bins in $J/\psi$ rapidity and transverse momentum.
Low- and high-$p_{T}$ bins are shown here for the central and forward rapidity ranges, to represent the complete sample. Statistical uncertainties and systematic uncertainties
due to efficiency and acceptance corrections are shown, combined in quadrature.}
\label{fig:weightedMass3mt}
\end{center}
\end{figure}
The cross-sections obtained for each bin are listed in Table~\ref{tab:Ainclxsec1},
the systematic uncertainties considered are displayed in Figure~\ref{fig:totalsystPlot} and the cross-section results are presented in Figure~\ref{fig:xsec_result}. The measurement in each $p_T-y$ analysis bin is positioned at the average $p_T$ for $J/\psi$ candidates in that bin.
Various tests of the method described above are performed using simulated samples
of known composition, and the number of $J/\psi$ in each analysis bin is successfully recovered within expectations in all cases.
\subsection{Systematic uncertainties}
Studies are performed to assess all relevant sources of systematic uncertainty on
the measurement of the $J/\psi$ inclusive production cross-section. Sources of uncertainty are listed below, ordered according to the approximate size of their contribution (starting with the largest).
\begin{enumerate}
\item{\bf Spin-alignment:}
Kinematic acceptance depends on the spin-alignment state
of the $J/\psi$ and hence affects the corrected yield. Five spin-alignment
scenarios are considered, which correspond to the extreme cases for the
acceptance corrections within the kinematics accessible in ATLAS.
In each bin, the maximal deviations in either direction are assigned as
the systematic uncertainty due to the unknown spin-alignment of the $J/\psi$. These uncertainties are regarded as theoretical rather than experimental, and are quoted independently of the statistical and experimental systematic uncertainties.
\item
{\bf Muon reconstruction:}
The single muon efficiency maps are obtained from
the data using the tag and probe method, in bins of muon transverse
momentum and pseudorapidity. Each efficiency has an uncertainty (predominantly statistical in nature, but with a systematic component from the tag and probe method)
associated with it. In order to estimate the effect of the uncertainties within these bins, the relative
uncertainties (due to systematic and statistical components) on all $J/\psi$ candidates in a bin are averaged.
Inner Detector tracks originating from muons and having the selection cuts used in this analysis have a reconstruction efficiency of $99.5\%\pm 0.5\%$ per track.
The results are corrected for this efficiency, and a systematic uncertainty on the efficiency is assigned for each track, propagated linearly into the cross-section systematic.
\item
{\bf Trigger:}
The uncertainty on the trigger efficiency has components from the data-derived efficiency determination method
(again largely statistical in nature) and from the reweighting of MC maps to the data-driven (tag and probe)
efficiency values. These errors are treated similarly to the reconstruction efficiency uncertainties.
\item{\bf Luminosity:}
The uncertainty on the integrated luminosity used for this measurement
is determined to be 3.4\%\,\cite{conflumidet}, fully correlated between bins.
\item
{\bf Acceptance:}
\begin{itemize}
\item Monte Carlo statistics:
The acceptance maps are obtained from dedicated Monte Carlo production, in bins of $J/\psi$ transverse momentum and
rapidity. The acceptance in each bin has an uncertainty due to Monte Carlo statistics. The relative error on the acceptance correction
for each $J/\psi$ candidate contributing to a particular analysis bin is averaged in quadrature to evaluate the systematic effect of these
errors on the cross-section measurement in that bin.
\item Kinematic dependence:
The impact of any discrepancies in the underlying kinematic distribution modelling in the Monte Carlo used to build the maps, or differences in the $p_T$ dependence of
the prompt and non-prompt components to the overall inclusive cross-section are studied. A correction to the acceptance maps is made based on the measured non-prompt
to prompt fraction to ensure proper correction of the two populations, and an uncertainty is assigned based on the difference in yields from using the
corrected and uncorrected maps. This uncertainty is significantly below 1\% in most analysis bins, reaching a maximum of 1.5\% in some high $p_T$, low rapidity bins.
\item Bin migration:
The change to the measured cross-section due to the migration of entries
between the $p_T$ bins is determined by analytically smearing the efficiency- and acceptance-corrected $p_T$ spectrum with a
Gaussian resolution function with width based on muon $p_T$ resolutions, taken from data. The correction needed to the central value due to bin migrations
is as small as 0.1\% at low $p_T$ and low rapidity and rises to $\sim 3\%$ at high $p_T$ and high rapidity. The variation of the bin migration correction
within a given analysis bin (due to changing detector resolution and parameterisation of the $p_T$ spectrum) is taken as a systematic.
\item Final-State Radiation:
The acceptance maps correct the measured cross-section back to the $J/\psi$ kinematics, rather than the final-state muon kinematics, in order to allow proper comparison
with theoretical predictions. Emission of QED final-state radiation is known to high accuracy, so the relative uncertainty on the modelling of this correction is
determined to be less than 0.1\%.
\end{itemize}
\item
{\bf Fit:}
Invariant mass distributions for a large number of pseudo-experiments are constructed for each $p_T-y$ bin of the analysis,
with the contents of each mass bin in a pseudo-experiment fluctuated independently (Poisson-like) about the measured
data value, with the uncertainty in the bin determining the variance of the fluctuations.
Within these pseudo-experiments, the candidate yields from the central fit procedure and yields from varied fitting models
are determined, and the shift per pseudo-experiment calculated. The variation in fitting models include
signal and background fitting functions and inclusion/exclusion of the $\psi(2S)$ mass region.
The means of the resultant shifts across all pseudo-experiments for each fit model are used to evaluate
the systematic uncertainty. The largest mean variation in that bin is assigned as a systematic uncertainty
due to the fit procedure.
\item{\bf {\boldmath $J/\psi$} vertex-finding efficiency:}
The loose vertex quality requirement retains over $99.9\%$ of di-muon candidates used in the analysis,
so any corrections and systematic uncertainties associated with the vertexing are neglected.
\end{enumerate}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.94\textwidth]{systematicSummaryPlot.eps}
\end{center}
\caption{Summary of the contributions from various sources to the systematic uncertainty
on the inclusive differential cross-section, in the $J/\psi$ $p_T$ and rapidity
bins of the analysis. The total systematic and statistical uncertainties are
also overlaid. The theoretical uncertainty due to the unknown spin alignment is not included on these plots.
}
\label{fig:totalsystPlot}
\end{figure}
\noindent A summary of the various contributions to the systematic uncertainties
on the measurement in each rapidity slice as a function of $J/\psi$
$p_T$ is shown in Figure~\ref{fig:totalsystPlot}. The uncertainty due to the luminosity (3.4\%) is not shown, nor is
the spin-alignment envelope which represents a full range of variation due to the unknown spin-alignment state.
\subsection{Inclusive {$J/\psi$} cross-section results}
The results of the inclusive double-differential $J/\psi$ production
cross-section measurement are given in Table~\ref{tab:Ainclxsec1}. They are compared to CMS results\,\cite{CMS}
in Figure~\ref{fig:xsec_result} for cases where the rapidity ranges are close enough to permit
comparison. The two sets of results show good agreement within experimental uncertainties
and provide complementary measurements at low (CMS) and high (ATLAS) $p_T$, together providing a measurement over
a large kinematic range.
The systematics are dominated by the data-driven muon reconstruction
efficiency uncertainties, which are in turn dominated by their statistical uncertainties.
There is an additional overall uncertainty of $\pm 3.4\%$ (fully correlated
between bins) due to the luminosity measurement
uncertainty.
The measurement of the differential cross-section is limited
by systematic uncertainties, with statistical uncertainties
only contributing significantly near the low-$p_T$ thresholds where
yields are limited by trigger efficiency, and in the highest transverse momentum bin.
The total cross-section for inclusive $J/\psi\to\mu^+\mu^-$ production, multiplied by the branching fraction
into muons and under the FLAT production scenario for the central value, has been
measured for $J/\psi$
produced within $|y|<2.4$ and $p_T>7$~GeV
to be:
\begin{align*}
Br(J/\psi\to\mu^+\mu^-) &\sigma(pp\to J/\psi X; |y_{J/\psi}|<2.4, p^{J/\psi}_T>7~\textrm{GeV}) \\
&\qquad = 81 \pm 1 \textrm{ (stat.)} \pm 10 \textrm{ (syst.)} \; {}^{+25}_{-20} \textrm{ (spin)} \pm 3 \textrm{ (lumi.) nb}
\end{align*}
and for $J/\psi$ within $1.5<|y|<2$ and $p_T>1$~GeV to be:
\begin{align*}
Br(J/\psi\to\mu^+\mu^-) &\sigma(pp\to J/\psi X; 1.5<|y_{J/\psi}|<2, p^{J/\psi}_T>1~\textrm{GeV}) \\
&\qquad = 510 \pm 70 \textrm{ (stat.)} \; {}^{+80}_{-120} \textrm{ (syst.)} \; {}^{+920}_{-130} \textrm{ (spin)} \pm 20 \textrm{ (lumi.) nb.}
\end{align*}
\begin{sidewaysfigure}[!phtb]
\begin{center}
\includegraphics[width=0.49\textwidth]{pub_jpsiXsec_pt_1.eps}
\includegraphics[width=0.49\textwidth]{pub_jpsiXsec_pt_2.eps}
\includegraphics[width=0.49\textwidth]{pub_jpsiXsec_pt_3.eps}
\includegraphics[width=0.49\textwidth]{pub_jpsiXsec_pt_4.eps}
\caption{Inclusive $J/\psi$ production cross-section as
a function of $J/\psi$ transverse momentum in the four rapidity bins.
Overlaid is a band representing the variation of the result
under various spin-alignment scenarios (see text) representing a
theoretical uncertainty. The equivalent results from CMS
\cite{CMS} are overlaid. The luminosity uncertainty (3.4\%) is not shown.
\label{fig:xsec_result}
}
\end{center}
\end{sidewaysfigure}
\providecommand{\dt}{\ensuremath{\Delta t}\xspace}
\providecommand{\Jpsi}{\ensuremath{J/\psi}\xspace}
\providecommand{\Bbar}{\ensuremath{\bar{B}}\xspace}
\providecommand{\Bz}{\ensuremath{B^0}\xspace}
\providecommand{\Bzb}{\ensuremath{\bar{B}^0}\xspace}
\providecommand{\Bp}{\ensuremath{B^+}\xspace}
\providecommand{\Bm}{\ensuremath{B^-}\xspace}
\providecommand{\fsig}{\ensuremath{f_{\rm sig}}\xspace}
\providecommand{\mmumu}{\ensuremath{m_{\mu\mu}}\xspace}
\providecommand{\sigmmumu}{\ensuremath{\sigma_{m_{\mu\mu}}}\xspace}
\providecommand{\tildet}{\ensuremath{\tau}\xspace}
\providecommand{\sigtildet}{\ensuremath{\sigma_\tau}\xspace}
\providecommand{\fcore}{\ensuremath{f_{\rm core}}\xspace}
\providecommand{\Score}{\ensuremath{S_{\rm core}}\xspace}
\providecommand{\Stail}{\ensuremath{S_{\rm tail}}\xspace}
\providecommand{\fcorebgd}{\ensuremath{f_{\rm core, bgd}}\xspace}
\providecommand{\Scorebgd}{\ensuremath{S_{\rm core, bgd}}\xspace}
\providecommand{\Stailbgd}{\ensuremath{S_{\rm tail, bgd}}\xspace}
\providecommand{\BR}{\mbox{\ensuremath{\mathcal B}}}
\section{Measurement of the Non-Prompt {$J/\psi$} Fraction}
\label{section:ratio}
Experimentally, it is possible to distinguish promptly produced $J/\psi$
(including those from decays of heavier charmonium states) from the
$J/\psi$ produced in $B$-hadron decays (non-prompt production). The prompt decays occur very close to the primary vertex of the
parent proton-proton collision, while many of the $J/\psi$ mesons produced in $B$-hadron decays will have a measurably displaced
decay point due to the long lifetime of their $B$-hadron parent.
From the measured distances between the primary vertices and corresponding $J/\psi$ decay vertices the fraction $f_B$ of $J/\psi$ that originate
from non-prompt sources, as defined in Equation~\ref{eqn:fraction}, can be inferred. An unbinned maximum likelihood fit is used to extract this fraction from the data.
\subsection{Pseudo-proper time}
The signed projection of the $J/\psi$ flight distance, $\vec{L}$, onto its transverse momentum, $\vec{p}_T^{J/\psi}$, is constructed according to the following formula
\begin{equation}
L_{xy} \equiv {\vec{L}}\cdot {\vec{p}_T^{J/\psi}} / p_T^{J/\psi},
\end{equation}
where $\vec{L}$ is the vector from the primary vertex to the
$J/\psi$ decay vertex and $\vec{p}_T^{J/\psi}$ is the transverse
momentum vector of the $J/\psi$. Here $L_{xy}$ measures the displacement of the $J/\psi$
vertex in the transverse plane.
The probability for the decay of a $B$-hadron as a function of
proper decay time $t$ follows an exponential distribution
\begin{equation}
p(t) = \frac{1}{\tau_B} \exp(-t/\tau_B),
\end{equation}
where $\tau_B$ is the lifetime of the $B$-hadron.
For each decay the proper decay time can be calculated as
\begin{equation}
t = \frac{L}{ \beta \gamma},
\end{equation}
where $L$ is the distance between the $B$-hadron production and
decay point and $\beta\gamma$ is the Lorentz factor.
Taking the projection of the decay length and momentum on the
transverse plane for $B$-hadrons, one obtains
\begin{equation}
t = \frac{L_{xy}\ m_B}{p_T^{B}}.
\end{equation}
In this case, $L_{xy}$ is measured between the position of the
reconstructed secondary vertex and the primary vertex in the
event. The primary vertex is refitted with the two muon tracks
excluded, to avoid a bias. The uncertainty on $L_{xy}$ is calculated
from the covariance matrices of the primary and the secondary
vertices. The majority of the events contain only a single primary
vertex. In the few that contain multiple vertices, the $J/\psi$ is
assigned to a primary vertex based on the use of the tracks by the
ATLAS reconstruction software; if both $J/\psi$ tracks are included in
the reconstruction of the same primary vertex, this is the one which
is assigned. In a small number of cases (fewer than $0.2\%$) the two
tracks making the $J/\psi$ candidate are included in the
reconstruction of different primary vertices. These candidates are discarded.
Since the $B$-hadron is not reconstructed completely, one does not
know its transverse momentum. Instead the $J/\psi$
momentum is used to construct a variable called the ``pseudo-proper time''
\begin{equation}
\tau = \frac{L_{xy}\ m_{\textrm{PDG}}^{J/\psi}}{p_T^{J/\psi}}.
\end{equation}
\noindent
Here, the world average value of $m_{\textrm{PDG}}^{J/\psi}$ is used to reduce the correlation between the
fits that will be performed on the mass and the lifetime. Studies show that the
results are insensitive to this choice.
At large $p_T^{J/\psi}$, where most of the $B$-hadron transverse momentum is
carried by the $J/\psi$, the distribution of $\tau$ is
approximately exponential, with the $B$-hadron lifetime
as a parameter. At small $p_T^{J/\psi}$, the range of opening angles between the
$J/\psi$ and $B$-hadron momentum leads to a smearing of the underlying
exponential distribution.
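In code, the pseudo-proper time is a one-line computation per candidate. The sketch below assumes $L_{xy}$ in mm and $p_T$ in GeV, and returns $\tau$ in picoseconds; the mass value is the world average quoted to illustrative precision:
\begin{verbatim}
M_JPSI_PDG = 3.096916  # GeV (assumed value of the world-average mass)

def pseudo_proper_time(lxy_mm, pt_jpsi_gev):
    # tau = Lxy * m(J/psi)_PDG / (pT(J/psi) * c), with c in mm/ps,
    # so the result is in picoseconds.
    c_mm_per_ps = 0.299792458
    return lxy_mm * M_JPSI_PDG / (pt_jpsi_gev * c_mm_per_ps)
\end{verbatim}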
\subsection{Fitting procedure}
The sample is divided into bins of $p_{T}$ and rapidity $y$ of the $J/\psi$ candidates.
In each bin,
a maximum likelihood fit is performed in order to
determine the fraction of the non-prompt to inclusive $J/\psi$ production cross-sections in that particular bin.
The mass and pseudo-proper time are simultaneously fitted in the entire mass region from $2.5$ to $3.5$ GeV, using the likelihood function:
\begin{equation}
L =
\prod_{i=1}^N \left[ f_{\textrm{sig}}\,{P}_{\textrm{sig}}(\tau,\delta_{\tau})\,{F}_{\textrm{sig}}(m_{\mu\mu},\delta_m)
+ (1-f_{\textrm{sig}})\,{P}_{\textrm{bkg}}(\tau,\delta_{\tau})\,{F}_{\textrm{bkg}}(m_{\mu\mu}) \right]
\end{equation}
where $N$ is the total number of events in the $2.5-3.5$ GeV mass region and $f_{\textrm{sig}}$ is the fraction of signal $J/\psi$ candidates in this region determined from the fit. ${P}_{\textrm{sig}}$ and ${P}_{\textrm{bkg}}$ are
pseudo-proper time probability density functions (PDFs) for the $J/\psi$ signal and background candidates respectively, and are described fully below. The $F_{\textrm{sig}}$ and $F_{\textrm{bkg}}$ functions are the mass distribution models for signal and background.
In summary, the input variables to the maximum likelihood fit to determine the production ratio
are the pseudo-proper time $\tau$, its uncertainty $\delta_{\tau}$, the di-muon mass $m_{\mu\mu}$ and its
uncertainty $\delta_m$.
\subsubsection{Invariant mass and pseudo-proper time probability density functions}
\label{sec:ratio_pdf}
For the signal, the mass is modelled with a Gaussian distribution:
\begin{equation}
F_{\textrm{sig}} (m_{\mu\mu}, \delta_m) \equiv \frac{1}{ \sqrt{2\pi}~S \delta_m } e^{\frac{-(m_{\mu\mu}-m_{J/\psi})^{2}}{2( S\delta_m)^{2}}}
\label{pdfsig}
\end{equation}
whose mean value $m_{J/\psi}$ is the $J/\psi$ mass, determined in the fit, and whose width is the product
$S\delta_m$, where $ \delta_m$ is the measured mass error calculated for each muon pair
from the covariance matrix of the vertex reconstruction and $S$ is a global scale factor to account for a difference
between $\delta_m$ and the mass resolution from the fit. For the background, the mass distribution is assumed to follow a second-order
polynomial function.
The pseudo-proper time PDF for $J/\psi$ signal candidates, ${P}_{\textrm{sig}}$, consists of two terms.
One term describes the $J/\psi$ from $B$-hadron decays (${P}_B$), and the other describes the
$J/\psi$ from prompt decays (${P}_P$):
\begin{equation}
{P}_{\textrm{sig}}(\tildet,\delta_{\tau}) = f_B{P}_B(\tildet,\delta_{\tau}) + (1-f_B){P}_P(\tildet,\delta_{\tau}),
\end{equation}
where $f_B$ is the fraction of $J/\psi$ from $B$-hadron decays as defined in Equation~\ref{eqn:fraction}.
The pseudo-proper time distribution of the $J/\psi$ particles from $B$-hadron
decays ${P}_B(\tildet,\delta_{\tau})$ is an exponential function $E(\tau) = \exp(-\tildet/\tau_{\textrm{eff}})$ with a pseudo-proper time slope $\tau_{\textrm{eff}}$,
convolved with the pseudo-proper time resolution function $R(\tildet'-\tildet,\delta_{\tau})$:
\begin{equation}
{P}_B(\tildet,\delta_{\tau}) = R(\tildet'-\tildet,\delta_{\tau}) \otimes E(\tildet').
\end{equation}
Promptly produced $J/\psi$ particles decay at the primary vertex, and their pseudo-proper time distribution is thus given by
the resolution function:
\begin{equation}
{P}_P(\tildet,\delta_{\tau}) = R(\tildet'-\tildet,\delta_{\tau})
\otimes \delta(\tildet') = R(\tildet,\delta_{\tau}).
\end{equation}
The resolution function $R$ is a Gaussian distribution centred at $\tildet=0$
with a width $S_t\delta_{\tau}$, where $S_t$ is a scale factor (a parameter of the fit) and $\delta_{\tau}$ is the per-candidate uncertainty on \tildet,
the measured pseudo-proper time, determined from the covariance matrix of the tracks.
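The signal PDF can be sketched compactly, since the exponential--Gaussian convolution has a standard closed form (the exponentially modified Gaussian). Parameter names mirror the text; the implementation itself is an illustrative assumption:
\begin{verbatim}
from scipy.stats import norm, exponnorm

def p_sig(tau, delta_tau, f_b, tau_eff, s_t):
    # f_B * (exponential convolved with Gaussian) + (1 - f_B) * Gaussian,
    # with per-candidate resolution sigma = S_t * delta_tau.
    sigma = s_t * delta_tau
    p_b = exponnorm.pdf(tau, tau_eff / sigma, loc=0.0, scale=sigma)
    p_p = norm.pdf(tau, loc=0.0, scale=sigma)
    return f_b * p_b + (1.0 - f_b) * p_p
\end{verbatim}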
The pseudo-proper time PDF for background candidates ${P}_{\textrm{bkg}}$
consists of the sum of a long lived component modeled with an exponential function and a prompt component modeled
by a delta function and two symmetric exponential tails. Each component is convolved with the Gaussian resolution function:
\begin{equation}
{P}_{\textrm{bkg}}(\tau,\delta_{\tau}) = \left(
(1- b_1- b_2) \delta(\tildet') +
b_1 \exp\left(\frac{-\tildet'}{\tau_{\textrm{eff1}}}\right)
+ b_2 \exp\left(\frac{-|\tildet'|}{\tau_{\textrm{eff2}}}\right)
\right) \otimes R_{\textrm{bkg}}(\tildet'-\tildet,\delta_{\tau}),
\end{equation}
\noindent where $R_{\textrm{bkg}}(\tildet)$
is a Gaussian
distribution centred at $\tildet=0$ with a width
$S_{\textrm{bkg}} \delta_{\tau}$, where $S_{\textrm{bkg}}$ is a scale factor (a parameter of the fit) and $\delta_{\tau}$ is the per-candidate uncertainty on the measured $\tau$.
Parameters $\tau_{\textrm{eff1}}$ and $\tau_{\textrm{eff2}}$ are pseudo-proper time slopes of the two components of background, and $b_1$ and $b_2$ are the corresponding fractions of the background. All four parameters ($\tau_{\textrm{eff1}}$, $\tau_{\textrm{eff2}}$, $b_1$ and $b_2$) are determined from the fit.
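A sketch of the background PDF along the same lines (the symmetric double-sided exponential is the average of the two one-sided convolutions; all names are illustrative):
\begin{verbatim}
from scipy.stats import norm, exponnorm

def p_bkg(tau, delta_tau, b1, b2, tau1, tau2, s_bkg):
    # Delta function, one-sided exponential and symmetric double-sided
    # exponential, each convolved with the Gaussian resolution.
    sigma = s_bkg * delta_tau
    prompt = norm.pdf(tau, loc=0.0, scale=sigma)
    one_sided = exponnorm.pdf(tau, tau1 / sigma, loc=0.0, scale=sigma)
    symmetric = 0.5 * (exponnorm.pdf(tau, tau2 / sigma, loc=0.0, scale=sigma)
                       + exponnorm.pdf(-tau, tau2 / sigma, loc=0.0, scale=sigma))
    return (1.0 - b1 - b2) * prompt + b1 * one_sided + b2 * symmetric
\end{verbatim}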
\subsubsection{Summary of free parameters}
The full list of the parameters of the fit is as follows:
\begin{itemize}
\item $f_{\textrm{sig}}$, the fraction of signal $J/\psi$ candidates in the $2.5-3.5$ GeV mass region of the fit; $m_{J/\psi}$, the mean value of the $J/\psi$ mass; the scale factor $S$, accounting for a difference between $\delta_m$ and the mass resolution from the fit;
\item $f_B$, the fraction of $J/\psi$ from $B$-hadron decays; the pseudo-proper time slope $\tau_{\textrm{eff}}$ describing the $B$-hadron decays; $S_t$, a scale factor accounting for a difference between $\delta_{\tau}$ and the $B$-hadron pseudo-proper time resolution from the fit;
\item the slope parameters $\tau_{\textrm{eff1}}$ and $\tau_{\textrm{eff2}}$ and the scale factor $S_{\textrm{bkg}}$ describing the time evolution of the $J/\psi$ background, in analogy to the parameters of the $B$-hadron decays defined above; $b_1$ and $b_2$, the fractions of the two background components.
\end{itemize}
\subsection{Results of the likelihood fits}
\label{sec:timeresults}
The results of the likelihood fit to the pseudo-proper time distributions in a representative $p_T^{J/\psi}$ bin are shown in Figure \ref{fig:pstime_fits}.
The figure shows the result of the unbinned maximum likelihood fits for the signal and background components projected onto the lifetime and invariant mass distributions.
From the results of the fit, it is possible to derive the non-prompt to inclusive production fraction as a function
of $p_T^{J/\psi}$. The $\chi^2$ probabilities and Kolmogorov-Smirnov test results for the fits across all analysis bins are found to be consistent with statistical expectations, with the lowest fit probability out of over 70 fits being 1\%.
\begin{figure}[htb]
\begin{center}
\subfigure[$|y_{J/\psi}| < 0.75$]{\includegraphics[width=0.45\textwidth]{pt6y0_t.eps}}
\subfigure[$|y_{J/\psi}| < 0.75$]{\includegraphics[width=0.45\textwidth]{pt6y0_m2mu.eps}}
\subfigure[$2.0<|y_{J/\psi}| < 2.4$]{\includegraphics[width=0.45\textwidth]{pt8y3_t.eps}}
\subfigure[$2.0<|y_{J/\psi}| < 2.4$]{\includegraphics[width=0.45\textwidth]{pt8y3_m2mu.eps}}
\end{center}
\caption{Pseudo-proper time distributions (left) of $J/\psi\to\mu^+\mu^-$
candidates in the signal region, for a selected $p_T$ bin
$9.5<p_T<10.0$~GeV in the most central and most forward rapidity regions. The points with error bars are data. The solid
line is the result of the maximum likelihood unbinned fit to all
di-muon pairs in the $2.5-3.5$ GeV mass region projected on the
narrow mass window $2.9-3.3$ GeV. The invariant mass distributions
which are simultaneously fitted with the pseudo-proper time are
shown on the right for the same bins.
}
\label{fig:pstime_fits}
\end{figure}
\subsection{Systematic uncertainties}
Several studies performed to assess all relevant sources of systematic uncertainties on the measured fraction of non-prompt to inclusive $J/\psi$ decays are outlined below,
in order of importance.
\begin{enumerate}
\item
{\bf Spin-alignment of prompt ${\boldmath J/\psi}$:}
In general, spin-alignment may be different for prompt and non-prompt $J/\psi$, which may result in different acceptances in the two cases. The central value assumes they are the same (isotropic distribution in both angles, as for the inclusive cross-section central result), but four additional scenarios for the prompt component are also considered, as discussed in Section~\ref{sec:acceptance}.
The largest variation from FLAT among the four models is calculated for each bin in turn and assigned as an uncertainty envelope on prompt production.
\item
{\bf Spin-alignment of non-prompt ${\boldmath J/\psi}$:}
The possible variation of spin-alignment in $B\to J/\psi X$ decays is expected to be much smaller than for prompt $J/\psi$ due to the averaging effect
caused by the admixture of various exclusive $B\to J/\psi X$ decays.
We assign an additional uncertainty on the non-prompt fraction (and non-prompt cross-section) for the difference in final result when using either an isotropic spin-alignment
assumption for non-prompt decays or maps reweighted to the CDF result\,\cite{CDF_bjpsi_pol} for $B\to J/\psi$ spin-alignment.
This contributes up to an additional 0.4\% uncertainty on the overall
(prompt and non-prompt) systematic due to spin-alignment on the fraction.
\item
{\bf Fit:}
A number of changes are applied to the fitting procedure,
and the fit is repeated in order to gauge the sensitivity of the fraction $f_B$ to the details of the fits:
\begin{itemize}
\item{
The central value for the fraction assumes a background model for the proper time distribution of the background that includes one exponential function with a negative slope and a symmetric double exponential term with the same absolute value, $\tau_{\textrm{eff2}}$, for the negative and positive slopes. To test the robustness of the result, this model is changed in two ways. First, the symmetric term is no longer required to be symmetric, so different values of the negative and positive slopes are allowed. Second, the sum of two asymmetric double exponentials is used, having the same negative decay constant but differing positive decay constants. The maximum deviation from the central value is taken as a systematic uncertainty.
}
\item {
The per-candidate Gaussian convolution function is changed to a per-candidate double Gaussian convolution,
allowing different scale factors (to account for differences between the resolution returned by the tracking algorithm and measured resolution) for each Gaussian to be determined from the fit.
Differences from the main fit are assigned as a systematic uncertainty.
}
\item {
The main result uses a second-order polynomial in the mass fit to describe the background. To test the sensitivity to this choice, the fits are repeated using instead polynomials
of degree one and three. Differences from the main fit are assigned as a systematic.
}
\item {
The central result takes $J/\psi$ candidates in a mass range from $2.5$ to $3.5$~GeV, to avoid the mass region of the $\psi$(2S). In order to test the stability of the result and to increase the statistics in the side bands, the analysis is repeated with a mass range from 2 to 4~GeV, but excluding the region from $3.5$ to $3.8$~GeV. The result is stable compared to the statistical uncertainties, and so no systematic uncertainty is assigned for this source.
}
\item {
The analysis relies on a simultaneous fit to the proper time and mass distributions. The likelihood used assumes no correlation between the two quantities. To test the reliability of this assumption, the mean measured invariant mass is plotted as a function of the proper time. The resulting distribution is flat, except in the negative lifetime region and at very long proper lifetimes, where residual background dominates the sample and invalidates the test. Accordingly, no explicit systematic for this correlation is assigned.
}
\end{itemize}
\item
{\bf Kinematic dependence:}
Differences in the acceptance of prompt and non-prompt $J/\psi$ due to their different momentum spectra, averaged across an analysis bin, can bias the fraction measurement.
A correction factor is calculated from the acceptance maps with and without momentum reweighting to account for the differences between prompt and non-prompt $J/\psi$,
and this correction is assigned as a systematic uncertainty.
\item
{\bf Reconstruction efficiencies:}
The central result for the fraction assumes that
the reconstruction efficiencies are the same for non-prompt and prompt $J/\psi$ mesons and hence cancel in extracting the fraction.
This assumption is tested on Monte Carlo samples described in Section \ref{section:samples},
and no statistically significant shift is observed. Thus, no systematic uncertainty is assigned.
\item
{\bf Pile-up/multiple interactions:}
Some collisions result in the reconstruction of multiple primary vertices. The primary
vertex chosen determines the transverse decay displacement $L_{xy}$ used in
the proper time determination. The central value is obtained by taking
the primary vertex that is formed using both of the $J/\psi$ candidate
muons and rejecting cases where those candidates are associated
with different primary vertices. To assess the effect of this
procedure, two alternative methods were used. The first chooses the
primary vertex with the highest summed squared transverse momenta of the tracks
that form it. The second takes the same vertex, but rejects cases
where either of the muon candidates are not used in determining that
primary vertex. As no significant variation is seen in the results
from the two methods, no additional uncertainties are assigned due to
this source.
\end{enumerate}
The stability of the method used is checked using simplified Monte
Carlo trial experiment samples to perform various tests of the closure
of the analysis. The simultaneous mass and pseudo-proper time fit model is used to generate 100 simplified Monte Carlo experiments for each $p_{T}$ and $y$ bin. The number of events generated is approximately the same as the number of data events for the corresponding bin. For each event the invariant mass and pseudo-proper time values are generated randomly from the total PDF, while the per-candidate error on invariant mass and pseudo-proper time are sampled from the corresponding experimental data distributions.
For each experiment, a fit of the total PDF on the simple Monte Carlo sample is performed. The pull, $\Delta$, defined as
\begin{eqnarray}
\Delta = \frac{f_{\textrm{generated}} - f_{\textrm{extracted}}}{\sigma(f_{\textrm{extracted}})}, \nonumber
\end{eqnarray}
is computed for each Monte Carlo experiment. Here $f_{\textrm{generated}}$ is the non-prompt fraction for the signal component according to which the Monte Carlo samples are generated (i.e. the result of the fit of the global model to the experimental data), while $f_{\textrm{extracted}}$ and $\sigma(f_{\textrm{extracted}})$ are the value and uncertainty obtained from the fit. The mean and width of a Gaussian fit to the pull distribution are statistically compatible with zero and unity, respectively, in all bins, indicating that the fit introduces no bias and estimates the uncertainty correctly.
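The closure test itself reduces to a few lines once the per-experiment fit results are collected; a sketch with assumed inputs:
\begin{verbatim}
import numpy as np

def pull_summary(fit_results, f_generated):
    """`fit_results` is a list of (f_extracted, sigma_f_extracted) pairs,
    one per simplified Monte Carlo experiment.  An unbiased fit with
    correct uncertainties gives mean ~ 0 and standard deviation ~ 1."""
    pulls = np.array([(f_generated - f) / sf for f, sf in fit_results])
    return pulls.mean(), pulls.std(ddof=1)
\end{verbatim}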
\subsection{Fraction of non-prompt { $J/\psi$} as a function of { $J/\psi$} transverse momentum and rapidity}
Figure \ref{fig:fraction_result} and
Tables~\ref{tab:AfractionMain0} to~\ref{tab:AfractionMain3}
show the results of the differential non-prompt fraction
measurement
as a function of average $p_T^{J/\psi}$, in each of the four rapidity bins.
The uncertainty envelopes due to the unknown spin-alignment are
overlaid as solid bands.
The measurements are compared with those of CMS~\cite{CMS} and CDF~\cite{CDF}
and build upon those results with finer rapidity binning, a much
extended rapidity coverage relative
to CDF and significantly increased $p_T$ reach relative to both experiments. Strong $p_T$
dependence of the fraction is observed: $\sim90\%$ of $J/\psi$ are
produced promptly at low $p_T$, but the fraction
of non-prompt $J/\psi$ rapidly increases at mid-$p_T$ from $\sim15\%$
at $7$~GeV to $\sim70\%$ at the highest accessible $p_T$ values.
No significant rapidity dependence is seen.
The ATLAS results exhibit good agreement with CMS results where they overlap, and also with the CDF measurements,
indicating that there is no strong dependence of the fraction on
collision energies.
\begin{sidewaysfigure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{fractionresult_0.eps}
\includegraphics[width=0.49\textwidth]{fractionresult_1.eps}
\includegraphics[width=0.49\textwidth]{fractionresult_2.eps}
\includegraphics[width=0.49\textwidth]{fractionresult_3.eps}
\caption{$J/\psi$ non-prompt to inclusive fractions as
a function of $J/\psi$ transverse momentum.
Overlaid is a band representing the variation of the result
under various spin-alignment scenarios (see text) representing a
theoretical uncertainty on the prompt and non-prompt $J/\psi$ components. The equivalent results from CMS
\cite{CMS} and CDF \cite{CDF} are included.
\label{fig:fraction_result}
}
\end{center}
\end{sidewaysfigure}
\section{The Prompt and Non-Prompt Differential Production Cross-Sections}
The prompt and non-prompt $J/\psi$ production cross-sections can be derived from
the inclusive production cross-section and the non-prompt fraction. Where
necessary, $p_T$ bins in the inclusive
cross-section are merged to align bins in the prompt/non-prompt cross-section result
with those in the non-prompt fraction measurement.
The relative systematic uncertainties in each of the fraction and
inclusive cross-section measurement bins (merged where appropriate)
are taken to be uncorrelated, while the statistical uncertainties are combined taking correlations
into account. The spin alignment uncertainties are quoted independently of the experimental uncertainties.
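Schematically, and ignoring the statistical correlations that the full analysis does take into account, the split proceeds as in the following sketch (all names are illustrative):
\begin{verbatim}
import math

def split_cross_section(sigma_incl, sys_incl, f_b, sys_fb):
    # Non-prompt and prompt cross-sections from the inclusive result and
    # the non-prompt fraction; relative systematic uncertainties in the
    # two inputs are combined as uncorrelated, as in the text.
    nonprompt = f_b * sigma_incl
    prompt = (1.0 - f_b) * sigma_incl
    np_err = nonprompt * math.hypot(sys_incl / sigma_incl, sys_fb / f_b)
    p_err = prompt * math.hypot(sys_incl / sigma_incl, sys_fb / (1.0 - f_b))
    return (nonprompt, np_err), (prompt, p_err)
\end{verbatim}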
\subsection{Non-prompt differential production cross-sections}
We assume the spin-alignment of a $J/\psi$ meson from a $B\to J/\psi X$ decay has
no net polar or azimuthal anisotropy for the central result, as the possible variation of spin-alignment
in $B\to J/\psi X$ decays is expected to be much smaller than for prompt $J/\psi$ due to the averaging effect
caused by the admixture of various exclusive $B\to J/\psi X$ decays.
We assign a spin-alignment uncertainty on the non-prompt cross-section for the difference in the final result when using either an isotropic spin-alignment
assumption for non-prompt decays or maps reweighted to the CDF result\,\cite{CDF_bjpsi_pol} for $B\to J/\psi$ spin-alignment.
The total integrated cross-section for non-prompt $J/\psi$, multiplied by the branching
fraction into muons and under the ``FLAT" production scenario, has
been measured
for $J/\psi$ mesons produced within $|y|<2.4$ and $p_T>7$~GeV
to be:
\begin{align*}
Br(J/\psi\to\mu^+\mu^-) &\sigma(pp\to B+X\to J/\psi X; |y_{J/\psi}|<2.4, p^{J/\psi}_T>7~\textrm{GeV}) \\
&\qquad = 23.0 \pm 0.6 \textrm{ (stat.)} \pm 2.8 \textrm{ (syst.)} \pm 0.2 \textrm{ (spin)} \pm 0.8 \textrm{ (lumi.) nb}
\end{align*}
and for $J/\psi$ mesons produced with $1.5<|y|<2$ and $p_T>1$~GeV to be:
\begin{align*}
Br(J/\psi\to\mu^+\mu^-) &\sigma(pp\to B+X\to J/\psi X; 1.5<|y_{J/\psi}|<2, p^{J/\psi}_T>1~\textrm{GeV}) \\
&\qquad = 61 \pm 24 \textrm{ (stat.)} \pm 19 \textrm{ (syst.)} \pm 1 \textrm{ (spin)} \pm 2 \textrm{ (lumi.) nb.}
\end{align*}
\subsubsection{Comparisons with theoretical predictions}
ATLAS non-prompt $J/\psi$ production cross-section measurements are
compared to Fixed Order Next-to-Leading Logarithm (FONLL) calculations~\cite{Cacciari}
in Tables~\ref{tab:Anonpromptxsec_1} to~\ref{tab:Anonpromptxsec_4} and in Figure~\ref{fig:nonprompt_xsec}.
FONLL~v1.3.2 is used for these predictions,
using the CTEQ6.6\,\cite{CTEQ} parton density function set.
FONLL predictions use a $B\to J/\psi X$ branching fraction of $Br(B\to J/\psi) = 0.0116$.
Uncertainty bands associated with the predictions come from
the input b-quark mass, varied within $4.75\pm 0.25$~GeV, renormalisation ($\mu_R$) and factorisation ($\mu_F$)
scales (independently) varied within $0.5<\mu_{R,F}/m<2$ (with the additional constraint that $0.5<\mu_R/\mu_F<2$)
and parton density function uncertainties.
Good agreement is seen between the experimental data and the
theoretical prediction across the full range of rapidity and
transverse momentum considered.
\begin{sidewaysfigure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{nonprompt_xsec_1.eps}
\includegraphics[width=0.49\textwidth]{nonprompt_xsec_2.eps}
\includegraphics[width=0.49\textwidth]{nonprompt_xsec_3.eps}
\includegraphics[width=0.49\textwidth]{nonprompt_xsec_4.eps}
\caption{Non-prompt $J/\psi$ production cross-section as
a function of $J/\psi$ transverse momentum, compared to predictions
from FONLL theory. Overlaid is a band representing the variation of the result
under spin-alignment variation on the non-prompt $J/\psi$ component as described in the text. The central value
assumes an isotropic polarisation for both prompt and non-prompt components.
The luminosity uncertainty (3.4\%) is not shown.
\label{fig:nonprompt_xsec}
}
\end{center}
\end{sidewaysfigure}
\subsection{Prompt differential production cross-sections}
The prompt production cross-section is of direct interest for the study
of quarkonium production in QCD.
The spin-alignment state and $p_T$ dependence of the spin-alignment of
promptly produced $J/\psi$ particles are thought to be non-trivial, so the
spin-alignment uncertainty envelope on the inclusive
cross-section measurement is propagated into the prompt
cross-section measurement.
The prompt production cross-sections are presented in
Tables~\ref{tab:Apromptxsec_1} to~\ref{tab:Apromptxsec_4}.
The total cross-section for prompt $J/\psi$ (times branching fraction
into muons) under the flat production scenario has been
measured
for $J/\psi$ produced within $|y|<2.4$ and $p_T>7$~GeV to be:
\begin{align*}
Br(J/\psi\to\mu^+\mu^-) &\sigma(pp\to\textrm{prompt } J/\psi X; |y|<2.4, p_T>7~\textrm{GeV}) \\
&\qquad = 59 \pm 1 \textrm{ (stat.)} \pm 8 \textrm{ (syst.)} \; {}^{+9}_{-6} \textrm{ (spin)} \pm 2 \textrm{ (lumi.) nb}
\end{align*}
and for $J/\psi$ within $1.5<|y|<2$ and $p_T>1$~GeV to be:
\begin{align*}
Br(J/\psi\to\mu^+\mu^-) &\sigma(pp\to\textrm{prompt } J/\psi X; 1.5<|y|<2, p_T>1~\textrm{GeV}) \\
&\qquad = 450 \pm 70 \textrm{ (stat.)} \; {}^{+90}_{-110} \textrm{ (syst.)} \; {}^{+740}_{-110} \textrm{ (spin)} \pm 20 \textrm{ (lumi.) nb.}
\end{align*}
\begin{sidewaysfigure}[htbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{prompt_xsec_1.eps}
\includegraphics[width=0.49\textwidth]{prompt_xsec_2.eps}
\includegraphics[width=0.49\textwidth]{prompt_xsec_3.eps}
\includegraphics[width=0.49\textwidth]{prompt_xsec_4.eps}
\caption{Prompt $J/\psi$ production cross-section as
a function of $J/\psi$ transverse momentum in the four rapidity bins.
Overlaid is a band representing the variation of the result
under various spin-alignment scenarios (see text) representing a
theoretical uncertainty on the prompt component. Predictions from NLO and NNLO$^\star$ calculations,
and the Colour Evaporation Model are overlaid. The luminosity uncertainty (3.4\%) is not shown.
\label{fig:prompt_xsec}
}
\end{center}
\end{sidewaysfigure}
\subsubsection{Comparisons with theoretical predictions}
\label{sec:models}
In Figure~\ref{fig:prompt_xsec} the prompt production data are
compared to the predictions of the Colour Evaporation Model (CEM)~\cite{CEM_RHIC,CEM} for prompt $J/\psi$ production (with no uncertainties defined) and a calculation of the direct $J/\psi$ production cross-section in the Colour Singlet Model (CSM)~\cite{Lansberg,Lansberg2} at next-to-leading order (NLO) and a partial next-to-next-to-leading order calculation (NNLO$^\star$).
The Colour Evaporation Model predictions are produced
using the CTEQ6M parton density functions, a charm quark mass of 1.2~GeV and the
renormalisation and factorisation
scales set to $\mu_0= 2\sqrt{p^2_T + m^2_{Q} + k^2_T}$ (where $p_T$ is the
transverse momentum of the $J/\psi$, $m_Q$ is the quark mass and $k^2_T$ is a phenomenological fit parameter set to $1.5$~GeV$^2$).
The CEM predictions include contributions from $\chi_c$ and $\psi$(2S)
feed-down and can be directly compared with the prompt $J/\psi$ data.
The normalisation of the CEM prediction is generally lower than the data, and its shape diverges strongly from the measured spectrum, showing
significant disagreement in the extended $p_T$ range probed by the measurement described in this paper.
The Colour Singlet NLO and NNLO$^\star$ predictions\footnote{The NNLO$^\star$ calculation is not a {\em full}
next-to-next-to-leading order prediction, as it does not include all loop corrections to $pp\to Q+jjj$ (where $j$ is a light parton) up
to order $\alpha^5_s$. This limits the applicability of the calculation to values above a
particular ${J/\psi}$ $p_T$ threshold (due to soft and collinear divergences).}
for direct $J/\psi$ production use a charm quark mass of 1.5~GeV, the CTEQ6M parton density function set, and
factorisation and renormalisation scales set to $\mu_0 = \sqrt{p^2_T + m^2_{Q}}$ (varied up and down by a factor of two to determine scale uncertainties).
As the calculation is for direct production, corrections must be
applied for $\chi_c$ and $\psi$(2S) feed-down to bring the calculations
into direct comparison with data. To correct for feed-down, a flat $10\%$ correction is applied to account for the contribution of $\psi\textrm{(2S)}\to J/\psi\pi\pi$ and a $40\%$ correction is added to account for radiative $\chi_c$ decays. This yields a total correction of 50\%. The correction
factor is not well-determined from theory or experiment so is assigned
a 100\% uncertainty.
This uncertainty is not included in the CSM theoretical uncertainty.
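Schematically, the feed-down correction therefore scales the direct-production prediction as
\begin{equation*}
\sigma^{\textrm{prompt}}_{\textrm{pred}} = (1 + 0.10 + 0.40)\,\sigma^{\textrm{direct}}_{\textrm{pred}} = 1.5\,\sigma^{\textrm{direct}}_{\textrm{pred}},
\end{equation*}
with the additive $0.5$ carrying the $100\%$ uncertainty noted above.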
The NLO and NNLO$^\star$ predictions
are overlaid with the ATLAS measurements in Figure~\ref{fig:prompt_xsec} for each
rapidity region.
The dashed lines represent the central NLO and NNLO$^\star$ predictions
while the shaded areas show the range of the prediction due to
factorisation and renormalisation scale variation (although the upper
band of this uncertainty may not encapsulate the full range of
infrared uncertainties~\cite{NNLO_upsilon}).
The Colour Singlet Model predictions at NNLO$^\star$ show significant improvement in describing the $p_T$ dependence and normalisation
of prompt $J/\psi$ production over NLO, and a vast improvement over the earlier LO predictions that were compared to Tevatron data, although it is clear that these predictions still fall short of fully describing the production mechanisms of prompt $J/\psi$, particularly
at the highest transverse momenta explored in this analysis.
The overall scale of the central prediction is somewhat low, but these discrepancies are similar in nature to those seen between NNLO$^\star$
calculations and $\psi$(2S) production as measured by CDF~\cite{Lansberg2, psi2s_cdf} at lower $p_T$ and centre-of-mass energy
and may be attributed to higher order corrections beyond NNLO$^\star$ that are still expected to be relatively significant for hidden charm
production.
\section{Summary}
\label{section:conclusion}
Results are reported on the measurement of the inclusive cross-section of $J/\psi\to\mu^+\mu^-$ production
in proton-proton collisions at a centre-of-mass energy of 7~TeV, using up to 2.3~pb$^{-1}$ of integrated luminosity recorded by the ATLAS detector.
The inclusive cross-section is measured in bins of
rapidity $y$ and transverse momentum $p_T$ of $J/\psi$, covering the range
$|y| < 2.4$ and $1<p_T<70$\;GeV. The fraction of non-prompt $J/\psi$ mesons is also measured
as a function of $J/\psi$ transverse momentum and rapidity, and using the above two measurements,
double-differential cross-sections are extracted separately for promptly produced
$J/\psi$ mesons and those coming from $B$-hadron decays.
It is found that the measurements made by ATLAS and CMS are in good agreement with each other
in the overlapping range of moderate $p_T$ values and complement each other at high (ATLAS) and low (CMS)
values of transverse momenta.
The non-prompt production fraction results are also compared to those from CDF at lower energy and reasonable agreement is found,
suggesting there is no strong dependence of the fraction on the collision energy.
The results are also compared to various theoretical calculations of
prompt as well as non-prompt $J/\psi$ production. In general, the theoretical
curves describe the non-prompt data well, but significant deviations are
observed in the prompt production spectra both in shape and normalisation, particularly at high transverse momenta. These measurements can thus provide
input towards an improved understanding and theoretical description of $J/\psi$ hadronic production.
\section{Acknowledgements}
The authors would like to thank Jean-Philippe Lansberg and Ramona Vogt for
providing theoretical predictions for prompt production and for useful
discussions. They would also like to thank Matteo Cacciari for providing
predictions for the $B\to J/\psi X$ production cross-sections in the
FONLL scheme.
We thank CERN for the very successful operation of the LHC, as well as the
support staff from our institutions without whom ATLAS could not be
operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC,
Australia; BMWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP,
Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and
NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech
Republic; DNRF, DNSRC and Lundbeck Foundation, Denmark; ARTEMIS, European
Union; IN2P3-CNRS, CEA-DSM/IRFU, France; GNAS, Georgia; BMBF, DFG, HGF, MPG
and AvH Foundation, Germany; GSRT, Greece; ISF, MINERVA, GIF, DIP and
Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco;
FOM and NWO, Netherlands; RCN, Norway; MNiSW, Poland; GRICES and FCT,
Portugal; MERYS (MECTS), Romania; MES of Russia and ROSATOM, Russian
Federation; JINR; MSTD, Serbia; MSSR, Slovakia; ARRS and MVZT, Slovenia;
DST/NRF, South Africa; MICINN, Spain; SRC and Wallenberg Foundation,
Sweden; SER, SNSF and Cantons of Bern and Geneva, Switzerland; NSC, Taiwan;
TAEK, Turkey; STFC, the Royal Society and Leverhulme Trust, United Kingdom;
DOE and NSF, United States of America.
The crucial computing support from all WLCG partners is acknowledged
gratefully, in particular from CERN and the ATLAS Tier-1 facilities at
TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France),
KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain),
ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities
worldwide.
\bibliographystyle{atlasnote}
\section*{INTRODUCTION}
NB-IoT is a cellular protocol intended for serving a massive number of devices in the internet of things (IoT). NB-IoT is a stripped-down version of LTE, which removes much complexity from the user equipment (UE) and utilizes narrow-band (NB) transmissions to achieve a better link budget. NB-IoT fulfills the 5G requirement of a connection density of 1,000,000 [devices/$\mathrm{km}^2$], which can be supported by $10\times$ $200$~[kHz] carriers, i.e. $2$~[MHz] of bandwidth.\cite{ciot} The 3rd Generation Partnership Project (3GPP) concluded a study on the support of non-terrestrial networks (NTN) for New Radio (NR) in 3GPP Release 16.\cite{TR38.811,TR38.821} An NTN version of NB-IoT is being pursued as part of 3GPP Release 17 for 2022.\cite{TR36.763}
IoT devices are typically inexpensive, low-complexity, battery-powered devices with required lifetimes of several years in order to be economically viable. These devices stand in stark contrast to the ordinary ground stations typically used in satellite communication, which may provide up to $100$ [W] of transmission power.\cite{7964683} The highest power class in NB-IoT is power class 3 (PC3), a transmission power of $23$ [dBm], equivalent to $200$ [mW].
NB-IoT is a viable candidate for providing uplink connectivity to power-constrained IoT devices in remote areas in the NTN scenario. NB-IoT in GEO is challenged by delays and propagation losses,\cite{9268829} while NB-IoT in LEO is challenged by Doppler effects. NB-IoT utilizes narrow-band transmissions to achieve significant gains in the link budget, and the propagation loss is lowest in LEO, where the distance between the UE and the satellite is smaller than in MEO and GEO. In general, MEO and GEO satellites benefit from being larger than LEO satellites, which typically have to be smaller and less expensive because of air drag and the shorter lifetimes in LEO.
This paper investigates the viability of NTN NB-IoT in LEO primarily at a link-level with some system considerations for small satellites in low density constellations.
The paper is structured as follows: First, a system model is introduced presenting parameters for the satellites and IoT devices within the NTN.
Then the link between the satellite and IoT device is analysed in terms of geometry, Doppler, propagation time and link budget before a discussion of the adaptations required in NB-IoT to support the LEO NTN case. Finally, the results presented throughout are discussed and concluding remarks are made.
\begin{Figure}
\centering
\includegraphics[width = 1\textwidth]{figures/market.png}
\captionof{figure}{Growth in IoT via satellite.}
\label{fig:market}
\end{Figure}
\section*{SYSTEM MODEL}
In this section system level considerations for the LEO NTN NB-IoT use-case are presented.
\subsection*{Scenario}
Consider the earth to be a sphere with radius $r_e = 6357$~[km], rotating with a surface velocity of $V_e = 460$~[m/s]. The mass of the earth is $M_e= 5.972\cdot10^{24}$~[kg].
A satellite orbits the earth in a Kepler orbit at a height $h_0=600$~[km] above the surface. The orbit is around Earth's rotational axis, but in the direction opposite to Earth's spin, in order to capture the worst-case Doppler. The satellite velocity is $v_{sat} \approx \sqrt{G M_e/(r_e+h_0)} = 7.57$~[km/s].
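As a minimal sketch, the quoted orbital velocity follows directly from the circular-orbit relation using only the constants above; the following Python snippet is illustrative and not part of the simulation framework used for the results below.
\begin{verbatim}
# Sketch: circular-orbit velocity from the constants above.
import math

G, M_e = 6.674e-11, 5.972e24   # [m^3 kg^-1 s^-2], [kg]
r_e, h_0 = 6357e3, 600e3       # Earth radius, height [m]

v_sat = math.sqrt(G * M_e / (r_e + h_0))
print(f"v_sat = {v_sat/1e3:.2f} km/s")   # ~7.57 km/s
\end{verbatim}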
\subsection*{NB-IoT}
NB-IoT utilizes $180$~[kHz] carriers, which may be deployed within a single LTE or NR carrier, within the guard band, or as a standalone technology. Here, NB-IoT is deployed in its standalone version and spectrum is allocated for it within the S-band ($2$~[GHz]).
NB-IoT base stations (eNBs) transmit over the entire $180$~[kHz] in the downlink (DL) utilizing OFDM. The uplink (UL) is allocated spectrum on a separate carrier, where SC-FDMA allows UEs to transmit, potentially simultaneously, in narrower bandwidths: $3.75$~[kHz], $15$~[kHz], $45$~[kHz], $90$~[kHz] and $180$~[kHz].
A narrowband primary synchronization signal (NPSS) and a narrowband secondary synchronization signal (NSSS) are transmitted in the DL to enable UEs to adjust their frequency and timing offsets. Typical UE devices will incur offsets as a side effect of cheap oscillators, and movement will further affect the offsets through varying Doppler and propagation delay.
After synchronization, UEs read the master information block (MIB) and system information blocks (SIBs) to obtain vital system parameters. Once synchronised, UEs may connect to the RAN through the random access (RA) procedure to perform transmissions.
\begin{center}
\captionof{table}{Example of static overhead on NB-IoT anchor.}
\begin{tabular}{|lr|lr|}
\hline
\textbf{DL} & Overhead & \textbf{UL} & Overhead \\ \hline
NPSS + NSSS & 15~\% & PRACH & 28~\% \\
NRS & 4~\% & DMRS & 10.29~\% \\
NPBCH & 9.52~\% & & \\
NB-SIB1 & 4.76~\% & & \\
NB-SIBx & 8~\% & & \\
PDCCH & 18.15~\% & & \\
Total & 59.42~\% & Total & 38.29~\% \\ \hline
\end{tabular}
\label{tab:overhead}
\end{center}
An example of the static overhead that can be expected on an anchor carrier in an NB-IoT cell is given in Tab. \ref{tab:overhead}. Since this overhead is around 60\% in the DL and 40\% in the UL, it stands to reason that a non-anchor carrier should be added to provide room for dynamic traffic load.
\subsection*{Satellite}
The dimensions of each unit in a CubeSat are $100\times 100\times 100$ [mm], which greatly limits the surface area available for solar panels and antennas, as well as the internal space available for batteries and RF subsystems.
The power budget of a 1U CubeSat for imaging missions allocated enough power to the communication system to achieve $1$~[W] transmission power.\cite{4284088} The orbital average power (OAP) of a 6U Cubesat has been simulated at $17$~[W].\cite{SnyderAndreaniBeerbowerClavijoJoyLeeValeroAraujo2020} Indeed, 100+ watts are obtainable using deployable solar panels.\cite{GOM}
For a LEO satellite intended to be used as a base station in a communication infrastructure, greater transmission power is advantageous, so we shall assume a satellite of sufficient size to provide a total transmission power of $16$~[W], which results in $8$~[W/carrier] across an anchor and a non-anchor carrier.
The surface-mounted antenna aperture size is limited, so
a microstrip patch antenna design for $2$ [GHz] has been chosen. The antenna has a patch size of $24\times 33$~[mm] which easily fits on a CubeSat and provides a gain of $8.48$ [dB] for a $3$ [dB] angular width of $73.4$ degrees.\cite{5993433} It should be noted that deployable antennas allow for larger aperture sizes i.e. higher gain within a narrower beam width.
\subsection*{User Equipment}
The UE in this scenario is a standard IoT device based on a current TN NB-IoT chip-set. It has a transmission power of $200$ [mW] (PC3) and a cheap oscillating crystal. The UE must compensate for Doppler to avoid interfering with other UL transmissions. To this end, the base-station node (eNB) onboard the satellite will broadcast information that allows the position and velocity of the satellite to be estimated. If the UE knows its own position, it can use this information to calculate and compensate for the Doppler offset and propagation delay. UEs will therefore either contain a GNSS module or, if they are stationary, be provisioned with their location.
\newpage
\section*{LINK ANALYSIS}
In this section the communication link in non-terrestrial low-earth orbits is examined.
First, geometric results are presented for a satellite pass, then results for propagation delay and Doppler frequency offset are presented and finally the link budget and channel fading models are examined.
\subsection*{Geometry}
The geometry of the satellite UE link is depicted in Fig. \ref{fig:satUElink} with key metrics of interest denoted. The curvature of Earth is not shown, but it is taken into account in the model.
\begin{Figure}
\centering
\includegraphics[width = 1\textwidth]{figures/satellite_UE_link_smallsat_paper.jpg}
\captionof{figure}{UE satellite link geometry.}
\label{fig:satUElink}
\end{Figure}
The elevation angle, $\alpha$ is the angle between the surface of earth and the direction of the satellite as seen from the UE.
Let $\alpha_{\min}$ denote the minimum elevation angle at which the satellite is visible to the UE. A typical value for a feeder link would be 10 degrees, whereas 30-40 degrees would be more realistic for IoT devices.
Here, a satellite pass is the period in which the satellite is above $\alpha_{\min}$ for a UE.
Let $\alpha_{\max}$ be the maximum elevation angle experienced by a UE during a satellite pass. Then $\alpha_{\max}$ fixes the position of a UE in the plane orthogonal to the satellite.
The angle between the UE and nadir as seen by the satellite is denoted $\beta$. This angle can be given in the orbital plane as $\beta^{orbit}$ or the orthogonal (cross-track) plane as $\beta^{orthogonal}$. In the modelled scenario $\beta^{orthogonal}$ is fixed by $\alpha_{\max}$.
The distance between the UE and the satellite is defined as $d$.
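For a spherical Earth, $d$ follows from the law of cosines in the triangle formed by the UE, the satellite and the centre of the Earth. The Python sketch below evaluates this standard geometric relation for a few elevation angles, using the constants assumed in the system model:
\begin{verbatim}
# Sketch: slant range d vs. elevation angle alpha for a
# spherical Earth (standard geometry; constants as above).
import math

r_e, h_0 = 6357e3, 600e3

def slant_range(alpha_deg):
    a = math.radians(alpha_deg)
    r_s = r_e + h_0
    return (math.sqrt(r_s**2 - (r_e * math.cos(a))**2)
            - r_e * math.sin(a))

for alpha in (10, 30, 62.4, 90):
    d_km = slant_range(alpha) / 1e3
    print(f"alpha = {alpha:5.1f} deg -> d = {d_km:7.1f} km")
\end{verbatim}
At zenith ($\alpha = 90$ degrees) the expression reduces to $d = h_0$, as expected.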
\begin{Figure}
\centering
\includegraphics[width = 1\textwidth]{figures/linkresults/elevation.pdf}
\includegraphics[width = 1\textwidth]{figures/linkresults/orbital.pdf}
\includegraphics[width = 1\textwidth]{figures/linkresults/distance.pdf}
\captionof{figure}{Development of key geometric variables during a satellite pass.}
\label{fig:geoResults}
\end{Figure}
\subsection*{Doppler shift and Propagation delay}
The change in distance over time results in a Doppler effect offsetting the frequency of the signals transmitted between the UE and the satellite, as well as a variable propagation delay between the two.\cite{9148880}
The Doppler shift and the propagation times during a satellite pass are plotted in Fig. \ref{fig:Doppler} for S-band communication ($2$ [GHz]). The maximal Doppler offset is $\pm43$ [kHz], occurring when the satellite is far from the UE and flying towards it, while the maximal rate of change of $544$ [Hz/s] is experienced as the satellite passes over the UE.
The propagation delay is less than $4$ [ms] for IoT devices and up to $6.5$ [ms] for feeder links. The maximal rate of change of the delay is up to $20$ [$\mu$s/s], also occurring when the satellite is far from the UE.
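The Doppler offset can be approximated from the range-rate alone as $f_d = -(f_c/c)\,\dot{d}(t)$. The Python sketch below assumes a circular orbit passing through the UE's zenith and neglects Earth rotation, so its numbers differ slightly from the worst-case figures above, but it reproduces the characteristic S-shaped Doppler curve:
\begin{verbatim}
# Sketch: Doppler offset f_d = -(f_c/c) * d'(t), with d(t)
# differentiated numerically. Assumes a circular orbit through
# the UE's zenith; Earth rotation is neglected.
import math

c, f_c = 3.0e8, 2.0e9            # light speed [m/s], carrier [Hz]
r_e, h_0 = 6357e3, 600e3
r_s = r_e + h_0
v_sat = math.sqrt(6.674e-11 * 5.972e24 / r_s)
omega = v_sat / r_s              # satellite angular rate [rad/s]

def distance(t):                 # slant range at time t [m]
    theta = omega * t            # central angle UE-satellite
    return math.sqrt(r_e**2 + r_s**2
                     - 2 * r_e * r_s * math.cos(theta))

dt = 0.1
for t in (-400.0, -100.0, 0.0, 100.0, 400.0):  # s from zenith
    rate = (distance(t + dt) - distance(t - dt)) / (2 * dt)
    print(f"t = {t:6.0f} s  f_d = {-f_c * rate / c / 1e3:7.2f} kHz")
\end{verbatim}
The offset is zero at zenith, where the range-rate vanishes, and approaches its extreme values near the edge of visibility.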
\begin{Figure}
\centering
\includegraphics[width = 1\textwidth]{figures/linkresults/doppler.pdf}
\includegraphics[width = 1\textwidth]{figures/linkresults/dopplerrate.pdf}
\captionof{figure}{Doppler offset (above) and Doppler rate (below) during a satellite pass.}
\label{fig:Doppler}
\end{Figure}
\begin{Figure}
\centering
\includegraphics[width = 1\textwidth]{figures/linkresults/proptime.pdf}
\includegraphics[width = 1\textwidth]{figures/linkresults/proptimerate.pdf}
\captionof{figure}{Propagation time (above) and delay rate of change (below) during a satellite pass.}
\label{fig:proptime}
\end{Figure}
\subsection*{Link Budget}
The link budget is the sum of all gains and losses in dB, as per Eq.~\eqref{eq:LB}.
\begin{align} \label{eq:LB}
\textrm{LB} = &P_{TX}-P_N-PL+NF_{RX}+G_{UE,Ant} \nonumber
\\&+G_{Sat,Ant} + G_{shadow}+G_{polarMiss} \nonumber
\\ &+G_{absorb}+G_{scintillation}
\end{align}
Here LB is the link budget (the SNR at the receiver) in dB, $P_{TX}$ is the transmission power, $P_N$ is the noise power, $PL$ is the path loss, $NF_{RX}$ is the receiver's noise figure, $G_{UE,Ant}$ and $G_{Sat,Ant}$ are the UE and satellite antenna gains, $G_{shadow}$ is the loss due to shadowing, $G_{polarMiss}$ is the loss due to polarization mismatch, $G_{absorb}$ is the loss due to atmospheric absorption, and $G_{scintillation}$ is the scintillation loss. The noise figure and the losses are listed as negative values in Tab.~\ref{tab:linkbudget} and therefore enter Eq.~\eqref{eq:LB} additively, while the noise power and path loss are subtracted.
\begin{center}
\fontsize{8}{7.2}\selectfont
\captionof{table}{Link budget.}
\begin{tabular}{|l|llllll|}
\hline
&\rotatebox{270}{$\mathrm{NF}_\mathrm{UE}$} &\rotatebox{270}{$\mathrm{NF}_\mathrm{Sat}$} &\rotatebox{270}{$\mathrm{G}_\mathrm{Shadow}$} &\rotatebox{270}{$\mathrm{G}_\mathrm{polarMiss}$} &\rotatebox{270}{$\mathrm{G}_\mathrm{absorption}$} &\rotatebox{270}{$\mathrm{G}_\mathrm{scintillation} \;$}
\\ dB & -9 & -3 & -3 & -3 & -0.1 & -2.2
\\ \hline
\end{tabular}
\label{tab:linkbudget}
\end{center}
Tab. \ref{tab:linkbudget} contains values for the constants of the link budget, Eq.~\eqref{eq:LB}. $G_{absorb}$ and $G_{scintillation}$ depend on the elevation angle but are fixed in this analysis, as in the ongoing 3GPP work.\cite{TR36.763} The noise figure of the UE is purposely selected to be quite high and may improve as the technology matures.
The noise power is calculated as thermal noise at the receiver.
\begin{align} \label{eq:p_n}
P_N &= 10\log_{10}(k_BT\Delta f)+30 &[dBm]
\end{align}
The path loss and antenna gains during a satellite pass are a function of the distance and angles between the UE and the eNB. The path loss is modelled as a power law with path-loss exponent $n$:
\begin{align} \label{eq:pl}
PL &= 10\cdot n\cdot\log_{10}(d) &[\mathrm{dB}]
\end{align}
Finally, the gain of the satellite and UE antennas must be accounted for. To simplify the model of the UE, assume that $G_{UE,Ant}=0$ when $\alpha<\alpha_{\min}$. The satellite antenna gain is plotted in Fig. \ref{fig:satgain}.
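To illustrate Eqs.~\eqref{eq:LB}--\eqref{eq:pl}, the sketch below evaluates the budget at zenith ($d = 600$~[km]). As an assumption of this sketch, free-space path loss at $2$~[GHz] is substituted for the generic power law of Eq.~\eqref{eq:pl} so that the absolute numbers are meaningful; the remaining constants are those of Tab.~\ref{tab:linkbudget}.
\begin{verbatim}
# Sketch of Eqs. (1)-(3) at zenith (d = 600 km). Free-space
# path loss at 2 GHz is assumed in place of the generic
# power law of Eq. (3); constants from Tab. 2.
import math

k_B, T, c, f_c = 1.38e-23, 290.0, 3.0e8, 2.0e9

def noise_dbm(bw_hz):            # Eq. (2)
    return 10 * math.log10(k_B * T * bw_hz) + 30

def fspl_db(d_m):                # assumed FSPL model
    return 20 * math.log10(4 * math.pi * d_m * f_c / c)

def snr_db(p_tx_dbm, bw_hz, d_m, gains_db):
    return (p_tx_dbm - fspl_db(d_m) + sum(gains_db)
            - noise_dbm(bw_hz))

# DL: 8 W/carrier = 39 dBm over 180 kHz, NF_UE = -9 dB.
dl = [8.48, 0.0, -9, -3, -3, -0.1, -2.2]
# UL: 23 dBm over one 3.75 kHz tone, NF_sat = -3 dB.
ul = [8.48, 0.0, -3, -3, -3, -0.1, -2.2]
print(f"DL SNR ~ {snr_db(39.0, 180e3, 600e3, dl):5.1f} dB")
print(f"UL SNR ~ {snr_db(23.0, 3.75e3, 600e3, ul):5.1f} dB")
\end{verbatim}
The two printed values differ by the $6.8$~[dB] discussed below: $16.8$~[dB] from the narrower UL bandwidth plus $6$~[dB] from the better satellite noise figure, minus the $16$~[dB] difference in transmission power.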
\begin{figure*}
\centering
\includegraphics[width = 1\textwidth]{figures/linkresults/antenna.pdf}
\caption{Heat map of satellite antenna gain.}
\label{fig:satgain}
\end{figure*}
The resulting link budget is plotted in Fig.~\ref{fig:dlLB} and Fig.~\ref{fig:ulLB} for the DL and the UL, respectively. The dotted lines correspond to an $\alpha_{\max}$ of $62.4$, $42.7$ and $30$ degrees, respectively.
\begin{figure*}
\centering
\includegraphics[width = 1\textwidth, trim=0 25 0 0, clip]{figures/linkresults/dlbudget.pdf}
\caption{Link-budget in the downlink.}
\label{fig:dlLB}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width = 1\textwidth, trim=0 25 0 0, clip]{figures/linkresults/ulbudgett.pdf}
\caption{Link-budget in the uplink for 3.75 kHz transmissions.}
\label{fig:ulLB}
\end{figure*}
The DL is power-limited, especially at the beam edges, while the UL experiences much more favorable conditions. The reason for this disparity is the much narrower bandwidth that can be used in the UL, which reduces the noise power by up to $\sim16.8$~[dB]. In addition, the noise figure of the satellite is a $6$~[dB] improvement over that of the UE receiver. Thus, the UL overcomes the $16$~[dB] difference in transmission power with $6.8$~[dB] to spare.
\subsection*{Fading model}
Channel models for NTN were presented in Tabs. 6.9.2-3 and 6.9.2-4 of 3GPP~TR~38.811.\cite{TR38.811} These fading models are for NTN in urban and hilly environments, respectively. Both are tapped-delay-line models with a LoS component and Rician gains for the other components. The model parameters are given in Tab.~\ref{tab:fading}.
\begin{center}
\fontsize{8}{7.2}\selectfont
\captionof{table}{Fading models.}
\begin{tabular}{|l|l|l|}
\hline
Model & NCU & NDH \\ \hline
Environment & Urban & Hilly \\
Noise & AWGN & AWGN \\
Tap delay [ns] & \{0, 1481\} & \{0, 168, 2199\} \\
Tap gain [dB] & \{-10.6, -23.4\} & \{-11.99, -9.89, -16.77 \} \\
K-factor & 7 & 7 \\ \hline
\end{tabular}
\label{tab:fading}
\end{center}
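A tapped-delay-line realisation of, e.g., the NCU model can be sketched as below. The interpretation that the first tap is Rician with the tabulated K-factor (taken here to be in dB) while the later taps are Rayleigh is an assumption of this sketch, not a statement of the 3GPP definition:
\begin{verbatim}
# Sketch: tapped-delay-line draw for the NCU model of Tab. 3.
# Assumptions: first tap Rician with the tabulated K-factor
# (interpreted in dB), later taps Rayleigh.
import numpy as np

rng = np.random.default_rng(0)
K = 10 ** (7 / 10)                        # linear K-factor
amp = 10 ** (np.array([-10.6, -23.4]) / 20)
delays_ns = [0, 1481]

def tap(n, rician):
    nlos = (rng.standard_normal(n)
            + 1j * rng.standard_normal(n)) / np.sqrt(2)
    if not rician:
        return nlos                       # Rayleigh tap
    return np.sqrt(K / (K + 1)) + np.sqrt(1 / (K + 1)) * nlos

for i, (d, a) in enumerate(zip(delays_ns, amp)):
    h = a * tap(4, i == 0)
    print(f"tap @ {d:5d} ns: |h| = {np.abs(h).round(3)}")
\end{verbatim}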
\vspace{1cm}
\section*{NB-IOT ADAPTATIONS}
This section presents the main areas in which NB-IoT may need modifications to work in the NTN case and assessments of what modifications, if any, are required.
\subsection*{Detection}
In order to detect a cell, a UE must detect the NPSS and NSSS sequences. The time it takes to detect these signals is a function of the SNR. Simulated results for this synchronisation time are given in Tab. \ref{tab:cellsearch}.
\begin{center}
\captionof{table}{Required number of frames for cell search success.}
\resizebox{\columnwidth}{!}{\small
\begin{tabular}{lrccc|}
\cline{3-5}
& \multicolumn{1}{l|}{} & \multicolumn{3}{l|}{\textbf{Doppler}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Model}} & \multicolumn{1}{l|}{\textbf{SNR}} & \textbf{\begin{tabular}[c]{@{}c@{}}28.4 {[}kHz{]} \\ 306 {[}Hz/s{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}0 {[}Hz{]} \\ 580 {[}Hz/s{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}0 {[}Hz{]}\\ 0 {[}Hz/s{]}\end{tabular}} \\ \hline
\multicolumn{1}{|l}{\multirow{5}{*}{LOS}} & -10~dB & 532 & 414 & 426 \\
\multicolumn{1}{|l}{} & -7~dB & 30 & 26 & 28 \\
\multicolumn{1}{|l}{} & -4~dB & 4 & 4 & 4 \\
\multicolumn{1}{|l}{} & 0~dB & 2 & 2 & 2 \\
\multicolumn{1}{|l}{} & +5~dB & 2 & 2 & 2 \\ \hline
\multicolumn{1}{|l}{\multirow{5}{*}{NCU}} & -10~dB & 3110 & 3350 & 2450 \\
\multicolumn{1}{|l}{} & -7~dB & 64 & 60 & 50 \\
\multicolumn{1}{|l}{} & -4~dB & 10 & 8 & 8 \\
\multicolumn{1}{|l}{} & 0~dB & 2 & 2 & 2 \\
\multicolumn{1}{|l}{} & +5~dB & 2 & 2 & 2 \\ \hline
\multicolumn{1}{|l}{\multirow{5}{*}{NDH}} & -10~dB & 586 & 490 & 436 \\
\multicolumn{1}{|l}{} & -7~dB & 42 & 40 & 34 \\
\multicolumn{1}{|l}{} & -4~dB & 8 & 8 & 8 \\
\multicolumn{1}{|l}{} & 0~dB & 4 & 4 & 4 \\
\multicolumn{1}{|l}{} & +5~dB & 2 & 2 & 2 \\ \hline
\end{tabular}
}
\label{tab:cellsearch}
\end{center}
The periods in which UEs may potentially have detected NPSS/NSSS during a pass are plotted in Fig. \ref{fig:sync}. These periods are given for a UE that is searching all the time, but in practice a UE would attempt cell search at a certain time, which is not necessarily the moment it enters the cell edge of a satellite pass.
\begin{Figure}
\centering
\includegraphics[width = 1\textwidth]{figures/linkresults/sync.png}
\captionof{figure}{Synchronization windows during a satellite pass.}
\label{fig:sync}
\end{Figure}
The periods above do not account for decoding the MIB and SIBs, which takes a non-negligible amount of time. The number of repetitions required to decode a MIB successfully as a function of the received SNR is plotted in Fig. \ref{fig:mib}. Evidently, only UEs with $\alpha_{\max} \geq 42.7$ [degrees], or equivalently within a surface distance of $283$~[km] of nadir, are able to access the cell for the considered link budget.
\begin{figure*}
\centering
\includegraphics[width = 1\textwidth]{figures/linkresults/MIB.png}
\caption{MIB decoding performance.}
\label{fig:mib}
\end{figure*}
\subsection*{Synchronization}
In order to avoid interference in the UL, UEs must be able to synchronize transmissions in time and frequency a priori, i.e. before their reception at the eNB.
To facilitate this, it has been decided in 3GPP that satellites should broadcast information that allows the position and velocity of the satellite to be computed, either in direct form or indirectly as ephemeris information. UEs must also know their own position, either through provisioning for stationary devices or through GNSS measurements. Given that both the UE's and the satellite's position and velocity are known to the UE, it can calculate both the current and future Doppler offset and propagation delay.
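A minimal sketch of this computation is given below; the vectors and function names are illustrative and do not correspond to any standardised interface. The rule that the UE advances its transmission by the round-trip delay and negates the predicted Doppler is the intended outcome of the procedure described above:
\begin{verbatim}
# Sketch: UE-side pre-compensation from known positions.
# p_ue, p_sat, v_sat are illustrative position/velocity
# vectors in [m] and [m/s].
import numpy as np

c, f_c = 3.0e8, 2.0e9

def precompensate(p_ue, p_sat, v_sat):
    los = p_sat - p_ue                  # line-of-sight vector
    d = np.linalg.norm(los)
    rate = np.dot(v_sat, los / d)       # range-rate [m/s]
    delay = d / c                       # one-way delay [s]
    doppler = -f_c * rate / c           # predicted offset [Hz]
    # Transmit early by the round trip and shift the carrier
    # by the negated predicted Doppler.
    return 2 * delay, -doppler

ta, shift = precompensate(np.array([6357e3, 0.0, 0.0]),
                          np.array([6957e3, 0.0, 0.0]),
                          np.array([0.0, 7.57e3, 0.0]))
print(f"TA = {ta*1e3:.3f} ms, carrier shift = {shift:.1f} Hz")
\end{verbatim}
In this zenith example the range-rate is zero, so only the $4$~[ms] round-trip timing advance remains.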
The first transmission from a UE will be a random access preamble (RAP) in the random access channel (RACH). The eNB will respond with a random access response (RAR) message, which includes timing advance (TA) and frequency advance (FA) components letting the UE know how far off the expected time and frequency the RAP was received. The UE uses this information to tune its time-frequency adjustment.
During long UL transmissions, the drift in frequency and propagation delay may become excessive if the UE does not adjust. The UE could compensate discretely over segments of the transmission, which would require the transmission to be broken up and pauses to be inserted between the segments to allow for timing adjustments. The pauses could even be used by the eNB to provide TA adjustments, but this has the potential of creating large amounts of overhead in the DL. Another option is to process the output RF signal with a time-dependent continuous filter that is the inverse of the predicted Doppler offset and propagation delay.
\subsection*{PHY rate}
The required SNR for a block error rate of 10\% for various modulation and coding schemes (MCS) has been found through simulation,\cite{10.1007/978-3-030-31831-4_18} extrapolated for maximum ratio combining (MRC), and is given in Tab.~\ref{tab:DLtbssnr}. The achievable PHY rate in the DL for a 100 bit payload during a satellite pass is plotted in Fig. \ref{fig:phyrate}.
\begin{Figure}
\centering
\includegraphics[width = 1\textwidth]{figures/linkresults/phyrate.png}
\captionof{figure}{Potential DL PHY-rate during a satellite pass.}
\label{fig:phyrate}
\end{Figure}
\vspace{1 cm}
\begin{center}
\captionof{table}{Simulated SNR requirement for various modulation and coding schemes (MCS).}
\begin{tabular}{|c|rrrrr|} \hline
& \multicolumn{5}{l|}{Repetitions} \\
I\_TBS & 1 & 2 & 4 & 8 & 16 \\ \hline
0& -5.8 & -8.3 & -10.6 & -12.8 & -14.7 \\
1& -4.9 & -7.2 & -9.7 & -11.9 & -13.8 \\
2& -3.9 & -6.2 & -8.8 & -11 & -12.9 \\
3& -3 & -5.4 & -8 & -10.4 & -12.2 \\
4& -2 & -4.6 & -7.2 & -9.6 & -11.4 \\
5& -1.1 & -3.7 & -6.3 & -8.9 & -10.8 \\
6& -0.2 & -2.8 & -5.6 & -8 & -10 \\
7& 0.7 & -1.9 & -4.7 & -7.3 & -9.3 \\
8& 1.4 & -1.3 & -4.1 & -6.8 & -8.9 \\
9& 2.2 & -0.4 & -3.3 & -6 & -8.1 \\
10& 3.1 & 0.4 & -2.4 & -5.2 & -7.3 \\
11& 4.2 & 1.4 & -1.5 & -4.3 & -6.6 \\
12& 5.5 & 2.7 & -0.4 & -3.3 & -5.6 \\
13& 6.9 & 3.9 & 0.9 & -2 & -4.4 \\ \hline
\end{tabular}
\label{tab:DLtbssnr}
\end{center}
\subsection*{Random Access}
As previously noted, the random access preamble is the first transmission a UE makes in the UL. The requirements for this initial transmission are a frequency offset within $\pm 200$~[Hz]\cite{TR38.101} and a maximum timing advance error given by Eq. \eqref{eq:TA}.
\begin{align} \label{eq:TA}
TA_{\max} = \pm\dfrac{T_{CP}}{4} = \begin{cases}
\pm16.75~\mu s \;,\;\text{for format 0}
\\
\pm66.75~\mu s \;,\;\text{for format 1}
\end{cases}
\end{align}
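Since the TA covers the round trip, the tolerance of Eq.~\eqref{eq:TA} can be translated into the maximum one-way range error that the UE's position and ephemeris knowledge may incur; a small sketch of this arithmetic:
\begin{verbatim}
# Sketch: TA tolerance of Eq. (4) as a maximum one-way range
# error (the factor 2 accounts for the round trip).
c = 3.0e8
for fmt, ta_us in (("format 0", 16.75), ("format 1", 66.75)):
    err_km = c * ta_us * 1e-6 / 2 / 1e3
    print(f"{fmt}: max one-way range error ~ {err_km:.2f} km")
\end{verbatim}
This yields roughly $2.5$~[km] for format 0 and $10$~[km] for format 1.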
The SNR of the received preambles determines how many preambles are necessary for accurate detection and timing- and frequency-offset estimation at the eNB. The simulation results given in Tab. \ref{tab:rap}, in tandem with the UL link budget of Fig. \ref{fig:ulLB}, indicate that the RACH may not need to be configured towards coverage enhancement (CE) levels that take up more spectrum, as a relatively low number of repetitions is sufficient.
\begin{center}
\captionof{table}{Repetition requirements for simulated RAP reception. Detection failure percentage in parenthesis.}
\begin{tabular}{@{}|l|lll|@{}}
\hline
SNR & AWGN & NCU & NDH \\ \hline
0 & 2 & 1 & 2 \\
-4 & 8 & 4 & 8 \\
-7 & 32 & 8 & 32 \\
-10 & 128 & 62 & 128 \\
-12 & 128 (8\%) & 128 & 128 (13\%) \\ \hline
\end{tabular}
\label{tab:rap}
\end{center}
\subsection*{Scheduling}
Timers should be offset by an integer $T_k$ [ms] to accommodate the propagation delay, such that $T_k~>~d/c$.
The minimum viable solution is to set $T_k$ to a fixed value and transmit it in the MIB or a SIB. Alternatively, it could be the responsibility of the UE to compute $T_k$ over time and signal to the eNB whenever $T_k$ changes by a millisecond.
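A sketch of the computation, assuming $T_k$ is simply the smallest integer number of milliseconds exceeding the propagation delay:
\begin{verbatim}
# Sketch: smallest integer T_k [ms] covering the current
# propagation delay d/c, per the rule above.
import math
c = 3.0e8
def timer_offset_ms(d_m):
    return math.ceil(d_m / c * 1e3)
print(timer_offset_ms(1.1e6))   # d = 1100 km -> T_k = 4 ms
\end{verbatim}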
\subsection*{Paging occasions}
The core network and the UEs must synchronize and agree upon paging occasions. The alternative is that UEs observe the downlink control channel all the time in order not to miss any paging, which is incredibly costly in terms of energy, or that a UE unwittingly misses paging occasions and thus becomes unreachable by the core network.
Conventionally, discontinuous reception (DRX) and power save mode (PSM) have been used to save energy by creating a formal agreement on paging occasions between the core network and the UEs and letting each UE go into a power-saving mode outside of the paging occasions. In LEO NTN NB-IoT this is complicated by the movement of cells, especially in low-density constellations, where UEs may be out of coverage for extended periods of time.
A minimum viable way to accommodate this situation is to utilize PSM while UEs are out of coverage and to change to iDRX mode when within coverage of a satellite.
\subsection*{Discontinuous feeder links}
In constellations without continuous feeder links, transparent payloads are by definition blocked. Given the relatively small range of LEO satellites and the small coverage window for UEs, the transparent-payload scenario is likely to perform poorly for LEO NTN~NB-IoT in general. Cross-ocean availability of feeder links, for example, is not expected for LEO satellites.
In the case of regenerative payloads, one way to support feeder-link discontinuity is to keep part of the core network entities, such as the mobility management entity (MME), on board the satellite. This could allow UEs to access the RAN and perform transmissions, or allow for paging in the DL, even when the satellite is out of range of a feeder link. In combination with a store-and-forward scheme, this could allow support of devices in areas far from ground stations.
\section*{CONCLUDING REMARKS}
\subsection*{Discussion}
The UL link budget is $6.8$~[dB] greater than the DL link budget. This means that a total increase of about $60$~[W] in the DL power budget would equalize the link budgets for the DL and $3.75$~[kHz] UL transmissions. Such a power budget may not be feasible for nano-satellites, but it should be within range for 'larger' small satellites.
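The quoted $\sim60$~[W] follows directly from scaling the $16$~[W] budget by the $6.8$~[dB] difference:
\begin{verbatim}
# Sketch of the arithmetic behind the quoted ~60 W increase.
p_dl = 16.0                       # current DL budget [W]
p_eq = p_dl * 10 ** (6.8 / 10)    # ~76.6 W to match the UL
print(f"increase ~ {p_eq - p_dl:.0f} W")
\end{verbatim}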
The limited DL link budget may not be much of a drawback with regard to DL capacity in the IoT scenario, as many IoT applications favor the UL; a more direct consequence is the limited area in which synchronization to the cell is possible before it moves out of range. The power budget and antenna configuration presented here allow for DL synchronization within a relatively small cell ($\sim560$~[km] across). The cell size could be increased by changing the antenna configuration and increasing the power budget.
Beyond the capacity and coverage of a single satellite, there is also a trade-off between satellite cost, coverage/capacity and constellation density.
\subsection*{Conclusion}
This paper has presented theoretical and simulated results of a link-level analysis of LEO NTN~NB-IoT. It was shown that a nano-satellite could serve as a functioning base station in an NTN~NB-IoT cellular infrastructure, albeit with a diminished cell size of $560$~[km] across. Furthermore, enhancements related to low-density constellations and discontinuous feeder links were discussed.
\newpage
\subsection*{Further work}
GateHouse will continue to develop its NTN NB-IoT waveform and is planning an in-orbit demonstration to validate the link budget, adaptations and algorithms. Furthermore, the standardization activities in 3GPP for 5G NTN will continue with active participation from GateHouse.
\subsection*{Acknowledgments}
Thanks to our colleagues at GateHouse, in particular Bertel~Brander, Gert~Børsen, Johannes~Elgaard and Lars~Weje~Hangstrup for their work on the PHY layer algorithms, simulations and implementation of the NTN~NB-IoT waveform.
The work documented in this paper was in part financed by ESA 5997 NB-IoT:
'Narrowband IoT standard for Smallsat Networks'.